squeeze

A static site generator that can put the toothpaste back in the tube.
git clone https://git.mulligrubs.me/squeeze

commit c4a31f5246099389cb86d8dfb9df2b496586dde6
parent 407f91ea668e0e3cf4a2ba7b405423a8029d796a
Author: St John Karp <stjohn@fuzzjunket.com>
Date:   Sun, 17 May 2020 13:29:04 -0500

Generate RSS from HTML output

Generate the RSS feed by parsing the HTML output, so the feed
entries carry the rendered HTML instead of the raw Markdown source.
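
The path conversion this commit adds to squeeze.sh can be sketched in isolation: rewrite a source Markdown path into the path of its generated HTML counterpart. The `SITE_PATH`, `SOURCE_DIR`, and `OUTPUT_DIR` values and the sample path below are illustrative assumptions, not taken from the repository.

```shell
# Illustrative values standing in for the variables squeeze.sh defines.
SITE_PATH="/srv/site"
SOURCE_DIR="source"
OUTPUT_DIR="output"

md_path="$SITE_PATH/$SOURCE_DIR/posts/hello/index.md"

# Same two-step rewrite as the pipeline: swap the directory prefix,
# then swap the .md extension for .html.
html_path=$(printf '%s\n' "$md_path" |
    sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" |
    sed 's|\.md$|.html|')

echo "$html_path"
# → /srv/site/output/posts/hello/index.html
```

Using `|` as the sed delimiter avoids having to escape the slashes in the paths being matched.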

Diffstat:
M generate_rss.pl | 18 +++++++++---------
M html.pl         |  3 ++-
M squeeze.sh      |  5 +++--
3 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/generate_rss.pl b/generate_rss.pl
@@ -4,7 +4,7 @@
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
 :- include('helpers.pl').
-:- include('markdown.pl').
+:- include('html.pl').
 :- include('rss.pl').
 
 % generate_rss(+BuildDate, +Filenames).
@@ -29,12 +29,12 @@
 files_to_articles([], []).
 files_to_articles([Filename|Filenames], [article(Date, Title, Link, Description)|Articles]):-
     open(Filename, read, Stream),
-    read_file(Stream, Markdown),
+    read_file(Stream, HTML),
     close(Stream),
     % Grab the link.
     get_link(Filename, Link),
-    % Extract the title, entry, etc. from the Markdown.
-    markdown(Entry, Title, _, Date, Markdown, []),
+    % Extract the title, entry, etc. from the HTML.
+    page(Entry, Title, _, Date, HTML, []),
     % XML escape the description.
     replace("&", "&amp;", Entry, EntryAmp),
     replace("<", "&lt;", EntryAmp, EntryLT),
@@ -48,18 +48,18 @@
 get_link(Filename, Link):-
     atom_codes(Filename, FilenameCodes),
     % Just assert that this is an index file before we go further.
     % Backtracking after this point will take us down a rabbit hole.
-    append_lists(_, "index.md", FilenameCodes),
+    append_lists(_, "index.html", FilenameCodes),
     site_url(URL, []),
-    append_lists(_, "/source", StartPath),
+    append_lists(_, "/output", StartPath),
     append_lists(StartPath, Path, FilenameCodes),
-    append_lists(PathWithoutFile, "index.md", Path),
+    append_lists(PathWithoutFile, "index.html", Path),
     append_lists(URL, PathWithoutFile, Link).
 get_link(Filename, Link):-
     atom_codes(Filename, FilenameCodes),
     site_url(URL, []),
-    append_lists(_, "/source", StartPath),
+    append_lists(_, "/output", StartPath),
     append_lists(StartPath, Path, FilenameCodes),
-    append_lists(PathWithoutExtension, ".md", Path),
+    append_lists(PathWithoutExtension, ".html", Path),
     append_lists(PathWithoutExtension, "/", PathWithSlash),
     append_lists(URL, PathWithSlash, Link).
\ No newline at end of file
diff --git a/html.pl b/html.pl
@@ -7,7 +7,8 @@
 page(Entry, Title, Subtitle, Date) -->
     doctype,
     newline,
-    html(Entry, Title, Subtitle, Date).
+    html(Entry, Title, Subtitle, Date),
+    newline.
 
 html(Entry, Title, Subtitle, Date) -->
     html_open,
diff --git a/squeeze.sh b/squeeze.sh
@@ -58,6 +58,9 @@ ARTICLES=$(grep --recursive --include=\*.md "^Date: " "$SITE_PATH/$SOURCE_DIR" |
     cut --fields=2 |
     # Get the last (i.e. most recent) posts for the RSS feed.
     tail -5 |
+    # Convert paths so we operate on the generated HTML, not the unformatted Markdown.
+    sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" |
+    sed 's|.md$|.html|' |
     # Glue the file names together to be passed to Prolog.
     paste --serial --delimiters=',' - |
     sed "s|,|','|g")
@@ -68,4 +71,3 @@ swipl --traditional --quiet -l generate_rss.pl -g "consult('$SITE_PATH/site.pl')
 # Strip everything before the XML declaration.
 awk "/<?xml/{i++}i" \
     > "$SITE_PATH/$OUTPUT_DIR/feeds/rss.xml"
-
-
\ No newline at end of file
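
The awk filter at the end of the squeeze.sh pipeline can be sketched on its own: swipl may print diagnostics before the feed, so every line is suppressed until the XML declaration appears, after which everything is printed. The sample input lines below are illustrative assumptions (the regex is written here with an escaped `?`; the script's unescaped form matches the same lines).

```shell
# i stays 0 (falsy, print nothing) until a line matches the XML
# declaration; from then on i is nonzero and every line is printed.
feed=$(printf '%s\n' \
    'Warning: illustrative interpreter noise' \
    '<?xml version="1.0" encoding="UTF-8"?>' \
    '<rss version="2.0"></rss>' |
    awk '/<\?xml/{i++}i')

echo "$feed"
```

The first sample line is dropped and the two XML lines survive, which is exactly what keeps stray warnings out of the generated rss.xml.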