One way to achieve this is to post your articles on all appropriate media
yourself, i.e., being the first to post them. There are two issues with that.
First, on both Hacker News and Lobsters this is generally frowned upon,
in particular if you do it too often, and you may be rewarded with a ban.
Second, you don’t control subsequent reposts, and everything on these sites
has its own limited lifetime.
The more sustainable approach is to search for your domain’s appearances on
those sites of interest in an automated fashion:
- Hacker News has an easy-to-use specialized JSON search API.
- Reddit’s general-purpose API doesn’t like cURL’s default User-Agent,
  and uses very awkward data structures, yet it’s also fine to use
  (see the probe sketched right after this list).
- Lobsters doesn’t have an API per se, but you can easily parse the search
  page.
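For example, a quick manual probe of the Reddit endpoint might look like the
following; the User-Agent string is arbitrary, and example.org stands in for
your own domain:
# Reddit tends to throttle or reject requests bearing cURL's default
# User-Agent, so pass any custom one; the name here is made up.
curl -A web-watch/1.0 --no-progress-meter \
    'https://www.reddit.com/search.json?q=site%3Aexample.org&sort=new' |
    jq -r '"https://reddit.com" + .data.children[].data.permalink'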
In my case, I simply extended the web-watching script I launch once a day
through systemd with a few more requests. Reduced to the relevant parts,
it looks like this:
#!/bin/sh -e
status=0 workdir=watch
mkdir -p $workdir

# Fetch a URL, run the remaining arguments as a filter over the response,
# and mail any difference against the previously stored result.
check() {
    local url=$1 f=$workdir/$(echo "$1" | sed 's|/|\\|g')
    if ! curl -A Skynet --no-progress-meter -Lo "$f.download" "$url"; then
        status=1
    else
        shift
        "$@" <"$f.download" >"$f.filtered" || status=1
        if [ -f "$f" ] && ! diff "$f.filtered" "$f" >"$f.diff"; then
            mail -s "$url updated" root <"$f.diff" || status=1
        fi
        mv "$f.filtered" "$f"
    fi
}

check 'https://hn.algolia.com/api/v1/search_by_date?query=p.janouch.name' \
    jq -r '.hits[] | "https://news.ycombinator.com/item?id=\(.objectID) \(.url)"'
check 'https://www.reddit.com/search.json?q=site%3Ap.janouch.name&sort=new' \
    jq -r '"https://reddit.com" + .data.children[].data.permalink'
check 'https://lobste.rs/domain/p.janouch.name' \
    perl -lne 'print "https://lobste.rs$&" if m|/s/\w+| && !$seen{$&}++'
exit $status
Thus, I get diffs by mail. You just have to love the Bourne shell, Perl and jq.
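For completeness, the once-a-day launch is just a systemd timer. A minimal
pair of units might look roughly like this, though the unit names and paths
here are made up for illustration:
# /etc/systemd/system/web-watch.service
[Unit]
Description=Watch the web for changes

[Service]
Type=oneshot
WorkingDirectory=/var/lib/web-watch
ExecStart=/usr/local/bin/web-watch.sh

# /etc/systemd/system/web-watch.timer
[Unit]
Description=Run web-watch once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
After a systemctl daemon-reload, the timer gets activated with
systemctl enable --now web-watch.timer.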
Of course, the results can also be further processed directly, and with the
exception of Lobsters, which I’ll talk about in a moment, the requests can
even be made purely from your reader’s browser. That is, if you don’t care
about people with JavaScript disabled.
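As an illustration of such direct processing, a build step could, for
instance, pull the current comment count for a story whose id it already
knows; the id below is a placeholder, and the official Hacker News Firebase
API is merely one possible source:
# The "descendants" field of a story is its total comment count.
id=12345678
curl --no-progress-meter "https://hacker-news.firebaseio.com/v0/item/$id.json" |
    jq -r '.descendants'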
So far I’m quite content with adding links to my static pages manually,
in the manner of:
<h3 class=hacker-news data-id=12345678>
<a href='https://news.ycombinator.com/item?id=12345678'>Hacker News</a></h3>
<h3 class=lobsters data-id=1a2b3c>
<a href='https://lobste.rs/s/1a2b3c'>Lobsters</a></h3>
<h3 class=reddit data-id=1a2b3c>
<a href='https://www.reddit.com/comments/1a2b3c/'>r/linux</a></h3>
<h3 class=reddit data-id=4d5e6f>
<a href='https://www.reddit.com/comments/4d5e6f/'>r/programming</a></h3>
or declaratively, for my custom static site generator based on libasciidoc:
:hacker-news: 12345678
:lobsters: 1a2b3c
:reddit: 1a2b3c, 4d5e6f
:reddit-subs: r/linux, r/programming