Tell HN: Five random IndieWeb blog links on your terminal

28 points by susam 9 hours ago

Hello HN! Some of you may have come across this interesting post about discovering IndieWeb blogs, one blog at a time: <https://news.ycombinator.com/item?id=43139953>

That post is actually a link to <https://indieblog.page/>, which features an impressive list of independently maintained personal websites and blogs with RSS feeds.

I wondered if, instead of discovering one blog at a time, I could discover five blogs at once, directly from my terminal. It turns out the website has <https://indieblog.page/random>, which picks a random blog and redirects you to it. There are daily feeds of N random posts too, where N = 1, 3, 5, or 10.

But if, like me, you'd like to have five random blogs suggested in your terminal, here is a quick shell one-liner I'd like to share. It fetches the random blog picker page five times, cleans up the URLs a bit, and prints them on the terminal:

  for _ in $(seq 1 5); do curl -sSI https://indieblog.page/random | grep -i '^location:' | sed 's/[Ll]ocation: \(.*\).utm_source=.*/\1/'; done
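
A quick note on the flags: `-s` silences the progress meter, `-S` still surfaces errors, and `-I` sends a HEAD request, so only the response headers are transferred. The `grep -i` keeps it working whether the header arrives as `location:` (HTTP/2) or `Location:` (HTTP/1.1).
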
Or if you want something to go in ~/.zshrc, ~/.bashrc, etc., then you could have a tiny shell function like this:

  iw5() {
      for _ in $(seq 1 5)
      do
          curl -sSI https://indieblog.page/random |
              grep -i '^location:' |
              sed 's/[Ll]ocation: \(.*\).utm_source=.*/\1/'
      done
  }
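
Once the function is in your rc file, open a new shell or run `source ~/.zshrc` (or the equivalent for your shell) to pick it up.
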
Here is some example output:

  $ iw5
  https://justinsimoni.com/colorado-trail-24-hour-backpacking-challenge-in-may-how-far-can-i-hike/
  http://www.iamcal.com/2024-09/10218/
  https://dracos.co.uk/wrote/advent-of-code-2024-7/
  https://leancrew.com/all-this/2024/12/mind-your-plotting/
  http://i.never.nu/personal-foundational-texts/
This lets me quickly skim a set of random suggestions in the terminal, decide which one I want to visit, right-click it, and select the "open link" option to open it in a web browser. Yes, I know this amounts to judging a post by its link, but sometimes the post slug (if present) alone sparks enough curiosity to dive deeper, and it's a quick way to explore new posts without overthinking it.

If you're using this, please be considerate of the indieblog.page web server and avoid running it too frequently, so you don't put unnecessary strain on the site.
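
If you expect to run it often, one easy courtesy (my own tweak, not something the site asks for) is to pause briefly between requests:

  iw5() {
      for _ in $(seq 1 5)
      do
          curl -sSI https://indieblog.page/random |
              grep -i '^location:' |
              sed 's/[Ll]ocation: \(.*\).utm_source=.*/\1/'
          # a short pause between requests spreads the load out a little
          sleep 1
      done
  }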

darrenf 5 hours ago

Not specifically trying to play golf; just to let you know that `curl -w` is your friend for extracting data from the headers:

    for _ in {1..5}; do
        # -o /dev/null discards the response body so only the write-out
        # string reaches sed; the sed expression is quoted to be safe
        curl -sS -o /dev/null -w '%header{location}\n' https://indieblog.page/random |
            sed -e 's/.utm_.*$//'
    done
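
(If the write-out comes back empty, check your curl version; as far as I can tell, the `%header{name}` variable arrived in curl 7.84.0.)
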
atomic128 3 hours ago

Random recent browsing is also the best way to read Hacker News.

It's not addictive; it gives you a quick sample of what people are thinking about recently, keeps your own biases from narrowing your view, and so on.

See https://rnsaffn.com/zg2/

Each refresh gives you a random post, plus its parent thread, plus some details about that post's author's history.
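
If you want something similar without leaving the terminal, here is a rough sketch against the official HN Firebase API (the endpoints are real, but picking a uniformly random item ID, rather than a recent post with all that context, is my simplification):

    # highest item ID currently assigned
    max=$(curl -s https://hacker-news.firebaseio.com/v0/maxitem.json)
    # pick a random ID at or below it (shuf is from GNU coreutils)
    id=$(shuf -i 1-"$max" -n 1)
    # print that item's JSON (it may be a story, comment, job, or poll)
    curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json"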

zimpenfish 4 hours ago

To save the grep, you can do:

    sed -Ene 's/^location: (.*)\?utm.*$/\1/p'
If you've got a newer `curl`, you can just print the header:

    curl -I -s -o /dev/null -w '%header{location}' <url>
But you'll still need the sed if you want to trim off the UTM cruft, unless you go the route of `bash` substitutions:

    l="$(curl -I -s -o /dev/null -w '%header{location}' <url>)"
    echo "${l%%?utm*}"
rednafi 4 hours ago

Yo, this shit could be a tight JavaScript app with a reactive frontend. Y’all geezers still fuckin’ with them shell commands? /s