Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ❤️ HTML

Recent posts

Writing: Blog Posts and Songs

I was listening to a podcast interview with Jackson Browne (American singer/songwriter, political activist, and inductee into the Rock and Roll Hall of Fame) in which the interviewer asks him how he approaches writing songs with social commentary and critique — something along the lines of: “How do you get from the New York Times headline on a social subject to the emotional heart of a song that matters to each individual?”

Browne discusses how if you’re too subtle, people won’t know what you’re talking about. And if you’re too direct, you run the risk of making people feel like they’re being scolded. Here’s what he says about his songwriting:

I want this to sound like you and I were drinking in a bar and we’re just talking about what’s going on in the world. Not as if you’re at some elevated place and lecturing people about something they should know about but don’t but [you think] they should care. You have to get to people where [they are, where] they do care and where they do know.

I think that’s a great insight for anyone looking to have a connecting, effective voice. I know for me, it’s really easy to slide into a lecturing voice — you “should” do this and you “shouldn’t” do that.

But I like Browne’s framing of trying to have an informal, conversational tone that meets people where they are. Like you’re discussing an issue in the bar, rather than listening to a sermon.

Chris Coyier is the canonical example of this that comes to mind. I still think of this post from CSS Tricks where Chris talks about how to have submit buttons that go to different URLs:

When you submit that form, it’s going to go to the URL /submit. Say you need another submit button that submits to a different URL. It doesn’t matter why. There is always a reason for things. The web is a big place and all that.

He doesn’t conjure up some universally-applicable, justified rationale for why he’s sharing this method. Nor is there any pontificating on why this is “good” or “bad”. Instead, like most of Chris’ stuff, I read it as a humble acknowledgement of the practicalities at hand — “Hey, the world is a big place. People have to do crafty things to make their stuff work. And if you’re in that situation, here’s something that might help what ails ya.”

I want to work on developing that kind of a voice because I love reading voices like that.


Reply via: Email · Mastodon · Bluesky

A Few Things About the Anchor Element’s href You Might Not Have Known

I’ve written previously about reloading a document using only HTML but that got me thinking: What are all the values you can put in an anchor tag’s href attribute?

Well, I looked around. I found some things I already knew about, e.g.

  • Link protocols like mailto:, tel:, sms: and javascript: which deal with specific ways of handling links.
  • Protocol-relative links, e.g. href="//"
  • Text fragments for linking to specific pieces of text on a page, e.g. href="#:~:text=foo"

But I also found some things I didn’t know about (or only vaguely knew about) so I wrote them down in an attempt to remember them.

href="#"

Scrolls to the top of a document. I knew that.

But I’m writing because #top will also scroll to the top if there isn’t another element with id="top" in the document. I didn’t know that.

(Spec: “If decodedFragment is an ASCII case-insensitive match for the string top, then return the top of the document.”)

Update: HTeuMeuLeu pointed out to me on Mastodon that you can use #page= to deep-link to a specific page in a PDF, e.g. my-file.pdf#page=42 would link to page 42 in the file.

href=""

Reloads the current page, preserving the search string but removing the hash string (if present).

URL                  Resolves to
/path/               /path/
/path/#foo           /path/
/path/?id=foo        /path/?id=foo
/path/?id=foo#bar    /path/?id=foo

href="."

Reloads the current page, removing both the search and hash strings (if present).

Note: If you’re using href="." as a link to the current page, ensure your URLs have a trailing slash or you may get surprising navigation behavior. The path is interpreted as a file, so "." resolves to the parent directory of the current location.

URL                  Resolves to
/path                /
/path#foo            /
/path?id=foo         /
/path/               /path/
/path/#foo           /path/
/path/?id=foo        /path/
/path/index.html     /path/

href="?"

Reloads the current page, removing both the search and hash strings (if present). However, it preserves the ? character.

Note: Unlike href=".", trailing slashes don’t matter. The search parameters will be removed but the path will be preserved as-is.

URL                  Resolves to
/path                /path?
/path#foo            /path?
/path?id=foo         /path?
/path?id=foo#bar     /path?
/index.html          /index.html?

href="data:"

You can make links that navigate to data URLs. The super-readable version of this would be:

<a href="data:text/plain,hello world">
  View plain text data URL
</a>

But you probably want data: URLs to be encoded so you don’t get unexpected behavior, e.g.

<a href="data:text/plain,hello%20world">
  View plain text data URL
</a>

Go ahead and try it (FYI: may not work in your user agent). Here’s a plain-text file and an HTML file.
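If you’re building these programmatically, JavaScript’s encodeURIComponent handles the escaping for you. A minimal sketch (the payload is just an example):

const text = "hello world";
const href = `data:text/plain,${encodeURIComponent(text)}`;
// -> "data:text/plain,hello%20world"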

href="video.mp4#t=10,20"

Media fragments allow linking to specific parts of a media file, like audio or video.

For example, video.mp4#t=10,20 links to a video, starting playback at 10 seconds and stopping it at 20 seconds.

(Support is limited at the time of this writing.)
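If you want to play with it, a media fragment works anywhere a media URL does. A minimal sketch (in browsers that support it; the file name is hypothetical):

// Play only seconds 10 through 20 of the video (where supported)
const video = document.createElement("video");
video.controls = true;
video.src = "video.mp4#t=10,20";
document.body.append(video);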

See For Yourself

I tested a lot of this stuff in the browser and via JS. I think I got all these right.

Thanks to JavaScript’s URL constructor (and the ability to pass a base URL), I could programmatically explore how a lot of these href values would resolve.

Here’s a snippet of the test code I wrote. You can copy/paste this in your console and they should all pass 🤞

const assertions = [
  // Preserves search string but strips hash
  // x -> { search: '?...', hash: '' }
  { href: '', location: '/path',               resolves_to: '/path' },
  { href: '', location: '/path/',              resolves_to: '/path/' },
  { href: '', location: '/path/#foo',          resolves_to: '/path/' },
  { href: '', location: '/path/?id=foo',       resolves_to: '/path/?id=foo' },
  { href: '', location: '/path/?id=foo#bar',   resolves_to: '/path/?id=foo' },
  
  // Strips search and hash strings
  // x -> { search: '', hash: '' }
  { href: '.', location: '/path',              resolves_to: '/' },
  { href: '.', location: `/path#foo`,          resolves_to: `/` },
  { href: '.', location: `/path?id=foo`,       resolves_to: `/` },
  { href: '.', location: `/path/`,             resolves_to: `/path/` },
  { href: '.', location: `/path/#foo`,         resolves_to: `/path/` },
  { href: '.', location: `/path/?id=foo`,      resolves_to: `/path/` },
  { href: '.', location: `/path/index.html`,   resolves_to: `/path/` },
  
  // Strips search parameters and hash string,
  // but preserves search delimiter (`?`)
  // x -> { search: '?', hash: '' }
  { href: '?', location: '/path',              resolves_to: '/path?' },
  { href: '?', location: '/path#foo',          resolves_to: '/path?' },
  { href: '?', location: '/path?id=foo',       resolves_to: '/path?' },
  { href: '?', location: '/path/',             resolves_to: '/path/?' },
  { href: '?', location: '/path/?id=foo#bar',  resolves_to: '/path/?' },
  { href: '?', location: '/index.html#foo',    resolves_to: '/index.html?'}
];

const assertions_evaluated = assertions.map(({ href, location, resolves_to }) => {
  const domain = 'https://example.com';
  const expected = new URL(href, domain + location).toString();
  const received = new URL(domain + resolves_to).toString();
  return {
    href,
    location,
    expected: expected.replace(domain, ''),
    received: received.replace(domain, ''),
    passed: expected === received
  };
});

console.table(assertions_evaluated);

Reply via: Email · Mastodon · Bluesky

How to Make Websites That Will Require Lots of Your Time and Energy

Some lessons I’ve learned from experience.

1. Install Stuff Indiscriminately From npm

Become totally dependent on others, that’s why they call them “dependencies” after all! Lean in to it.

Once your dependencies break — and they will, time breaks all things — then you can spend lots of time and energy (which was your goal from the beginning) ripping out those dependencies and replacing them with new dependencies that will break later.

Why rip them out? Because you can’t fix them. You don’t even know how they work, that’s why you introduced them in the first place!

Repeat ad nauseam (that is, until you decide you don’t want to make websites that require lots of your time and energy, but that’s not your goal if you’re reading this article).

2. Pick a Framework Before You Know You Need One

Once you hitch your wagon to a framework (a dependency, see above) then any updates to your site via the framework require that you first understand what changed in the framework.

More of your time and energy expended, mission accomplished!

3. Always, Always Require a Compilation Step

Put a critical dependency between working on your website and using it in the browser. You know, some mechanism that is required to function before you can even see your website — like a compilation step or build process. The bigger and more complex, the better.

This is a great way to spend lots of time and energy working on your website.

(Well, technically it’s not really working on your website. It’s working on the thing that spits out your website. So you’ll excuse me for recommending something that requires your time and energy that isn’t your website — since that’s not the stated goal — but trust me, this apparent diversion will directly affect the overall amount of time and energy you spend making a website. So, ultimately, it will still help you reach our stated goal.)

Requiring that the code you write be transpiled, compiled, parsed, and evaluated before it can be used in your website is a great way to spend extra time and energy making a website (as opposed to, say, writing code as it will be run which would save you time and energy and is not our goal here).

More?

Do you have more advice on building a website that will require a lot of your time and energy? Share your recommendations with others, in case they’re looking for such advice.


Reply via: Email · Mastodon · Bluesky

Occupation and Preoccupation

Here’s Jony Ive in his Stripe interview:

What we make stands testament to who we are. What we make describes our values. It describes, beautifully succinctly, our preoccupations.

I’d never really noticed the connection between these two words: occupation and preoccupation.

What comes before occupation? Pre-occupation.

What comes before what you do for a living? What you think about. What you’re preoccupied with.

What you think about will drive you towards what you work on.

So when you’re asking yourself, “What comes next? What should I work on?”

Another way of asking that question is, “What occupies my thinking right now?”

And if what you’re occupied with doesn’t align with what you’re preoccupied with, perhaps it's time for a change.


Reply via: Email · Mastodon · Bluesky

Measurement and Numbers

Here’s Jony Ive talking to Patrick Collison about measurement and numbers:

People generally want to talk about product attributes that you can measure easily with a number…schedule, costs, speed, weight, anything where you can generally agree that six is a bigger number than two

He says he used to get mad at how often people around him focused on the numbers of the work over other attributes of the work.

But after giving it more thought, he now has a more generous interpretation of why we do this: because we want to relate to each other, understand each other, and be inclusive of one another. There are many things we can’t agree on, but it’s likely we can agree that six is bigger than two. And so in this capacity, numbers become a tool for communicating with each other, albeit a kind of least common denominator — e.g. “I don’t agree with you at all, but I can’t argue that 134 is bigger than 87.”

This is conducive to a culture where we spend all our time talking about attributes we can easily measure (because then we can easily communicate and work together) and results in a belief that the only things that matter are those which can be measured.

People will give lip service to that not being the case, e.g. “We know there are things that can’t be measured that are important.” But the reality ends up being: only that which can be assigned a number gets managed, and that which gets managed is imbued with importance because it is allotted our time, attention, and care.

This reminds me of the story of the judgement of King Solomon, an archetypal story found in cultures around the world. Here’s the story as summarized on Wikipedia:

Solomon ruled between two women who both claimed to be the mother of a child. Solomon ordered the baby be cut in half, with each woman to receive one half. The first woman accepted the compromise as fair, but the second begged Solomon to give the baby to her rival, preferring the baby to live, even without her. Solomon ordered the baby given to the second woman, as her love was selfless, as opposed to the first woman's selfish disregard for the baby's actual well-being

In an attempt to resolve the friction between two individuals, an appeal was made to numbers as an arbiter. We can’t agree on who the mother is, so let’s make it a numbers problem. Reduce the baby to a number and we can agree!

But that doesn’t work very well, does it?

I think there is a level of existence where measurement and numbers are a sound guide, where two and two make four and two halves make a whole.

But, as humans, there is another level of existence where mathematical propositions don’t translate. A baby is not a quantity. A baby is an entity. Take a whole baby and divide it up by a sword and you do not have two halves of a baby.

I am not a number. I’m an individual. Indivisible.

What does this all have to do with software? Software is for us as humans, as individuals, and because of that I believe there is an aspect of its nature where metrics can’t take you. In fact, not only will numbers not guide you, they may actually misguide you.

I think Robin Rendle articulated this well in his piece “Trust the vibes”:

[numbers] are not representative of human experience or human behavior and can’t tell you anything about beauty or harmony or how to be funny or what to do next and then how to do it.

Wisdom is knowing when to use numbers and when to use something else.


Reply via: Email · Mastodon · Bluesky

Computers Are a Feeling

Exploring diagram.website, I came across The Computer is a Feeling by Tim Hwang and Omar Rizwan:

the modern internet exerts a tyranny over our imagination. The internet and its commercial power has sculpted the computer-device. It's become the terrain of flat, uniform, common platforms and protocols, not eccentric, local, idiosyncratic ones.

Before computers were connected together, they were primarily personal. Once connected, they became primarily social. The purpose of the computer shifted to become social over personal.

The triumph of the internet has also impoverished our sense of computers as a tool for private exploration rather than public expression. The pre-network computer has no utility except as a kind of personal notebook, the post-network computer demotes this to a secondary purpose.

Smartphones are indisputably the personal computer. And yet, while being so intimately personal, they’re also the largest distribution of behavior-modification devices the world has ever seen. We all willingly carry around in our pockets a device whose content is largely designed to modify our behavior and extract our time and money.

Making “computer” mean computer-feelings and not computer-devices shifts the boundaries of what is captured by the word. It removes a great many things – smartphones, language models, “social” “media” – from the domain of the computational. It also welcomes a great many things – notebooks, papercraft, diary, kitchen – back into the domain of the computational.

I love the feeling of a personal computer, one whose purpose primarily resides in the domain of the individual and secondarily supports the social. It’s part of what I love about some of the ideas embedded in local-first, which start from the principle of owning and prioritizing what you do on your computer first and foremost, and then secondarily syncing that to other computers for the use of others.


Reply via: Email · Mastodon · Bluesky

Follow Up: An Analysis of YouTube Links From The White House’s “Wire” Website

After publishing my Analysis of Links From The White House’s “Wire” Website, Tina Nguyen, political correspondent at The Verge, reached out with some questions.

Her questions made me realize that the numbers in my analysis weren’t quite correct (I wasn’t de-duplicating links across days, so I fixed that problem).

More pointedly, she asked about the most popular domain the White House was linking to: YouTube. Specifically, were the links to YouTube 1) independent content creators, 2) the White House itself, or 3) a mix?

A great question. I didn’t know the answer but wanted to find out. A little JavaScript code in my spreadsheet and boom, I had all the YouTube links in one place.
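The gist of that filter, assuming links is an array of the scraped hrefs (the variable name is mine, not from the actual spreadsheet):

// Keep only the links pointing at YouTube
const youtubeLinks = links.filter((href) => {
  const { hostname } = new URL(href);
  return (
    hostname === "youtube.com" ||
    hostname.endsWith(".youtube.com") ||
    hostname === "youtu.be"
  );
});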

Screenshot of a table of data in a spreadsheet showing all the links to YouTube from wh[dot]gov/wire

I couldn’t really discern from the links themselves what I was looking at. A number of them were to the /live/ subpath, meaning I was looking at links to live streaming events. But most of the others were YouTube’s standard /watch?v=:id which leaves the content and channel behind the URL opaque. The only real way to know was to click through to each one.

I did a random sampling and found that most of the ones I clicked on went to The White House’s own YouTube channel. I told Tina as much, sent her the data I had, and she reported on it in an article at The Verge.

Tina’s question did get me wondering: precisely how many of those links are to the White House’s own YouTube channel vs. other content creators?

Once again, writing scripts that process data, talk to APIs, and put it all into 2-dimensional tables in a spreadsheet was super handy.

I looked at all the YouTube links, extracted the video ID, then queried the YouTube API for information about the video (like what channel it belongs to). Once I had the script working as expected for a single cell, it was easy to do the spreadsheet thing where you just “drag down” to autocomplete all the other cells with video IDs.
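Here’s a sketch of what that per-cell lookup can look like, assuming a YouTube Data API v3 key (the helper names are mine, not what’s in the actual spreadsheet):

// Pull the video ID out of /watch?v=:id and /live/:id style links
function videoIdFrom(href) {
  const url = new URL(href);
  return url.searchParams.get("v") ?? url.pathname.split("/").pop();
}

// Ask the YouTube Data API which channel a video belongs to
async function channelFor(href, apiKey) {
  const id = videoIdFrom(href);
  const res = await fetch(
    `https://www.googleapis.com/youtube/v3/videos?part=snippet&id=${id}&key=${apiKey}`
  );
  const data = await res.json();
  return data.items?.[0]?.snippet.channelTitle;
}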

Animated gif of a mouse cursor dragging down the cell cursor in a spreadsheet and data being fetched (from an API) and populated in spreadsheet cells

The result?

From May 8th to July 6th there were 78 links to YouTube from wh.gov/wire, which breaks down as follows:

  • 73 links to videos on the White House’s own YouTube channel
  • 2 links to videos on the channel “Department of Defense”
  • 1 link to a video on the channel “Pod Force One with Miranda Devine”
  • 1 link to a video on the channel “Breitbart News”
  • 1 link to a video that has since been taken down “due to a copyright claim by Sony Music Publishing” (so I’m not sure whose channel that was)

Pie chart showing the percentage distribution of data points among four sources: The White House (94.7%, shown in blue), Department of Defense (2.63%, red), Pod Force One with Miranda Devine (1.32%, green), and Breitbart News (1.32%, purple).


Reply via: Email · Mastodon · Bluesky

Do You Even Personalize, Bro?

There’s a video on YouTube from “Technology Connections” — whom I’d never heard of or watched until now — called Algorithms are breaking how we think. I learned of this video from Gedeon Maheux of The Iconfactory fame. Speaking in the context of why they made Tapestry, he said the ideas in this video would be their manifesto.

So I gave it a watch.

Generally speaking, the video asks: Does anyone care to have a self-directed experience online, or with a computer more generally?

I'm not sure how infrequently we’re actually deciding for ourselves these days [how we decide what we want to see, watch, and do on the internet]

Ironically we spend more time than ever on computing devices, but less time than ever curating our own experiences with them.

Which — again ironically — is the inverse of many things in our lives.

Generally speaking, the more time we spend with something, the more we invest in making it our own — customizing it to our own idiosyncrasies.

But how much time do you spend curating, customizing, and personalizing your digital experience? (If you’re reading this in an RSS reader, high five!)

I’m not talking about “I liked that post, or saved that video, so the algorithm is personalizing things for me”.

Do you know what to get yourself more of?

Do you know where to find it?

Do you even ask yourself these questions?

“That sounds like too much work” you might say.

And you’re right, it is work. As the guy in the video says:

I'm one of those weirdos who think the most rewarding things in life take effort

Me too.


Reply via: Email · Mastodon · Bluesky

Setting Element Ordering With HTML Rewriter Using CSS

After shipping my work transforming HTML with Netlify’s edge functions, I realized I had a little bug: the order of the icons specified in the URL didn’t match the order in which they were displayed on screen.

Why’s this happening?

I have a bunch of links in my HTML document, like this:

<icon-list>
  <a href="/1/"></a>
  <a href="/2/"></a>
  <a href="/3/"></a>
  <!-- 2000+ more -->
</icon-list>

I use html-rewriter in my edge function to strip out the HTML for icons not specified in the URL. So for a request to:

/lookup?id=1&id=2

My HTML will be transformed like so:

<icon-list>
  <!-- Parser keeps these two -->
  <a href="/1/"></a>
  <a href="/2/"></a>
  
  <!-- But removes this one -->
  <a href="/3/"></a>
</icon-list>

Resulting in less HTML over the wire to the client.

But what about the order of the IDs in the URL? What if the request is to:

/lookup?id=2&id=1

Instead of:

/lookup?id=1&id=2

In the source HTML document containing all the icons, they’re marked up in reverse chronological order. But the request for this page may specify a different order for icons in the URL. So how do I rewrite the HTML to match the URL’s ordering?

The problem is that html-rewriter doesn’t give me a fully-parsed DOM to work with. I can’t do things like “move this node to the top” or “move this node to position x”.

With html-rewriter, you only “see” each element as it streams past. Once it passes by, your chance at modifying it is gone. (It seems that’s just the way these edge function tools are designed to work; it keeps them lean and performant, and I can’t shoot myself in the foot.)

So how do I change the icon’s display order to match what’s in the URL if I can’t modify the order of the elements in the HTML?

CSS to the rescue!

Because my markup is just a bunch of <a> tags inside a custom element and I’m using CSS grid for layout, I can use the order property in CSS!

All the IDs are in the URL, and their position as parameters has meaning, so I assign their ordering to each element as it passes by html-rewriter. Here’s some pseudo code:

// Get all the IDs in the URL
const ids = url.searchParams.getAll("id");

// Select all the icons in the HTML
rewriter.on("icon-list a", {
  element: (element) => {
    // Get the ID (from the href, e.g. "/1/" -> "1")
    const id = element.getAttribute("href").replaceAll("/", "");

    // If it's in our list, set its order
    // position from the URL
    if (ids.includes(id)) {
      const order = ids.indexOf(id);
      element.setAttribute(
        "style",
        `order: ${order}`
      );
    // Otherwise, remove it
    } else {
      element.remove();
    }
  },
});

Boom! I didn’t have to change the order in the source HTML document, but I can still get the display order to match what’s in the URL.

I love shifty little workarounds like this!


Reply via: Email · Mastodon · Bluesky

An Analysis of Links From The White House’s “Wire” Website

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”.

So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?”

So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns.

I wrote some JavaScript to:

  • Fetch the HTML page at whitehouse.gov/wire
  • Parse it with cheerio
  • Select all the external links on the page
  • Return a list of links and their headline text

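A rough sketch of that script (assuming Node-flavored JavaScript with cheerio installed; the selector logic is illustrative, not the exact code):

import * as cheerio from "cheerio";

const res = await fetch("https://www.whitehouse.gov/wire/");
const $ = cheerio.load(await res.text());

// Grab every absolute link that points off-site, with its headline text
const links = $("a[href^='http']")
  .toArray()
  .map((el) => ({ href: $(el).attr("href"), text: $(el).text().trim() }))
  .filter(({ href }) => !href.includes("whitehouse.gov"));

console.table(links);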
In a few minutes I had a quick analysis of what kind of links were on the page:

Screenshot of the Quadratic spreadsheet, with rows and columns of data on the left, and on the right a code editor containing the code which retrieved and parsed the data on the left.

This immediately sparked my curiosity to know more about the meta information around the links, like:

  • If you grouped all the links together, which sites get linked to the most?
  • What kind of interesting data could you pull from the headlines they’re writing, like the most frequently used words?
  • What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building.

Quadratic today doesn’t yet have the ability for your spreadsheet to run in the background on a schedule and append data. So I had to look elsewhere for a little extra functionality.

My mind went to val.town which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API.

After a quick read of their docs, I figured out how to write a little script that’ll run once a day, scrape the site, and save the resulting HTML page in their key/value storage.

Screenshot of 9 lines of code from val.town that fetches whitehouse.gov/wire, extracts the text, and stores it in blob storage.
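As a ballpark reconstruction of those few lines (not the exact code; the blob key naming is my assumption):

import { blob } from "https://esm.town/v/std/blob";

// Runs on val.town's cron schedule: fetch the page and store
// today's HTML snapshot in blob storage, keyed by date
export default async function () {
  const res = await fetch("https://www.whitehouse.gov/wire/");
  const html = await res.text();
  const key = `wire-${new Date().toISOString().slice(0, 10)}`;
  await blob.set(key, html);
}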

From there, I was back to Quadratic writing code to talk to val.town’s API and retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

  • Fine-tune how I select all the editorial links on the page from the source HTML (I didn’t want, for example, to include external links to the White House’s social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
  • Parse the links and pull out the top-level domains so I could group links by domain occurrence.
  • Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real-time which made for a super fast feedback loop!
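For the domain grouping specifically, the idea looks something like this (assuming links holds the scraped hrefs from earlier):

import { getDomain } from "tldts";

// Tally links by their registrable domain, e.g. "youtube.com"
const counts = {};
for (const { href } of links) {
  const domain = getDomain(href);
  if (domain) counts[domain] = (counts[domain] ?? 0) + 1;
}

// Sort descending by occurrence and peek at the top ten
const top = Object.entries(counts).sort(([, a], [, b]) => b - a);
console.table(top.slice(0, 10));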

Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating!

It’s been about a month and a half since I started this and I have about fifty days’ worth of data.

The results?

Here’s the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

  1. youtube.com (133)
  2. foxnews.com (72)
  3. thepostmillennial.com (67)
  4. foxbusiness.com (66)
  5. breitbart.com (64)
  6. x.com (63)
  7. reuters.com (51)
  8. truthsocial.com (48)
  9. nypost.com (47)
  10. dailywire.com (36)

A pie chart visualizing the top ten links (by domain) from the White House Wire

From the links, here’s a word cloud of the most commonly recurring words in the link headlines:

  1. “trump” (343)
  2. “president” (145)
  3. “us” (134)
  4. “big” (131)
  5. “bill” (127)
  6. “beautiful” (113)
  7. “trumps” (92)
  8. “one” (72)
  9. “million” (57)
  10. “house” (56)

Screenshot of a word cloud with “trump” being the largest word, followed by words like “bill”, “beautiful” and “president”.

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

Screenshot of a spreadsheet with three different charts and tables of data.

If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience.

Update: 2025-07-03

After publishing, I realized that I wasn’t de-duplicating links. Because this works by taking snapshots once a day of the website’s HTML, if the same link stayed up for multiple days, it was getting counted twice.

So I tweaked my analysis to de-duplicate links because I want a picture of all the links shared over time. It didn’t really change the proportions of which sites were shared most frequently, just lowered their occurrence because links now weren’t counted twice.
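The fix itself is a plain de-duplication pass. A minimal sketch, assuming each daily snapshot yields a list of hrefs (snapshots is a hypothetical shape, not the spreadsheet’s actual data model):

// Count each unique link once across all daily snapshots
const seen = new Set();
for (const day of snapshots) {
  for (const href of day.links) {
    seen.add(href);
  }
}
console.log(`unique links: ${seen.size}`);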

Given that, here’s an update of the “top 10 links by domain” from May 8th to July 3rd.

  1. youtube.com (73)
  2. foxnews.com (36)
  3. x.com (31)
  4. breitbart.com (29)
  5. nypost.com (28)
  6. thepostmillennial.com (26)
  7. foxbusiness.com (22)
  8. truthsocial.com (20)
  9. washingtontimes.com (16)
  10. dailywire.com (15)

A pie chart visualizing the top ten links (by domain) from the White House Wire


Reply via: Email · Mastodon · Bluesky

Related posts linking here: (2025) Follow Up: An Analysis of YouTube Links From The White House’s “Wire” Website