Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ♥ HTML

Recent posts

Building Websites With LLMS

View

And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S).

I recently shipped some updates to my blog. Through the design/development process, I had some insights which made me question my knee-jerk reaction to building pieces of a page as JS-powered interactions on top of the existing document.

With cross-document view transitions getting broader and broader support, I’m realizing that building in-page, progressively-enhanced interactions is more work than simply building two HTML pages and linking them.

I’m calling this approach “lots of little HTML pages” in my head. As I find myself trying to build progressively-enhanced features with JavaScript — like a fly-out navigation menu, or an on-page search, or filtering content — I stop and ask myself: “Can I build this as a separate HTML page triggered by a link, rather than JavaScript-injected content built from a button?”

I kinda love the results. I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work.

Allow me two quick examples.

Example 1: Filtering

Working on my homepage, I found myself wanting a list of posts filtered by some kind of criteria, like:

  • The most recent posts
  • The ones being trafficked the most
  • The ones that’ve had lots of Hacker News traffic in the past

My first impulse was to have a list of posts you can filter with JavaScript.

But the more I built it, the more complicated it got. Each “list” of posts needed a slightly different set of data. And each one had a different sort order. What I thought was going to be “stick a bunch of <li>s in the DOM, and show/hide some based on the current filter” turned into lots of data-x attributes, per-list sorting logic, etc. I realized quickly this wasn’t a trivial, progressively-enhanced feature. I didn’t want to write a bunch of client-side JavaScript for what would take me seconds to write on “the server” (my static site generator).

Then I thought: Why don’t I just do this with my static site generator? Each filter can be its own, separate HTML page, and with CSS view transitions I’ll get a nice transition effect for free!

Minutes later I had it all working (mostly — I had to learn a few small things about aspect ratio in transitions), plus I had fancy transitions between “tabs” for free!

Animated gif showing a link that goes to a new document and the list re-shuffles and re-sorts its contents in an animated fashion.
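A hypothetical sketch of the idea (the file names and data fields here are mine, not the actual site’s): each “filter” is just a different sort order, rendered out to its own static HTML page by the site generator.

```javascript
// Hypothetical post data; the real site's data shape will differ
const posts = [
  { title: "A", date: "2025-01-02", views: 10, hnPoints: 0 },
  { title: "B", date: "2025-01-01", views: 50, hnPoints: 120 },
  { title: "C", date: "2024-12-20", views: 30, hnPoints: 80 },
];

// One output page per filter, each with its own sort order
const filters = {
  "recent.html": (a, b) => b.date.localeCompare(a.date),
  "popular.html": (a, b) => b.views - a.views,
  "hacker-news.html": (a, b) => b.hnPoints - a.hnPoints,
};

// Render a sorted list of posts as plain HTML
function renderPage(sorted) {
  return `<ul>\n${sorted.map((p) => `<li>${p.title}</li>`).join("\n")}\n</ul>`;
}

// A real generator would write these strings out to files
for (const [filename, compare] of Object.entries(filters)) {
  console.log(filename, renderPage([...posts].sort(compare)));
}
```

Each page links to the others, and the cross-document transition handles the “re-shuffle” animation.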

This really feels like a game-changer for simple sites. If you can keep your site simple, it’s easier to build traditional, JavaScript-powered on-page interactions as small, linked HTML pages.

Example 2: Navigation

This got me thinking: maybe I should do the same thing for my navigation?

Usually I think “Ok, so I’ll have a hamburger icon with a bunch of navigational elements in it, and when it’s clicked you gotta reveal it, etc.” And I thought, “What if it’s just a new HTML page?”[1]

Because I’m using a static site generator, it’s really easy to create a new HTML page. A few minutes later and I had it. No client-side JS required. You navigate to the “Menu” and you get a page of options, with an “x” to simulate closing the menu and going back to where you were.

Animated gif of a menu opening on a website (but it’s an entirely new HTML page).

I liked it so much for my navigation, I did the same thing with search. Clicking the icon doesn’t use JavaScript to inject new markup and animate things on screen. Nope. It’s just a link to a new page with CSS supporting a cross-document view transition.
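As a sketch, the CSS side of a cross-document view transition can be this small (the selector and transition name here are mine, for illustration):

```css
/* In the stylesheet shared by both pages: opt in to
   cross-document view transitions on same-origin navigations */
@view-transition {
  navigation: auto;
}

/* Elements that share a view-transition-name morph
   from the old page to the new one */
.site-nav {
  view-transition-name: site-nav;
}
```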

Granted, there are some trade-offs to this approach. But on the whole, I really like it. It was so easy to build and I know it’s going to be incredibly easy to maintain!

I think this is a good example of leveraging the grain of the web. It’s really easy to build a simple website when you can shift your perspective to viewing on-page interactivity as simple HTML page navigations powered by cross document CSS transitions (rather than doing all of that as client-side JS).


  1. Jason Bradberry has a neat article that’s tangential to this idea over at Piccalil. It’s more from the design standpoint, but functionally it could work pretty much the same as this: your “menu” or “navigation” is its own page.

Reply via: Email · Mastodon · Bluesky

AX, DX, UX

View

Matt Biilmann, CEO of Netlify, published an interesting piece called “Introducing AX: Why Agent Experience Matters” where he argues for the coming importance of a new “X” (experience) in software: the agent experience, meaning the experience your users’ AI agents will have as automated users of products/platforms.

Too many companies are focusing on adding shallow AI features all over their products or building yet another AI agent. The real breakthrough will be thinking about how your customers’ favorite agents can help them derive more value from your product. This requires thinking deeply about agents as a persona your team is building and developing for.

In this future, software that can’t be used by an automated agent will feel less powerful and more burdensome to deal with, whereas software that AI agents can use on your behalf will become incredibly capable and efficient. So you have to start thinking about these new “users” of your product:

Is it simple for an Agent to get access to operating a platform on behalf of a user? Are there clean, well described APIs that agents can operate? Are there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs? Addressing the distinct needs of agents through better AX will improve their usefulness for the benefit of the human user.

In summary:

We need to start focusing on AX or “agent experience” — the holistic experience AI agents will have as the user of a product or platform.

The idea is: teams focus more time and attention on “AX” (agent experience) so that human end-users can bring their favorite agents to our platforms/products and increase productivity.

But I’m afraid the reality will be that the limited time and resources teams spend today building stuff for humans will instead get spent building stuff for robots, and as a byproduct everything human-centric about software will become increasingly subpar as we rationalize to ourselves, “Software doesn’t need to be good for humans because humans don’t use software anymore. Their robots do!” In that world, anybody complaining about bad UX will be told to shift to using the AX because “that’s where we spent all our time and effort to make your experience great”.

Prior Art: DX

DX in theory: make the DX for people who are building UX really great and they’ll be able to deliver more value faster.

DX in practice: DX requires trade-offs, and a spotlight on DX concerns means UX concerns take a back seat. Ultimately, some DX concerns end up trumping UX concerns because “we’ll ship more value faster”, but the result is an overall degradation of UX because DX was prioritized first.

Ultimately, time and resources are constraining factors and trade-offs have to be made somewhere, so they’re made for and on behalf of the people who make the software, because they’re the ones who feel the pain directly. User pain is only indirect.

Future Art: AX

AX in theory: build great stuff for agents (AX) so people can use stuff more efficiently by bringing their own tools.

AX in practice: time and resources being finite, AX trumps UX with the rationale being: “It’s ok if the human bit (UX) is a bit sloppy and obtuse because we’ll make the robot bit (AX) so good people won’t ever care about how poor the UX is because they’ll never use it!”

But I think we know how that plays out. A few companies may do that well, but most software will become even more confusing and obtuse to humans because most thought and care is poured into the robot experience of the product.

The thinking will be: “No need to pour extra care and thought into the inefficient experience some humans might have. Better to make the agent experience really great, so humans won’t want to interface with our thing manually.”

In other words: we don’t have the time or resources to worry about the manual human experience because we’ve got all these robots to worry about!

It appears there’s no need to fear AI becoming sentient and replacing us humans. We’ll phase ourselves out long before the robots ever become self-aware.

All that said, I’m not against the idea of “AX” but I do think the North Star of any “X” should remain centered on the (human) end-user.

UX over AX over DX.


Reply via: Email · Mastodon · Bluesky

Can You Get Better Doing a Bad Job?

View

Rick Rubin has an interview with Woody Harrelson on his podcast Tetragrammaton. Right at the beginning, Woody talks about his experience acting and how he’s had roles that didn’t turn out very well. He says sometimes he comes away from those experiences feeling dirty, like “I never connected to that, it never resonated, and now I feel like I sold myself...Why did I do that?!”

Then Rick asks him: even in those cases, do you feel like you got better at your craft because you did your job? Woody’s response:

I think when you do your job badly you never really get better at your craft.

Seems relevant to making websites.

I’ve built websites on technology stacks I knew didn’t feel fit for their context, and Woody’s experience rings true. You just don’t feel right, like there’s a little voice saying, “You knew that wasn’t going to turn out very good. Why did you do that??”

I don’t know if I’d go so far as to say I didn’t get better because of it. Experience is a hard teacher. Perhaps, from a technical standpoint, my skillset didn’t get any better. But from an experiential standpoint, my judgement got better. I learned to avoid (or try to re-structure) work that’s being carried out in a way that doesn’t align with its own purpose and essence.

Granted, that kind of alignment is difficult. If it makes you feel any better, even Woody admits this is not an easy thing to do:

I would think after all this time, surely I’m not going to be doing stuff I’m not proud of. Or be a part of something I’m not proud of. But damn...it still happens.


Reply via: Email · Mastodon · Bluesky

Limitations vs. Capabilities

View

Andy Jiang over on the Deno blog writes “If you're not using npm specifiers, you're doing it wrong”:

During the early days of Deno, we recommended importing npm packages via HTTP with transpile services such as esm.sh and unpkg.com. However, there are limitations to importing npm packages this way, such as lack of install hooks, duplicate dependency resolution issues, loading data files, etc.

I know, I know, here I go harping on http imports again, but this article reinforces to me that one man’s “limitations” are another man’s “features”.

For me, the limitations (i.e. constraints) of HTTP imports in Deno were a feature. I loved it precisely because it encouraged me to do something different than what node/npm encouraged.

It encouraged me to 1) do less, and 2) be more web-like. Trying to do more with less is a great way to foster creativity. Plus, doing less means you have less to worry about.
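For context, the two styles look roughly like this. This is a hedged illustration; the specifier versions are mine, not from the article:

```ts
// HTTP import: the dependency is just a URL, like a <script src> on the web
import { parse } from "https://deno.land/std@0.200.0/flags/mod.ts";

// npm specifier: opts back into npm's resolution model (and its hooks)
import express from "npm:express@4";
```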

Take, for example, install hooks (since they’re mentioned in the article). Install hooks are a security vector. Use them and you’re trading ease for additional security concerns. Don’t use them and you have zero additional security concerns. (In the vein of being webby: browsers don’t offer install hooks on <script> tags.)

I get it, though. It’s hard to advocate for restraint and simplicity in the face of gaining adoption within the web-industrial-complex. Giving people what they want — what they’re used to — is easier than teaching them to change their ways.

Note to self: when you choose to use tools with practices, patterns, and recommendations designed for industrial-level use, you’re gonna get industrial-level side effects, industrial-level problems, and industrial-level complexity as a byproduct.

As much as it’s grown, the web still has its grassroots in being a programming platform accessible to regular people, because making a website was meant to be for everyone. I would love a JavaScript runtime aligned with that ethos. Maybe with initiatives like Project Fugu that runtime will actually be the browser.


Reply via: Email · Mastodon · Bluesky

Sanding UI, pt. II

View

Let’s say you make a UI to gather some user feedback. Nothing complicated. Just a thumbs up/down widget. It starts out neutral, but when the user clicks up or down, you highlight what they clicked and de-emphasize/disable the other (so it requires an explicit toggle to change your mind).

A set of thumbs-up and thumbs-down icons in various states, with some in grayscale and others highlighted in green or red.

So you implement it. Ship it. Cool. Works, right?

Well, per my previous article about “sanding” a user interface by clicking around a lot, did you click on it a lot?

If you do, you’ll find that doing so selects the thumbs up/down icon as if it were text:

Animated gif of a thumbs up icon being clicked repeatedly and gaining a text selection UI native to the OS.

So now you have this weird text selection that’s a bit of an eyesore. It’s not a meaningful selection because there’s no text; it’s an SVG. The selection UI that appears is misleading and distracting.

A thumbs up icon that was clicked repeatedly and has a text selection UI native to the OS overlaid on it.

One possible fix: leverage the user-select: none property in CSS, which makes the element unselectable. When the user clicks multiple times to toggle, no text selection UI will appear.
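In code, that can be as small as this (the class name is hypothetical):

```css
/* Hypothetical markup: <button class="vote"><svg>…</svg></button> */
.vote {
  user-select: none; /* repeated clicks no longer start a text selection */
}
```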

A thumbs up icon with a cursor over it and no text selection UI.

Cool. Great!

Another reason to click around a lot: you can ensure any rough edges are smoothed out, and any “UI splinters” are ones you get (and fix) in place of your users.


Reply via: Email · Mastodon · Bluesky

CSS Space Toggles

View

I’ve been working on a transition to using the light-dark() function in CSS.

What this boils down to is, rather than CSS that looks like this:

:root {
  color-scheme: light;
  --text: #000;
}

@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
    --text: #fff;
  }
}

I now have this:

:root {
  color-scheme: light;
  --text: light-dark(#000, #fff);
}

@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}

That probably doesn’t look that interesting. That’s what I thought when I first learned about light-dark() — “Oh hey, that’s cool, but it’s just different syntax. Six of one, half a dozen of the other kind of thing.”

But it does unlock some interesting ways of handling theming, which I will have to cover in another post. Suffice it to say, I think I’m starting to drink the light-dark() Kool-Aid.

Anyhow, using the above pattern, I want to compose CSS variables to make a light/dark theme based on a configurable hue. Something like this:

:root {
  color-scheme: light;
  
  /* configurable via JS */
  --accent-hue: 56; 
  
  /* which then cascades to other derivations */
  --accent: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}

@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}

The problem is that the --accent-hue value doesn’t quite look right in dark mode. It needs more contrast. I need a slightly different hue for dark mode. So my thought is: I’ll put that value in a light-dark() function.

:root {
  --accent-hue: light-dark(56, 47);
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}

Unfortunately, that doesn’t work. You can’t put arbitrary values in light-dark(). It only accepts color values.

I asked what you could do instead and Roma Komarov told me about CSS “space toggles”. I’d never heard about these, so I looked them up.

First I found Chris Coyier’s article which made me feel good because even Chris admits he didn’t fully understand them.

Then Christopher Kirk-Nielsen linked me to his article which helped me understand this idea of “space toggles” even more.

I ended up following the pattern Christopher mentions in his article and it works like a charm in my implementation! The gist of the code works like this:

  1. When the user hasn’t specified a theme, default to “system” which is light by default, or dark if they’re on a device that supports prefers-color-scheme.
  2. When a user explicitly sets the color theme, set an attribute on the root element to denote that.

/* Default preferences when "unset" or "system" */
:root {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}
@media (prefers-color-scheme: dark) {
  :root {
    --LIGHT: ;
    --DARK: initial;
    color-scheme: dark;
  }
}

/* Handle explicit user overrides */
:root[data-theme-appearance="light"] {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}
:root[data-theme-appearance="dark"] {
  --LIGHT: ;
  --DARK: initial;
  color-scheme: dark;
}

/* Now set my variables */
:root {
  /* Set the “space toggles” */
  --accent-hue: var(--LIGHT, 56) var(--DARK, 47);
  
  /* Then use them */
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 90%),
    hsl(var(--accent-hue) 50% 10%)
  );
}

So what is the value of --accent-hue? That line sort of reads like this:

  • If --LIGHT has a value, return 56
  • else if --DARK has a value, return 47

And it works like a charm! Now I can set arbitrary values for things like accent color hue, saturation, and lightness, then leverage them elsewhere. And when the color scheme or accent color changes, all these values recalculate and cascade through the entire website — cool!

A Note on Minification

A quick tip: if you’re using this space toggle trick, beware of how your CSS gets minified! Stuff like this:

selector {
  --ON: ;
  --OFF: initial;
}

Could get minified to:

selector{--OFF:initial}

And this “space toggles trick” won’t work at all.

Trust me, I learned from experience.


Reply via: Email · Mastodon · Bluesky

Aspect Ratio Changes With CSS View Transitions

View

So here I am playing with CSS view transitions (again).

I’ve got Dave Rupert’s post open in one tab, which serves as my recurring reference for the question, “How do you get these things to work again?”

I’ve followed Dave’s instructions for transitioning the page generally and am now working on individual pieces of UI specifically.

I feel like I’m 98% of the way there; I’ve just hit a small bug.

It’s small. Many people might not even notice it. But I do and it’s bugging me.

When I transition from one page to the next, I expect this “active page” outline to transition nicely from the old page to the new one. But it doesn’t. Not quite.

Animated gif of a CSS page transition where the tab outline doesn’t grow proportionally but it happens really quickly so you barely see it.

Did you notice it? It’s subtle and fast, but it’s there. I have to slow my ::view-transition-old() animation timing waaaaay down to catch it.

Animated gif of a CSS page transition where the tab outline doesn’t grow proportionally but it happens really slowly so you can definitely see it.

The outline grows proportionally in width but not in height as it transitions from one element to the next.

I kill myself trying to figure out what this bug is.

Dave mentions in his post how he had to use fit-content to fix some issues with container changes between pages. I don’t fully understand what he’s getting at, but I think maybe that’s where my issue is? I try sticking fit-content on different things but none of it works.

I ask AI and it’s totally worthless, synthesizing disparate topics about CSS into an answer that seems right on the surface but is totally wrong.

So I sit and think about it.

What’s happening almost looks like some kind of screwy side effect of a transform: scale() operation. Perhaps it’s something about how default user agent styles for these things are animating the before/after state? No, that can’t be it…

Honestly, I have no idea. I don’t know much about CSS view transitions, but I know enough to know that I don’t know enough to even formulate the right set of keywords for a decent question. I feel stuck.

I consider reaching out on the socials for help, but at the last minute I somehow stumble on this perfectly wonderful blog post from Jake Archibald: “View transitions: Handling aspect ratio changes” and he’s got a one-line fix in my hands in seconds!

The article is beautiful. It not only gives me an answer, but it provides really wonderful visuals that help describe why the problem I’m seeing is a problem in the first place. It really helps fill out my understanding of how this feature works. I absolutely love finding writing like this on the web.

So now my problem is fixed — no more weirdness!
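For the curious, here’s my recollection of the gist of Jake’s fix. The transition name below is made up for illustration, so treat this as a sketch rather than his exact code:

```css
::view-transition-old(active-tab),
::view-transition-new(active-tab) {
  /* By default the snapshots keep their own aspect ratio;
     stretching the height lets width and height animate together */
  height: 100%;
}
```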

Animated gif of CSS multi-page transitions animating active tabs across pages of a website

If you’re playing with CSS view transitions these days, Jake’s article is a must read to help shape your understanding of how the feature works. Go give it a read.


Reply via: Email · Mastodon · Bluesky

Related posts linking here: (2025) Building Websites With LLMS

Search Results Without JavaScript

View

I’m currently looking to add a search feature to my blog.

It needs to be a client-side approach, which means I was planning on using my favorite progressive-enhancement technique for client-side-only search: you point a search form at Google, scope the results to your site, then use JavaScript to intercept the form submission and customize the experience on your site to your heart’s content.

<form action="https://www.google.com/search">
  <input type="text" name="q" placeholder="Search" />
  <input type="hidden" name="as_sitesearch" value="blog.jim-nielsen.com" />
  <button type="submit">Search</button>
</form>
<script>
  document.querySelector("form").addEventListener("submit", (e) => {
    e.preventDefault();
    // Do my client-side search stuff here
    // and stay on the current page
  });
</script>

However, then I remembered that Google Search no longer works without JavaScript, which means this trick is no longer a trick.[1]

But have no fear, other search engines to the rescue!

DuckDuckGo, for example, supports this trick. Tweak some of the HTML from the Google example and it’ll work:

<form action="https://duckduckgo.com">
  <input type="text" name="q" placeholder="Search" />
  <input type="hidden" name="sites" value="blog.jim-nielsen.com" />
  <button type="submit">Search</button>
</form>
<script>
  document.querySelector("form").addEventListener("submit", (e) => {
    e.preventDefault();
    // Do my client-side search stuff here
    // and stay on the current page
  });
</script>

Yahoo also supports this trick, but not Bing. You can point people at Bing, but you can’t scope a query to your site only with an HTML form submission alone. Why? Because you need two search params: 1) a “query” param representing what the user typed into the search box, and 2) a “site search” param to denote which site you want to limit your results to (otherwise it’ll search the whole web).

From a UI perspective, if a search box is on your site, user intent is to search the content on your site. You don’t want to require people to type “my keywords site:blog.jim-nielsen.com” when they’re using a search box on your site — that’s just silly!

That’s why you need a second search parameter you can set yourself (a hidden input). You can’t concatenate something onto the end of a user’s HTML form submission. (What they type in the input box is what gets sent to the search engine as the ?q=... param.) To add to the q param, you would need JavaScript — but then that defeats the whole purpose of this exercise in the first place!

Anyhow, here are the search parameters I found useful for search engines that will support this trick:

  • DuckDuckGo:
    • Query: q
    • Site search param: sites
  • Yahoo:
    • Query: p
    • Site search param: vs
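
Plugging the Yahoo params into the same form pattern would look something like this (I’m assuming the standard search.yahoo.com endpoint):

```html
<form action="https://search.yahoo.com/search">
  <input type="text" name="p" placeholder="Search" />
  <input type="hidden" name="vs" value="blog.jim-nielsen.com" />
  <button type="submit">Search</button>
</form>
```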

I made myself a little test page for trying all these things. Check it out (and disable JS) if you want to try yourself!


  1. Not only that, but the as_sitesearch search param doesn’t seem to work anymore either. I can’t find any good documentation on what happened to as_sitesearch, but it seems like you’re supposed to use the “programmable search” now instead? Honestly I don’t know. And I don’t care enough to find out.

Reply via: Email · Mastodon · Bluesky

The Art of Making Websites

View

Hidde de Vries gave a great talk titled “Creativity cannot be computed” (you can check out the slides or watch the video).

In his slides he has lots of bullet points that attempt to define what art is, and then in the talk he spends time covering each one. Here’s a sampling of the bullet points:

  • Art isn't always easy to recognize
  • Art has critics
  • Art is fuzzy
  • Art can make us think
  • Art can make the artist think
  • Art can make the audience think
  • Art can show us a mirror to reflect
  • Art can move us
  • Art can take a stance
  • Art can be used to show solidarity
  • Art can help us capture what it's like to be another person

I love all his bullet points. In fact, they got me thinking about websites.

I think you could substitute “website” for “art” in many of his slides. For example:

  • Art is repeated
  • Art may contain intentions
  • Art can show us futures we should not want
  • Art doesn’t have to fit in
    • You can make any kind of website. It gives you agency to respond to the world the way you want, not just by “liking” something on social media.
    • Me personally, I’ve made little websites meant to convey my opinion on social share imagery or reinforce the opinion I share with others on the danger of normalizing breakage on the web.
    • Each of those could’ve been me merely “liking” someone else’s opinion. Or I could’ve written a blog post. Or, as in those cases, I instead made a website.
  • Art can insult the audience
    • It doesn’t have to make you happy. Its purpose can be to offend you, or make you outraged and provoke a response. It can be a mother fucking website.

Of course, as Hidde points out, a website doesn’t have to be all of these. It also doesn’t have to be any of these.

Art — and a website — is as much about the artist and the audience as it is about the artifact. It’s a reflection of the person/people making it. Their intentions. Their purpose.

How’d you make it? Why’d you make it? When’d you make it? Each of these threads runs through your art (website).

So when AI lets you make a website with the click of a button, it’s automating away a lot of the fun art stuff that goes into a website. The part where you have to wrestle with research, with your own intentions and motivations, with defining purpose, with (re)evaluating your world view.

Ultimately, a website isn’t just what you ship. It’s about who you are — and who you become — on the way to shipping.

So go explore who you are. Plumb the bottomless depths of self. Make art, a.k.a. make a website.


Reply via: Email · Mastodon · Bluesky

Software Pliability

View

Quoting myself from former days on Twitter:

Businesses have a mental model of what they do.

Businesses build software to help them do it—a concrete manifestation of their mental model.

A gap always exists between these two.

What makes a great software business is their ability to keep that gap very small.

I think this holds up. And I still think about this idea (hence this post).

Software is an implementation of human understanding — people need X, so we made Y.

But people change. Businesses change. So software must also change.

One of your greatest strengths will be your ability to adapt and evolve your understanding of people’s needs and implement it in your software.

In a sense, technical debt is the other side of this coin of change: an inability to keep up with your own metamorphosis and understanding.

In a way, you could analogize this to the conundrum of rocket science: you need fuel to get to space, but the more fuel you add, the more weight you add, and the more weight you add, the more fuel you need. Ad nauseam.

It’s akin to making software.

You want to make great software for people’s needs today. It takes people, processes, and tools to make software, but the more people, processes, and tools you add to the machine of making software, the less agile you become. So to gain velocity you add more people, processes, and tools, which…you get the idea.

Being able to build and maintain pliable software that can change and evolve at the same speed as your mental model is a superpower. Quality in code means the flexibility to change.


Reply via: Email · Mastodon · Bluesky