Jim Nielsen’s Blog

You found my HTML feed — I also have an XML feed and a JSON feed.


Subscribe to my blog by copy-pasting this URL into your RSS reader.

(Learn more about RSS and subscribing to content on the web at aboutfeeds.)

Recent posts

Text Prompts Circumscribe The Surface Area of Possible Solutions


I was reading Chase McCoy’s notes about Figma’s move into the AI space and this one line stuck out to me (emphasis mine):

Generating UI designs from scratch, based on a text prompt

This reminded me of my note from a Wall Street Journal interview with Jony Ive where he talks about problem solving. He notes that when you set out to solve a problem you are open to a flood of ideas because the only clear thing is the problem itself. If all you can say is “I’m going to make this person’s life better” then the possibilities for doing that are almost endless.

But once you begin talking about solutions, you begin to drastically narrow down your possibilities. Here’s Ive:

Language is so powerful. If [I say] I’m going to design a chair, think how dangerous that is. Because you’ve just said chair, you’ve just said no to a thousand ideas.

In short: language is a design tool.

It reminds me of a project I worked on a decade ago where we named a solution early on, then later hit a wall only to realize that our initial name was our stumbling block:

Our innocent naming choice had not been innocuous. It held subtle and misleading connotations which had led us down a road of wrong assumptions where we kept trying to fit a square peg in a round hole.

My takeaway from the project?

Naming is important and should be revisited as you iterate. The way you name something, even in the initial stages when a concept or idea is fuzzy, has a vital bearing on the direction of the project—whether you’re conscious of it or not.

As human-computer interface design barrels toward generation from a text prompt, this idea seems more important than ever.

When you use language to describe for an LLM the solution you want, you circumscribe your possible solution area. As Ive says, if you say you’re making a chair, you’ve said no to a thousand other ideas. If you ask AI for “a card display for a song”, you just asked for one thing which means you said no to a thousand others.

Language is a powerful design tool.


Creating Some Noise on Behalf of Silence


How do you write about the value of silence?

It’s kind of absurd when you think about it. Do you use words to extol the value of something whose essence is the very absence of words?

It’s like making a painting of the invisible. Do you use visible means to depict something that exists outside of the visible?

Nonetheless, here I am with this blog post.

Via a recommendation from my wife, I recently finished reading “The Stranger in the Woods: The Extraordinary Story of the Last True Hermit” by Michael Finkel. It’s about Chris Knight, a man who chose to disappear into the woods in Maine and live alone with no human contact for almost three decades.

Reading the book, you realize, “Damn, this guy led a life that was the very antithesis of our world of hyper-stimulation.”

When the author asked him to describe his experience of solitary quietude, the best Knight could do was declare that words failed him. “Silence does not translate to words,” he said.

As the author points out, Knight’s observations are in line with other writings praising the value of silence. Emerson said, “He that thinks most, will say least.” The Tao Te Ching states, “Those who know do not tell; those who tell do not know.”

Anyhow, it’s a good, short read. Now I’m left with the impression that perhaps we could all use a little more silence…[as I generate some more noise in the world with this blog post to say that]


All About That Button, ’Bout That Button


In modern SPAs it’s common to immediately escape baked-in browser behaviors. For example, using <button> often looks like this:

<input type="text" name="q" />
<button
  onClick={(e) => {
    // Stop the baked-in behavior
    e.preventDefault();

    // Do something with the input's value
  }}
>
  Search
</button>

But a framework like Remix encourages writing mutations as declarative HTML that works without — or, perhaps better stated, before — JavaScript, using semantic elements like <form> and <button type="submit">.

<form action="/search">
  <input type="text" name="q" />
  <button type="submit">Search</button>
</form>

From this starting point of HTML — which functions before JavaScript loads & executes — you can then begin to progressively enhance your <form> with JavaScript that intercepts default browser behavior (e.g. <form onSubmit={...}>) and enhances the experience however you prefer.

As I’ve worked more closely with forms and buttons, I’ve learned a few things.

For example, did you know you can submit a form with a button that lives outside of the form it submits? Use the form attribute:

<form id="my-search-form" action="/search">
  <input type="text" name="q" />
</form>

<!-- Somewhere else in the DOM -->
<button type="submit" form="my-search-form">Search</button>

Or, when a form submits you can open the result in a new tab (you can stick target on the <form> itself too and it’ll do the same thing):

<form action="https://google.com/">
  <input type="text" name="q" />
  <input type="hidden" name="site" value="my-blog.com" />
  <button type="submit" formtarget="_blank">Search</button>
</form>

That’s a neat progressive enhancement trick because it allows the user to input a query right there on your website and then, if JavaScript is enabled/working, you e.preventDefault() and take over the interaction there on the page. But if JS is disabled or fails to load, the interaction still works and submitting the form opens a new tab on the user’s device with results for their query.
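To sketch what that takeover might look like (a minimal illustration, not Remix’s API — `buildSubmitUrl` and its inputs are hypothetical): the enhanced handler can rebuild the same GET URL the browser would have navigated to, then fetch it in the background instead of opening a tab.

```javascript
// Build the GET URL a browser would navigate to on form submission.
// `action` mirrors the form's action attribute; `fields` stands in for
// the form's name/value pairs (in the browser you'd read them with
// `new FormData(form)`).
function buildSubmitUrl(action, fields) {
  const url = new URL(action);
  for (const [name, value] of Object.entries(fields)) {
    url.searchParams.set(name, value);
  }
  return url.toString();
}

// Inside an onSubmit handler you'd call e.preventDefault() first, then
// fetch this URL (or render results in place) instead of opening a tab:
const href = buildSubmitUrl("https://google.com/", {
  q: "progressive enhancement",
  site: "my-blog.com",
});
// href === "https://google.com/?q=progressive+enhancement&site=my-blog.com"
```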

There’s a bunch of other button attributes for overriding parent form behaviors, such as formmethod, formenctype, formaction, and formnovalidate.

If you’ve worked in a Remix app where you’re trying to build user interactions that work both with and without (or before and after) JavaScript, you’ve likely encountered many of these. They are very useful mechanisms.

“But why”, you might ask, as an example, “would you want to have two buttons on a form, one that traditionally submits the form with validation and one that uses formnovalidate to submit the form and bypass validation?”

I could go into detail describing one such use case in a recent codebase, but it will suffice to quote the inimitable Chris Coyier, who had a similar issue years ago:

When you submit [<form action="/submit">], it’s going to go to the URL /submit. Say you need another submit button that submits to a different URL. It doesn’t matter why. There is always a reason for things. The web is a big place

A big place indeed.


Digital Trees


Trees have many functions:

  • they provide shade,
  • they purify air,
  • they store carbon,
  • they grow fruit,
  • and they’re aesthetically pleasing.

What’s intriguing to me about trees is their return on investment (ROI).

It takes years, even decades, to grow a tree to the point where you feel like you get to reap its benefits.

Because of this, many trees end up being cultivated more for others than for ourselves. They can be a living embodiment of giving over extracting.

With the web going the way it is — what with AI and its extractive penchant, poisoning the well from which it sprang — it makes me wonder: what are the “trees” of the web? Undoubtedly many (metaphorical) trees on the web were planted by others but we enjoy their fruits.

For me personally, one example is the free and open blogs of folks whose advice and education have gifted me the know-how necessary to be employed as an interdisciplinary website maker.

Which makes me wonder: what trees am I planting? Trees I will gain little from in my lifetime, but others may revel in their fruits far into the future?

Pay it forward. Plant a digital tree.


Cool URIs Don’t Change — But Humans Do


Here are two ideas at odds with each other:

  1. You should have human-friendly URIs
  2. Cool URIs don’t change

If a slug is going to be human-friendly, i.e. human-readable, then it’s going to contain information that is subject to change, because humans make errors.

If “to err is human” then our errors will be forever cemented into our URIs at publish time.

For example, if I write:

https://example.com/blog/my-erroneous-idea/
But later realize I was wrong, I can change the content at that URI but am forever stuck with the erroneous idea expressed in my slug (if my URI is to remain cool).

Whereas if I’d had a non-human-readable URI like this:

https://example.com/blog/19380/
Then I can hide from my errors by merely updating the content at that URI anytime I want.

How do you get around this problem?

In my post about great URI designs I note how StackOverflow addresses this via a URI design that puts the machine-readable identifier first, then the human-readable slug second.

https://stackoverflow.com/questions/:id/:slug
This allows the slug to change over time without breaking links. For example, you could publish:

https://stackoverflow.com/questions/1234567/my-original-slug
And later change it to:

https://stackoverflow.com/questions/1234567/my-updated-slug
And both will resolve to the same resource. It doesn’t matter what you put in the position of :slug it’ll always be as if you merely typed:

https://stackoverflow.com/questions/1234567/
Granted you can’t protect from people putting misleading information in your URIs. For example, this would resolve to the same resource as the others:

https://stackoverflow.com/questions/1234567/something-totally-misleading
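Server-side, that kind of routing can be sketched in a few lines (an illustration of the idea, not StackOverflow’s actual code): extract the leading numeric id and ignore whatever slug follows.

```javascript
// Resolve an id-first path like /questions/:id/:slug to its resource id.
// Only the numeric id matters; the trailing slug is cosmetic and ignored.
function resolveQuestionId(path) {
  const match = path.match(/^\/questions\/(\d+)(?:\/[^/]*)?\/?$/);
  return match ? match[1] : null;
}

// All of these resolve to the same resource:
// resolveQuestionId("/questions/1234567/my-original-slug") // "1234567"
// resolveQuestionId("/questions/1234567/my-updated-slug")  // "1234567"
// resolveQuestionId("/questions/1234567")                  // "1234567"
```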
That said, there is one problem with the StackOverflow example: it doesn’t work with simple static file hosts where you don’t have control over routing logic.

The MacGyver, jerry-rigged version of this URL would be to use a search param that doesn’t do anything other than provide human-readable context. For example:

https://example.com/blog/19380/?my-erroneous-idea
That would work with a static file host without special routing logic (though it’s still subject to abuse same as the StackOverflow example).
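A quick way to see why this works (the URLs here are hypothetical): a static host decides which file to serve from the path alone, so the query string never changes which resource you get.

```javascript
// A static file host maps the URL's pathname to a file on disk; the
// query string plays no part in that lookup.
function staticFilePath(urlString) {
  return new URL(urlString).pathname;
}

// Both point at the same file:
// staticFilePath("https://example.com/blog/19380/?my-erroneous-idea") // "/blog/19380/"
// staticFilePath("https://example.com/blog/19380/?my-better-idea")    // "/blog/19380/"
```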

So, to my original example:

https://example.com/blog/19380/?my-erroneous-idea
Could later be changed to:

https://example.com/blog/19380/?my-better-idea
And it remains cool 🕶️

Not saying you should, but you could.


A Local-first Codebase Opens the Door to More Collaborators


I thought this was interesting: Dax Raad on the local-first podcast observes how a local-first model drastically simplifies the experience of building an app, both as an individual and as a team.

He talks about how his wife is not an engineer but she learned to be more hands on in the codebase of the project they work on together.

For them, one of the things that’s been “crazy helpful” about a local-first approach is that all the data for the app is “just there” locally. For Dax’s wife, as a beginning coder, it’s such a simple model to work with. She’s not trying to figure out how to round trip to the server and keep data in sync. Dax handles all that upfront. The result?

There's not all this weird like, loading states, or like fetching it, or like just a whole bunch of complexity around getting data back and forth. It's solved in one part of your app, and then you never have to think about it anywhere else.

So from a team productivity point of view, she can build any feature she wants, even if I didn't explicitly think about it from the backend point of view, because she has all the data locally.

She's like, “I want to create a view that searches through this set of data.” She can just go do that. All the data is there. [It’s] very, very straightforward.

And it's actually wild how much of a productivity boost that has on your team, because…with every new feature you’re not rebuilding [yet] another way to sync that data back and forth.

When every single feature you build has to scaffold the lifecycle around fetching, updating, and revalidating the data that’s being changed, you alienate people who could otherwise collaborate on the front-end because they don’t know how to build the show spinner -> fetch -> render -> update -> show spinner -> revalidate loop (we spend a lot of time and effort on the coordination problem).

I’ve been in this position. As someone who started writing mostly HTML & CSS, then later moved to writing view logic with languages like JSX, I could only take my design work so far. Then I’d have to leave it for someone else to “wire things up”, which often resulted in them having to re-write a lot of what I did because it didn’t take into account the architecture of the network layer.

But that problem — how do I get (and update) the data required to build and style a functioning UI as a front-of-the-front-end engineer — can be solved up-front by a local-first architecture, allowing more people to collaborate on building UIs.
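A toy sketch of that difference (the store shape here is made up, not any particular local-first library’s API): when every record is already on the client, a new feature like “search this set of data” is just a synchronous function over local state, with no loading spinner, fetch, or revalidation step.

```javascript
// With a local-first store, all the data is already on the client, so a
// feature like "search this set of data" is a plain synchronous query.
// No loading states, no fetch, no revalidation lifecycle.
const localStore = {
  songs: [
    { id: 1, title: "Here Comes the Sun", artist: "The Beatles" },
    { id: 2, title: "Sunflower", artist: "Post Malone" },
  ],
};

function searchSongs(query) {
  const q = query.toLowerCase();
  return localStore.songs.filter((song) =>
    song.title.toLowerCase().includes(q)
  );
}

// searchSongs("sun") matches both songs; the sync layer underneath is
// solved once, elsewhere, and this feature never thinks about it.
```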


Custom Elements Don’t Require a Hyphen as a Separator


Scott Jehl reached out to help me resolve a conundrum in my post about what constitutes a valid custom element tag.

The spec says you can have custom elements with emojis in them. For example:

<emotion-😍></emotion-😍>
But for some reason the Codepen where I tested this wasn’t working.

Turns out, I’m not very good at JavaScript and simply failed to wrap everything in a try/catch.

What’s funny about this is that <my-$0.02> isn’t a valid custom element but <my-💲0.02> is!

Anyhow, I’ve since updated that post and now things work as the spec says. All is good with the world.

But that’s not all.

In my convo with Scott, he pointed out that custom element tag names don’t need a hyphen as a separator of characters; they just need to contain a hyphen.

This kinda blew my mind when I realized it. All this time I’d been thinking about the rules for custom elements wrong.

You aren’t required to have the hyphen as a separator:

<my-element></my-element>
You’re just required to have it:

<myelement-></myelement->
Those are both valid custom element tag names!
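A simplified check captures the rule for the ASCII cases (the real spec grammar also allows many Unicode characters, emoji included; this sketch is not the full spec):

```javascript
// Simplified validity check for custom element tag names (ASCII only):
// must start with a lowercase ASCII letter, contain at least one hyphen
// anywhere, and contain no uppercase ASCII letters. The spec also
// reserves a few hyphenated names from SVG/MathML.
const RESERVED = new Set([
  "annotation-xml", "color-profile", "font-face", "font-face-src",
  "font-face-uri", "font-face-format", "font-face-name", "missing-glyph",
]);

function isValidCustomElementName(name) {
  if (RESERVED.has(name)) return false;
  return /^[a-z][a-z0-9._-]*-[a-z0-9._-]*$/.test(name);
}

// isValidCustomElementName("my-element") // true  (hyphen as separator)
// isValidCustomElementName("myelement-") // true  (hyphen, not a separator)
// isValidCustomElementName("h1-")        // true
// isValidCustomElementName("div")        // false (no hyphen)
```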

Which means, if you have a really simple element and can’t think of a better name than an existing HTML element, you can do this:

<h1->My custom heading</h1->

Or this:

<p->My custom paragraph</p->

Or, I suppose, even this:

<ul->
  <li>My custom unordered list</li>
  <li>That still uses normal li’s</li>
  <li>Because why not?</li>
</ul->

I’m not saying you should do this, but I am saying you could — you know, nothing ever went wrong doing something before stopping to think about whether you should.


Organic Intelligence


Jeremy wrote about how the greatest asset of a company like Google is the trust people put in them:

If I use a [knowledge tool] I need to be able to trust [it] is good...I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

That question — “Was this generated, in some part, by an LLM and how can I assess its accuracy?” — is becoming a larger and larger part of my life. It’s taxing.

Jeremy’s post made me think[1] about the parallels between the rise of industrial farming and AI (or, might I say, industrial knowledge work).

Artificial food is to organic food, as artificial intelligence is to natural (i.e. organic) intelligence.

At one point in time, we said “eggs” and generally agreed on what that meant. With the rise of industrial farming, we began to understand that not all eggs are created equal, nor do they match our mental model of where eggs come from. So terms like “organic” and “free-range” and “cage-free” began to surface in our vernacular to help us suss out which eggs match our mental model for the term “eggs” that’s printed on the label.

It’s like that ice cream that can’t be called ice cream but rather a frozen dairy dessert. Or chocolate that can’t be called chocolate so it’s labeled “chocolate-flavored” or “chocolatey”.

Now, with LLMs, a search result isn’t a search result. An image isn’t an image. A video isn’t a video.

We’re going to need a lot more qualifiers.

  1. I swear someone already wrote at length about this parallel between food/“organic food” and knowledge/“organic knowledge” but I can’t find it. If you know it, reach out. Update: Found it, from an iA article: “Organic food only became organic once we ate enough frozen pizzas to realize the difference and importance of healthy, organic food.”

Notes From “You Are Not A Gadget”


Jaron Lanier’s book You Are Not a Gadget was written in 2010, but its preface is a prescient banger for 2024, the year of our AI overlord:

It's early in the 21st century, and that means that these words will mostly be read by nonpersons...[they] will be minced...within industrial cloud computing facilities...They will be scanned, rehashed, and misrepresented...Ultimately these words will contribute to the fortunes of those few who have been able to position themselves as lords of the computing clouds.

Today he might call the book, “You Are Not an Input to Artificial Intelligence”.

Lanier concludes the preface to his book by saying the words in it are intended for people, not computers.

Same for my blog! The words in it are meant for people, not computers. And I would hope any computerized representation of these words is solely for facilitating humans finding them and reading them in context.

Anyhow, here’s a few of my notes from the book.

So Long to The Individual Point of View

Authorship—the very idea of the individual point of view—is not a priority of the new technology...Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites present anonymized fragments of creativity…obscuring the true sources.

Again, this was 2010, way before “AI”.

Who cares for sources anymore? The perspective of the individual is obsolete. Everyone is flattened into a global mush. A word smoothie. We care more for the abstractions we can create on top of individual expression rather than the individuals and their expressions.

The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only people were ever meaningful

While Lanier was talking about “the hive mind” of social networks as we understood it then, AI has a similar problem: we begin to care more about the training data than the individual humans whose outputs constitute the training data, even though the training data by itself is meaningless. Only people are meaningful.[1] As Lanier says in the book:

The bits don't mean anything without a cultured person to interpret them.

Information is alienated experience.

Emphasizing Artificial or Natural Intelligence

Emphasizing the crowd means deemphasizing individual humans.

I like that.

Here’s a corollary: emphasizing artificial intelligence means de-emphasizing natural intelligence.

Therein lies the tradeoff.

In Web 2.0, we emphasized the crowd over the individual and people behaved like a crowd instead of individuals, like a mob rather than a person. The design encouraged, even solicited, that kind of behavior.

Now with artificial intelligence enshrined, is it possible we begin to act like it? Hallucinating reality and making baseless claims in complete confidence will be normal, as that’s what the robots we interact with all day do.

What is communicated between people eventually becomes their truth. Relationships take on the troubles of software engineering.

What Even is “Intelligence”?

Before MIDI, a musical note was a bottomless idea that transcended absolute definition

But the digitalization of music required removing options and possibilities based on what was easiest for the computer to represent and process. We remove “the unfathomable penumbra of meaning that distinguishes” a musical note in the flesh to make a musical note in the computer.

Why? Because computers require abstractions. But abstractions are just that: models that roughly fit the real thing. But too often we let the abstractions become our reality:

Each layer of digital abstraction, no matter how well it is crafted, contributes some degree of error and obfuscation. No abstraction corresponds to reality perfectly. A lot of such layers become a system unto themselves, one that functions apart from the reality that is obscured far below.

Lanier argues it happened with MIDI and it happened with social networks, where people became rows in a database and began living up to that abstraction.

people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer...We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate.

Perhaps similarly, intelligence (dare I say consciousness) was a bottomless idea that transcended definition. But we soon narrowed it down to fit our abstractions in the computer.

We are happy to enshrine into engineering designs mere hypotheses—and vague ones at that—about the hardest and most profound questions faced by science, as if we already possess perfect knowledge.

So we enshrine the idea of intelligence into our computing paradigm when we don’t even know what it means for ourselves. Are we making computers smarter or ourselves dumber?

You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart.


  1. This reminds me of Paul Ford’s questioning why we’re so anxious to automate the hell out of everything and remove humans from the process, when the whole point of human existence is to interact with other humans.

Hedge Words Affirm Creative, Imaginative Thinking


Mandy’s note piqued my interest so much, I started reading Being Wrong by Kathryn Schulz. So far, I love it! (I hope to write more about it once I’ve finished, but I’m afraid I won’t because the whole book is underlined in red pencil and I wouldn’t know where to start.)

As someone who has been told they self-sabotage by using hedge words, I like this excerpt from Schulz that Mandy quotes in her post:

disarming, self-deprecating comments, (“this could be wrong, but…” “maybe I’m off the mark here…”)…are often criticized [as] overly timid and self-sabotaging. But I’m not sure that’s the whole story. Awareness of one’s own qualms, attention to contradiction, acceptance of the possibility of error: these strike me as signs of sophisticated thinking, far preferable in many contexts to the confident bulldozer of unmodified assertions.

It’s kind of strange when you think about it.

Why do I feel this need to qualify what I’m about to say with a phrase like, “Maybe I’m wrong here, but…” As if being wrong is, in the words of Kathryn Schulz, a rare, bizarre, and “inexplicable aberration in the natural state of things”.

And yet, as much as we all say “to err is human”, we don’t always act like we believe it. As Schulz says in the book:

A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything.

Which is why I appreciate a good hedge word now and then.

In fact, I don’t think it’s hedging. It’s an open affirmation, as Mandy notes, of one’s desire to learn and evolve (as opposed to a desire to affirm and validate one’s own beliefs).

I would love to see less certainty and more openness. Less “it is this” and more “perhaps it could be this, or that, or maybe even both!”

Give me somebody who is willing to say “Maybe I’m wrong”. Somebody who can creatively imagine new possibilities, rather than be stuck with zero imagination and say, “I know all there is, and there’s no way this can be.”