Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ♥ HTML

Recent posts

The Continuum From Static to Dynamic

View

Dan Abramov in “Static as a Server”:

Static is a server that runs ahead of time.

“Static” and “dynamic” don’t have to be binaries that describe an entire application architecture. As Dan describes in his post, whether “static” or “dynamic”, it’s all just computers doing stuff.

Computer A requests something (an HTML document, a PDF, some JSON, who knows) from computer B. That request happens via a URL and the response can be computed “ahead of time” or “at request time”. In this paradigm:

  • “Static” is a server responding ahead of time to anticipated requests with identical responses.
  • “Dynamic” is a server responding at request time to anticipated requests with varying responses.

These definitions aren’t binaries; they represent two ends of a spectrum. Ultimately, however you define “static” or “dynamic”, what you’re dealing with is a response generated by a server — i.e. a computer — so the question is really a matter of when you want to respond and with what.
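To make that when concrete, here’s a minimal sketch of my own (the route and build path are made up, not from Dan’s post): one Node server answering two URLs, one computed ahead of time, the other computed at request time.

import { createServer } from "node:http";
import { readFileSync } from "node:fs";

// "Static": computed ahead of time (say, at build) — every request
// gets the identical response.
const aboutHtml = readFileSync("./build/about.html", "utf8");

createServer((req, res) => {
  if (req.url === "/about") {
    res.end(aboutHtml); // response computed ahead of time
  } else {
    // "Dynamic": computed at request time, different every time
    res.end(`<p>It is now ${new Date().toISOString()}</p>`);
  }
}).listen(3000);

Same server, same request/response cycle — the only difference is when each response was computed.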

Answering the question of when used to have a really big impact on what kind of architecture you inherited. But I think we’re realizing we need more nimble architectures that can flex and grow as our answers change — both when a request/response cycle happens and what we respond with.

Perhaps a poor analogy, but imagine you’re preparing holiday cards for your friends and family:

  • “Static” is the same card sent to everyone
  • “Dynamic” is a hand-written card to each individual

But between these two are infinite possibilities, such as:

  • A hand-written card that’s photocopied and sent to everyone
  • A printed template with the same hand-written note to everyone
  • A printed template with a different hand-written note for just some people
  • etc.

Are those examples “static” or “dynamic”? [Cue endless debate].

The beauty is that in probing the space between binaries — between what “static” means and what “dynamic” means — I think we develop a firmer grasp of what we mean by those words as well as what we’re trying to accomplish with our code.

I love tools that help you think of the request/response cycle across your entire application as an endlessly changing set of computations that happen either “ahead of time”, “just in time”, or somewhere in-between.


Reply via: Email · Mastodon · Bluesky

The Web as URLs, Not Documents

View

Dan Abramov on his blog (emphasis mine):

The division between the frontend and the backend is physical. We can’t escape from the fact that we’re writing client/server applications. Some logic is naturally more suited to either side. But one side should not dominate the other. And we shouldn’t have to change the approach whenever we need to move the boundary.

What we need are the tools that let us compose across the stack.

What are these tools that let us easily move an application’s computation between two computers? I think Dan is arguing that RSC is one of these tools.

I tend to think of Remix (v1) as one of these tools. Let me try and articulate why by looking at the difference between how we thought of websites in a “JAMstack” architecture vs. how tools (like Remix) are changing that perspective.

JAMstack: a website is a collection of static documents which are created by a static site generator and put on a CDN. If you want dynamism, you “opt out” of a static document for some host-specific solution whose architecture is starkly different from the rest of your site.

Remix: a website is a collection of URLs that follow a request/response cycle handled by a server. Dynamism is “built-in” to the architecture and handled on a URL-by-URL basis. You choose how dynamic you want any particular response to be: from a static document on a CDN for everyone, to a custom response on a request-by-request basis for each user.

As your needs grow beyond the basic “static files on disk”, a JAMstack architecture often ends up looking like a microservices architecture where you have disparate pieces that work together to create the final whole: your static site generator here, your lambda functions there, your redirect engine over yonder, each with its own requirements and lifecycles once deployed.

Remix, in contrast, looks more like a monolith: your origin server handles the request/response lifecycle of all URLs at the time and in the manner of your choosing.

Instead of a build tool that generates static documents along with a number of distinct “escape hatches” to handle varying dynamic needs, your entire stack is “just a server” (that can be hosted anywhere you host a server) and you decide how and when to respond to each request — beforehand (at build), or just in time (upon request). No architectural escape hatches necessary.

You no longer have to choose upfront whether your site as a whole is “static” or “dynamic”, but rather how much dynamism (if any) is present on a URL-by-URL basis. It’s a sliding scale — a continuum of dynamism — from “completely static, the same for everyone” to “no one line of markup is the same from one request to another”, all of it modeled under the same architecture.
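To sketch what that URL-by-URL choice looks like in practice (a rough example leaning on Remix v1’s route conventions — the routes, cache lifetimes, and getUser helper here are mine, not Dan’s):

// app/routes/about.jsx — effectively "static": one cacheable
// response, the same for everyone, servable from a CDN
export function headers() {
  return { "Cache-Control": "public, s-maxage=86400" };
}
export default function About() {
  return <h1>About</h1>;
}

// app/routes/dashboard.jsx — fully dynamic: a fresh response
// per user, computed at request time
import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";

export async function loader({ request }) {
  const user = await getUser(request); // hypothetical auth helper
  return json({ name: user.name }, { headers: { "Cache-Control": "no-store" } });
}
export default function Dashboard() {
  return <h1>Hello, {useLoaderData().name}</h1>;
}

Two URLs, two different answers to “when do we compute the response?”, one architecture.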

And, crucially, that URL-by-URL decision can change as needs change. As Dan Abramov noted in a tweet:

[your] build doesn’t have to be modeled as server. but modeling it as a server (which runs once early) lets you later move stuff around.

Instead of opting into a single architecture up front with escape hatches for every need that breaks the mold, you’re opting in to the request/response cycle of the web’s natural grain, and deciding how to respond on a case-by-case basis.

The web is not a collection of static documents. It’s a collection of URLs — of requests and responses — and tools that align themselves to this grain make composing sites with granular levels of dynamism so much easier.


Reply via: Email · Mastodon · Bluesky

Related posts linking here: (2025) The Continuum From Static to Dynamic

Some Miscellaneous Thoughts on Visual Design Prodded By The Sameness of AI Company Logos

View

Radek Sienkiewicz in a funny-because-it’s-true piece titled “Why do AI company logos look like buttholes?”:

We made a circular shape [logo] with some angles because it looked nice, then wrote flowery language to justify why our…design is actually profound.

As someone who has grown up through the tumult of the design profession in technology, that really resonates. I’ve worked on lots of projects where I got tired of continually justifying design decisions with language dressed in corporate rationality.

This is part of the allure of code. To most people, code either works or it doesn’t. However bad it might be, you can always justify it with “Yeah, but it’s working.”

But visual design is subjective forever. And that’s a difficult space to work in, where you need to forever justify your choices.

In that kind of environment, decisions are often made by whoever can come up with the best language to justify their choices, or whoever has the most senior job title.

Personally, I found it very exhausting.

As Radek points out, this homogenization justified through seemingly profound language reveals something deeper about tech as an industry: folks are afraid to stand out too much.

Despite claims of innovation and disruption, there's tremendous pressure to look legitimate by conforming to established visual language.

In contrast to this stands the work of individual creators, which I have always loved — whether it’s individual blogs, videos, websites, you name it. Individuals (and I’ll throw small teams in there too) have a sense of taste that doesn’t get diluted through the structure and processes of a larger organization.

No single person suggests making a logo that resembles an anus, but when everyone's feedback gets incorporated, that's what often emerges.

In other words, no individual would ever recommend what you get through corporate hierarchies.

That’s why I love the work of small teams and individuals. There’s still soul. You can still sense the individuals — their personalities, their values — oozing through the work. Reminds me of Jony Ive’s description of when he first encountered a Mac:

I was shocked that I had a sense for the people who made it. They could’ve been in the room. You really had a sense of what was on their minds, and their values, and their joy and exuberance in making something that they knew was helpful.

This is precisely why I love the websites of individuals because their visual language is as varied as the humans behind them — I mean, just look at the websites of these individuals and small teams. You immediately get a sense for the people behind them. I love it!


Reply via: Email · Mastodon · Bluesky

Notes from Andreas Fredriksson’s “Context is Everything”

View

I quite enjoyed this talk. Some of the technical details went over my head (I don’t know what “split 16-bit mask into two 8-bit LUTs” means) but I could still follow the underlying point.

First off, Andreas has a great story at the beginning about how he has a friend with a browser bookmarklet that replaces every occurrence of the word “dependency” with the word “liability”. Can you imagine npm working that way? Inside package.json:

{
  "liabilities": {
    "react": "^19.0.0",
    "typescript": "^5.0.0"
  },
  "devLiabilities": {...}
}

But I digress, back to Andreas.

He points out that the context of your problems and the context of someone else’s problems do not overlap as often as we might think.

It’s so unlikely that someone else tried to solve exactly our same problem with exactly our same constraints that [their solution or abstraction] will be the most economical or the best choice for us. It might be ok, but it won’t be the best thing.

So while we immediately jump to tools built by others, the reality is that their tools were built for their problems and therefore won’t overlap with our problems as much or as often as we’re led to believe.

[Image: Venn diagram with three circles labeled “My problems”, “Your problems”, and “Facebook’s problems”. They barely overlap, and where they do, the overlap is labeled “React”.]

In Andreas’ example, rather than using a third-party library to parse JSON and turn it into something, he writes his own bespoke parser for the problem at hand. His parser ignores a whole swath of abstractions a more generalized parser solves for, and guess what? His is an order of magnitude faster!

Solving problems in the wrong domain and then gluing things together is always much, much worse [in terms of performance] than solving for what you actually need to solve.

It’s fun watching him step through the performance gains as he goes from a generalized solution to one more tailored to his own specific context.

What really resonates in his step-by-step process is how, as problems present themselves, you see how much easier it is to deal with performance issues for stuff you wrote vs. stuff others wrote. Not only that, but you can debug way faster!

(Just think of the last time you tried to debug a file: 1) one you wrote, vs. 2) one you vendored, vs. 3) one installed deep down in node_modules somewhere.)

Andreas goes from 41MB/s throughput to 1.25GB/s throughput without changing the behavior of the program. He merely removed a bunch of generalized abstractions he wasn’t using and didn’t need.

Surprise, surprise: not doing unnecessary things is faster!
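To make the idea concrete with a toy of my own (not Andreas’s actual code): say each line of some newline-delimited JSON carries a numeric "count" field, and the total is all you need.

// Generalized: fully parse every line into an object, then pluck one field.
function totalGeneric(ndjson) {
  let total = 0;
  for (const line of ndjson.split("\n")) {
    if (line) total += JSON.parse(line).count;
  }
  return total;
}

// Bespoke: we know every line contains `"count":<digits>`, so just scan
// for it. No objects allocated, no string-escape rules handled — because
// this particular data never needs them. That's the contextual bet.
function totalBespoke(ndjson) {
  let total = 0;
  let i = ndjson.indexOf('"count":');
  while (i !== -1) {
    total += parseInt(ndjson.slice(i + 8), 10); // 8 = '"count":'.length
    i = ndjson.indexOf('"count":', i + 8);
  }
  return total;
}

The bespoke version wins not by being clever but by skipping work the generalized one must always be prepared to do.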

You should always consider the unique context of your situation and weigh trade-offs. A “generic” solution means a solution “not tuned for your use case”.


Reply via: Email · Mastodon · Bluesky

Is It JavaScript?

View

OH: It’s just JavaScript, right? I know JavaScript.

My coworker who will inevitably spend the rest of the day debugging an Electron issue

@jonkuperman.com on Bluesky

“It’s Just JavaScript!” is probably a phrase you’ve heard before. I’ve used it myself a number of times.

It gets thrown around a lot, often to imply that a particular project is approachable because it can be achieved writing the same, ubiquitous, standardized scripting language we all know and love: JavaScript.

Take what you learned moving pixels around in a browser and apply that same language to running a server and querying a database. You can do both with the same language — It’s Just JavaScript!

But wait, what is JavaScript?

Is any code in a .js file “Just JavaScript”?

Let’s play a little game I shall call: “Is It JavaScript?”

[Image: Poster from the game show “Is It Cake?” showing a guy cutting through a cake, with the words “Is It JavaScript?” superimposed on the poster and the JS logo over the cake.]

Browser JavaScript

let el = document.querySelector("#root");
window.location = "https://jim-nielsen.com";

That’s DOM stuff, i.e. browser APIs. Is it JavaScript?

“If it runs in the browser, it’s JavaScript” seems like a pretty good rule of thumb. But can you say “It’s Just JavaScript” if it only runs in the browser?

What about the inverse: code that won’t run in the browser but will run elsewhere?

Server JavaScript

const fs = require('fs');
const content = fs.readFileSync('./data.txt', 'utf8');

That will run in Node — or something with Node compatibility, like Deno — but not in the browser.

Is it “Just JavaScript”?

Environment Variables

It’s very possible you’ve seen this in a .js file:

const apiUrl = process.env.API_URL;

But that’s following a Node convention, which means that particular .js file probably won’t work as expected in a browser but will on a server.

Is it “Just JavaScript” if it executes but only works as expected with special knowledge of runtime conventions?

JSX

What about this file, MyComponent.js?

function MyComponent() {
  const handleClick = () => {/* do stuff */}
  return (
    <Button onClick={handleClick}>Click me</Button>
  )
}

That won’t run in a browser. It requires a compilation step to turn it into React.createElement(...) (or maybe even something else) which will run in a browser.

Or wait, that can also run on the server.

So it can run on a server or in the browser, but now requires a compilation step. Is it “Just JavaScript”?

Pragmas

What about this little nugget?

/** @jsx h */
import { h } from "preact";
const HelloWorld = () => <div>Hello</div>;

These are magic comments which affect the interpretation and compilation of JavaScript code (Tom MacWright has an excellent article on the subject).

If code has magic comments that direct how it is compiled and subsequently executed, is it “Just JavaScript”?

TypeScript

What about:

const name: string = "Hello world";

You see it everywhere and it seems almost synonymous with JavaScript — would you consider it “Just JavaScript”?

Imports

It’s very possible you’ve come across a .js file that looks like this at the top:

import icon from './icon.svg';
import data from './data.json';
import styles from './styles.css';
import foo from '~/foo.js';
import bar from 'bar:foo';

But a lot of that syntax is non-standard (I’ve written about this topic previously in more detail) and requires some kind of compilation — is this “Just JavaScript”?

Vanilla

Here’s a .js file:

var foo = 'bar';

I can run it here (in the browser).

I can run it there (on the server).

I can run it anywhere.

It requires no compiler, no magic syntax, no bundler, no transpiler, no runtime-specific syntax. It’ll run the same everywhere.

That seems like it is, in fact, Just JavaScript.

As Always, Context Is Everything

A lot of JavaScript you see every day is non-standard. Even though it might be rather ubiquitous — such as seeing process.env.* — lots of JS code requires you to be “in the know” to understand how it’s actually working, because it relies on conventions that aren’t part of the ECMAScript standard.

There are a few vital pieces of context you need in order to understand a .js file, such as:

  • Which runtime will this execute in? The browser? Something server-side like Node, Deno, or Bun? Or perhaps something else like Cloudflare Workers? (A rough detection sketch follows this list.)
  • What tools are required to compile this code before it can be executed in the runtime? (vite, esbuild, webpack, rollup, typescript, etc.)
  • What frameworks are implicit in the code? e.g. are there non-standard globals like Deno.* or special keyword exports like export function getServerSideProps(){...}?
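That first question is often answered with runtime sniffing — a crude sketch (none of these globals is defined by ECMAScript; each one betrays a particular host):

const runtime =
  typeof window !== "undefined" ? "browser" :
  typeof Deno !== "undefined" ? "deno" :
  typeof process !== "undefined" ? "node-ish" :
  "unknown";

The fact that such sniffing exists at all is the tell: the code’s meaning depends on context beyond the language itself.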

When somebody says “It’s Just JavaScript”, it would be clearer to say “It’s Just JavaScript for…”, e.g.

  • It’s just JavaScript for the browser
  • It’s just JavaScript for Node
  • It’s just JavaScript for Next.js

So what would you call JavaScript that can run in any of the above contexts?

Well, I suppose you would call that “Just JavaScript”.


Reply via: Email · Mastodon · Bluesky

Tradeoffs to Continuous Software?

View

I came across this post from the tech collective crftd. about how software is in a process of “continuous disintegration”:

One of the uncomfortable truths we sometimes have to break to people is that software isn't just never “done”. Worse even, it rots…

The practices of continuous integration act as enablers for us to keep adding value and keeping development maintainable, but they cannot stop the inevitable: The system will eventually fail in unexpected ways, as is the nature of complex systems.

That all resonates with me — software is rarely “done”; it generally has a shelf life and starts rotting the moment you ship it — but what really made me pause was this line:

The practices of continuous integration act as enablers for us

I read “enabler” there in the negative sense of the word, like in addiction, where an “enabler” is someone who encourages or excuses another person’s pattern of self-destructive behavior.

Is CI/CD an enabler?

I’d only ever thought of moving towards CI/CD as a net positive. Is it possible that, like everything, CI/CD has its tradeoffs and isn’t always the Best Thing Ever™️?

What are the trade-offs of CI/CD?

The thought occurred to me that CI stands for “continuous investment”, because that’s what it requires to keep it working — a continuous investment in both the infrastructure that delivers the software and the software itself.

Everybody complains nowadays about how software requires a subscription. Why is that? Could it be, perhaps, because of CI/CD? If you want continuous updates to your software, you’re going to have to pay for it continuously.

We’ve made delivering software continuously easy, which means we’ve made creating software that’s “done” hard — be careful of what you make easy.

In some sense — at least on the web — I think you could argue that we don’t know how to make software that’s done (e.g. software that ships on a CD). We’re inundated with tools and practices and norms that enable the opposite of that.

And, perhaps, we’ve traded something there?

When something comes along and enables new capabilities, it often severs others.


Reply via: Email · Mastodon · Bluesky

Could I Have Some More Friction in My Life, Please?

View

A clip from “Buy Now! The Shopping Conspiracy” features a former executive of an online retailer explaining how motivated they were to make buying easy. Like, incredibly easy. So easy, in fact, that their goal was to “reduce your time to think a little bit more critically about a purchase you thought you wanted to make.” Why? Because if you pause for even a moment, you might realize you don’t actually want whatever you’re about to buy.

Been there. Ready to buy something and the slightest inconvenience surfaces — like when I can’t remember my credit card’s CVV and realize I’ll have to go find the card and look it up — and that’s enough for me to say, “Wait a second, do I actually want to move my slug of a body and find my credit card? Nah.”

That feels like the socials too.

The algorithms. The endless feeds. The social interfaces. All engineered to make you think less about what you’re consuming, to think less critically about reacting or responding or engaging.

Don’t think, just scroll.

Don’t think, just like.

Don’t think, just repost.

And now, with AI, don’t think at all.[1]

Because if you have to think, that’s friction. Friction is an engagement killer on content, especially the low-grade stuff. Friction makes people ask, “Is this really worth my time?”

Maybe we need a little more friction in the world. More things that merit our time. Fewer things that don’t.

It’s kind of ironic how the things we need present so much friction in our lives (like getting healthcare) while the things we don’t need that siphon money from our pockets (like online gambling[2]) present so little friction you could almost inadvertently slip right into them.

It’s as if The Good Things™️ in life are full of friction while the hollow ones are frictionless.


  1. Nicholas Carr said, “The endless labor of self-expression cries out for the efficiency of automation.” Why think when you can prompt a probability machine to stitch together a facade of thinking for you?
  2. John Oliver did a segment on sports betting if you want to feel sad.

Reply via: Email · Mastodon · Bluesky

WebKit’s New Color Picker as an Example of Good Platform Defaults

View

I’ve written about how I don’t love the idea of overriding basic computing controls. Instead, I generally favor respecting user choice and providing the controls their platform does.

Of course, this means platforms need to surface better primitives rather than supplying basic ones with an ability to opt out.

What am I even talking about? Let me give an example.

The WebKit team just shipped a new API for <input type=color> which gives users the ability to pick colors with wide-gamut P3 and alpha transparency. The entire API is just a little bit of declarative HTML:

<label>
  Select a color:
  <input type="color" colorspace="display-p3" alpha>
</label>

From that simple markup (on iOS) you get this beautiful, robust color picker.

[Image: Screenshot of the native color picker in Safari on iOS.]

That’s a great color picker, and if you’re choosing colors on iOS a lot and repeatedly encountering this particular UI, that’s even better — like, “Oh hey, I know how to use this thing!”
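Reading the user’s choice back out is plain DOM scripting — a sketch, assuming (as I understand the new API) the input’s value serializes as a CSS color() string when a wide-gamut color space is requested:

const input = document.querySelector('input[type="color"]');
input.addEventListener("input", () => {
  // e.g. "color(display-p3 1 0 0 / 0.5)" — my assumption of the format
  console.log(input.value);
});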

With a picker like that, how many folks really want additional APIs to override that interface and style it themselves?

This is the kind of better platform default I’m talking about. A little bit of HTML markup, and boom, a great interface to a common computing task that’s tailored to my device and uniform in appearance and functionality across the websites and applications I use. What more could I want? You might want more, like shoving your brand down my throat, but I really don’t need to see BigFinanceCorp Green™️ as a themed element in my color or date picker.

If I could give HTML an aspirational slogan, it would be something along the lines of Mastercard’s old one: There are a few use cases platform defaults can’t solve. For everything else, there’s HTML.


Reply via: Email · Mastodon · Bluesky

Product Pseudoscience

View

In his post about “Vibe Driven Development”, Robin Rendle warns against what I’ll call the pseudoscientific approach to product building prevalent across the software industry:

when folks at tech companies talk about data they’re not talking about a well-researched study from a lab but actually wildly inconsistent and untrustworthy data scraped from an analytics dashboard.

This approach has all the theater of science — “we measured and made decisions on the data, the numbers don’t lie” etc. — but is missing the rigor of science.

Like, for example, corroboration.

Independent corroboration is a vital practice of science that we in tech conveniently gloss over in our (self-proclaimed) objective data-driven decision making.

In science you can observe something, measure it, analyze the results, and draw conclusions, but nobody accepts it as fact until there are multiple instances of independent corroboration.

Meanwhile in product, corroboration is often merely a group of people nodding along at a PowerPoint whose numbers support a foregone conclusion — “We should do X, that’s what the numbers say!”

(What’s worse is when we have the hubris to think our experiments, anecdotal evidence, and conclusions should extend to others outside of our own teams, despite zero independent corroboration — looking at you Medium articles.)

Don’t get me wrong, experimentation and measurement are great. But let’s not pretend there is (or should be) a science to everything we do. We don’t hold a candle to the rigor of science. Software is as much art as science. Embrace the vibe.


Reply via: Email · Mastodon · Bluesky

Multiple Computers

View

I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything.

No more commit, push, and let the cloud build and deploy.

No more making it possible to do a task on my phone and tablet too.

No more striving to make it possible to do anything from anywhere.

Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important?

I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers.

It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks.

Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it.

But beware, there be dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers.

It’s easy to say: the build works on my machine, deploy it! It’s deceptively time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to.

This stands in contrast to where I am today: if a build fails in the cloud, I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.
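The local-first version of “publish my website” can be embarrassingly small — a sketch in Node (the host and paths are stand-ins, and it assumes rsync and SSH access):

// deploy.js — build on this machine, then copy the output to the host
import { execSync } from "node:child_process";

execSync("npm run build", { stdio: "inherit" });
execSync("rsync -az --delete dist/ user@example.com:/var/www/site/", {
  stdio: "inherit",
});

If the build fails, it fails right here, on the one computer I’ve already made it work on.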

It’s hard to make things work identically across multiple computers.

I get it, that’s a program, not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.


Reply via: Email · Mastodon · Bluesky