Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ♥ HTML

Recent posts

Webkit’s New Color Picker as an Example of Good Platform Defaults

View

I’ve written about how I don’t love the idea of overriding basic computing controls. Instead, I generally favor respecting user choice and providing the controls their platform offers.

Of course, this means platforms need to surface better primitives rather than supplying basic ones with an ability to opt out.

What am I even talking about? Let me give an example.

The Webkit team just shipped a new API for <input type=color> which provides users the ability to pick colors with wide gamut P3 and alpha transparency. The entire API is just a little bit of declarative HTML:

<label>
  Select a color:
  <input type="color" colorspace="display-p3" alpha>
</label>

From that simple markup (on iOS) you get this beautiful, robust color picker.

Screenshot of the native color picker in Safari on iOS

That’s a great color picker, and if you’re choosing colors a lot on iOS and encountering this particular UI often, that’s even better — like, “Oh hey, I know how to use this thing!”

With a picker like that, how many folks really want additional APIs to override that interface and style it themselves?
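
If you want to use the new attributes today, while support is still rolling out, you can feature-detect them and fall back gracefully. Here’s a minimal sketch, assuming (worth verifying against the spec) that the new attributes reflect as IDL properties on the input element:

// A sketch of feature detection for the new color input attributes.
// Assumption: they reflect as IDL properties where supported.
const probe = document.createElement("input");
probe.type = "color";

if ("alpha" in probe && "colorSpace" in probe) {
  // Opt in to the wide-gamut, alpha-capable native picker.
  probe.setAttribute("colorspace", "display-p3");
  probe.setAttribute("alpha", "");
}
// Either way the input still works: unsupported browsers simply show
// their plain sRGB picker, which is the graceful default.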

These are the kind of better platform defaults I’m talking about. A little bit of HTML markup, and boom, a great interface to a common computing task that’s tailored to my device and uniform in appearance and functionality across the websites and applications I use. What more could I want? You might want more, like shoving your brand down my throat, but I really don’t need to see BigFinanceCorp Green™️ as a themed element in my color or date picker.

If I could give HTML an aspirational slogan, it would be something along the lines of Mastercard’s old one: There are a few use cases platform defaults can’t solve. For everything else, there’s HTML.


Reply via: Email · Mastodon · Bluesky

Product Pseudoscience

View

In his post about “Vibe Drive Development”, Robin Rendle warns against what I’ll call the pseudoscientific approach to product building prevalent across the software industry:

when folks at tech companies talk about data they’re not talking about a well-researched study from a lab but actually wildly inconsistent and untrustworthy data scraped from an analytics dashboard.

This approach has all the theater of science — “we measured and made decisions on the data, the numbers don’t lie” etc. — but is missing the rigor of science.

Like, for example, corroboration.

Independent corroboration is a vital practice of science that we in tech conveniently gloss over in our (self-proclaimed) objective data-driven decision making.

In science you can observe something, measure it, analyze the results, and draw conclusions, but nobody accepts it as fact until there are multiple instances of independent corroboration.

Meanwhile in product, corroboration is often merely a group of people nodding along in support of a PowerPoint with some numbers supporting a foregone conclusion — “We should do X, that’s what the numbers say!”

(What’s worse is when we have the hubris to think our experiments, anecdotal evidence, and conclusions should extend to others outside of our own teams, despite zero independent corroboration — looking at you Medium articles.)

Don’t get me wrong, experimentation and measurement are great. But let’s not pretend there is (or should be) a science to everything we do. We don’t hold a candle to the rigor of science. Software is as much art as science. Embrace the vibe.


Reply via: Email · Mastodon · Bluesky

Multiple Computers

View

I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything.

No more commit, push, and let the cloud build and deploy.

No more making it possible to do a task on my phone and tablet too.

No more striving to make it possible to do anything from anywhere.

Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important?

I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers.

It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks.

Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it.

But beware, there be possible dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers.

It’s easy to say: build works on my machine, deploy it! It’s deceptively time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to. This stands in contrast to where I am today, which is: if a build fails in the cloud, I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.
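
To make that concrete, here’s a rough sketch of what a “local-first” deploy script could look like. All the names here are hypothetical (the build command, the host, the paths); the shape of it is: build on the laptop, then copy the output up.

// deploy.ts: a sketch of a “local-first” deploy. The laptop is the
// build machine; the server is just the place the files end up.
import { execSync } from "node:child_process";

// 1. Build locally. If it works on my machine, that's the machine that matters.
execSync("npm run build", { stdio: "inherit" });

// 2. Copy the built output to the server (hypothetical host and path).
execSync("rsync -az --delete ./dist/ deploy@example.com:/var/www/site/", {
  stdio: "inherit",
});

console.log("Deployed the exact bytes my laptop built.");

The cloud version of this can still exist as a progressive enhancement; the point is that it’s no longer the only path to production.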

It’s hard to make things work identically across multiple computers.

I get it, that’s a program, not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.


Reply via: Email · Mastodon · Bluesky

Notes from Alexander Petros’ “Building the Hundred-Year Web Service”

View

I loved this talk from Alexander Petros titled “Building the Hundred-Year Web Service”. What follows is a summation of my note-taking from watching the talk on YouTube.


Is what you’re building for future generations:

  • Useful for them?
  • Maintainable by them?
  • Adaptable by them?

Actually, forget about future generations. Does what you’re building align with those goals for future you, 6 months or 6 years from now?

While we’re building codebases which may not be useful, maintainable, or adaptable by someone two years from now, the Romans built a bridge thousands of years ago that is still being used today.

It should be impossible to imagine building something in Roman times that’s still useful today. But if you look at [Trajan’s Bridge in Portugal, which is still used today] you can see there’s a little car on it and a couple pedestrians. They couldn’t have anticipated the automobile, but nevertheless it is being used for that today.

That’s a conundrum. How do you build for something you can’t anticipate? You have to think resiliently.

Ask yourself: What’s true today, that was true for a software engineer in 1991? One simple answer is: Sharing and accessing information with a uniform resource identifier. That was true 30+ years ago, I would venture to bet it will be true in another 30 years — and more!

There [isn’t] a lot of source code that can run unmodified in software that is 30 years apart.

And yet, the first web site ever made can do precisely that. The source code of the very first web page — which was written for a line mode browser — still runs today on a touchscreen smartphone, which is not a device that Tim Berners-Lee could have anticipated.

Alexander goes on to point out how interaction with web pages has changed over time:

  • In the original line mode browser, links couldn’t be represented as blue underlined text. They were represented more like footnotes on screen where you’d see something like this[1] and then this[2]. If you wanted to follow that link, there was no GUI to point and click. Instead, you would hit that number on your keyboard.
  • In desktop browsers and GUI interfaces, we got blue underlines to represent something you could point and click on to follow a link.
  • On touchscreen devices, we got “tap” with your finger to follow a link.

While these methods for interaction have changed over the years, the underlying medium remains unchanged: information via uniform resource identifiers.

The core representation of a hypertext document is adaptable to things that were not at all anticipated in 1991.

The durability guarantees of the web are absolutely astounding if you take a moment to think about it.

If you’re sprinting you might beat the browser, but it’s running a marathon and you’ll never beat it in the long run.

If your page is fast enough, [refreshes] won’t even repaint the page. The experience of refreshing a page, or clicking on a “hard link” is identical to the experience of partially updating the page. That is something that quietly happened in the last ten years with no fanfare. All the people who wrote basic HTML got a huge performance upgrade in their browser. And everybody who tried to beat the browser now has to reckon with all the JavaScript they wrote to emulate these basic features.


Reply via: Email · Mastodon · Bluesky

Notes from the Chrome Team’s “Blink principles of web compatibility”

View

Following up on a previous article I wrote about backwards compatibility, I came across this document from Rick Byers of the Chrome team titled “Blink principles of web compatibility” which outlines how they navigate introducing breaking changes.

“Hold up,” you might say. “Breaking changes? But there’s no breaking changes on the web!?”

Well, as outlined in their Google Doc, “don’t break anyone ever” is a bit unrealistic. Here’s their rationale:

The Chromium project aims to reduce the pain of breaking changes on web developers. But Chromium’s mission is to advance the web, and in some cases it’s realistically unavoidable to make a breaking change in order to do that. Since the web is expected to continue to evolve incrementally indefinitely, it’s essential to its survival that we have some mechanism for shedding some of the mistakes of the past.

Fair enough. We all need ways of shedding mistakes from the past. But let’s not get too personal. That’s a different post.

So when it comes to the web, how do you know when to break something and when not to? The Chrome team looks at the data collected via Chrome’s anonymous usage statistics (you can take a peek at that data yourself) to understand how often “mistake” APIs are still being used. This helps them categorize breaking changes as low-risk or high-risk. What’s wild is that, given Chrome’s ubiquity as a browser, a number like 0.1% is classified as “high-risk”!

As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!
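
That arithmetic checks out, by the way. Here are the quote’s own numbers, run as a quick back-of-the-envelope script:

// Checking the quote's numbers: how often does a 0.0001% breakage bite?
const pageViewsPerMonth = 771e9;    // ~771 billion page views in Chrome per month
const breakageRate = 0.0001 / 100;  // 0.0001% expressed as a fraction
const secondsPerMonth = 30 * 24 * 60 * 60; // assuming a 30-day month

const brokenViewsPerMonth = pageViewsPerMonth * breakageRate; // 771,000
const secondsBetween = secondsPerMonth / brokenViewsPerMonth;

console.log(secondsBetween.toFixed(1)); // ≈ 3.4, i.e. someone frustrated every ~3 seconds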

But the usage stats are merely a guide — a partially blind one at that. The Chrome team openly acknowledges their dataset doesn’t tell the whole story (e.g. enterprise clients have metrics recording disabled, Google’s metrics servers are disabled in China, and Chromium derivatives don’t record metrics at all).

And Chrome itself is only part of the story. They acknowledge that a change that would break Chrome but align it with other browsers is a good thing because it’s advancing the whole web while perhaps costing Chrome specifically in the short term — community > corporation??

Breaking changes which align Chromium’s behavior with other engines are much less risky than those which cause it to deviate…In general if a change will break only sites coding specifically for Chromium (eg. via UA sniffing), then it’s likely to be net-positive towards Chromium’s mission of advancing the whole web.

Yay for advancing the web! And the web is open, which is why they also state they’ll opt for open formats where possible over closed, proprietary, “patent-encumbered” ones.

The chromium project is committed to a free and open web, enabling innovation and competition by anyone in any size organization or of any financial means or legal risk tolerance. In general the chromium project will accept an increased level of compatibility risk in order to reduce dependence in the web ecosystem on technologies which cannot be implemented on a royalty-free basis.

One example we saw of a breaking change that excluded the proprietary in favor of the open was Flash. One way of dealing with a breaking change like that is to provide an opt-out. In the case of Flash, users were given the ability to “opt-out” of Flash being deprecated via site settings (in other words, to opt in to using Flash on a page-by-page basis). That was an important step in phasing out that behavior completely over time. But not all changes get that kind of heads-up.

there is a substantial portion of the web which is unmaintained and will effectively never be updated…It may be useful to look at how long chromium has had the behavior in question to get some idea of the risk that a lot of unmaintained code will depend on it…In general we believe in the principle that the vast majority of websites should continue to function forever.

There’s a lot going on with Chrome right now, but you gotta love seeing the people who work on it making public statements like that — “we believe…that the vast majority of websites should continue to function forever.”

There’s some good stuff in this document that gives you hope that people really do care and work incredibly hard to not break the web! (It’s an ecosystem after all.)

It’s important for [us] browser engineers to resist the temptation to treat breaking changes in a paternalistic fashion. It’s common to think we know better than web developers, only to find out that we were wrong and didn’t know as much about the real world as we thought we did. Providing at least a temporary developer opt-out is an act of humility and respect for developers which acknowledges that we’ll only succeed in really improving the web for users long-term via healthy collaborations between browser engineers and web developers.

More 👏 acts 👏 of 👏 humility 👏 in tech 👏 please!


Reply via: Email · Mastodon · Bluesky

Language Needs Innovation

View

In his book “The Order of Time” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality, such as “What is real?” and “What exists?”

But those, he says, are bad questions. Why?

the adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.”

Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” in terms of a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge.

Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world.

Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above”? It makes no sense…But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.


Reply via: Email · Mastodon · Bluesky

The Tumultuous Evolution of the Design Profession

View

Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, there’s a lot of very relatable stuff in there about how design has evolved as a profession.

But before we get into the meat of the article, there’s some bangers worth acknowledging, like this:

Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together.

lol, burn. Haven’t heard Amazon described this way, but it’s spot on.

The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately-obvious, concrete data or correlation that proves this. So most decision makers don’t care.

You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly-obvious correlation. So those initiatives get prioritized.

Fuzzy human-centered initiatives (humanities-adjacent stuff) are difficult to quantitatively (and monetarily) measure.

“Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part.

Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or...

Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.”

Meet the new boss. Same as the old boss.

As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed. But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads...no one prepared them for how constrained and compromised most design jobs actually are.

Reality hits hard. And here’s the part Jeremy quotes:

We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste. Less craft, more optics; less idealism, more cynicism.

Less work advocating for your customers, more work advocating for yourself and your team within the organization itself.

Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up.

Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact.

I’ve never been good at that. I’ve seen its necessity, but am just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs.

Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living.

Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good.

It’s a great article and if you’ve been in the design profession building software, it’s worth your time.


Reply via: Email · Mastodon · Bluesky

Backwards Compatibility in the Web, but Not Its Tools

View

After reading an article, I ended up on HackerNews and stumbled on this comment:

The most frustrating thing about dipping in to the FE is that it seems like literally everything is deprecated.

Lol, so true. From the same comment, here’s a description of a day in the life of a front-end person:

Oh, you used the apollo CLI in 2022? Bam, deprecated, go learn how to use graphql-client or whatever, which has a totally different configuration and doesn’t support all the same options. Okay, so we just keep the old one and disable the node engine check in pnpm that makes it complain. Want to do a patch upgrade to some dependency? Hope you weren’t relying on any of its type signatures! Pin that as well, with a todo in the codebase hoping someone will update the signatures.

Finally get things running, watch the stream of hundreds of deprecation warnings fly by during the install. Eventually it builds, and I get the hell out of there.

Apt.

It’s ironic that the web platform itself has an ethos of zero breaking changes.

But the tooling for building stuff on the web platform? The complete opposite. Breaking changes are a way of life.

Is there some mystical correlation here, like the tools remain in such flux because the platform is so stable — stability taken for granted breeds instability?

Either way, as Morpheus says in The Matrix: Fate, it seems, is not without a sense of irony.


Reply via: Email · Mastodon · Bluesky

Related posts linking here: (2025) Notes from the Chrome Team’s “Blink principles of web compatibility”

Craft and Satisfaction

View

Here’s Sean Voisen writing about how programming is a feeling:

For those of us who enjoy programming, there is a deep satisfaction that comes from solving problems through well-written code, a kind of ineffable joy found in the elegant expression of a system through our favorite syntax. It is akin to the same satisfaction a craftsperson might find at the end of the day after toiling away on a well-made piece of furniture, the culmination of small dopamine hits that come from sweating the details on something and getting them just right. Maybe nobody will notice those details, but it doesn’t matter. We care, we notice, we get joy from the aesthetics of the craft.

This got me thinking about the idea of satisfaction in craft. Where does it come from?

In part, I think, it comes from arriving at a deeper and more intimate understanding of, and relationship to, what you’re working with.

For example, I think of a sushi chef. I’m not a sushi chef, but I’ve tried my hand at making rolls and I’ve seen Jiro Dreams of Sushi, so I have a speck of familiarity with the spectrum from beginner to expert.

When you first start out, you’re focused on the outcome. “Can I do this? Let’s see if I can pull it off.” Then comes the excitement of, “Hey, I made my own roll!” That’s as far as many of us go. But if you keep going, you end up in a spot where you’re more worried about what goes into the roll than the outcome of the roll itself. Where was the fish sourced from? How was it sourced? Was it ever frozen? A million and one questions about what goes into the process, which inevitably shape what comes out of it.

And I think an obsession with the details of what goes in drives your satisfaction with what comes out.

In today’s moment, I wonder whether AI tools help or hinder fostering a sense of wonder in what it means to craft something.

When you craft something, you’re driven further into the essence of the materials you work with. But AI can easily reverse this, where you care less about what goes in and only about what comes out.

One question I’m asking myself is: do I care more or less about what I’ve made when I’m done using AI to help make it?


Reply via: Email · Mastodon · Bluesky

Brian Regan Helped Me Understand My Aversion to Job Titles

View

I like the job title “Design Engineer”. When required to label myself, I feel partial to that term (I should, I’ve written about it enough).

Lately I’ve felt like the term is becoming more mainstream which, don’t get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle of a binary.

But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn’t I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way?

These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

I’ve tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying “this is what you do” then I would be like “Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.” [For example] I used to crouch around on stage all the time and people would go “Oh, he’s the guy who crouches around back and forth.” And I’m like, “I’ll show them, I will stand erect! Now what are you going to say?” And then they would go “You’re the guy who always feels stupid.” So I started [doing other things].

He continues, wondering aloud whether this aversion to not being easily defined has actually hurt his career in terms of commercial growth:

I never wanted to be something you could easily define. I think, in some ways, that it’s held me back. I have a nice following, but I’m not huge. There are people who are huge, who are great, and deserve to be huge. I’ve never had that and sometimes I wonder, “Well maybe it’s because I purposely don’t want to be a particular thing you can advertise or push.”

That struck a chord with me. It puts into words my current feelings towards the job title “Design Engineer” — or any job title for that matter.

Seven or so years ago, I would’ve enthusiastically said, “I’m a Design Engineer!” To which many folks would’ve said, “What’s that?”

But today I hesitate. If I say “I’m a Design Engineer” there are fewer follow-up questions. Nowadays that title elicits fewer questions and more (presumed) certainty.

I think I enjoy a title that elicits a “What’s that?” response, which allows me to explain myself in more than two or three words, without being put in a box.

But once a title becomes mainstream, once people begin to assume they know what it means, I don’t like it anymore (speaking for myself, personally).

As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn’t provide a presumed sense of certainty about who I am and what I do.

And I get it, that runs counter to the very purpose of a job title which is why I don’t think it’s good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

  • Them: “Oh you’re a Designer? So you make mock-ups in Photoshop and somebody else implements them.”
  • Me: “I’ll show them, I’ll implement them myself! Now what are you gonna do?”
  • Them: “Oh, so you’re a Design Engineer? You design and build user interfaces on the front-end.”
  • Me: “I’ll show them, I’ll write a Node server and set up a database that powers my designs and interactions on the front-end. Now what are you gonna do?”
  • Them: “Oh, well, I’m not sure we have a term for that yet, maybe Full-stack Design Engineer?”
  • Me: “Oh yeah? I’ll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?”

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that’s not so easily definable, something you can’t sum up in two or three words.

I’ve felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you’re trying to take something as complicated and nuanced as an individual human being and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels.

“I’ve never wanted to be something you could easily define” stands at odds with the corporate attitude of, “Here’s the job req. for the role (i.e. cog) we’re looking for.”


Reply via: Email · Mastodon · Bluesky