Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ♥ HTML

Recent posts

Language Needs Innovation

In his book “The Order of Time,” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality, such as “What is real?” and “What exists?”

But those are bad questions, he says. Why?

the adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.”

Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” as a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge.

Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world.

Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above”? It makes no sense… But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.


Reply via: Email · Mastodon · Bluesky

The Tumultuous Evolution of the Design Profession

Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, I found a lot of very relatable stuff in there about how design has evolved as a profession.

But before we get into the meat of the article, there are some bangers worth acknowledging, like this:

Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together.

lol, burn. Haven’t heard Amazon described this way, but it’s spot on.

The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately obvious, concrete data or correlation proving that it does. So most decision makers don’t care.

You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly-obvious correlation. So those initiatives get prioritized.

Fuzzy, human-centered initiatives (humanities-adjacent stuff) are difficult to measure quantitatively (and monetarily).

“Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part.

Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or...

Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.”

Meet the new boss. Same as the old boss.

As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed. But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads...no one prepared them for how constrained and compromised most design jobs actually are.

Reality hits hard. And here’s the part Jeremy quotes:

We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste. Less craft, more optics; less idealism, more cynicism.

Less work advocating for your customers, more work advocating for yourself and your team within the organization itself.

Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up.

Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia, and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact.

I’ve never been good at that. I’ve seen its necessity, but am just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs.

Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living.

Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good.

It’s a great article, and if you’ve worked as a designer building software, it’s worth your time.


Reply via: Email · Mastodon · Bluesky

Backwards Compatibility in the Web, but Not Its Tools

After reading an article, I ended up on HackerNews and stumbled on this comment:

The most frustrating thing about dipping in to the FE is that it seems like literally everything is deprecated.

Lol, so true. From the same comment, here’s a description of a day in the life of a front-end person:

Oh, you used the apollo CLI in 2022? Bam, deprecated, go learn how to use graphql-client or whatever, which has a totally different configuration and doesn’t support all the same options. Okay, so we just keep the old one and disable the node engine check in pnpm that makes it complain. Want to do a patch upgrade to some dependency? Hope you weren’t relying on any of its type signatures! Pin that as well, with a todo in the codebase hoping someone will update the signatures.

Finally get things running, watch the stream of hundreds of deprecation warnings fly by during the install. Eventually it builds, and I get the hell out of there.

Apt.

It’s ironic that the web platform itself has an ethos of zero breaking changes.

But the tooling for building stuff on the web platform? The complete opposite. Breaking changes are a way of life.

Is there some mystical correlation here, like the tools remain in such flux because the platform is so stable — stability taken for granted breeds instability?

Either way, as Morpheus says in The Matrix: Fate, it seems, is not without a sense of irony.


Reply via: Email · Mastodon · Bluesky

Craft and Satisfaction

Here’s Sean Voisen writing about how programming is a feeling:

For those of us who enjoy programming, there is a deep satisfaction that comes from solving problems through well-written code, a kind of ineffable joy found in the elegant expression of a system through our favorite syntax. It is akin to the same satisfaction a craftsperson might find at the end of the day after toiling away on a well-made piece of furniture, the culmination of small dopamine hits that come from sweating the details on something and getting them just right. Maybe nobody will notice those details, but it doesn’t matter. We care, we notice, we get joy from the aesthetics of the craft.

This got me thinking about the idea of satisfaction in craft. Where does it come from?

In part, I think, it comes from arriving at a deeper, more intimate understanding of, and relationship to, what you’re working with.

For example, I think of a sushi chef. I’m not a sushi chef, but I’ve tried my hand at making rolls and I’ve seen Jiro Dreams of Sushi, so I have a speck of familiarity with the spectrum from beginner to expert.

When you first start out, you’re focused on the outcome. “Can I do this? Let’s see if I can pull it off.” Then comes the excitement of, “Hey, I made my own roll!” That’s as far as many of us go. But if you keep going, you end up in a spot where you’re more worried about what goes into the roll than the outcome of the roll itself. Where was the fish sourced from? How was it sourced? Was it ever frozen? A million and one questions about what goes into the process, which inevitably shape what comes out of it.

And I think an obsession with the details of what goes in drives your satisfaction with what comes out.

In today’s moment, I wonder whether AI tools help or hinder fostering a sense of wonder in what it means to craft something.

When you craft something, you’re driven further into the essence of the materials you work with. But AI can easily reverse this, where you care less about what goes in and only about what comes out.

One question I’m asking myself is: do I care more or less about what I’ve made when I’m done using AI to help make it?


Reply via: Email · Mastodon · Bluesky

Brian Regan Helped Me Understand My Aversion to Job Titles

I like the job title “Design Engineer”. When required to label myself, I feel partial to that term (I should, I’ve written about it enough).

Lately I’ve felt like the term is becoming more mainstream which, don’t get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries.

But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn’t I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way?

These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

I’ve tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying “this is what you do” then I would be like “Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.” [For example] I used to crouch around on stage all the time and people would go “Oh, he’s the guy who crouches around back and forth.” And I’m like, “I’ll show them, I will stand erect! Now what are you going to say?” And then they would go “You’re the guy who always feels stupid.” So I started [doing other things].

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

I never wanted to be something you could easily define. I think, in some ways, that it’s held me back. I have a nice following, but I’m not huge. There are people who are huge, who are great, and deserve to be huge. I’ve never had that and sometimes I wonder, “Well maybe it’s because I purposely don’t want to be a particular thing you can advertise or push.”

That struck a chord with me. It puts into words my current feelings towards the job title “Design Engineer” — or any job title for that matter.

Seven or so years ago, I would’ve enthusiastically said, “I’m a Design Engineer!” To which many folks would’ve said, “What’s that?”

But today I hesitate. If I say “I’m a Design Engineer” there are fewer follow-up questions. Nowadays that title elicits fewer questions and more (presumed) certainty.

I think I enjoy a title that elicits a “What’s that?” response, which allows me to explain myself in more than two or three words, without being put in a box.

But once a title becomes mainstream, once people begin to assume they know what it means, I don’t like it anymore (speaking for myself, personally).

As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn’t provide a presumed sense of certainty about who I am and what I do.

And I get it, that runs counter to the very purpose of a job title, which is why I don’t think it’s good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

  • Them: “Oh you’re a Designer? So you make mock-ups in Photoshop and somebody else implements them.”
  • Me: “I’ll show them, I’ll implement them myself! Now what are you gonna do?”
  • Them: “Oh, so you’re a Design Engineer? You design and build user interfaces on the front-end.”
  • Me: “I’ll show them, I’ll write a Node server and set up a database that powers my designs and interactions on the front-end. Now what are they gonna do?”
  • Them: “Oh, well, I’m not sure we have a term for that yet, maybe Full-stack Design Engineer?”
  • Me: “Oh yeah? I’ll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?”

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that’s not so easily definable, something you can’t sum up in two or three words.

I’ve felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you’re trying to take something as complicated and nuanced as an individual human being and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels.

“I’ve never wanted to be something you could easily define” stands at odds with the corporate attitude of, “Here’s the job req. for the role (i.e. cog) we’re looking for.”


Reply via: Email · Mastodon · Bluesky

“I Don’t See Why Not”

Excuse my rant[1].

Demis Hassabis, the Nobel Prize-winning CEO of DeepMind, was on 60 Minutes and floored me when he predicted:

We can cure all diseases with the help of AI. [The end of disease] is within reach, maybe within the next decade or so. I don't see why not.

“I don’t see why not” is doing a lot of work in that sentence.

As I’m sure you know from working on problems, “I don’t see why not” moments are usually followed by, “Actually this is going to be a bit harder than we thought…”

If you want to call me a skeptic, that’s fine. But “the end of disease” in the next decade is some ostentatious claim chowder IMHO. As one of the YouTube comments says:

The goodies are always just another 5-10 years ahead, aren't they

Generally speaking, I tend to regard us humans as incredibly short-sighted. So if I had to place a wager, I’d put my money on the end of disease not happening in the next decade (against my wishes, of course).

But that’s not really how AI predictions work. You can’t put wagers on them, because AI predictions aren’t things you get held accountable for.

“Yeah, when I said that, I added ‘I don’t see why not’ but we quickly realized that X was going to be an issue and now I’m going to have to qualify that prediction. Once we solve X, I don’t see why not.”

And then “once we solve Y”. And then Z.

“Ok, phew, we solved Z, we’re close.”

And then AA. And AB. And AC. And…

I get it, it’s easy to sit here and play the critic. I’m not the “man in the arena”. I’m not a Nobel-prize winner.

I just want to bookmark this prediction for an accountability follow-up in April 2035. If I’m wrong, HOORAY! DISEASE IS ENDED!!! I WILL GLADLY EAT MY HAT!

But if not, does anyone’s credibility take a hit?

You can’t just say stuff that’s not true and continue having credibility.

Unless you’re AI, of course.


  1. Serendipitously, a few hours after publishing this post, I was reading my copy of The Bomb: A Life by Gerard J. De Groot. In it, he notes how in 1936 Soviet physicist Igor Tamm confidently remarked that “the idea that the use of nuclear energy is a question for [the next] five or ten years is really naive.” The contrast between the mainstream hyperbole around AI vs. nuclear is striking.

Reply via: Email · Mastodon · Bluesky

You’re Only As Strong As Your Weakest Point

In April 1945, as US soldiers took the town of Merkers, Germany, stories of stolen Nazi riches stored in the local salt mine began to reach Army officials.

Eventually, the Americans found the mine and began exploring it, ending up at a vaulted door. Here’s the story, as told by Greg Bradsher:

the Americans found the main vault. It was blocked by a brick wall three feet thick…In the center of the wall was a large bank-type steel safe door, complete with combination lock and timing mechanism with a heavy steel door set in the middle of it. Attempts to open the steel vault door were unsuccessful.

Word went up the chain of command about the find and suspected gold hoard behind the vaulted steel door. The order came back down to open it up.

But what to do about this vault door that, up until now, nobody could open? One engineer looked at the problem and said: forget the door, blow the wall!

One of the engineers who inspected the brick wall surrounding the vault door thought it could be blasted through with little effort. Therefore the engineers, using a half-stick of dynamite, blasted an entrance through the masonry wall.

To me, this is a fascinating commentary on security specifically [insert meme of gate with no fence]

Photograph of a gate along a path with no fence alongside it, making it easy to just side-step the gate.

But also a commentary on problem-solving generally.

When you have a seemingly intractable problem — there’s an impenetrable door we can’t open — rather than focus on the door itself, you take a step back and realize the door may be impenetrable but the wall enclosing it is not. A little dynamite and problem solved.

Lessons:

Footnote to this story, in case you’re wondering what they found inside:

[a partial] inventory indicated that there were 8,198 bars of gold bullion; 55 boxes of crated gold bullion; hundreds of bags of gold items; over 1,300 bags of gold Reichsmarks, British gold pounds, and French gold francs; 711 bags of American twenty-dollar gold pieces; hundreds of bags of gold and silver coins; hundreds of bags of foreign currency; 9 bags of valuable coins; 2,380 bags and 1,300 boxes of Reichsmarks (2.76 billion Reichsmarks); 20 silver bars; 40 bags containing silver bars; 63 boxes and 55 bags of silver plate; 1 bag containing six platinum bars; and 110 bags from various countries


Reply via: Email · Mastodon · Bluesky

Be Mindful of What You Make Easy

Carson Gross has a post about vendoring, which brought back memories of how I used to build websites in ye olden days, back in the dark times before npm.

“Vendoring” is where you copy dependency source files directly into your project (usually in a folder called /vendor) and then link to them — all of this being a manual process. For example:

  • Find jquery.js or reset.css somewhere on the web (usually from the project’s respective website; in my case I always pulled jQuery from the big download button on jQuery.com and my CSS reset from Eric Meyer’s website).
  • Copy that file into /vendor, e.g. /vendor/jquery@1.2.3.js
  • Pull it in where you need it, e.g. <script src="/vendor/jquery@1.2.3.js">

And don’t get me started on copying your transitive dependencies (the dependencies of your dependencies). That gets complicated when you’re vendoring by hand!

Nowadays package managers and bundlers automate all of this away: npm i what you want, import x from 'pkg', and you’re on your way! It’s so easy (easy to get all that complexity).
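
For what it’s worth, here’s a minimal sketch of that npm-era flow, using jQuery since that’s the example from the vendoring list above. It’s illustrative, not a prescription; the exact wiring depends on your bundler:

// In the terminal, once: npm i jquery
// Then, anywhere in your code, the bundler resolves the bare
// specifier 'jquery' to whatever version landed in node_modules:
import $ from 'jquery';

$('body').append('<p>Hello world!</p>');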

But, as the HTMX article points out, a strength can also be a weakness. It’s not all net gain (emphasis mine):

Because dealing with large numbers of dependencies is difficult, vendoring encourages a culture of independence.

You get more of what you make easy, and if you make dependencies easy, you get more of them.

I like that — you get more of what you make easy. Therefore: be mindful of what you make easy!

As Carson points out, dependency management tools foster a culture of dependence — imagine that!

I know I keep lamenting Deno’s move away from HTTP imports by default, but I think this puts a finger on why I’m sad: it perpetuates the status quo, whereas a stance on aligning imports with how the browser works would push back against this dependence on dependency-resolution tooling. There’s no package manager or dependency resolution algorithm for the browser.
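
For anyone who hasn’t seen one, an HTTP import is just a URL in the import specifier, the same mechanism a browser uses for <script type="module">. A tiny sketch (the CDN URL here is purely illustrative):

// The specifier is a plain URL. The runtime (a browser, or a Deno-style
// runtime with HTTP imports) fetches it over the network directly:
// no npm install, no node_modules, no resolution algorithm.
import { chunk } from 'https://esm.sh/lodash-es@4.17.21';

console.log(chunk(['a', 'b', 'c', 'd'], 2)); // [['a', 'b'], ['c', 'd']]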

I was thinking about all of this the other day when I came across this thread of thoughts from Dave Rupert on Mastodon. Dave says:

I prefer to use and make simpler, less complex solutions that intentionally do less. But everyone just wants the thing they have now but faster and crammed with more features (which are juxtaposed)

He continues with this lesson from his startup Luro:

One of my biggest takeaways from Luro is that it’s hard-to-impossible to sell a process change. People will bolt stuff onto their existing workflow (ecosystem) all day long, but it takes a religious conversion to change process.

Which really helped me put words to my feelings regarding HTTP imports in Deno:

i'm less sad about the technical nature of the feature, and more about what it represented as a potential “religious revival” in the area of dependency management in JS. package.json & dep management has become such an ecosystem unto itself that it seems only a Great Reawakening™️ will change it.

I don’t have a punchy point to end this article. It’s just me working through my feelings.


Reply via: Email · Mastodon · Bluesky

Some Love For Interoperable Apps

I like to try different apps.

What makes trying different apps incredible is a layer of interoperability — standardized protocols, data formats, etc.

When I can bring my data from one app to another, that’s cool. Cool apps are interoperable. They work with my data, rather than own it.

For example, the other day I was itching to try a new RSS reader. I’ve used Reeder (Classic) for ages. But every once in a while I like to try something different.

This is super easy because lots of clients support syncing to Feedbin. It’s worth pointing out: Feedbin has their own app. But they don’t force you to use it. You’re free to use any RSS client you want that supports their service.

So all I have to do is download a new RSS client, log in to Feedbin, and boom! An experience of my data in a totally different app from a totally different developer.

Screenshot of two RSS reader clients, both with the same unread articles (one is Reeder and the other is NetNewsWire).

That’s amazing!

And you know how long it took? Seconds. No data export. No account migration.

Doing stuff with my blog is similar. If I want to try a different authoring experience, all my posts are just plain-text markdown files on disk. Any app that can operate on plain-text files is a potential new app to try.

Screenshots of iA Writer and VSCode on macOS, both with the same list of plain-text markdown files.

No shade on them, but this is why I personally don’t use apps like Bear. Don’t get me wrong, I love so much about Bear. But it wants to keep your data in its own proprietary, note-keeping safe. You can’t just open your Bear notes in another app. Importing is required. But there’s a big difference between apps that import (i.e. copy) your existing data and ones that interoperably work with it.

Email can also be this way. I use Gmail, which supports IMAP, so I can open my mail in lots of different clients — and believe me, I've tried a lot of email clients over the years.

  • Sparrow
  • Mailbox
  • Spark
  • Outlook
  • Gmail (desktop web, mobile app)
  • Apple Mail
  • Airmail

This is why I don’t use non-standard email features: I know I can’t take them with me.

It’s also why I haven’t tried email providers like HEY: they don’t support open protocols, so I can’t swap clients when I want.

My email is a dataset, and I want to be able to access it with any existing or future client. I don't want to be stuck with the same application for interfacing with my data forever (and have it tied to a company).

I love this way of digital life, where you can easily explore different experiences of your data. I wish it extended to other areas of my digital life. I wish I could:

  • Download a different app to view/experience my photos
  • Download a different app to view/experience my music
  • Download a different app to view/read my digital books

In a world like this, applications would compete on an experience of my data, rather than on owning it.

The world’s a big place. The entire world doesn’t need one singular photo experience to Rule Them All.

Let’s have experiences that are as unique and varied as us.


Reply via: Email · Mastodon · Bluesky

Ductility on the Web

I learned a new word: ductile. Do you know it?

I’m particularly interested in its usage in a physics/engineering setting when talking about materials.

Here’s an answer on Quora to: “What is ductile?”

Ductility is the ability of a material to be permanently deformed without cracking.

In engineering we talk about elastic deformation as deformation which is reversed once the load is removed (for example, a spring); conversely, plastic deformation isn’t reversed.

Ductility is the amount (usually expressed as a ratio) of plastic deformation that a material can undergo before it cracks or tears.

I read that and started thinking about the “ductility” of languages like HTML, CSS, and JS. Specifically: how much deformation can they undergo before breaking?

HTML, for example, is famously forgiving. It can be stretched, drawn out, or deformed in a variety of ways without breaking.

Take this short snippet of HTML:

<!doctype html>
<title>My site</title>
<p>Hello world!
<p>Nice to meet you

That is valid HTML. But it can also be “drawn out” for readability without losing any of its meaning. It’ll still render the same in the browser:

<!doctype html>
<html>
  <head>
    <title>My site</title>
  </head>
  <body>
    <p>Hello world!</p>
    <p>Nice to meet you</p>
  </body>
</html>

This capacity for the language to undergo a change in form without breaking is its “ductility”.

HTML has some pull before it breaks.

JS, on the other hand, doesn’t have the same kind of ductility. Forget a quotation mark and boom! Stretch it a little and it breaks.

console.log('works!');
// -> works!

console.log('works!);
// Uncaught SyntaxError: Invalid or unexpected token

I suppose some would say “this isn’t ductility, this is merely forgiving error-parsing”. Ok, sure.

Nevertheless, I’m writing here because I learned a new word that has a very practical meaning in another discipline for talking about the ability of materials to be stretched and deformed without breaking.

I think we need more of that in software. More resiliency. More malleability. More ductility — prioritized in our materials (tools, languages, paradigms) so we can talk more about avoiding sudden failure.


Reply via: Email · Mastodon · Bluesky