Jim Nielsen’s Blog

You found my HTML feed — I also have an XML feed and a JSON feed.

I ♥ HTML

Subscribe to my blog by copy-pasting this URL into your RSS reader.

(Learn more about RSS and subscribing to content on the web at aboutfeeds.)

Recent posts

Bulletproof Method to Solving Problems

Step 1: Write down the problem in a message you plan to send to a co-worker.

Most of the time you’ll solve the problem before you’re done with Step 1. However, if you complete Step 1 and still have the problem, continue to Step 2.

Step 2: Hit the “Send” button.

Shortly after sending, the solution will present itself. I don’t know why this is. I don’t make the rules. But the solution frequently presents itself after you hit “Send” and no longer need the recipient’s help.

Step 3: Return to the message you just sent and follow up with: “Nevermind. Figured it out.”

Ok, ok. This is in jest — a little bit. But it is a good method for getting yourself unstuck.


Motorcycles, Cars, Websites, and Seams

In high school, I had a friend named Joe who owned a Honda Trail 110, a small motorcycle with enough history for its own Wikipedia page.

It didn’t go very fast (40MPH tops if you’re going downhill), but Joe rode that thing to school every day — or at least he tried; it often broke down on the way.

On those cold winter mornings in the desert you’d find Joe striding into school five minutes before the bell rang. He’d take off his helmet, revealing a face flushed red from the cold, then take off the brown leather bomber jacket he’d found at a thrift store, stuff it all in his locker, and head to class with the faint whiff of oil and gasoline in his wake.

It was so damn cool.

It made me want a Honda trail bike, but my parents didn’t approve of motorbikes.

Fast forward twenty years and my moment came.

Right before the pandemic hit, I came across a local ad where a guy was selling his 1978 Honda Trail CT90 (the “90” classification meant it was even weaker than the “110” Joe had).

With three little kids to care for, my wife shared my parents’ “motorcycles aren’t safe” hesitancy. But this bike couldn’t go more than 35MPH, which meant I was relegated to the back roads of town and desert trails. Being gutless was a feature, not a bug.

So I bought it, a 1978 Honda Trail CT90 in mustard yellow.

Yup, that is not a typo: 1978. 44 years old (at the time of publishing) — older than me! And, all things considered, it was in fantastic shape. To be frank, if I’m in that good of shape when I reach that age, I’ll be happy.

But “good shape” was a relative term given its age. The bike came brimming with its own idiosyncratic problems, the first of which became apparent the day I bought it: after I drove it home, it failed to start back up, so I went to the internet to troubleshoot the issue.

(I must admit here that I knew — and still know — basically nothing about motorcycles, which made owning a bike older than myself a bit tricky given its penchant for breaking down and needing constant work.)

A few forum posts later, I had diagnosed the problem as a battery issue (turns out the CT90 requires a battery to start while the CT110 can kickstart without one. Who’d have thought?)

That ended up being the first of many electrical issues to come.

Another time my bike quit on me south of town, leaving me stranded on a desert road. I later gave my buddy Joe a call asking for help diagnosing the issue. “Sounds like you have a short,” he said. He recommended I look at the wiring, starting at the battery and following it back to the source. “The wiring on that bike is so simple,” he said, “that it shouldn’t take long to cover the entirety of it and find the issue.”

So I tried it. And I quickly discovered the issue: a spot in the wiring where the outer casing had worn off and the electrical wires were exposed, intermittently touching each other and causing a short.

The fix didn’t even require a trip to the auto parts store. I went inside, grabbed some black electrical tape, wrapped it around the exposed part of the wiring, and the bike started right up, no problem!

Photograph of the wiring on a Honda Trail CT90

Being able to repair that bike with so little know-how or experience made me reflect on the elegance of such a simple piece of engineering. Everything I needed to know about that bike I could inspect myself.

Cars

Cars of my past had engine bays open to inspection. When you popped the hood, they revealed their inner workings to you.

For example, here’s what the engine bay of my first car looked like:

Photograph of a mid-2000s Toyota Camry engine bay.

But nowadays, modern cars seem to encourage a hands-off approach. When you pop the hood, everything’s covered and hidden away, almost as if to say, “No need to go any further. Don’t bother concerning yourself with anything under here.”

Photograph of a ~2020 Volvo XC90 engine bay.

Websites

I wonder if there’s a metaphor in here for websites? Are they following a similar arc?

In the beginning, the mechanisms of the web were more evidently surfaced by browsers for manipulation by end users — protocols, URLs, custom stylesheets, etc. — but those have increasingly been abstracted away such that you don’t have to even think about how the web works to use it. Here’s Jeremy Keith on the subject:

Making it harder to “view source” might seem like an inconsequential decision. Removing the ability to apply user stylesheets might seem like an inconsequential decision. Heck, even hiding the URL might seem like an inconsequential decision. But each one of those decisions has repercussions. And each one of those decisions reflects an underlying viewpoint.

Ah yes, “view source”, the website equivalent of popping the hood on a vehicle. At one point, the utility of “view source” on a website felt akin to troubleshooting my 1978 CT90: inspectable and decipherable, even if you didn’t know much.

But time has led it to become much more like popping the hood on a modern vehicle: the inner mechanisms remain hidden beneath coverings that seem to say, “This is too sophisticated to worry yourself with. Best to consult a professional.”

Abstracting away URLs would be similar. As Devine notes, giving people power over the technology that rules their lives requires going beyond providing mere solutions and instead fostering the production of knowledge.

Like owning that CT90 did for me — and like owning my own website has, too.


The Big Sur-ification of macOS Icons

Here’s an example of some icons that transitioned well in the Big Sur-ification of macOS icons:

And just for good measure, here’s a few more — I love this stuff.

While some apps made this transition fun (and further infused their brand with character), others did not. They did the bare minimum and moved on.

A few years ago I tweeted about this “bare minimum” phenomenon where app makers updated their icon for macOS Big Sur by taking their previous icon/logo, putting it on a white squircle, and calling it a day.

That always felt like a bit of a shame when compared to the alternative: take an opportunity to imagine a new expression of your brand/logo/icon in the context and constraints of macOS Big Sur’s new icon template (i.e. the squircle).

For example, here’s a re-imagining of the Outlook icon (done by, as far as I can tell, agraaaaao).

It’s fun to see how folks take advantage of “the ever-so-subtle yet unique-to-macOS opportunity” to break outside of the outer edges of the squircle and provide some dimensionality to their icons.

As a self-professed icon-noisseur, I love browsing through app icons that people have re-imagined for their desktops — wresting control of the visual appearance of the app icon from its maker and appropriating it to themselves.

For example, as I browsed the wonderful macosicons.com gallery, I came across these alternatives for Chrome (the original from Google is on the far left, outlined in yellow):

Screenshot of three different Google Chrome icons for macOS Big Sur. The one outlined on the left is the original from Google, the others are more visually interesting alternatives from third parties.

I love seeing the character of Chrome bleed to the edges and fit the visual language of its environment (macOS). More visually interesting than merely dropping the Chrome circle on a white background.

In a similar vein, here’s Slack:

Screenshot of three different Slack icons for macOS Big Sur. The one outlined on the left is the original from Salesforce, the others are more visually interesting alternatives from third parties.

Again, more interesting to see some character infused into the icon (as opposed to just dropping it on a white background).

Where things get really interesting is when people explore breaking out of the squircle (which you can do on macOS) to provide some dimension to their icon. For example, here’s Firefox:

Screenshot of six different Firefox icons for macOS Big Sur. The one outlined on the upper left is the original from Mozilla, the others are more visually interesting alternatives from third parties that provide some dimensionality while breaking out of the squircle’s shape.

And there are some VSCode alternatives that explore both 1) going beyond a logo on a white background, and 2) providing dimension while borrowing from the visual language of Apple’s native development tool (Xcode).

Screenshot of five different VSCode icons for macOS Big Sur. The one outlined on the upper left is the original from Microsoft, the others are more visually interesting alternatives from third parties that provide some dimensionality while breaking out of the squircle’s shape.

Microsoft is perhaps the biggest culprit of “drop it on a white background,” as its suite of office tools does precisely that — which makes it fertile ground for folks re-imagining what the family of office icons could be.

For example here’s Outlook:

Screenshot of six different Outlook icons for macOS Big Sur. The one outlined on the upper left is the original from Microsoft, the others are more visually interesting alternatives from third parties that provide some dimensionality while breaking out of the squircle’s shape.

And here’s Excel:

Screenshot of nine different Excel icons for macOS Big Sur. The one outlined on the upper left is the original from Microsoft, the others are more visually interesting alternatives from third parties that provide some dimensionality while breaking out of the squircle’s shape.

Word:

Screenshot of nine different Word icons for macOS Big Sur. The one outlined on the upper left is the original from Microsoft, the others are more visually interesting alternatives from third parties that provide some dimensionality while breaking out of the squircle’s shape.

And Powerpoint:

Screenshot of nine different Powerpoint icons for macOS Big Sur. The one outlined on the upper left is the original from Microsoft, the others are more visually interesting alternatives from third parties that provide some dimensionality while breaking out of the squircle’s shape.

Just a little visual fun/exploration for your day. Adios!


Interdisciplinary Website Maker

Paul Ford has a great article at Wired about his own experience as an English major working in tech. While I myself am not an English major (more on that below), his desire to be interdisciplinary parallels my own.

I began to realize I was that most horrifying of things: interdisciplinary...the idea that an English major should learn to code was seen as wasteful, bordering on abusive—like teaching a monkey to smoke.

When I’ve interviewed and expressed my desire to work across both design and code, some folks look at me strangely and stammer: “Hm...well, maybe we have a place for that…I’ll have to get back to you.” To some, the idea that a designer would write code or that an engineer would move pixels seems strange — like “teaching a monkey to smoke”.

The sentiment I often perceive is: “Why would we need a designer that codes? We have designers. We have coders. Why would we need someone who can do both?”[1]

In the early days of making websites, “a designer who codes” didn’t seem like a big deal. After all, the only place to procure people who made websites was Craigslist. The practice was so new that “a designer who codes” seemed like the least odd thing. The strangest concoctions of disciplines existed in that early era of making websites: an English major who leads product, an actor who writes API code, or a poet who moves pixels around.

But nowadays, any cross-disciplinary interest is easily interpreted as a lack of specialization and dedication to craft. If you’re doing design and code, how can you be really great at either? You’re not maximizing.

There’s another angle to it though, which Paul discusses in his article when he says, “humans are primates and disciplines are our territories”.

this same battle of the disciplines, everlasting, ongoing, eternal, and exhausting, defines the internet. Is blogging journalism? Is fan fiction “real” writing? Can video games be art? (The answer is always: Of course, but not always. No one cares for that answer.)

The analogy of disciplines as borders is intriguing. In disciplines, when things get complicated we don’t open borders but instead create new ones.

Existing disciplines don’t say, “Sure, c’mon over here. If you don’t fit, we’ll find room for you.” And new disciplines don’t say, “Let’s fold ourselves under the old umbrella of discipline X.” Both parties prefer new lines be drawn. New borders. And so new disciplines arise, like Computer Science. And new titles appear, like AI Engineer. In this world of borders and disciplinary citizenship, what do you do with the unpatriotic interdisciplinarian? Paul:

The interdisciplinarian is essentially an exile. Someone who respects no borders enjoys no citizenship.

The irony here is that no discipline works without the others. Paul pointed this out in a separate article when he said:

The most brutal fact of life is that the discipline you love and care for is utterly irrelevant without the other disciplines that you tend to despise.

He illustrates this perfectly at the end of his Wired article using trees:

All you have to do is look at a tree—any tree will do—to see how badly our disciplines serve us. Evolutionary theory, botany, geography, physics, hydrology, countless poems, paintings, essays, and stories—all trying to make sense of the tree. We need them all, the whole fragile, interdependent ecosystem. No one has got it right yet.

Websites are like trees. You need understanding from all the disciplines — engineering, design, psychology, writing, etc. — to make sense of how to best grow them.

Interdisciplinary Studies

My official four-year degree is: Bachelor of Science in Interdisciplinary Studies with an emphasis in Visual Technologies and Spanish.

I rarely tell anyone that because, well, frankly, it’s a mouthful. Most conversations about higher education happen in the context of a career story, so to move the spotlight off me I’ll say “Yeah, I got a degree in computers.” That sounds like I majored in Computer Science, but really it was more Graphic Design than anything programming-related.

But I don’t feel like a great Graphic Designer. Nor a great Computer Scientist. Somewhere along the way I ended up making websites. A strange hybrid of computer science and graphic design — and a million other things.

To be honest, I chose a degree in Interdisciplinary Studies because it was the fastest way for me to graduate. I knew Spanish, so I could take a test that gave me almost a year’s worth of credits. A four-year degree in three? Yes, please.

But in hindsight, maybe there was more in my decision than just a faster route to the finish line. Perhaps there was more “Hey, don’t put me in a box” inside of me than I realized at the time.

Because now in a world of Designer and Product Designer and UI Designer and UX Designer and Interaction Designer and Front-End Engineer and Full Stack Engineer and Software Engineer and et al., I still don’t want a label.

Lately, “Design Engineer” has felt more and more like a good fit for me. Perhaps because it is deliberately cross-discipline. It satisfies my deep-seated feeling of “don’t put me in a box” while also satisfying my belief that one narrow discipline can’t produce everything necessary to create a great experience on the web.

It reminds me of Maggie’s decision to just call herself “website maker” because none of the disciplines alone are enough to make sense of how to build websites.

I feel that. Even “Design Engineer” doesn’t feel adequate. It deliberately mixes two disciplines, which is great, but also leaves out all the others.

To Paul’s point, I find myself wanting to draw new borders. “The Web” as its own discipline (I guess that’s the primate in me, oo oo ah ah).

Or maybe “Interdisciplinary Website Maker” is the right title.

Rolls right off the tongue, doesn’t it?


Footnotes
  1. In an attempt to answer that question, I started my series of posts “The Case for Design Engineers” ⏎

Consistent Navigation Across My Inconsistent Websites

Anything I ship to my personal domain jim-nielsen.com is made using IDD: impulse-driven development.

I can convince myself that just about anything is a good idea at the time. But in retrospect my rationales are quite often specious.

At one point in the past, I decided that I wanted to have my personal homepage and my blog be different “websites”. By that I mean: rather than have one site that has unified navigation and a coherent experience across all content, I wanted to have independent sites that evolve and progress at their own pace.

For example, if I decided I wanted to redesign my homepage (which acts as a sort of resume), I wouldn’t have to think about the typography of my blog posts. I could go super whacky in one direction, if I wanted, without having to think about how it affects the whole.

That’s how I’ve ended up with the different sites I have today, like my homepage jim-nielsen.com, my blog blog.jim-nielsen.com, and my notes notes.jim-nielsen.com. Each has its own unique design and codebase that can be modified and changed without regard for the others.

That’s nice for me, but one of the drawbacks has been that it’s not immediately apparent to people who visit those sites that all three exist. There’s no top-level navigation across all three sites with links to “Home”, “Blog”, and “Notes”.

I sort of always knew this and thought “Well that’s intentional, they’re three different sites after all!” But the trouble it actually presents people was brought to my attention by Chris Coyier when I was on the ShopTalkShow (he called it an “intervention”):

Jim, I gotta tell ya, intervention here, you don’t make it easy. You go to jim-nielsen.com, there’s no link to the blog, you gotta just “know” it’s a subdomain. And then, Jim has this incredible blog, cause he has the “think blog” and then he has this “what I’m reading” with thoughts [and they’re] equally great blogs (you should subscribe I’m not blowing smoke) but you just can’t find [them]. Like you have to go to the “About” page to find the reading blog. You gotta just smash them together [Jim]. I mean, you do you, but they’re just too hard to find.

I’m nothing if not a very poor marketer for myself and the things I do.

This was a Good Idea from Chris.

Honestly, I have lots of ideas on how to remedy this. But in the spirit of avoiding my tendency to curiously explore all the possibilities and then ship nothing, I decided to just start with something small as a stop-gap.

My thought was: what kind of widget can I build that represents a coherent interaction across an otherwise incoherent set of web properties?

My solution? A floating head. Of myself. Fixed on every page.

At least you’ll know who the site belongs to, right?

So that’s what I built. It’s a JavaScript web component. Basically I stick this markup on every page across all my domains:

<jim-navbar></jim-navbar>
<script
  type="module"
  src="https://www.jim-nielsen.com/jim-navbar.js">
</script>

And, for browsers that support it, you get a floating head of me that works as a navigational widget across my home page, blog, and notes. When you click it, you get a popup that lets you easily navigate between the three disparate sites.

Screenshot of a floating popup that provides navigation to and across blog.jim-nielsen.com, notes.jim-nielsen.com, and www.jim-nielsen.com
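The actual implementation has more styling and animation than this, but the anatomy of the component is roughly the following sketch (simplified, with a placeholder image path; not my production code):

// Simplified sketch of jim-navbar.js: a custom element with a
// shadow root, an avatar image, and a popup menu it toggles.
class JimNavbar extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>
        :host { position: fixed; bottom: 1rem; right: 1rem; }
        img { width: 48px; height: 48px; border-radius: 50%; cursor: pointer; }
        nav { display: none; }
        :host([open]) nav { display: block; }
      </style>
      <img src="/jim.jpg" alt="Jim Nielsen">
      <nav>
        <a href="https://www.jim-nielsen.com">Home</a>
        <a href="https://blog.jim-nielsen.com">Blog</a>
        <a href="https://notes.jim-nielsen.com">Notes</a>
      </nav>
    `;
    // Clicking the avatar toggles the `open` attribute, which the
    // CSS above uses to show/hide the menu.
    root.querySelector('img').addEventListener('click', () => {
      this.toggleAttribute('open');
    });
  }
}

customElements.define('jim-navbar', JimNavbar);

And because the whole thing is a single script enhancing plain markup, if the script ever fails to load, the pages still work; you just don’t get the floating head.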

I’m Rusty on Animations, But They’re Fun

For my first implementation of the widget, I wanted to try making a little animated menu. I settled on the idea of my head which, when clicked, spins around and reveals a menu.

For the v1 iteration, I used CSS transform to scale and rotate the different elements.

Animated gif of a profile photo of Jim Nielsen that, when clicked, reveals a popup menu with an 'x' over where Jim’s face was.
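Mechanically, v1 boiled down to something like this (selectors and timings are illustrative, not my exact CSS):

/* Sketch of the v1 approach: the menu scales between 0 and 1,
   and the avatar rotates, all via CSS transforms. */
.menu {
  transform: scale(0);
  transform-origin: top right;
  transition: transform 200ms ease-out;
}
.widget[open] .menu {
  transform: scale(1);
}
.avatar {
  transition: transform 200ms ease-out;
}
.widget[open] .avatar {
  transform: rotate(180deg);
}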

It was pretty decent. I liked it, but I wanted to try something more sophisticated — something like what iOS does with the Dynamic Island.

To do this, I would need to make it look like the round avatar of my head was transforming its shape into the popup menu. In v1, the popup menu just scaled down to zero and was distinct and separate from the shape of the avatar. So that’s what I tried in v2.

Animated gif of a profile photo of Jim Nielsen that, when clicked, reveals a popup menu with an 'x' over where Jim’s face was. This one has more refined shape shifting in the animation.

The difference here is subtle. You almost have to slow down the animation to notice it: the popup transforms itself into the circle shape of the avatar.

Slow motion animated gif of a popup menu whose shape transforms back down to a circle avatar with a profile photo of Jim Nielsen.

In slow motion you’ll notice there are some other parts of this animation that aren’t quite right (like the timing of the opacity on the profile photo).

Feeling like I could do better, I tried a third iteration. This is the one that’s on the site today. It’s still not as refined as the Dynamic Island, but hey, baby steps.

Animated gif of a profile photo of Jim Nielsen that, when clicked, reveals a popup menu with a 'x' over where Jim’s face was.

Honestly, even v3 is still not great. But I’m improving on it, including responding to bugs on social media (in that case, I was excited about shipping nested CSS without compilation, but maybe the world isn’t quite ready for that yet).

I’ve got even better ideas for this in the future, but who knows if I’ll ever get to them. This works for now.

Anyhow, that’s a very long way of saying: intervention succeeded.


Faster Connectivity !== Faster Websites

This post from Dan Luu discussing how web bloat impacts users with slow devices caused me to reflect on the supposition that faster connectivity means faster websites.

I grew up in an era when slow internet was the primary limiting factor to a great experience on the web. I was always pining for faster speeds: faster queries, faster page navigations, faster file downloads, etc. Whatever I wanted to do with a computer, bandwidth seemed like the sole limiting factor to a great experience.

So that’s why I still often mistakenly equate a faster connection with a faster (and better) experience on the web. And I often need reminding that’s not necessarily true.

That’s what Dan does well in his post. He points out how slow devices are becoming as big of an impediment to a good experience on the web as slow connections.

CPU performance for web apps hasn't scaled nearly as quickly as bandwidth so, while more of the web is becoming accessible to people with low-end connections, more of the web is becoming inaccessible to people with low-end devices even if they have high-end connections.

Here’s that last line again:

more of the web is becoming inaccessible to people with low-end devices even if they have high-end connections

It’s kind of incredible how the world is being flooded with bandwidth (I mean, you can get internet beamed to you anywhere on earth from a string of satellites.)

The question is quickly shifting from how slow is your connection to how slow is your device?

Newer !== Better, and More Usable for the Minority is More Usable for the Majority

To borrow from Devine’s warning about equating newer with better: if the new website runs slower on old hardware, is the new website better than the old website?

Here’s Dan talking about how old websites beat out new ones in performance:

Another pattern we can see is how the older sites are, in general, faster than the newer ones, with sites that (visually) look like they haven't been updated in a decade or two tending to be among the fastest.

This reminds me of an accessibility ethos which asserts that things that are made usable for marginalized individuals are invariably the most usable for everyone — regardless of capability.

Similarly: websites that were made to be fast on older, slower connections (and devices) are invariably the fastest for everyone — regardless of device or connection speed.

When using slow devices or any device with low bandwidth and/or poor connectivity, the best experiences, by far, are generally the ones that load a lot of content at once into a static page.

This is undoubtedly true for high-end devices as well. When you use something in the way it was designed to be used, it’s going to perform at its peak — and the web was designed, from its inception, to load a lot of static content up front.

It’s fascinating to see from Dan’s research how the output of modern blogging platforms (such as Medium or Substack) is not really competitive in terms of pure speed and performance with the “old” blogging / bulletin board platforms.

Overriding Defaults is the Fastest Path to Jank

To paraphrase Johan, the fastest path to janky websites is overriding browser defaults. Dan illustrates this perfectly in his piece, which I quote at length:

Sites that use modern techniques like partially loading the page and then dynamically loading the rest of it, such as Discourse, Reddit, and Substack, tend to be less usable… Although, in principle, you could build such a site in a simple way that works well with cheap devices but, in practice sites that use dynamic loading tend to be complex enough that the sites are extremely janky on low-end devices. It's generally difficult or impossible to scroll a predictable distance, which means that users will sometimes accidentally trigger more loading by scrolling too far, causing the page to lock up. Many pages actually remove the parts of the page you scrolled past as you scroll; all such pages are essentially unusable. Other basic web features, like page search, also generally stop working. Pages with this kind of dynamic loading can't rely on the simple and fast ctrl/command+F search and have to build their own search.

The bar to overriding browser defaults should be way higher than it is.

A lot of the optimizations that modern websites do, such as partial loading that causes more loading when you scroll down the page, and the concomitant hijacking of search (because the browser's built in search is useless if the page isn't fully loaded) causes the interaction model that works to stop working and makes pages very painful to interact with.

The foundations of web browser design lay in static document exploration and navigation. The bar to overriding the interaction UI/X for these kinds of experiences should be way higher than it is.

I find it ironic how, in our quest to tick Google’s performance checkboxes like “first contentful paint” and thereby provide better user experiences, we completely break other fundamental aspects of the user experience, like basic scrolling and in-document search.

A Parting Thought on a Core Tenet of the Web: Universal Accessibility

I want to leave you with this quote from Dan’s article. It challenges us all to question whether the new stuff we’re making now is as universally accessible as what we’ve had (and taken for granted) up to this point.

The impact of having the fastest growing forum software in the world [Discourse] created by an organization whose then-leader was willing to state that he doesn't really care about users who aren't "influential users who spend money", who don't have access to "infinite CPU speed", is that a lot of forums are now inaccessible to people who don't have enough wealth to buy a device with effectively infinite CPU.

Are we leaving the internet better than we found it?

If [this attitude] were an anomaly, this wouldn't be too much of a problem, but [it’s] verbalizing the implicit assumptions a lot of programmers have, which is why we see that so many modern websites are unusable if you buy the income-adjusted equivalent of a new, current generation, iPhone in a low-income country.

I need to ponder my own part in this more. Great food for thought.


You Are What You Read, Even If You Don’t Always Remember It

Here’s Dave Rupert (from my notes):

the goal of a book isn’t to get to the last page, it’s to expand your thinking.

I have to constantly remind myself of this, especially in an environment that prioritizes optimizing and maximizing personal productivity, where it seems that if you can’t measure (let alone remember) the impact of a book on your life, then it wasn’t worth reading.

I don’t believe that, but I never quite had the words for expressing why I don’t believe that. Dave’s articulation hit pretty close.

Then a couple days later my wife sent me this quote from Ralph Waldo Emerson:

I cannot remember the books I've read any more than the meals I have eaten; even so, they have made me.

YES!

Damn, great writers are sO gOOd wITh wORdz, amirite?

Emerson articulates with acute brevity something I couldn’t suss out in my own thoughts, let alone put into words. It makes me jealous.

Anyhow, I wanted to write this down to reinforce remembering it.

And in a similar vein for the online world: I cannot remember the blog posts I’ve read any more than the meals I’ve eaten; even so, they’ve made me.

It’s a good reminder to be mindful of my content diet — you are what you ~~eat~~ read, even if you don’t always remember it.

Update 2024-04-12

@halas@mastodon.social shared this story in response, which I really liked:

At the university I had a professor who had a class with us in the first year and then in the second. At the beginning of the second year’s classes he asked us something from the material of previous year. When met with silence he nodded thoughtfully and said: “Education is something you have even if you don't remember anything”

I love stories that stick with people like that, e.g. “something a teacher told me once...”

Some impact is immeasurable.


Implementing Netlify’s Image CDN

tl;dr I implemented Netlify’s new image transformation service on my icon gallery sites and saw a pretty drastic decrease in overall bandwidth. Here are the numbers:

Page         Requests   Old     New     Difference
Home         60         1.3MB   293kB   ▼ 78% (1.01MB)
Colors       84         1.4MB   371kB   ▼ 74% (1.04MB)
Designers    131        5.6MB   914kB   ▼ 84% (4.71MB)
Developers   140        2.5MB   905kB   ▼ 65% (1.62MB)
Categories   140        2.2MB   599kB   ▼ 73% (1.62MB)
Years        98         4.7MB   580kB   ▼ 88% (4.13MB)
Apps         84         5.2MB   687kB   ▼ 87% (4.53MB)

For more details on the whole affair, read on.

A Quick History of Me, Netlify, and Images

This post has been a long time coming. Here’s a historical recap:

Phew.

Ok, so now let’s get into the details of implementing Netlify’s image CDN.

How It Works

The gist of the feature is simple: any image you want transformed, just point it at a Netlify-specific URL and their image service will take care of the rest.

For example, instead of doing this:

<img src="/assets/images/my-image.png">

Do this:

<img src="/.netlify/images?url=/assets/images/my-image.png">

And Netlify’s image service will take over. It looks at the headers of the browser making the request and will serve a better, modern format if supported. Additionally, you can supply a bunch of parameters to exercise even greater control over how the image gets transformed (such as size, format, and quality).
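For example, a request for a 256-pixel-wide AVIF at 75% quality would look something like this (the w, fm, and q params come from Netlify’s image CDN docs; these exact values are just for illustration):

<img src="/.netlify/images?url=/assets/images/my-image.png&w=256&fm=avif&q=75">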

How I Use It

Given my unique setup for delivering images, I spent a bit of time thinking about how I wanted to implement this feature.

Eventually I settled on an implementation I’m really happy with. I use Netlify’s image CDN in combination with their redirects to serve the images. Why do I love this? Because if something breaks, my images continue to work. It’s kind of like a progressive enhancement use of the feature.

Previously, I had multiple sizes for each of my icons, so paths to the images looked like this:

<img src="/ios/512/my-icon.png">
<img src="/ios/256/my-icon.png">
<img src="/ios/128/my-icon.png">

Using Netlify’s redirect rules, I kept the same URLs but added a single query param:

<img src="/ios/512/my-icon.png?resize=true">
<img src="/ios/256/my-icon.png?resize=true">
<img src="/ios/128/my-icon.png?resize=true">

Now, instead of serving the original PNG, Netlify looks at the size in the URL path, resizes the image, and converts it to a modern format for supported browsers.
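I won’t walk through my exact rules, but a simplified sketch of one such rule in netlify.toml looks something like this (paths and params are illustrative):

# Sketch: match /ios/512/*?resize=true and proxy it through the image
# CDN, using the size from the URL path as the target width. One rule
# like this per size (512, 256, 128).
[[redirects]]
  from = "/ios/512/*"
  query = { resize = "true" }
  to = "/.netlify/images?url=/ios/512/:splat&w=512"
  status = 200

And because this is a redirect layered on top of the original assets, if I ever delete the rule (or the image service hiccups), the same URLs fall through to the original PNGs, served as before.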

There’s more going on here as to why I chose this particular setup, but explaining it all would require a whole different blog post. Suffice it to say: I’m really happy with how this new image CDN feature composes with other features on Netlify (like the redirects engine) because it gives me tons of flexibility to implement this solution in a way that best suits the peculiarities of my project.

How It Turned Out

To test out how much bandwidth this feature would save me, I created a PR that implemented my changes. It was basically two lines of code.

From there, Netlify created a preview deploy where I could test the changes. I put the new preview deploy up side-by-side against what I had in production. The differences were pretty drastic.

For example, the site’s home page has 60 images on it, each displayed at 256px if you’re on a retina screen. It resulted in a 78% drop in bandwidth.

Additionally, the index pages for icon metadata (such as the designers page) can have up to 140 images on them. On a retina screen, 60 of those are 256px and 80 are 128px. They also saw a huge reduction in overall bandwidth.

A side-by-side screenshot of the designers index page for iOS Icon Gallery. On the left is the “old” page and on the right is the “new” page. Both websites look the same, but both also have the developer tools open and show a drastic drop in overall resources loaded.

Here’s the raw data showing the difference in overall resources loaded across different pages of the old and new sites (the old serving the original PNGs, the new serving AVIFs).

Page         Requests   Old     New     Difference
Home         60         1.3MB   293kB   ▼ 78% (1.01MB)
Colors       84         1.4MB   371kB   ▼ 74% (1.04MB)
Designers    131        5.6MB   914kB   ▼ 84% (4.71MB)
Developers   140        2.5MB   905kB   ▼ 65% (1.62MB)
Categories   140        2.2MB   599kB   ▼ 73% (1.62MB)
Years        98         4.7MB   580kB   ▼ 88% (4.13MB)
Apps         84         5.2MB   687kB   ▼ 87% (4.53MB)

Out of curiosity, I wanted to see what icon in my collection had the largest file size (at its biggest resolution). It was a ridiculous 5.3MB PNG.

Screenshot of macos finder showing a list of PNG files sorted by size, the largest one being 5.3MB.

Really I should’ve spent time optimizing these images I had stored. But now with Netlify’s image service I don’t have to worry about that. In this case, I saw the image I was serving for that individual icon’s URL go from 5.3MB to 161kB. A YUGE savings (and no discernible image quality loss — AVIF is really nice).

When something is “on fire” in tech, that’s usually a bad thing — e.g. “prod is on fire” means “all hands on deck, there’s a problem in production” — but when I say Netlify’s new image CDN is on fire, I mean it in the positive, NBA Jam kind of way.


Expose Platform APIs Over Wrapping Them

From Kent C. Dodds’ article about why he won’t be using Next.js:

One of the primary differences between enzyme and Testing Library is that while enzyme gave you a wrapper with a bunch of (overly) helpful (dangerous) utilities for interacting with rendered elements, Testing Library gave you the elements themselves. To boil that down to a principle, I would say that instead of wrapping the platform APIs, Testing Library exposed the platform APIs.

I’ve been recently working in a Next.js app and a lot of Kent’s critiques have resonated with my own experience, particularly this insight about how some APIs wrap platform ones rather than exposing them.

For example, one thing I struggled with as a n00b to Next is putting metadata in an HTML document. If you want a <meta> tag in your HTML, Next has a bespoke (typed) API dedicated to it.

I understand why that is the case, given how Next works as an app/routing framework which dynamically updates document metadata as you move from page to page. Lots of front-end frameworks have similar APIs.

However, I prefer writing code as close as possible to how it will be run, which means staying as close as possible to platform APIs.

Why? For one, standardized APIs make it easy to shift from one tool to another while remaining productive. If I switch from tool A to tool B, it’d be a pain to relearn that <div> is written as <divv>.

Additionally, you don’t solely write code. You also run it and debug it. When I open my webpage and there’s a 1:1 correspondence between the <meta> tags I see in the devtools and the <meta> tags I see in my code, I can move quickly in debugging issues and trusting in the correctness of my code.

In other words, the closer the code that’s written is to the code that’s run, the faster I can move with trust and confidence. However, when I require documentation as an intermediary between what I see in the devtools and what I see in my code, I move slower and with less trust that I’ve both understood and implemented correctly what is documented.

With Next, what I write compiles to HTML which is what the browser runs. With plain HTML, what I write is what the browser runs. It’s weird to say writing plain HTML is “closer to the metal” but here we are ha!

That said, again, I realize why these kinds of APIs exist in client-side app/routing frameworks. But with Next in particular, I’ve encountered a lot of friction taking my base understanding of HTML APIs and translating them to Next’s APIs. Allow me a specific example.

An Example: The Metadata API

The basic premise of Next’s metadata API starts with the idea that, in order to get some <meta> tags, you use the key/value pairs of a JS object to generate the name and content values of a <meta> tag. For example:

export const metadata = {
  generator: 'Next.js',
  applicationName: 'Next.js',
  referrer: 'origin-when-cross-origin',
}

Will result in:

<meta name="generator" content="Next.js" />
<meta name="application-name" content="Next.js" />
<meta name="referrer" content="origin-when-cross-origin" />

Simple enough, right? camelCased keywords in JavaScript translate to their hyphenated counterparts, that’s all pretty standard web API stuff.

But what about when you have a <meta> tag that doesn’t conform to this simple one-key-to-one-value mapping? For example, let’s say you want the keywords meta tag which can have multiple values (a comma-delimited list of words):

<meta name="keywords" content="Next.js,React,JavaScript" />

What’s the API for that? Well, given the key/value JS object pattern of the previous examples, you might think something like this:

export const metadata = {
  keywords: 'Next.js,React,JavaScript'
}

Minus a few special cases, that’s how Remix does it. But not in Next. According to the docs, it’s this:

export const metadata = {
  keywords: ['Next.js', 'React', 'JavaScript'],
}

“Ah ok, so it’s not just key/value pairing where value is a string. It can be a more complex data type. I guess that makes sense.” I say to myself.

So what about other meta tags, like the ones whose content is a list of key/value pairs? For example, this tag:

<meta
  name="format-detection"
  content="telephone=no, address=no, email=no"
/>

How would you do that with a JS object? Initially you might think:

export const metadata = {
  formatDetection: 'telephone=no, address=no, email=no'
}

But after what we saw with keywords, you might think:

export const metadata = {
  formatDetection: ['telephone=no', 'address=no', 'email=no']
}

But this one is yet another data type. In this case, content is now expressed as a nested object with more key/value pairs:

export const metadata = {
  formatDetection: {
    email: false,
    address: false,
    telephone: false,
  },
}

To round this out, let’s look at one more example under the “Basic fields” section of the docs.

export const metadata = {
  authors: [
    { name: 'Seb' },
    { name: 'Josh', url: 'https://nextjs.org' }
  ],
}

This configuration will produce <meta> tags and a <link> tag.

<meta name="author" content="Seb" />
<meta name="author" content="Josh" />
<link rel="author" href="https://nextjs.org" />

“Ah ok, so the metadata keyword export isn’t solely for creating <meta> tags. It’ll also produce <link> tags. Got it.” I tell myself.

So, from looking solely at the “Basics” part of the docs, I’ve come to realize that to produce <meta> tags in my HTML I should use the metadata keyword export, which is an object of key/value pairs whose values can be a string, an array, an object, or even an array of objects! All of which will produce <meta> tags or <link> tags.

Ok, I think I got it.

Not So Fast: A Detour to Viewport

While you might think of the viewport meta tags as part of the metadata API, they’re not. Or rather, they were but got deprecated in Next 14.

Deprecated: The viewport option in metadata is deprecated as of Next.js 14. Please use the viewport configuration instead.

[insert joke here about how the <meta> tag in HTML is never gonna give you up, never gonna let you down, never gonna deprecate and desert you]

Ok so viewport has its own configuration API. How does it work?

Let's say I want a viewport tag:

<meta
  name="viewport"
  content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"
/>

What’s the code for that? Given our knowledge of the metadata API, maybe we can guess it.

Since it gets its own named export, viewport, I assume the content part of the tag will represent the key/value pairs of the object?

And yes, that’s about right. Here's the code to get that tag:

export const viewport = {
  width: 'device-width',
  initialScale: 1,
  maximumScale: 1,
  userScalable: false,
}

Ok, I guess that kinda makes sense. false = no and all, but I see what’s going on.

But the viewport export also handles other tags, not just <meta name="viewport">. Theme color is also under there. You want this tag?

<meta name="theme-color" content="black" />

You might’ve thought it’s this:

export const metadata = { themeColor: 'black' }

But according to the docs it's part of the viewport named export:

export const viewport = { themeColor: 'black' }

And what if you want multiple theme color meta tags?

<meta
  name="theme-color"
  media="(prefers-color-scheme: light)"
  content="cyan"
/>
<meta
  name="theme-color"
  media="(prefers-color-scheme: dark)"
  content="black"
/>

Well that’s the viewport named export but instead of a string you give it an array of objects:

export const viewport = {
  themeColor: [
    { media: '(prefers-color-scheme: light)', color: 'cyan' },
    { media: '(prefers-color-scheme: dark)', color: 'black' },
  ],
}

Ok, I guess this all kind of makes sense — in its own self-consistent way, but not necessarily in the context of the broader web platform APIs…

Back to Our Regularly Scheduled Programming: Next’s Metadata API

Ok so, given everything covered above, let’s play a little game. I give you some HTML and you see if you can guess its corresponding API in Next. Ready?

<link
  rel="canonical"
  href="https://acme.com"
/>
<link
  rel="alternate"
  hreflang="en-US"
  href="https://acme.com/en-US"
/>
<link
  rel="alternate"
  hreflang="de-DE"
  href="https://acme.com/de-DE"
/>
<meta
  property="og:image"
  content="https://acme.com/og-image.png"
/>

Go ahead, I’ll give you a second. See if you can guess it...

Have you tried? I’ll keep waiting...

Got it?

Ok, here’s the answer:

export const metadata = {
  metadataBase: new URL('https://acme.com'),
  alternates: {
    canonical: '/',
    languages: {
      'en-US': '/en-US',
      'de-DE': '/de-DE',
    },
  },
  openGraph: {
    images: '/og-image.png',
  },
}

That’s it. That’s what will produce the HTML snippet I gave you. Apparently there’s a whole “convenience” API for prefixing metadata fields with fully qualified URLs.

You’ve heard of CSS-in-JS? Well this is HTML-in-JS. If you wish every HTML API was just a (typed) JavaScript API, this would be right up your alley. No more remembering how to do something in HTML. There’s a JS API for that.

And again, I get it. Given the goals of Next as a framework, I understand why this exists. But there’s definitely a learning curve, one that feels divergent from the HTML pillar of the web.

Contrast that, for one moment, with something like this which (if you know the HTML APIs) requires no referencing docs:

const baseUrl = 'https://acme.com';

export const head = `
  <link
    rel="canonical"
    href="${baseUrl}"
  />
  <link
    rel="alternate"
    hreflang="en-US"
    href="${baseUrl}/en-US"
  />
  <link
    rel="alternate"
    hreflang="de-DE"
    href="${baseUrl}/de-DE"
  />
  <meta
    property="og:image"
    content="${baseUrl}/og-image.png"
  />
`;

I know, I know. There’s tradeoffs here. But I think what I'm trying to get at is what I expressed earlier: there’s a clear, immediate correspondence in this case between the code I write and what the browser runs. Plus this knowledge is transferable. This is why, to Kent’s point, I prefer exposed platform APIs over wrapped ones.

Conclusion

I only briefly covered parts of Next’s metadata API. If you look closer at the docs, you’ll see APIs for generating <meta> tags related to open graph, robots, icons, theme color, manifest, twitter, viewport, verification, apple web app, alternates, app links, archives, assets, bookmarks, category, and more.

Plus there’s all the stuff that you can use in “vanilla” HTML but that’s unsupported in the metadata API in Next.

This whole post might seem like an attempt to crap on Next. It’s not. As Kent states in his original article:

Your tool choice matters much less than your skill at using the tool to accomplish your desired outcome

I agree.

But I am trying to work through articulating why I prefer tools that expose underlying platform APIs over wrapping them in their own bespoke permutations.

It reminds me of this note I took from an article by the folks building HTMX:

Whenever a problem can be solved by native HTML elements, the longevity of the code improves tremendously as a result. This is a much less alienating way to learn web development, because the bulk of your knowledge will remain relevant as long as HTML does.

Well said.


The Case for Design Engineers, Pt. III

Previously:

I wrote about the parallels between making films and making websites, which was based on an interview with Christopher Nolan.

During part of the interview, Nolan discusses how he enjoys being a “Writer/Director” because things that aren’t in the original screenplay are uncovered through the process of making the film and he sees the incredible value in adapting to and incorporating these new meanings which reveal themselves.

In other words, making a film (like making a website) is an iterative, evolutionary process. Many important motifs, themes, and meanings cannot be in the original draft because the people making it have not yet evolved their understanding to discover them. Only through the process of making these things can you uncover a new correspondence of meaning deeper and more resonant than anything in the original draft — which makes sense, given that the drafts themselves are not even developed in the medium of the final form, e.g. movies start as screenplays and websites as hand drawings or static mocks, both very different mediums than their final forms.

Nolan embraces this inherent attribute of the creation process by calling himself a “Writer/Director” and indulging in the cross-disciplinary work of making a film. In fact, at one point in the interview he noted how he extemporaneously wrote a scene while filming:

I remember sitting on LaSalle Street in Chicago filming The Dark Knight. We flipped the [truck and then] I sat down on my laptop, and I wrote a scene and handed it to Gary Oldman. You’re often creating production revisions under different circumstances than they would normally track if you were in a writers’ room, for example, or if you weren’t on set.

If you live in a world where you think people can only be “Writers” or “Directors” but not both, this would be such an unusual and unnatural state of affairs. “Why is he writing on set? He should be directing! We’re in the process of filming the movie, we should be done with all the writing by now!”

But the creative process is not an assembly line. Complications and in-process revisions are something to be embraced, not feared, because they are an inherent part of making.

Nolan notes how, when making a film, you can have an idea in one part of the process and its medium (like writing the screenplay on paper or filming the movie on set) but if that idea doesn’t work when you get to a downstream process, such as editing sequences of images or mixing sound, then you have to be able to adapt or else you’re completely stuck.

Given that, you now understand the value of having the ability to adapt, revise, and extemporaneously improve the thing you’re creating.

Conversely, you can see the incredible risk of narrowly-defined roles in the creation process. If what was planned on paper doesn’t work in reality, you’re stuck. Or if a new, unforeseen meaning arises, you can’t take advantage of it because you’re locked into an assembly-line process which cannot be halted or improvised.

Over the course of making anything, new understandings will always arise. And if you’re unable to shift, evolve, and design through the process of production, you will lose out on these new understandings discovered through the process of making — and your finished product will be the poorer because of it.

