Jim Nielsen’s Blog

You found my HTML feed — I also have an XML feed and a JSON feed.

I ❤️ HTML

Subscribe to my blog by copy-pasting this URL into your RSS reader.

(Learn more about RSS and subscribing to content on the web at aboutfeeds.)

Recent posts

Notes From “You Are Not A Gadget”


Jaron Lanier’s book You Are Not a Gadget was written in 2010, but its preface is a prescient banger for 2024, the year of our AI overlord:

It's early in the 21st century, and that means that these words will mostly be read by nonpersons...[they] will be minced...within industrial cloud computing facilities...They will be scanned, rehashed, and misrepresented...Ultimately these words will contribute to the fortunes of those few who have been able to position themselves as lords of the computing clouds.

Today he might call the book, “You Are Not an Input to Artificial Intelligence”.

Lanier concludes the preface to his book by saying the words in it are intended for people, not computers.

Same for my blog! The words in it are meant for people, not computers. And I would hope any computerized representation of these words is solely for facilitating humans finding them and reading them in context.

Anyhow, here are a few of my notes from the book.

So Long to The Individual Point of View

Authorship—the very idea of the individual point of view—is not a priority of the new technology...Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites present anonymized fragments of creativity…obscuring the true sources.

Again, this was 2010, way before “AI”.

Who cares for sources anymore? The perspective of the individual is obsolete. Everyone is flattened into a global mush. A word smoothie. We care more for the abstractions we can create on top of individual expression than for the individuals and their expressions.

The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only people were ever meaningful.

While Lanier was talking about “the hive mind” of social networks as we understood it then, AI has a similar problem: we begin to care more about the training data than the individual humans whose outputs constitute the training data, even though the training data by itself is meaningless. Only people are meaningful.[1] As Lanier says in the book:

The bits don't mean anything without a cultured person to interpret them.

Information is alienated experience.

Emphasizing Artificial or Natural Intelligence

Emphasizing the crowd means deemphasizing individual humans.

I like that.

Here’s a corollary: emphasizing artificial intelligence means de-emphasizing natural intelligence.

Therein lies the tradeoff.

In Web 2.0, we emphasized the crowd over the individual and people behaved like a crowd instead of individuals, like a mob rather than a person. The design encouraged, even solicited, that kind of behavior.

Now with artificial intelligence enshrined, is it possible we begin to act like it? Hallucinating reality and making baseless claims in complete confidence will be normal, as that’s what the robots we interact with all day do.

What is communicated between people eventually becomes their truth. Relationships take on the troubles of software engineering.

What Even is “Intelligence”?

Before MIDI, a musical note was a bottomless idea that transcended absolute definition

But the digitization of music required removing options and possibilities based on what could most easily be represented and processed by the computer. We remove “the unfathomable penumbra of meaning that distinguishes” a musical note in the flesh to make a musical note in the computer.

Why? Because computers require abstractions. But abstractions are just that: models that roughly fit the real thing. And too often we let the abstractions become our reality:

Each layer of digital abstraction, no matter how well it is crafted, contributes some degree of error and obfuscation. No abstraction corresponds to reality perfectly. A lot of such layers become a system unto themselves, one that functions apart from the reality that is obscured far below.

Lanier argues it happened with MIDI and it happened with social networks, where people became rows in a database and began living up to that abstraction.

people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer...We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate.

Perhaps similarly, intelligence (dare I say consciousness) was a bottomless idea that transcended definition. But we soon narrowed it down to fit our abstractions in the computer.

We are happy to enshrine into engineering designs mere hypotheses—and vague ones at that—about the hardest and most profound questions faced by science, as if we already possess perfect knowledge.

So we enshrine the idea of intelligence into our computing paradigm when we don’t even know what it means for ourselves. Are we making computers smarter or ourselves dumber?

You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart.

Prescient.


Footnotes
  1. This reminds me of Paul Ford’s questioning why we’re so anxious to automate the hell out of everything and remove humans from the process, when the whole point of human existence is to interact with other humans.

Hedge Words Affirm Creative, Imaginative Thinking


Mandy’s note piqued my interest so much, I started reading Being Wrong by Kathryn Schulz. So far, I love it! (I hope to write more about it once I’ve finished, but I’m afraid I won’t because the whole book is underlined in red pencil and I wouldn’t know where to start.)

As someone who has been told they self-sabotage by using hedge words, I like this excerpt from Schulz that Mandy quotes in her post:

disarming, self-deprecating comments, (“this could be wrong, but…” “maybe I’m off the mark here…”)…are often criticized [as] overly timid and self-sabotaging. But I’m not sure that’s the whole story. Awareness of one’s own qualms, attention to contradiction, acceptance of the possibility of error: these strike me as signs of sophisticated thinking, far preferable in many contexts to the confident bulldozer of unmodified assertions.

It’s kind of strange when you think about it.

Why do I feel this need to qualify what I’m about to say with a phrase like, “Maybe I’m wrong here, but…” As if being wrong is, in the words of Kathryn Schulz, a rare, bizarre, and “inexplicable aberration in the natural state of things”.

And yet, as much as we all say “to err is human”, we don’t always act like we believe it. As Schulz says in the book:

A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything.

Which is why I appreciate a good hedge word now and then.

In fact, I don’t think it’s hedging. It’s an open affirmation, as Mandy notes, of one’s desire to learn and evolve (as opposed to a desire to affirm and validate one’s own beliefs).

I would love to see less certainty and more openness. Less “it is this” and more “perhaps it could be this, or that, or maybe even both!”

Give me somebody who is willing to say “Maybe I’m wrong”. Somebody who can creatively imagine new possibilities, rather than be stuck with zero imagination and say, “I know all there is, and there’s no way this can be.”



The Night Time Sky


This post is a secret to everyone! Read more about RSS Club.

When I was a kid, my Dad used to take us outside to look for what he called “UFOs”. It’d take a moment, but after enough searching we’d eventually spot one.

One night, all of us kids were outside with our uncle. We saw a star-like light moving in a slow, linear fashion across the night sky. One of us said, “Look, a UFO!” My uncle, a bit confused, said “That’s not a UFO, that’s a satellite.”

Dad, you sneaky customer.

Fast forward to 2024. I was recently in the mountains in Colorado where the night sky was crisp and clear. I squinted and started looking for “UFOs”.

They were everywhere!

It seemed as though, in any patch of sky I looked at, I could spot four to six satellites whose paths were criss-crossing at any given moment. It made me think of Coruscant from Star Wars.

Animated gif showing the planet Coruscant from 'Star Wars' with lots of spaceship traffic traversing the sky.

It also reminded me of those times as a kid, scouring the night sky for “UFOs”. Spotting a satellite wasn’t easy. We had to look and look for a good chunk of time before anyone would get a lock on one traversing the sky.

But that night in Colorado I didn’t have to work at all. Point my eyes at any spot in the sky and I’d see not just one but many.

Knowing vaguely about the phenomenon of night-sky and space pollution, I went inside and looked up how many satellites are up there nowadays vs. when I was a kid.

I found this site showing trends in satellite launch and use by Akhil Rao, which links to data from The Union of Concerned Scientists. Turns out we’ve ~10x’d the number of satellites in the sky over the last ~30 years!

That’s a long way of saying: I’ve heard about this phenomenon of sky pollution and space junk and the like, but it became much more real to me that night in Colorado.



Novels as Prototypes of the Future


Via Robin Rendle’s blog, I found this quote from Jack Cheng (emphasis mine):

A novel…is a prototype of the future. And if the ideas that the tech industry is pursuing feel stagnant…maybe it points to a shortage of compelling fictions for what the world could be.

I love that phrasing: novels as prototypes of the future.

Last summer I read Richard Rhodes’ book The Making of the Atomic Bomb (great book btw) and I remember reading about how influential some novels were on the physicists who worked on the science which led to the splitting of the atom, the idea of a chain reaction, and the development of a bomb.

For example, H.G. Wells read books on atomic physics by scientists like William Ramsay, Ernest Rutherford, and Frederick Soddy, which cued him in to the idea of harnessing the power of the atom. In 1914, thirty-one years before the end of WWII, Wells coined the term “atomic bomb” in his book The World Set Free, which physicist Leó Szilárd read in 1932, the same year the neutron was discovered. Some believe Wells’ book inspired Szilárd’s idea of tapping into the power of the atom via neutron bombardment to trigger a chain reaction.

Perhaps it did, perhaps it didn’t. Or perhaps it was a little bit of fact, a little bit of fiction, and a little bit of contemporary news that all led to Szilárd’s inspiration.

In this way, it’s fascinating to think of someone without extensive, specialized scientific training being able to influence scientific discovery nonetheless — all through the power of imagination. Perhaps this is, in part, what Einstein meant about the power of imagination:

Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution. It is, strictly speaking, a real factor in scientific research.

For me personally, maybe my own work could benefit from more novels. Maybe a little less “latest APIs in ES2024” and a little more fiction. A little less facts, a little more fancy.



“Just” One Line


From Jeremy Keith’s piece “Responsibility”:

Dropping in one line of JavaScript seems like a victimless crime. It’s just one small script, right? But JavaScript can import more JavaScript.

“It’s just one line of code” is a pitch you hear all the time. It might also be the biggest lie we tell ourselves — and one another.

“Add styles with just one line”:

<link href="styles.css" rel="stylesheet">

“Add our widget, it’s just one line”:

<script src="script.js"></script>

“Install our framework in just one line”:

npm i framework

But “just one line” is a facade. It comes with hundreds, thousands, even millions of lines of code. You don’t know how many and it’s not usually disclosed.

There’s a big difference between the interface to a thing being one line of code, and the cost of a thing being one line of code.

A more acute rendering of this sales pitch is probably: “It’s just one line of code to add many more lines of code.”

The connotation of the phrase is ease, e.g. “This big complicated problem can be solved with just one line of code on your part.”

But, intentional or not, another subtle connotation sneaks in with that phrase, one relating to size, e.g. “It’s not big, it’s just one line.”

But “one line” does not necessarily equate to small size. It can be big. Very big. Very, very big. One line of code that creates, imports, or installs many more lines of code is “just one line” to interface with, but many lines in cost (conceptual overhead, project size and complexity, etc.).
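
To make that concrete, here’s a hypothetical sketch of that interface-versus-cost gap (the package name and its mount function are made up for illustration):

import { mount } from "framework"; // "just one line", as promised

// But the bundler follows this single statement to every module that
// "framework" imports, and every module those modules import. The line
// above is the interface; the graph behind it is the cost you ship.
mount(document.body);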

The next time you hear “it’s just one line” pause for a moment. Just one line for who? You the developer to write? Future you (or your teammates) to debug and maintain? Or the end-user to download, parse, and execute?



Overcomplicating Things Is So Easy


Maciej Cegłowski writing about “The Lunacy of Artemis”:

You don’t have to be a rocket scientist to wonder what’s going on here. If we can put a man on the moon, then why can't we just go do it again? The moon hasn’t changed since the 1960’s, while every technology we used to get there has seen staggering advances. It took NASA eight years to go from nothing to a moon landing at the dawn of the Space Age. But today, twenty years and $93 billion after the space agency announced our return to the moon, the goal seems as far out of reach as ever.

Sounds like vaporware: lots of money and time invested, but little progress made towards your goal.

Advocates for Artemis insist that the program is more than Apollo 2.0. But as we’ll see, Artemis can't even measure up to Apollo 1.0. It costs more, does less, flies less frequently, and exposes crews to risks that the steely-eyed missile men of the Apollo era found unacceptable.

Sounds typical of software going from version 1.0 to 2.0 🥁

But seriously, there are a lot of parallels in this piece to making software. Like how 2.0 is touted as the “new and improved” and yet it often can’t reliably do what 1.0 did:

even that upgrade won’t give SLS the power of the Saturn V. For whatever reason, NASA designed its first heavy launcher in forty years to be unable to fly the simple, proven architecture of the Apollo missions.

Again, the parallels to software are uncanny. Not just the technical problems, but the people ones too:

But to search for technical grounds is to misunderstand the purpose of Gateway. The station is not being built to shelter astronauts in the harsh environment of space, but to protect Artemis in the harsh environment of Congress.

Keeping stakeholders happy, fighting for funding, sound familiar?

This all goes to show just how hard keeping things simple (and boring) really is:

It’s instructive to compare the HLS approach to the design philosophy on Apollo. Engineers on that program were motivated by terror; no one wanted to make the mistake that would leave astronauts stranded on the moon. The weapon they used to knock down risk was simplicity. The Lunar Module was a small metal box with a wide stance, built low enough so that the astronauts only needed to climb down a short ladder. The bottom half of the LM was a descent stage that completely covered the ascent rocket (a design that showed its value on Apollo 15, when one of the descent engines got smushed by a rock). And that ascent rocket, the most important piece of hardware in the lander, was a caveman design intentionally made so primitive that it would struggle to find ways to fail.

What technologies or tools make you think of that phrase — “a design so primitive, it struggles to find ways to fail”? (HTML & CSS, might I suggest?) I would love it if more of my digital tools and services employed this ethos.

But modern tools (and space hardware, apparently) seemingly go in the opposite direction. They go out of their way to create problems that can be solved with technology.

On Artemis, it's the other way around: the more hazardous the mission phase, the more complex the hardware.

As Devine noted about computing, perhaps we haven’t even scratched the surface of what can be done with little.

Apollo showed us that.



Thinking Big and Small


It’s so easy to start with the question, “What should I do?” and end up in a discussion about other people and what they’re doing. Here’s Paul Ford:

I’ll give you a good example. Do you go out and raise venture capital? Well, it would be nice to have more money. But then everybody tells us that VC is ridiculous. And you end up in this swirl of conversation about this thing that ends up being about the industry as a whole as opposed to what you need to accomplish.

We start with questions about ourselves but so often end up with discussions framed around other people, what they’re doing, and whether it’s “right” or “wrong”. We end up looking outward instead of inward.

Paul continues:

Over and over, we have these narratives and we have to push through them in order to figure out what success would be for us. And I see this a lot of times where people are very judgmental of relatively small efforts because they don’t behave or act like giant platform companies…And so you end up internalizing, like, venture-capital thinking and giant-platform thinking and so on, and that keeps you away from focusing on your own near-term or even long-term goals.

We internalize what success looks like in an impersonal, generalized context — for artists, engineers, startups, or organizations — and we forget the personal, individualized answer we started searching for in the first place.

For example, think of the idea of “impact”. As Paul says, we can be judgmental of small efforts because they don’t have the impact of giant platform companies — “If all I can do is recycle this one water bottle, that’s not enough. Clearing the ocean of all plastics is the only acceptable measure of success.” That’s a case of giant-platform thinking that poisons your individual thinking and action, e.g. “any small effort on my part accomplishes nothing and is therefore pointless”.

When you start to think of everything in terms of scale and impact, what can any one individual meaningfully do?

But that’s thinking about and framing your individual goals and definition of success in the language and context of large groups of people like corporations or even nation states.

“Did I pick up that water bottle from my neighbor’s trash that tipped over and put it in the recycling bin?” Yes? Ok, that can be success. Who cares if it wouldn’t be for a giant organization.

“Did I help that one individual?” Even better.



RSC, Localfirst, and Coordination Between Multiple Computers


Dan Abramov gave a talk at ReactConf called “React for two computers” (starts at ~5:14:00) which gives the conceptual background around how the team came up with the idea for React Server Components (RSC)[1].

I found the talk intriguing. It’s like watching someone take something apart and put it back together, explaining along the way how and why each piece works the way it does. That helps you see each piece in the larger context of its function and design, and lets you see something previously familiar with a fresh, new perspective. I love talks and speakers who can do this.

But I digress. Let me get back to my bulleted notes from the talk:

  • Server/client is really just a single program that spans two computers.
  • fetch is the communication channel between them.
  • Server is a computer with a runtime of your choosing (php, rust, node, etc.). Client is a computer whose runtime you have no choice in (browser).
  • The server emits a program that can run on the client (HTML is a programming language).

So in essence, web apps are a single program distributed across time, space, and computers whose runtimes vary (server/client). fetch is used to pass messages in this program across the network, coordinating the proper manipulation of data based on specified rules of logic.

Ok, nothing new there. We all know this, right?

What stood out to me, when I view the current state of webdev in this light, is how much complexity is tied up in the problem of coordination between computers.

Dan talks about this idea of looking at your problem regardless of framework and asking yourself: What am I really solving here? What boundaries can be removed that are inessential to the problem at hand?

For example, so many apps deal with the lifecycle of data and coordinating the rules for its manipulation between two computers. Loading -> Success -> Failure -> Revalidation. Over and over and over in a codebase.
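
Sketched in TypeScript (the type and function names here are mine, not from the talk), that recurring lifecycle looks something like this:

type Remote<T> =
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "failure"; error: Error }
  | { status: "revalidating"; data: T };

// Every piece of server data drags this ceremony along with it: kick off
// a fetch, track the in-flight state, branch on success or failure, then
// do it all over again to revalidate.
async function load<T>(url: string): Promise<Remote<T>> {
  try {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return { status: "success", data: (await res.json()) as T };
  } catch (error) {
    return { status: "failure", error: error as Error };
  }
}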

So much application complexity is tied up, not in our problem domain, but in the more generic problem domain of coordinating the execution of a single program across multiple computers.

If we simplified that, we could simplify a lot about all our applications.

Hence the allure of localfirst and sync engines: they disentangle the logic, data, and execution of a program from the problem of coordination between two computers.

What do I mean by this?

In essence, it splits today’s applications into two parts:

  1. The execution of the application, which requires only one computer.
  2. The syncing and sharing of an application’s data, which requires at least two computers.

By creating this split, localfirst eliminates an entire class of coordination problems by getting rid of the second computer.

With localfirst, all logic and data lives on one computer: yours (the client). The second computer (the server) is an optional enhancement to the application that provides data backup as well as sharing functionality. Sync engines tackle the complex problem of coordination across two computers, but that class of problems is no longer essential for the program to work (on a single computer).[2]

When you think about it, this is kind of an elegant way to approach the problem: rather than requiring execution and coordination to work in conjunction across space and time, you split up the execution and coordination into different problems solved by separate solutions. Coordination builds on top of execution, but is not necessarily a required dependency.

Anyhow, these are some thoughts that came to mind during this talk.


Footnotes
  1. If you follow Dan’s writing on his blog overreacted, this talk probably isn’t a surprise. He’s written on this topic before. It’s a great blog, I wish he posted more.
  2. I noted on Mastodon the contrast between localfirst, where all logic+data lives on your computer with an (optional) sync layer for backup+sharing, and traditional SaaS, where logic+data lives on somebody else’s computer and you can access it so long as you have 1) internet, and 2) a paid account.

Futuristic Progressive Enhancement


Imagine someone came to you in a time machine and said, “In the future we will write software that becomes more capable as time passes without any effort on our part.”

Wouldn’t that be amazing? Surely you’d want to know what sorcery makes this possible, right?

Well the future is here. You can do that now. It’s called progressive enhancement.

Here’s Jeremy in his piece “Baseline progressive enhancement”:

Code you’ve already written starts working from one day to the next.

Wait, what? You write code and, without any effort on your part, it becomes more capable from one day to the next?

What an antidote to so much of today’s fatigue.

We’re all tired of the usual cycle: write some code, come back to it in six months, try to make it do more, and find the whole project is broken until you upgrade everything.

Progressive enhancement allows you to do the opposite: write some code, come back to it in six months, and it’s doing more than the day you wrote it!
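
Jeremy’s piece is about CSS features reaching Baseline, but the same idea in JavaScript might look something like this sketch (plain old feature detection, where the enhancement simply switches on once browsers support it):

// Feature-detect, then enhance. The day browsers ship the feature,
// this code starts doing more, with zero effort on your part.
const dialog = document.querySelector("dialog");

if (dialog && typeof dialog.showModal === "function") {
  dialog.showModal(); // supporting browsers get true modal behavior
} else {
  dialog?.setAttribute("open", ""); // others still render the content, non-modally
}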



The Gist That Keeps On Giving


I’m working with git and make a big boo-boo.

Now I’m facing a situation where I’ve deleted a local branch with all my work and there’s no backup on GitHub.

“This is git. There has got to be a version of this thing still on my computer somewhere, right? RIGHT?!”

So I start searching online: “how to recover a deleted branch in git?”

A few results later, I find this gist.

Not one to copy/paste CLI commands straight off the internet (cough rm -rf / cough) I read through the script.

git reflog

Idk what that is, but yes, I should be flogging myself after what I just did.

What else is in here?

git checkout

Yeah that seems fine. What else?

git branch

Ok, that’s not dangerous.

Yeah I think I can give this a shot.

A few commands later and the work I thought was gone forever is restored to my computer. Hallelujah!
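
For posterity, the general shape of the rescue, as best I understand it (your commit hash and branch name will differ), is:

git reflog                       # find the hash of the tip of the "deleted" branch
git checkout <hash-from-reflog>  # jump back to that commit (detached HEAD)
git branch restored-work         # pin it down under a new branch name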

Now, one of the principal rules of the internet is: “Don’t read the comments.” But that’s where I go, because this gist just saved my life.

And apparently not just mine. Other folks are saying the same thing:

  • “you saved my life”
  • “Thank you so much, you saved me”
  • “Still saving lives in 2023”
  • “Still saving lives in 2024! Thank you so much!”

And not just lives. Saving asses too:

  • “This post just saved my ass! Thank you”
  • “You saved my ass as well!!!”
  • “Another ass saved here.”

And time:

  • “Thanks for this, it just saved me one month of work.”
  • “saved me, i was gonna work all weekends.”
  • “Thanks a lot! You saved me a week's worth of work!”
  • “you have saved me a months worth of work”

One commenter even went so far as:

  • “You deserve a Noble Peace Prize”

I love it!

Seeing as it saved my butt, I also commented on the gist.

And because I commented, I’ve since been subscribed to further comments on the gist. And you know what? I kinda like it. I haven’t unsubscribed yet. It’s so fun. Every so often I get a new email notification from someone who commented on the gist, pouring out their gratitude.

Spread the love. As Jeremy says in “Our Web”:

Tell someone that you liked something they put on the web. You’ll feel good. They’ll feel even better.

