Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ♥ HTML

Recent posts

You Can Just Say No to the Data

View

“The data doesn’t lie.”

I imagine that’s what the cigarette companies said.

“The data doesn’t lie. People want this stuff. They’re buying it in droves. We’re merely giving them what they want.”

Which sounds more like an attempt at exoneration than a reason to exist.

Demand can be engineered. “We’re giving them what they want” ignores how desire is shaped and even manufactured (algorithms, dark patterns, growth hacking, etc.).

Appealing to data as the ultimate authority — especially when fueled by engineered desire — isn’t neutrality, it’s an abdication of responsibility.

Satiating human desire is not the highest aspiration.

We can do so much more than merely supply what the data says is in demand.

Stated as a principle:

Values over data.

Data tells you what people consume, not what you should make. Values, ethics, and vision can help you with the “should”.

“What is happening?” and “What should happen?” are two completely different questions and should be dealt with as such.

The more powerful our ability to understand demand, the more important our responsibility to decide whether to respond to it. We can choose not to build something, even though the data suggests we should. We can say no to the data.

Data can tell you what people clicked on, even help you predict what people will click on, but you get to decide what you will profit from.


Reply via: Email · Mastodon · Bluesky

CTA Hierarchy in the Wild

View

The other day I was browsing YouTube — as one does — and I clicked a link in the video description to a book.

I was then subjected to a man-in-the-middle attack, where YouTube inserted themselves between me and the link I had clicked:

Screenshot of a webpage that says “Are you sure you want to leave YouTube?” and there are two buttons. On the left is the secondary, de-emphasized button that says “GO TO SITE” and on the right is the primary, visually emphasized button that says “BACK TO YOUTUBE”.

Hyperlinks are subversive. Big Tech must protect themselves and their interests.

But link hijacking isn’t why I’m writing this post.

What struck me was the ordering and visual emphasis of the “call to action” (CTA) buttons. I almost clicked “Back to YouTube”, which was precisely the action I didn’t want.

I paused and laughed to myself.

Look how the design pattern for primary/secondary user interface controls has inverted over time:

  • Classic software:
    • Primary CTA: what’s best for you
    • Secondary CTA: an alternative for you
  • Modern software:
    • Primary CTA: what’s best for us
    • Secondary CTA: what’s acceptable to us

It seems like everywhere I go, software is increasingly designed against me.


Reply via: Email · Mastodon · Bluesky

New Year, New Website — Same Old Me

View

I redesigned my www website. Why?

  • The end of year / holiday break is a great time to work on such things.
  • I wanted to scratch an itch.
  • Websites are a worry stone. [gestures at current state of the world]
  • Do I really need a reason? Nope.

I read something along the lines of “If you ship something that shows everything you’ve made, it’s dead on arrival.”

Oooof. I feel that. It’s so hard to make a personal website that keeps up with your own personal evolution and change.

But the hell if I’m not gonna try — and go through many existential crises in the process.

I was chasing the idea of making my “home” page essentially a list of feeds, like:

You get the idea.

The thought was: if I condense the variety of the things I do online into a collection of feeds (hard-coded or live from other sites I publish), then I’ll never be out of date!

Plus I love links. I love following them. I wanted my home page to be the start of a journey, not the end. A jumping off point, not a terminal one.

At least that was the idea behind this iteration.

Behind the Scenes

I built the (static) site using Web Origami.

I loved it! Origami is great for dealing with feeds because it makes fetching data from the network and templating it incredibly succinct.

<h2>Latest from my notes blog</h2>
<ul>
  ${Tree.map(
    (https://notes.jim-nielsen.com/feed.json).items.slice(0,3),
    (note) => `<li><a href="${note.url}">${note.title}</a></li>`
  )}
</ul>

In just those few lines of code I:

  • Fetch a JSON feed over the network
  • Grab the 3 most recent entries
  • Turn the data into markup
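For comparison (this is my own sketch, not code from the site, and the helper names are mine), here’s roughly what those same three steps look like in plain JavaScript:

```javascript
// My sketch of the same three steps in plain JavaScript (not the site's code).

// Turn feed items into list markup, keeping only the 3 most recent entries.
function renderLatest(items) {
  return items
    .slice(0, 3)
    .map((note) => `<li><a href="${note.url}">${note.title}</a></li>`)
    .join("\n");
}

// Fetch the JSON feed over the network and render it.
async function latestNotes() {
  const res = await fetch("https://notes.jim-nielsen.com/feed.json");
  const feed = await res.json();
  return `<ul>\n${renderLatest(feed.items)}\n</ul>`;
}
```

Origami collapses all of that into a single expression inside the template, which is the appeal.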

For example, here’s the code showing my latest blog posts:

Screenshot of Web Origami code on the top and its output on the bottom (a list of blog post links).

And here’s the code showing the latest icons in my iOS collection:

Screenshot of Web Origami code on top and its output on the bottom (a grid of icons).

Beautiful and succinct, isn’t it?

Origami is a static site builder, so to keep my site “up to date” I just set Netlify to build my site every 24 hours, which pulls data from a variety of sources, sticks it in a single HTML file, and publishes it as a website.

The “build my site every 24 hours” part isn’t quite as easy as you might think. You can use a scheduled function on Netlify’s platform, but that requires writing code (which also means maintaining and debugging said code). That seems to be Netlify’s official answer to the question: “How do I schedule deploys?”

I went with something simpler — at least simpler to me.

  • Set up a build hook on Netlify (which you have to do for the scheduled function approach anyway).
  • Use Apple’s Shortcuts app to create a shortcut that issues a POST request to my build hook.
  • Use Shortcuts’ “Automation” feature to run that shortcut every day.

So the “cron server” in my case is my iPhone, which works great because it’s basically always connected to the internet. If I go off grid for a few days and my website doesn’t refresh, I’m ok with that trade-off.
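The Shortcut itself just issues an HTTP POST to the build hook URL, so any cron-capable machine could send the same request. A minimal sketch (the hook ID below is a placeholder, not my real one):

```javascript
// Trigger a Netlify build hook with a plain POST request.
// The hook ID is a placeholder; yours comes from the Netlify UI.
const BUILD_HOOK = "https://api.netlify.com/build_hooks/YOUR_HOOK_ID";

async function triggerBuild() {
  const res = await fetch(BUILD_HOOK, { method: "POST" });
  return res.ok; // true means Netlify accepted the request and queued a build
}
```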

A tiny, pink origami bird with the text “Built with Origami”


Reply via: Email · Mastodon · Bluesky

Easy Measures Doing, Simple Measures Understanding

View

I like the way Jake Nations pits easy against simple in his talk:

Easy means you can add it to your system quickly. Simple means you can understand the work that you’ve done.

I like this framing.

Easy means you can do something with little effort.

Simple means you can understand what you do with little effort.

In other words: easy measures the effort in doing, while simple measures the effort in understanding the doing.

For example: npm create framework@latest or “Hey AI, build an Instagram clone”. Both get you a website with little effort (easy), but do you understand what you just did (simple)?

It’s easy to get complexity, but it’s not easy to get simplicity.

(I get this is arguing semantics and definitions, but I find it to be a useful framing personally. Thanks Jake!)


Reply via: Email · Mastodon · Bluesky

In The Beginning There Was Slop

View

I’ve been slowly reading my copy of “The Internet Phone Book” and I recently read an essay in it by Elan Ullendorff called “The New Turing Test”.

Elan argues that what matters in a work isn’t the tools used to make it, but the “expressiveness” of the work itself (was it made “from someone, for someone, in a particular context”):

If something feels robotic or generic, it is those very qualities that make the work problematic, not the tools used.

This point reminded me that there was slop before AI came on the scene.

A lot of blogging was considered a primal form of slop when it first appeared on the internet: content of inferior substance, generated in quantities much vaster than heretofore considered possible.

And the truth is, perhaps a lot of the content in the blogosphere was “slop”.

But it wasn’t slop because of the tools that made it — like Movable Type or WordPress or Blogger.

It was slop because it lacked thought, care, and intention — the “expressiveness” Elan argues for.

You don’t need AI to produce slop because slop isn’t made by AI. It’s made by humans — AI is just the popular tool of choice for making it right now.

Slop existed long before LLMs came onto the scene.

It will doubtless exist long after too.


Reply via: Email · Mastodon · Bluesky

The AI Security Shakedown

View

Matthias Ott shared a link to a post from Anthropic titled “Disrupting the first reported AI-orchestrated cyber espionage campaign”, which I read because I’m interested in the messy intersection of AI and security.

I gotta say: I don’t know if I’ve ever read anything quite like this article.

At first, the article felt like a responsible disclosure — “Hey, we’re reaching an inflection point where AI models are being used effectively for security exploits. Look at this one.”

But then I read further and found statements like this:

[In the attack] Claude didn’t always work perfectly. It occasionally hallucinated […] This remains an obstacle to fully autonomous cyberattacks.

Wait, so is that a feature or a bug? Is it a good thing that your tool hallucinated and proved a stumbling block? Or is this a bug you hope to fix?

The more I read, the more difficult it became to discern whether this security incident was a helpful warning or a feature sell.

With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator. Less experienced and resourced groups can now potentially perform large-scale attacks of this nature.

Shoot, this sounds like a product pitch! Don’t have the experience or resources to keep up with your competitors who are cyberattacking? We’ve got a tool for you!

Wait, so if you’re creating something that can cause so much havoc, why are you still making it? Oh good, they address this exact question:

This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.

Ok, so the article is a product pitch:

  • We’ve reached a tipping point in security.
  • Look at this recent case where our AI was exploited to do malicious things with little human intervention.
  • No doubt this same thing will happen again.
  • You better go get our AI to protect yourself.

But that’s my words. Here’s theirs:

A fundamental change has occurred in cybersecurity. We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response. We also advise developers to continue to invest in safeguards across their AI platforms, to prevent adversarial misuse. The techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.

It appears AI is simultaneously the problem and the solution.

It’s a great business to be in, if you think about it. You sell a tool for security exploits and you sell the self-same tool for protection against said exploits. Everybody wins!

I can’t help but read this post and think of a mafia shakedown. You know, where the mafia implies threats to get people to pay for their protection — a service they created the need for in the first place. “Nice system you got there, would be a shame if anyone hacked into it using AI. Better get some AI to protect yourself.”

I find it funny that the URL slug for the article is:

/disrupting-AI-espionage

That’s a missed opportunity. They could’ve named it:

/causing-and-disrupting-AI-espionage


Reply via: Email · Mastodon · Bluesky

A Letter of Feedback To Anyone Who Makes Software I Use

View

I don’t much enjoy being a lab rat to your half-baked ideas.

I can tell when your approach to what I use is: “Ship it and let’s see how people respond.”

Well let me tell you something: I’m not going to respond.

My desire to give you constructive feedback correlates directly with your effort to care — about your communications, about what you ship, even about what you don’t ship.

Just because you ship some half-baked feature doesn’t mean I’m going to take the time to tell you whether I find it any good.

Doubly so in the age of AI. I know how easy it is for you to ship slop; why should I take the time to formulate careful feedback on your careless output?

I can disagree with product decisions, but I won’t get mad at thoughtfulness and care. I respect that.

But I will very much disagree with and get mad at product decisions devoid of thought and care. I have no respect for that.

It’s not really worth my time to respond to such a posture of shipping software, and yet here I am writing about it. Because I care about the things I choose to (or am required to) use.

So this is my one-time, general-purpose piece of feedback to all such purveyors of digital goods and tools. Just because nobody tells you that what you shipped sucks doesn’t mean it’s worth keeping. You can’t measure an apathetic response because it is, by definition, the absence of data.


Reply via: Email · Mastodon · Bluesky

Creating “Edit” Links That Open Plain-Text Source Files in a Native App

View

The setup for my notes blog looks like this:

  • Content is plain-text markdown files (synced via Dropbox, editable in iA Writer on my Mac, iPad, or iPhone)
  • Codebase is on GitHub
  • Builds are triggered in Netlify by a Shortcut

I try to catch spelling issues and whatnot before I publish, but I’m not perfect.

I can proofread a draft as much as I want, but nothing helps me catch errors better than hitting publish and re-reading what I just published on my website.

If that fails, kind readers will often reach out and say “Hey, I found a typo in your post [link].”

To fix these errors, I will:

  • Open iA Writer
  • Find the post
  • Fix typo
  • Fire Shortcut to trigger a build
  • Refresh my website and see the updated post

However, the “Open iA Writer” and “Find the post” steps are points of friction I’ve wanted to optimize away.

I’ve found myself thinking: “When I’m reading a post on notes.jim-nielsen.com and I spot a mistake, I wish I could just click an ‘Edit’ link right there and be editing my file.”

You might be thinking, “Yeah that’s what a hosted CMS does.”

But I like my plain-text files. And I love my native writing app.

What’s one to do?

Well, turns out iA Writer supports opening files via links with this protocol:

ia-writer://open?path=location:/path/to/file.md

So, in my case, I can create a link for each post on my website that will open the corresponding plain-text file in iA Writer, e.g.

<article>
  <!-- content of post here -->
  <a href="ia-writer://open?path=notes:2026-01-04T2023.md">
    Edit
  </a>
</article>

And voilà, my OS is now my CMS!

Screenshot of Safari with notes.jim-nielsen.com open. The browser shows a web page with a link that says “Edit”. The link preview in the lower left shows an `ia-writer:` link protocol and behind the browser is an iA Writer application window with a markdown file representing the content of the post you can see in the browser window.

It’s not a link to open the post in a hosted CMS somewhere. It’s a link to open a file on the device I’m using — cool!

My new workflow looks like this:

  • Read a post in the browser
  • Click “Edit” hyperlink to open plain-text file in native app
  • Make changes
  • Fire Shortcut to trigger a build

It works great. Here’s an example of opening a post from the browser on my laptop:

And another on my phone:

Granted, these “Edit” links are only useful to me. So I don’t put them in the source markup. Instead, I generate them with JavaScript when it’s just me browsing.

How do I know it’s just me?

I wrote a little script that watches for the presence of the search param ?edit=true. If it’s present, my site generates an “Edit” link on every post with the correct href and stores that piece of state in localStorage, so every time I revisit the site the “Edit” links are rendered for me (but nobody else sees them).
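Here’s a minimal sketch of what such a script can look like. The details are my guesses, not the site’s actual code: in particular I’m assuming each post’s <article> carries a hypothetical data-file attribute holding its markdown filename.

```javascript
// My sketch, not the site's actual script. Assumes each post's <article>
// carries a data-file attribute with its markdown filename (hypothetical).

// Pure helper: build the ia-writer:// href for a given file.
function editHref(file) {
  return `ia-writer://open?path=notes:${file}`;
}

// Browser-only wiring, guarded so the sketch is also runnable elsewhere.
if (typeof document !== "undefined") {
  // Persist ?edit=true (or ?edit=false) in localStorage.
  const params = new URLSearchParams(location.search);
  if (params.has("edit")) {
    localStorage.setItem("edit-mode", params.get("edit"));
  }

  // When edit mode is on, render an "Edit" link on every post.
  if (localStorage.getItem("edit-mode") === "true") {
    for (const article of document.querySelectorAll("article[data-file]")) {
      const link = document.createElement("a");
      link.href = editHref(article.dataset.file);
      link.textContent = "Edit";
      article.append(link);
    }
  }
}
```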

Well, not nobody. Now that I’ve revealed my secret, I know you can go make the “Edit” links appear. But they won’t work for you because A) you don’t have iA Writer installed, or B) you don’t have my files on your device. So here’s a little tip if you tried rendering the “Edit” links: use ?edit=false to turn them back off :)


Reply via: Email · Mastodon · Bluesky

To Make Software Is To Translate Human Intent Into Computational Precision

View

In “The Future of Software Development is Software Developers” Jason Gorman alludes to how terrible natural language is at programming computers:

The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

The work is the translation, from thought to tangible artifact. Like making a movie: everyone can imagine one, but it takes a director to produce one.

This is also the work of software development: translation. You take an idea — which is often communicated via natural language — and you translate it into functioning software. That is the work.

It’s akin to someone who translates natural languages, say Spanish to English. The work isn’t the words themselves, though that’s what we conflate it with.

You can ask to translate “te quiero” into English. And the resulting words “I love you” may seem like a job complete. But the work isn’t coming up with the words. The work is gaining the experience to know how and when to translate the words based on clues like tone, context, and other subtleties of language. You must decipher intent. Does “te quiero” here mean “I love you” or “I like you” or “I care about you”?

This is precisely why natural language isn’t a good fit for programming: it’s not very precise. As Gorman says, “Natural languages have not evolved to be precise enough and unambiguous enough” for making software. Code is materialized intent. The question is: whose?

The request “let users sign in” has to be translated into constraints, validation, database tables, async flows, etc. You need pages and pages of the written word to translate that idea into some kind of functioning software. And if you don’t fill in those unspecified details, somebody else (cough AI cough) is just going to guess — and who wants their lives functioning on top of guessed intent?

Computers are pedants. They need to be told everything precisely, otherwise you’ll ask for one thing and get another. “Do what I mean, not what I say” is a common refrain in working with computers. I can’t tell you how many times I’ve spent hours troubleshooting an issue only to realize it was a minor syntactical mistake. The computer was doing what I typed, not what I meant.

So the work of making software is translating human thought and intent into functioning computation (not merely writing, or generating, lines of code).


Reply via: Email · Mastodon · Bluesky

Leading Global Research and Advisory Firm Recommends Against Using AI Browsers

View

I recommended against using an AI browser unless you wanted to participate in a global experiment in security. My recommendation did come with a caveat:

But probably don’t listen to me. I’m not a security expert

Well, now the experts (that you pay for) have weighed in.

Gartner, the global research and advisory firm, has come to the conclusion that agentic browsers are too risky for most organizations.

Groundbreaking research.

But honestly, credit where it’s due: they’re not jumping on the hype train. In fact, they’re advising against it.

I don’t have access to the original paper (because I’d have to pay Gartner for it), but the reporting on Gartner’s research says this:

research VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts observe “Default AI browser settings prioritize user experience over security.”

C’mon, let’s call a spade a spade: they prioritize their maker’s business model over security.

Continuing:

Gartner’s fears about the agentic capabilities of AI browsers relate to their susceptibility to “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.”

And that’s just the beginning! It gets worse for large organizations.

The real horror of these AI browsers is that they can help employees to autonomously complete their mandatory trainings:

The authors also suggest that employees “might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting” and imagine some instructing an AI browser to complete their mandatory cybersecurity training sessions.

The horror!

In this specific case, maybe AI browsers aren’t the problem? Maybe they’re a symptom of the agonizing online instructional courses that feign training in the name of compliance?

But I digress. Ultimately, the takeaway here is:

the trio of analysts think AI browsers are just too dangerous to use

Imagine that: you take a tool that literally comes with a warning of being untrustworthy, you embed it as foundational in another tool, and now you have two tools that are untrustworthy. Who would’ve thought?


Reply via: Email · Mastodon · Bluesky