Jim Nielsen’s Blog

You found my experimental HTML feed (there are also other ways to subscribe).

I ❤ HTML

Recent posts

An Analysis of Links From The White House’s “Wire” Website

View

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”.

So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?”

So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns.

I wrote some JavaScript to:

  • Fetch the HTML page at whitehouse.gov/wire
  • Parse it with cheerio
  • Select all the external links on the page
  • Return a list of links and their headline text
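
In sketch form, those steps look something like the following. This is a dependency-free approximation: the real script uses cheerio rather than a regex, and `externalLinks` is a hypothetical helper name, not the actual code.

```javascript
// Rough sketch of the steps above. The actual analysis uses cheerio;
// this regex-based extractor is a simplified stand-in for illustration.
function externalLinks(html, ownHost) {
  const links = [];
  const re = /<a\b[^>]*href="(https?:\/\/[^"]+)"[^>]*>([^<]*)<\/a>/gi;
  let match;
  while ((match = re.exec(html)) !== null) {
    const url = new URL(match[1]);
    // Keep only links pointing off-site
    if (url.host !== ownHost) {
      links.push({ href: match[1], headline: match[2].trim() });
    }
  }
  return links;
}

// Usage (in the real script the HTML comes from fetching whitehouse.gov/wire):
// const html = await (await fetch("https://www.whitehouse.gov/wire/")).text();
// console.table(externalLinks(html, "www.whitehouse.gov"));
```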

In a few minutes I had a quick analysis of what kind of links were on the page:

Screenshot of the Quadratic spreadsheet, with rows and columns of data on the left, and on the right a code editor containing the code which retrieved and parsed the data on the left.

This immediately sparked my curiosity to know more about the meta information around the links, like:

  • If you grouped all the links together, which sites get linked to the most?
  • What kind of interesting data could you pull from the headlines they’re writing, like the most frequently used words?
  • What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building.

Quadratic today doesn’t yet have the ability to run your spreadsheet in the background on a schedule and append data. So I had to look elsewhere for a little extra functionality.

My mind went to val.town which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API.

After a quick read of their docs, I figured out how to write a little script that’ll run once a day, scrape the site, and save the resulting HTML page in their key/value storage.

Screenshot of 9 lines of code from val.town that fetches whitehouse.gov/wire, extracts the text, and stores it in blob storage.

From there, I was back to Quadratic writing code to talk to val.town’s API and retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

  • Fine-tune how I select all the editorial links on the page from the source HTML (I didn’t want, for example, to include external links to the White House’s social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
  • Parse the links and pull out the top-level domains so I could group links by domain occurrence.
  • Create charts and graphs to visualize the structured data I had created.
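
The domain grouping can be sketched like this. Note that the real spreadsheet uses tldts for proper public-suffix handling; the two-label fallback below is a naive stand-in, and `topDomains` is an illustrative name rather than the actual code.

```javascript
// Naive sketch of grouping links by domain occurrence. The actual analysis
// uses tldts; slicing the last two host labels is a simplification that
// breaks on multi-part suffixes like .co.uk.
function topDomains(urls) {
  const counts = new Map();
  for (const u of urls) {
    const host = new URL(u).hostname.replace(/^www\./, "");
    const domain = host.split(".").slice(-2).join(".");
    counts.set(domain, (counts.get(domain) ?? 0) + 1);
  }
  // Highest occurrence first
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```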

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real-time which made for a super fast feedback loop!

Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating!

It’s been about a month and a half since I started this and I have about fifty days’ worth of data.

The results?

Here’s the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

  1. youtube.com (133)
  2. foxnews.com (72)
  3. thepostmillennial.com (67)
  4. foxbusiness.com (66)
  5. breitbart.com (64)
  6. x.com (63)
  7. reuters.com (51)
  8. truthsocial.com (48)
  9. nypost.com (47)
  10. dailywire.com (36)

A pie chart visualizing the top ten links (by domain) from the White House Wire

From the links, here are the most commonly recurring words in the link headlines:

  1. “trump” (343)
  2. “president” (145)
  3. “us” (134)
  4. “big” (131)
  5. “bill” (127)
  6. “beautiful” (113)
  7. “trumps” (92)
  8. “one” (72)
  9. “million” (57)
  10. “house” (56)
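
A tally like the one above can be computed with a few lines of JavaScript. The tokenization here (lowercase, strip non-letters) is a guess at what was actually done, though it would explain why “trump’s” shows up as “trumps”:

```javascript
// Sketch of the headline word tally. Lowercasing and stripping non-letter
// characters before splitting is an assumption about the tokenization.
function wordCounts(headlines) {
  const counts = new Map();
  for (const headline of headlines) {
    const words = headline.toLowerCase().replace(/[^a-z\s]/g, "").split(/\s+/);
    for (const word of words) {
      if (word) counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  // Most frequent first
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```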

Screenshot of a word cloud with “trump” being the largest word, followed by words like “bill”, “beautiful” and “president”.

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

Screenshot of a spreadsheet with three different charts and tables of data.

If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience.


Reply via: Email · Mastodon · Bluesky

Transforming HTML With Netlify Edge Functions


I’ve long wanted the ability to create custom collections of icons from my icon gallery.

Today I can browse collections of icons that share pre-defined metadata (e.g. “Show me all icons tagged as blue”) but I can’t create my own arbitrary collections of icons.

That is, until now!

I created a page at /lookup that allows you to specify however many id search params you want and it will pull all the matching icons into a single page.

Here’s an example of macOS icons that follow the squircle shape but break out of it ever-so-slightly (something we’ll lose with macOS Tahoe).

It requires a little know-how to construct the URL, something I’ll address later, but it works for my own personal purposes at the moment.

So how did I build it?

Implementation

So the sites are built with a static site generator, but this feature requires an ability to dynamically construct a page based on the icons specified in the URL, e.g.

/lookup?id=foo&id=bar&id=baz

How do I get that to work? I can’t statically pre-generate every possible combination[1] so what are my options?

  1. Create a “shell” page that uses JavaScript to read the search params, query a JSON API, and render whichever icons are specified in the URL.
  2. Send an HTML page with all icons over the wire, then use JavaScript to reach into the DOM and remove all icons whose IDs aren’t specified in the page URL.
  3. Render the page on the server with just the icons specified in the request URL.

No. 1: this is fine, but I don’t have a JSON API for clients to query and I don’t want to create one. Plus I have to duplicate template logic, etc. I’m already rendering lists of icons in my static site generator, so can’t I just do that? Which leads me to:

No. 2: this works, but I do have 2000+ icons so the resulting HTML page (I tried it) is almost 2MB if I render everything (whereas that same request for ~4 icons but filtered by the server would be like 11kb). There’s gotta be a way to make that smaller, which leads me to:

No. 3: this is great, but it does require I have a “server” to construct pages at request time.

Enter Netlify’s Edge Functions which allow you to easily transform an existing HTML page before it gets to the client.

To get this working in my case, I:

  1. Create /lookup/index.html that has all 2000+ icons on it (trivial with my current static site generator).
  2. Create a lookup.ts edge function that intercepts the request to /lookup/index.html
  3. Read the search params for the request and get all specified icon IDs, e.g. /lookup?id=a&id=b&id=c turns into ['a','b','c']
  4. Following Netlify’s example of transforming an HTML response, use HTMLRewriter to parse my HTML with all 2000+ icons in it then remove all icons that aren’t in my list of IDs, e.g. <a id='a'>…</a><a id='z'>…</a> might get pruned down to <a id='a'>…</a>
  5. Transform the parsed HTML back into a Response and return it to the client from the function.
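
Steps 3 and 4 can be sketched as pure functions. In the actual edge function, the pruning happens inside HTMLRewriter over the streamed response; `iconIds` and `prune` are hypothetical names used here purely for illustration.

```javascript
// Step 3: read every `id` search param off the request URL.
function iconIds(requestUrl) {
  return new URL(requestUrl).searchParams.getAll("id");
}

// Step 4, simplified: keep only the icons whose ids were requested.
// (The real edge function removes non-matching elements from the streamed
// HTML via HTMLRewriter instead of filtering an array.)
function prune(allIcons, requestUrl) {
  const wanted = new Set(iconIds(requestUrl));
  return allIcons.filter((icon) => wanted.has(icon.id));
}
```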

It took me a second to get all the Netlify-specific configurations right (put the function in ./netlify/edge-functions not ./netlify/functions, duh) but once I strictly followed all of Netlify’s rules it was working! (You gotta use their CLI tool to get things working on localhost and test it yourself.)

Con-clusions

I don’t particularly love that this ties me to a bespoke feature of Netlify’s platform — even though it works really well!

But that said, if I ever switched hosts this wouldn’t be too difficult to change. If my new host provided control over the server, nothing changes about the URL for this page (/lookup?id=…). And if I had to move it all to the client, I could do that too.

In that sense, I’m tying myself to Netlify from a developer point of view but not from an end-user point of view (everything still works at the URL-level) and I’m good with that trade-off.


  1. Just out of curiosity, I asked ChatGPT: if you have approximately 2,000 unique items, how many possible combinations of those IDs can be passed in a URL like /lookup?id=1&id=2? It said the number is 2^2000 which is “astronomically large” and “far more than atoms in the universe”. So statically pre-generating them is out of the question.


Little Swarming Gnats of Data


Here’s a screenshot of my inbox from when I was on the last leg of my flight home from family summer vacation:

Screenshot of the Mail app on iOS where the screen is completely full of messages from United Airlines.

That’s pretty representative of the flurry of emails I get when I fly, e.g.:

  • Check in now
  • Track your bags
  • Your flight will soon depart
  • Your flight will soon board
  • Your flight is boarding
  • Information on your connecting flight
  • Tell us how we did

In addition to email, the airline has my mobile number and I have its app, so a large portion of my email notifications also arrive as 1) push notifications to my devices and 2) text messages to my mobile phone number.

So when the plane begins boarding, for example, I’m told about it with an email, a text, and a push notification.

I put up with it because I’ve tried pruning my stream of notifications from the airlines in the past, only to lose out on a vital notification about a change or delay. It feels like my two options are:

  1. Get all notifications multiple times via email, text, and in-app push.
  2. Get most notifications via one channel, but somehow miss the most vital one.

All of this serendipitously coincided with me reading a recent piece from Nicholas Carr where he described these kinds of notifications as “little data”:

all those fleeting, discrete bits of information that swarm around us like gnats on a humid summer evening.

That feels apt, as I find myself swiping at lots of little data gnats swarming in my email, message, and notification inboxes.

No wonder they call it “fly”ing 🥁



My Copy of The Internet Phone Book


I recently got my copy of the Internet Phone Book. Look who’s hiding on the bottom inside spread of page 32:

Photograph of page 32 of the Internet Phone Book listing Jim Nielsen.

The book is divided into a number of categories — such as “Small”, “Text”, and “Ecology” — and I am beyond flattered to be listed under the category “HTML”! You can dial my site at number 223.

As the authors note, the sites of the internet represented in this book are not described by adjectives like “attention”, “competition”, and “promotion”. Instead they’re better suited by adjectives like “home”, “love”, and “glow”.

These sites don’t look to impose their will on you, soliciting that you share, like, and subscribe. They look to spark curiosity, mystery, and wonder, letting you decide for yourself how to respond to the feelings of this experience.

But why make a printed book listing sites on the internet? That’s crazy, right? Here’s the book’s co-author Kristoffer Tjalve in the introduction:

With the Internet Phone Book, we bring the web, the medium we love dearly, and call it into a thousand-year old tradition [of print]

I love that! I think the juxtaposition of websites in a printed phone book is exactly the kind of thing that makes you pause and reconsider the medium of the web in a new light. Isn’t that exactly what art is for?

Kristoffer continues:

Elliot and I began working on diagram.website, a map with hundreds of links to the internet beyond platform walls. We envisioned this map like a night sky in a nature reserve—removed from the light pollution of cities—inviting a sense of awe for the vastness of the universe, or in our case, the internet. We wanted people to know that the poetic internet already existed, waiting for them…The result of that conversation is what you now hold in your hands.

The web is a web because of its seemingly infinite number of interconnected sites, not because of its half-dozen social platforms. It’s called the web, not the mall.

There’s an entire night sky out there to discover!



Becoming an Asshole


This post is a secret to everyone! Read more about RSS Club.

I’ve been reading Apple in China by Patrick McGee.

There’s this part in there where he’s talking about a guy who worked for Apple and was known for being ruthless, stopping at nothing to negotiate the best deal for Apple. He was so aggressive yet convincing that suppliers often found themselves faced with regret, wondering how they got talked into a deal that in hindsight was not in their best interest.[1]

One particular Apple executive sourced in the book noted how there are companies who don’t employ questionable tactics to gain an edge, but most of them don’t exist anymore. To paraphrase: “I worked with two kinds of suppliers at Apple: 1) complete assholes, and 2) those who are no longer in business.”

Taking advantage of people is normalized in business on account of it being existential, i.e. “If we don’t act like assholes — or have someone on our team who will on our behalf[1] — we will not survive!” In other words: All’s fair in self-defense.

But what’s the point of survival if you become an asshole in the process?

What else is there in life if not what you become in the process?

It’s almost comedically twisted how easy it is for us to become the very thing we abhor if it means our survival.

(Note to self: before you start anything, ask “What will this help me become, and is that who I want to be?”)


  1. It’s interesting how we can smile at stories like that and think, “Gosh they’re tenacious, glad they’re on my side!” Not stopping to think for a moment what it would feel like to be on the other side of that equation.


The Continuum From Static to Dynamic


Dan Abramov in “Static as a Server”:

Static is a server that runs ahead of time.

“Static” and “dynamic” don’t have to be binaries that describe an entire application architecture. As Dan describes in his post, whether “static” or “dynamic”, it’s all just computers doing stuff.

Computer A requests something (an HTML document, a PDF, some JSON, who knows) from computer B. That request happens via a URL and the response can be computed “ahead of time” or “at request time”. In this paradigm:

  • “Static” is a server responding ahead of time to anticipated requests with identical responses.
  • “Dynamic” is a server responding at request time to anticipated requests with varying responses.

But these definitions aren’t binaries; rather, they represent two ends of a spectrum. Ultimately, however you define “static” or “dynamic”, what you’re dealing with is a response generated by a server — i.e. a computer — so the question is really a matter of when you want to respond and with what.

Answering the question of when previously had a really big impact on what kind of architecture you inherited. But I think we’re realizing we need more nimble architectures that can flex and grow in response to changing when a request/response cycle happens and what you respond with.

Perhaps a poor analogy, but imagine you’re preparing holiday cards for your friends and family:

  • “Static” is the same card sent to everyone
  • “Dynamic” is a hand-written card to each individual

But between these two are infinite possibilities, such as:

  • A hand-written card that’s photocopied and sent to everyone
  • A printed template with the same hand-written note to everyone
  • A printed template with a different hand-written note for just some people
  • etc.

Are those examples “static” or “dynamic”? [Cue endless debate].

The beauty is that in probing the space between binaries — between what “static” means and what “dynamic” means — I think we develop a firmer grasp of what we mean by those words as well as what we’re trying to accomplish with our code.

I love tools that help you think of the request/response cycle across your entire application as an endlessly-changing set of computations that happen either “ahead of time”, “just in time”, or somewhere in-between.



The Web as URLs, Not Documents


Dan Abramov on his blog (emphasis mine):

The division between the frontend and the backend is physical. We can’t escape from the fact that we’re writing client/server applications. Some logic is naturally more suited to either side. But one side should not dominate the other. And we shouldn’t have to change the approach whenever we need to move the boundary.

What we need are the tools that let us compose across the stack.

What are these tools that allow us to easily change the computation of an application happening between two computers? I think Dan is arguing that RSC is one of these tools.

I tend to think of Remix (v1) as one of these tools. Let me try and articulate why by looking at the difference between how we thought of websites in a “JAMstack” architecture vs. how tools (like Remix) are changing that perspective.

JAMstack: a website is a collection of static documents which are created by a static site generator and put on a CDN. If you want dynamism, you “opt-out” of a static document for some host-specific solution whose architecture is starkly different from the rest of your site.

Remix: a website is a collection of URLs that follow a request/response cycle handled by a server. Dynamism is “built-in” to the architecture and handled on a URL-by-URL basis. You choose how dynamic you want any particular response to be: from a static document on a CDN for everyone, to a custom response on a request-by-request basis for each user.

As your needs grow beyond the basic “static files on disk”, a JAMstack architecture often ends up looking like a microservices architecture where you have disparate pieces that work together to create the final whole: your static site generator here, your lambda functions there, your redirect engine over yonder, each with its own requirements and lifecycles once deployed.

Remix, in contrast, looks more like a monolith: your origin server handles the request/response lifecycle of all URLs at the time and in the manner of your choosing.

Instead of a build tool that generates static documents along with a number of distinct “escape hatches” to handle varying dynamic needs, your entire stack is “just a server” (that can be hosted anywhere you host a server) and you decide how and when to respond to each request — beforehand (at build), or just in time (upon request). No architectural escape hatches necessary.

You no longer have to choose upfront whether your site as a whole is “static” or “dynamic”, but rather how much dynamism (if any) is present on a URL-by-URL basis. It’s a sliding scale — a continuum of dynamism — from “completely static, the same for everyone” to “no one line of markup is the same from one request to another”, all of it modeled under the same architecture.

And, crucially, that URL-by-URL decision can change as needs change. As Dan Abramov noted in a tweet:

[your] build doesn’t have to be modeled as server. but modeling it as a server (which runs once early) lets you later move stuff around.

Instead of opting into a single architecture up front with escape hatches for every need that breaks the mold, you’re opting in to the request/response cycle of the web’s natural grain, and deciding how to respond on a case-by-case basis.

The web is not a collection of static documents. It’s a collection of URLs — of requests and responses — and tools that align themselves to this grain make composing sites with granular levels of dynamism so much easier.



Related posts linking here: (2025) The Continuum From Static to Dynamic

Some Miscellaneous Thoughts on Visual Design Prodded By The Sameness of AI Company Logos


Radek Sienkiewicz in a funny-because-it’s-true piece titled “Why do AI company logos look like buttholes?”:

We made a circular shape [logo] with some angles because it looked nice, then wrote flowery language to justify why our…design is actually profound.

As someone who has grown up through the tumult of the design profession in technology, that really resonates. I’ve worked on lots of projects where I got tired of continually justifying design decisions with language dressed in corporate rationality.

This is part of the allure of code. To most people, code either works or it doesn’t. However bad it might be, you can always justify it with “Yeah, but it’s working.”

But visual design is subjective forever. And that’s a difficult space to work in, where you need to forever justify your choices.

In that kind of environment, decisions are often made by whoever can come up with the best language to justify their choices, or whoever has the most senior job title.

Personally, I found it very exhausting.

As Radek points out, this homogenization justified through seemingly-profound language reveals something deeper about tech as an industry: folks are afraid to stand out too much.

Despite claims of innovation and disruption, there's tremendous pressure to look legitimate by conforming to established visual language.

In contrast to this stands the work of individual creators whose work I have always loved — whether it’s individual blogs, videos, websites, you name it. Individuals (and I’ll throw small teams in there too) have a sense of taste that doesn’t get diluted through the structure and processes of a larger organization.

No single person suggests making a logo that resembles an anus, but when everyone's feedback gets incorporated, that's what often emerges.

In other words, no individual would ever recommend what you get through corporate hierarchies.

That’s why I love the work of small teams and individuals. There’s still soul. You can still sense the individuals — their personalities, their values — oozing through the work. Reminds me of Jony Ive’s description of when he first encountered a Mac:

I was shocked that I had a sense for the people who made it. They could’ve been in the room. You really had a sense of what was on their minds, and their values, and their joy and exuberance in making something that they knew was helpful.

This is precisely why I love the websites of individuals because their visual language is as varied as the humans behind them — I mean, just look at the websites of these individuals and small teams. You immediately get a sense for the people behind them. I love it!



Notes from Andreas Fredriksson’s “Context is Everything”


I quite enjoyed this talk. Some of the technical details went over my head (I don’t know what “split 16-bit mask into two 8-bit LUTs” means) but I could still follow the underlying point.

First off, Andreas has a great story at the beginning about how he has a friend with a browser bookmarklet that replaces every occurrence of the word “dependency” with the word “liability”. Can you imagine npm working that way? Inside package.json:

{
  "liabilities": {
    "react": "^19.0.0",
    "typescript": "^5.0.0"
  },
  "devLiabilities": {...}
}

But I digress, back to Andreas.

He points out that the context of your problems and the context of someone else’s problems do not overlap as often as we might think.

It’s so unlikely that someone else tried to solve exactly our same problem with exactly our same constraints that [their solution or abstraction] will be the most economical or the best choice for us. It might be ok, but it won’t be the best thing.

So while we immediately jump to tools built by others, the reality is that their tools were built for their problems and therefore won’t overlap with our problems as much or as often as we’re led to believe.

Venn diagram with three circles. The first says 'My problems', the second 'Your problems' and the third 'Facebook’s problems' and they barely have any overlap and where they do it’s labeled “React”.

In Andreas’ example, rather than using a third-party library to parse JSON and turn it into something, he writes his own bespoke parser for the problem at hand. His parser ignores a whole swath of abstractions a more generalized parser solves for, and guess what? His is an order of magnitude faster!

Solving problems in the wrong domain and then glueing things together is always much, much worse [in terms of performance] than solving for what you actually need to solve.

It’s fun watching him step through the performance gains as he goes from a generalized solution to one more tailored to his own specific context.
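
As a toy illustration of that idea (this is not Andreas’s code, just a sketch of the general principle): if each input line is JSON and you only need one numeric field, a tailored scan does far less work than a full parse while producing the same answer.

```javascript
// General: parse every line fully, then read one field.
function totalBytesGeneral(lines) {
  return lines.reduce((sum, line) => sum + JSON.parse(line).bytes, 0);
}

// Tailored: scan for just the one field we care about. Only valid under
// assumptions we get to make about our own data (e.g. `bytes` is a plain
// integer that appears once per line).
function totalBytesTailored(lines) {
  let sum = 0;
  for (const line of lines) {
    const match = /"bytes":\s*(\d+)/.exec(line);
    if (match) sum += Number(match[1]);
  }
  return sum;
}
```

The tailored version is faster precisely because it skips the generality (escape handling, nesting, full tokenization) that its context guarantees it will never need — which is the talk’s point.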

What really resonates in his step-by-step process is how, as problems present themselves, you see how much easier it is to deal with performance issues for stuff you wrote vs. stuff others wrote. Not only that, but you can debug way faster!

(Just think of the last time you tried to debug a file 1) you wrote, vs. 2) one you vendored vs. 3) one you installed deep down in node_modules somewhere.)

Andreas goes from 41MB/s throughput to 1.25GB/s throughput without changing the behavior of the program. He merely removed a bunch of generalized abstractions he wasn’t using and didn’t need.

Surprise, surprise: not doing unnecessary things is faster!

You should always consider the unique context of your situation and weigh trade-offs. A “generic” solution means a solution “not tuned for your use case”.



Is It JavaScript?


OH: It’s just JavaScript, right? I know JavaScript.

My coworker who will inevitably spend the rest of the day debugging an electron issue

@jonkuperman.com on BlueSky

“It’s Just JavaScript!” is probably a phrase you’ve heard before. I’ve used it myself a number of times.

It gets thrown around a lot, often to imply that a particular project is approachable because it can be achieved writing the same, ubiquitous, standardized scripting language we all know and love: JavaScript.

Take what you learned moving pixels around in a browser and apply that same language to running a server and querying a database. You can do both with the same language: It’s Just JavaScript!

But wait, what is JavaScript?

Is any code in a .js file “Just JavaScript”?

Let’s play a little game I shall call: “Is It JavaScript?”

Poster from the game “Is It Cake?” showing a guy cutting through a cake, but the words “Is It JavaScript?” have been superimposed on the poster, as well as the JS logo over the cake.

Browser JavaScript

let el = document.querySelector("#root");
window.location = "https://jim-nielsen.com";

That’s DOM stuff, i.e. browser APIs. Is it JavaScript?

“If it runs in the browser, it’s JavaScript” seems like a pretty good rule of thumb. But can you say “It’s Just JavaScript” if it only runs in the browser?

What about the inverse: code that won’t run in the browser but will run elsewhere?

Server JavaScript

const fs = require('fs');
const content = fs.readFileSync('./data.txt', 'utf8');

That will run in Node — or something with Node compatibility, like Deno — but not in the browser.

Is it “Just JavaScript”?

Environment Variables

It’s very possible you’ve seen this in a .js file:

const apiUrl = process.env.API_URL;

But that’s following a Node convention which means that particular .js file probably won’t work as expected in a browser but will on a server.

Is it “Just JavaScript” if it executes but will only work as expected with special knowledge of runtime conventions?

JSX

What about this file, MyComponent.js?

function MyComponent() {
  const handleClick = () => {/* do stuff */}
  return (
    <Button onClick={handleClick}>Click me</Button>
  )
}

That won’t run in a browser. It requires a compilation step to turn it into React.createElement(...) (or maybe even something else) which will run in a browser.

Or wait, that can also run on the server.

So it can run on a server or in the browser, but now requires a compilation step. Is it “Just JavaScript”?

Pragmas

What about this little nugget?

/** @jsx h */
import { h } from "preact";
const HelloWorld = () => <div>Hello</div>;

These are magic comments which affect the interpretation and compilation of JavaScript code (Tom MacWright has an excellent article on the subject).

If code has magic comments that direct how it is compiled and subsequently executed, is it “Just JavaScript”?

TypeScript

What about:

const name: string = "Hello world";

You see it everywhere and it seems almost synonymous with JavaScript, would you consider it “Just JavaScript”?

Imports

It’s very possible you’ve come across a .js file that looks like this at the top.

import icon from './icon.svg';
import data from './data.json';
import styles from './styles.css';
import foo from '~/foo.js';
import foo from 'bar:foo';

But a lot of that syntax is non-standard (I’ve written about this topic previously in more detail) and requires some kind of compilation — is this “Just JavaScript”?

Vanilla

Here’s a .js file:

var foo = 'bar';

I can run it here (in the browser).

I can run it there (on the server).

I can run it anywhere.

It requires no compiler, no magic syntax, no bundler, no transpiler, no runtime-specific syntax. It’ll run the same everywhere.

That seems like it is, in fact, Just JavaScript.

As Always, Context Is Everything

A lot of JavaScript you see every day is non-standard. Even though it might be rather ubiquitous — such as seeing process.env.* — lots of JS code requires you to be “in the know” to understand how it’s actually working because it’s not following any part of the ECMAScript standard.

There are a few vital pieces of context you need in order to understand a .js file, such as:

  • Which runtime will this execute in? The browser? Something server-side like Node, Deno, or Bun? Or perhaps something else like Cloudflare Workers?
  • What tools are required to compile this code before it can be executed in the runtime? (vite, esbuild, webpack, rollup, typescript, etc.)
  • What frameworks are implicit in the code? e.g. are there non-standard globals like Deno.* or special keyword exports like export function getServerSideProps(){...}?

When somebody says, “It’s Just JavaScript” what would be more clear is to say “It’s Just JavaScript for…”, e.g.

  • It’s just JavaScript for the browser
  • It’s just JavaScript for Node
  • It’s just JavaScript for Next.js

So what would you call JavaScript that can run in any of the above contexts?

Well, I suppose you would call that “Just JavaScript”.

