At this point, I am starting to feel like we don’t need new languages, but new ways to create specifications.
I have a hypothesis that an LLM can act as a pseudocode-to-code translator, where the pseudocode can tolerate a mixture of code-like and natural-language specification. The benefit is that it formalizes the human as the specifier (which must be done anyway) and the LLM as the code writer. This might also enable lower-resource “non-frontier” models to be more useful. Additionally, it tolerates syntax mistakes or, in the worst case, plain natural language where needed.
In other words, I think LLMs don’t need new languages; we do.
- LLMs can act as pseudocode-to-code translators (they are excellent at this)
- LLMs still create bugs and make errors, and a reasonable hypothesis is that they do so at a rate in direct proportion to the "complexity" or "bugginess" of the underlying language.
In other words, give an AI a footgun and it will happily use it, unawares. That doesn't mean, however, that it can't rapidly turn your pseudocode into code.
None of this means that LLMs can magically correct your pseudocode at all times if your logic is vastly wrong for your goal, but I do believe they'll benefit immensely from new languages that reduce the kind of bugs they make.
This is the moment we can create these languages: LLMs can optimize for things that humans can't, so it seems possible to design new languages that reduce bugs in ways that work for LLMs but are less effective for people (due to syntax, ergonomics, verbosity, or anything else).
This is crucially important. Why? Because 99% of all code written in the next two decades will be written by AI. And we will also produce 100x more code than has ever been written before (because the cost of producing it has dropped essentially to zero). This means that, short of some revolution in language technology, the number of bugs and vulnerabilities we can expect will also grow 100x.
That's why ideas like this are needed.
I believe in this too and am working on something also targeting LLMs specifically; I've been working on it since mid-to-late November last year. A business model will make such a language sustainable.
What we need is a programming language that defines the diff to be applied upon the existing codebase to the same degree of unambiguity as the codebase itself.
That is, in the same way that event sourcing materializes state from a series of change events, this language needs to materialize a codebase from a series of "modification instructions". Different models may materialize a different codebase from the same series of instructions (like compilers), or under different "environmental factors" (e.g. the database or cloud provider that's available). It's as if the codebase itself is no longer the important artifact; the sequence of prompts is. You would also use this sequence of prompts to generate a testing suite completely independent of the codebase.
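A minimal sketch of that materialization idea, in Rust for concreteness (the Instruction variants and the generate hook are invented for illustration; nothing here is a real tool):

```
// Hypothetical sketch: a codebase as a fold over "modification instructions",
// the way event sourcing folds state over change events.
use std::collections::BTreeMap;

/// One prompt-level change request. Variants are invented for illustration;
/// a real system would carry richer, less ambiguous payloads.
enum Instruction {
    AddModule { path: String, spec: String },
    ModifyModule { path: String, spec: String },
    RemoveModule { path: String },
}

/// The materialized artifact: file path -> generated source.
type Codebase = BTreeMap<String, String>;

/// Replay the instruction log into a codebase. `generate` stands in for the
/// model (or compiler backend) that turns a spec into source; different
/// models may yield different-but-conforming code from the same log.
fn materialize(log: &[Instruction], generate: impl Fn(&str) -> String) -> Codebase {
    let mut code = Codebase::new();
    for step in log {
        match step {
            Instruction::AddModule { path, spec }
            | Instruction::ModifyModule { path, spec } => {
                code.insert(path.clone(), generate(spec));
            }
            Instruction::RemoveModule { path } => {
                code.remove(path);
            }
        }
    }
    code
}

fn main() {
    let log = vec![
        Instruction::AddModule { path: "main".into(), spec: "print hello".into() },
        Instruction::ModifyModule { path: "main".into(), spec: "print hello, exit 0".into() },
    ];
    // A stub "model": a real system would call an LLM (or a compiler) here.
    let code = materialize(&log, |spec| format!("// generated from: {spec}\n"));
    println!("{code:?}");
}
```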
I'm actually building this, will release it early next month. I've added a URL to watch to my profile (should be up later this week). It will be Open Source.
This is the approach that Agint takes. We infer the structure of the code first, top-down, as a graph, then add in types, then interpret the types as in/out function signatures, and then "inpaint" the functions for codegen.
So in this case an LLM would just be a less-reliable compiler? What's the point? If you have to formally specify your program, we already have tools for that; no boiling-the-ocean required.
That's again programming languages. The real issue with LLMs now is that it doesn't matter whether they can generate code quickly. Someone still has to read, verify, and test it.
Perhaps we need a terse programming language, one that can be read quickly and verified. You could call that a specification.
Yes, essentially a higher level programming language than what we currently have. A programming language that doesn't have strict syntax, and can be expressed with words or code. And like any other programming language, it includes specifications for the tests and expectations of the result.
The programming language can look more like code in parts where the specification needs to be very detailed. I think people can get intuition about where the LLM is unlikely to be successful. It can have low detail for boilerplate or code that is simple to describe.
You should be able to alter and recompile the specification, unlike the wandering prompt which makes changes faster than normal version control practices keep up with.
Perhaps there's a world where reading the specification rather than the compiled code is sufficient in order to keep cognitive load at reasonable levels.
At the very least, you can read compiled code until you can establish your own validation set and create statistical expectations about your domain. Fundamentally, these models will always be statistical in nature, so we probably need to start operating more inside that kind of framework if we really want to be professional about it.
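As a toy illustration of what "statistical expectations" could look like in practice (a sketch; the 95% normal approximation and the fake results are placeholders, not a methodology claim):

```
/// Toy sketch: treat each generated program as a Bernoulli trial against a
/// validation suite and report the pass rate with a rough 95% interval.
fn pass_rate_estimate(results: &[bool]) -> (f64, f64, f64) {
    let n = results.len() as f64;
    let passes = results.iter().filter(|&&ok| ok).count() as f64;
    let p = passes / n;
    // Normal approximation to the binomial; fine for a ballpark, not for
    // small n or extreme p (use a Wilson interval in earnest).
    let half_width = 1.96 * (p * (1.0 - p) / n).sqrt();
    (p, (p - half_width).max(0.0), (p + half_width).min(1.0))
}

fn main() {
    // Imagine 100 generations scored against your own validation set.
    let results: Vec<bool> = (0..100).map(|i| i % 10 != 0).collect(); // 90% pass
    let (p, lo, hi) = pass_rate_estimate(&results);
    println!("pass rate {:.2} (95% CI roughly {:.2}..{:.2})", p, lo, hi);
}
```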
Simply put, whatever you write should produce the same output regardless of how many times you execute it. The more verbose you make it, the more pointless it becomes.
https://github.com/jordanhubbard/nanolang/blob/main/MEMORY.m...
Optimistically I dumped the whole thing into Claude Opus 4.5 as a system prompt to see if it could generate a one-shot program from it:
llm -m claude-opus-4.5 \
  -s https://raw.githubusercontent.com/jordanhubbard/nanolang/refs/heads/main/MEMORY.md \
  'Build me a mandelbrot fractal CLI tool in this language' \
  > /tmp/fractal.nano
Here's the transcript for that. The code didn't work: https://gist.github.com/simonw/7847f022566d11629ec2139f1d109...
So I fired up Claude Code inside a checkout of the nanolang repo, told it how to run the compiler, and let it fix the problems... which DID work. Here's that transcript: https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...
And the finished code, with its output in a comment: https://gist.github.com/simonw/e7f3577adcfd392ab7fa23b1295d0...
So yeah, a good LLM can definitely figure out how to use this thing given access to the existing documentation and the ability to run that compiler.
Maybe I’m missing some context, but all that actually should be needed in the top-level else block is ‘gradient[idx]’. Pretty much anything else is going to be longer, harder to read, and less efficient.
I think you need to either feed it all of ./docs or give your agent access to those files so it can read them as reference. The MEMORY.md file you posted mentions ./docs/CANONICAL_STYLE.md and ./docs/LLM_CORE_SUBSET.md, and they in turn indirectly mention other features and files inside the docs folder.
The thing that really unlocked it was Claude being able to run a file listing against nanolang/examples and then start picking through the examples that were most relevant to figuring out the syntax: https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...
Developed by Jordan Hubbard of NVIDIA (and FreeBSD).
My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.
From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.
I don’t think that assumption holds. For example, only recently have agents started getting Rust code right on the first try, but that hasn’t mattered in the past because the Rust compiler and linters give such good feedback that the agent immediately fixes whatever goof it made.
This does fill up context a little faster, but (1) not as much as debugging the problem would have in a dynamic language, and (2) better agentic frameworks are coming that “rewrite” context history for dynamic, on-the-fly context compression.
A lot of this depends on your workflow. A language with great typing, type checking, and good compiler errors will work better in a loop than one with large surface overhead and syntax complexity, even if the latter is well represented. This is the instinct behind, e.g., https://github.com/toon-format/toon, a JSON-alternative format. They test LLM accuracy with the format against JSON (and it is generally slightly ahead of JSON).
Additionally, just the ability to put an entire language into context for an LLM - a single document explaining everything - is also likely to close the gap.
I was skimming some nano files and while I can't say I loved how it looked, it did look extremely clear. Likely a benefit.
Thanks for sharing this! A question I've grappled with is "how do you make the DOM of a rendered webpage optimal for complex retrieval in both accuracy and tokens?" This could be a really useful transformation to throw in the mix!
It's not just how well the language is represented. Obscure-ish APIs can trip up LLMs. I've been using Antigravity for a Flutter project that uses ATProto. Gemini is very strong at Dart coding, which makes picking up my 17th managed language a breeze. It's also very good at Flutter UI elements. It was noticeably less good at ATProto and its Dart API.
The characteristics of failures have been interesting: as I anticipated it might be, an over-ambitious refactoring was a train wreck, easily reverted. But something as simple as regenerating Android launcher icons in a Flutter project was a total blind spot. I had to Google that like some kind of naked savage running through the jungle.
I wonder if there is a way to create a sort of 'transpilation' layer to a new language like this for existing languages, so that it would be able to use all of the available training from other languages. Something that's like AST to AST. Though I wonder if it would only work in the initial training or fine-tuning stage.
Not my experience, honestly. With a good code base for it to explore and good tooling, and a really good prompt I've had excellent results with frankly quite obscure things, including homegrown languages.
As others said, the key is feedback and prompting. In a model with long context, it'll figure it out.
If you can generate code from the grammar then what exactly are you RLing? The point was to generate code in the first place so what does backpropagation get you here?
a.k.a. jkh. That's a blast from the past. Back in the early FreeBSD days, Jordan was fielding mailing list traffic and holding the project together as people peppered the lists with questions, trying to get their systems running with their sundry bits of hardware. I wondered when he slept.
Apparently he did as well[1]:
"The start of the 2.0 ports collection. No sup repository yet, but I'll make one when I wake up again.. :)"
Submitted by: jkh
Aug 21, 1994
[1] https://github.com/freebsd/freebsd-ports/commit/7ca702f09f29...
> Really needs an agent-oriented “getting started” guide to put in the context, and evals vs. the same task done with Python, Rust etc.
It has several such documents, including a ~1400 line MEMORY.md file referencing several other such files, a language specification, a collection of ~100 documents containing just about every thought Jordan has ever had about the entire language and the evolution of its implementation, and a collection of examples that includes an SDL2 based OpenGL program.
Obviously, jkh understands the need to bootstrap LLMs on his ~5-month-old, self-hosted solo programming language.
One novel part here is that every function is required to have tests that run at compile time.
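To make that concrete without guessing at nanolang's actual syntax, here's the nearest flavor in plain Rust, where const evaluation turns a failing test into a build failure:

```
// Approximating "tests that run at compile time" with Rust const evaluation.
// This is not nanolang's mechanism, just the nearest mainstream flavor.
const fn add(a: i32, b: i32) -> i32 {
    a + b
}

// The compiler evaluates this at build time; if the assertion fails,
// compilation fails, so the function cannot ship without its test passing.
const _ADD_TEST: () = assert!(add(2, 3) == 5);

fn main() {
    println!("{}", add(40, 2));
}
```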
I'm still skeptical of the value-add of teaching a custom language to an LLM instead of using something like Lua or Python and applying constraints, like test requirements, on top.
I think this kind of misses what's actually challenging with LLM code -- auditing it for correctness. LLMs are ~fine at spitting out valid syntax. Humans need to be able to read the output, though.
A language targeting an LLM might be well served with a lot of keywords, similar to a CISC instruction set, where keywords do specific things well. Giving it building blocks and having it piece them together is likely to pay off.
It seems that something that does away with human friendly syntax and leans more towards a pure AST representation would be even better? Basically a Lisp but with very strict typing might do the trick. And most LLMs are probably trained on lots of Lisps already.
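A sketch of what "a Lisp with very strict typing" might mean in practice: program terms as a bare AST plus a checker, with no surface syntax to get subtly wrong (the node set here is invented for illustration):

```
// Sketch: a tiny strictly-typed Lisp-style AST, no surface syntax at all.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Ty {
    Int,
    Bool,
}

enum Expr {
    Int(i64),
    Bool(bool),
    Add(Box<Expr>, Box<Expr>),           // (+ a b)
    If(Box<Expr>, Box<Expr>, Box<Expr>), // (if c t e)
}

fn check(e: &Expr) -> Result<Ty, String> {
    match e {
        Expr::Int(_) => Ok(Ty::Int),
        Expr::Bool(_) => Ok(Ty::Bool),
        Expr::Add(a, b) => match (check(a)?, check(b)?) {
            (Ty::Int, Ty::Int) => Ok(Ty::Int),
            other => Err(format!("(+) wants Ints, got {:?}", other)),
        },
        Expr::If(c, t, f) => {
            if check(c)? != Ty::Bool {
                return Err("if-condition must be Bool".into());
            }
            let (tt, tf) = (check(t)?, check(f)?);
            if tt == tf { Ok(tt) } else { Err("if-branches must match".into()) }
        }
    }
}

fn main() {
    // (if true (+ 1 2) 4) : Int
    let prog = Expr::If(
        Box::new(Expr::Bool(true)),
        Box::new(Expr::Add(Box::new(Expr::Int(1)), Box::new(Expr::Int(2)))),
        Box::new(Expr::Int(4)),
    );
    println!("{:?}", check(&prog)); // Ok(Int)
}
```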
It seems kind of silly that you can’t teach an LLM new tricks though, doesn’t it? This doesn’t sound like an intrinsic limitation and more an artifact of how we produce model weights today.
Seems like a simplified Rust with partial prefix notation (the rationale that this is better for LLMs is based on vibes, really) that compiles to C. Similar languages were posted here not too long ago: Zen-C => more features, no prefix notation / Rue => no prefix notation, compiles directly to native code (no C target). Surprisingly, compared to other LLM-"optimized" languages, it isn't so much concerned with token efficiency.
I find Polish or Reverse Polish notation jarring after a lifetime of thinking in terms of operator precedence. Given that it's fairly rare to see, I wonder what about it would be more LLM-friendly. It does lend itself better to "tokenization" of a sort - if you want to construct operations from lots of smaller operations, for example if you're mutating genetic algorithms (a la Eureqa). But I've written code in the past to explicitly convert those kinds of operations back to infix for easier readability. I wonder if the LLMs in this case are expected to behave a bit like genetic algorithms as they construct things.
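That back-conversion is small enough to sketch; a minimal prefix-to-infix printer for binary operators, fully parenthesized so no precedence table is needed:

```
// Minimal sketch: render a prefix expression like "+ 1 * 2 3" as infix.
// Fully parenthesizing sidesteps precedence entirely.
fn prefix_to_infix<'a, I>(tokens: &mut I) -> Option<String>
where
    I: Iterator<Item = &'a str>,
{
    let tok = tokens.next()?;
    if ["+", "-", "*", "/"].contains(&tok) {
        let lhs = prefix_to_infix(tokens)?;
        let rhs = prefix_to_infix(tokens)?;
        Some(format!("({} {} {})", lhs, tok, rhs))
    } else {
        Some(tok.to_string()) // a number or variable
    }
}

fn main() {
    let mut toks = "+ 1 * 2 3".split_whitespace();
    println!("{}", prefix_to_infix(&mut toks).unwrap()); // (1 + (2 * 3))
}
```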
>It does lend itself better to "tokenization" of a sort - if you want to construct operations from lots of smaller operations [...]
That's an educated assumption to make. But therein lies the issue with every LLM-"optimized" language, including the recent ones posted here oriented toward minimizing tokens: assumptions, unvalidatable and unfalsifiable, about the kind of output LLMs synthesize/emit when that output is code (or any output, really).
Just scanning through this: it looks interesting and is totally needed, but I think it's missing future use-cases and a discussion of decoding. So, for instance, it is all well and good to define a simple language focused on testing and the like, but what about live LLM control and interaction via a programming language? A sort of conversation in code? Data streams in and function calls stream out, with syntax designed to minimize mistakes in calls and optimize the stream. What I mean by this is special block declarations like:
```
#this is where functions are defined and should compile and give syntax errors
:->r = some(param)/connected(param, param, @r)/calls(param)<-:
```
(yeah, ugly but the idea is there)
The point being that the behavior could change per block. In the streaming world it may, for instance, have guarantees about what executes and what doesn't in case of errors. Maybe transactional guarantees in the stream blocks, versus pure compile-time optimization in the other blocks? The point here isn't that this is the golden idea, but that we should probably think about the use cases more. High on my list of use cases to consider (I think), with a rough sketch after the list:
- language independence: LLMs are multilingual and this should be multilingual from the start.
- support streaming vs definition of code.
- Streaming should consider parallelism/async in the calls.
- the language should consider cached token states to call back to. (define the 'now' for optimal result management, basically, the language can tap into LLM properties that matter)
Hmm... those are my top-of-the-head thoughts, at least.
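For what the streaming/transactional idea might look like, a rough sketch (every name here is invented; this is shape, not a proposal): a runtime buffers streamed calls and commits a block atomically only if nothing in it was malformed.

```
// Invented sketch: an LLM-facing runtime that applies streamed calls
// transactionally, so a malformed call aborts its whole block.
enum Event {
    BeginBlock,
    Call { name: String, arg: i64 },
    EndBlock,
}

fn run(stream: impl IntoIterator<Item = Event>, state: &mut Vec<String>) {
    let mut pending: Vec<String> = Vec::new();
    let mut in_block = false;
    let mut poisoned = false;
    for ev in stream {
        match ev {
            Event::BeginBlock => {
                in_block = true;
                poisoned = false;
                pending.clear();
            }
            Event::Call { name, arg } => {
                // Stand-in validation; a real design would check the call
                // against a typed signature before buffering it.
                if name.is_empty() {
                    poisoned = true;
                }
                if in_block && !poisoned {
                    pending.push(format!("{name}({arg})"));
                }
            }
            Event::EndBlock => {
                if !poisoned {
                    state.extend(pending.drain(..)); // commit atomically
                }
                in_block = false;
            }
        }
    }
}

fn main() {
    let mut log = Vec::new();
    run(
        [
            Event::BeginBlock,
            Event::Call { name: "emit".into(), arg: 7 },
            Event::EndBlock,
        ],
        &mut log,
    );
    println!("{log:?}"); // ["emit(7)"]
}
```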
Quick reaction:
1. Nanolang is a total thought experiment. The key word in its description is "experimental" - whether it's a Good experiment or a Bad experiment can be argued either way, especially by language purists!
2. Yes, it's a total Decorator Crab of a language. An unholy creation by Dr Frankenstein, yes! Those criticisms are entirely merited. It wasn't designed, it accreted features and was a fever dream I couldn't seem to stop having. I should probably take my own temperature.
3. I like prefix notation because my first calculator was an HP calculator (the HP 41C remains, to this day, my favorite calculator of ALL TIME). I won't apologize for that, but I DO get that it's not everybody's cup of tea! I do, however, use both vi and emacs now.
Umm. I think that about covers it. All of this LLM stuff is still incredibly young to me and I'm just firing a shotgun into the dark and listening to hear if I hit anything. It's going to be that way for a while for all of us until we figure out what works and what does not!
- jkh
Looks a bit like Rust. My peeve with Rust is that it makes error handling too much donkey work. In a large class of programs you just care that something failed and you want a good description of that thing:
context("Loading configuration from {file}")
Then you get a useful error message by unfolding all the errors at some point in the program where it makes sense to talk to a human, e.g. logs, an RPC error, etc.:
Failed: Loading configuration from .config because: couldn't open file .config because: file .config does not exist.
It shouldn't be harder than a context command in functions. But somehow Rust conspires to require all this error-type conversion and question marks. It is all just a big uncomfortable donkey game, especially when you have nested closures forced to return errors of a specific type.
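For what it's worth, the anyhow crate already gets close to this (a sketch; the file name and the exact message text are illustrative):

```
// Sketch using the real `anyhow` crate: `.with_context(...)` layers intent
// onto errors, and `{:#}` prints the unfolded chain for a human.
use anyhow::{Context, Result};
use std::fs;

fn load_config(file: &str) -> Result<String> {
    fs::read_to_string(file)
        .with_context(|| format!("Loading configuration from {file}"))
}

fn main() {
    if let Err(err) = load_config(".config") {
        // Prints something like:
        // Failed: Loading configuration from .config: No such file or directory (os error 2)
        eprintln!("Failed: {err:#}");
    }
}
```

The question marks and error-type conversions mostly disappear once everything returns the one catch-all error type, though that is exactly the trade-off the parent comment is complaining about having to opt into.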
I like your "context" proposal, because it adds information about developer intention to error diagnostics, whereas showing e.g. a call stack would just provide information about the "what?", not the "why?" to the end user facing an error at runtime.
(You should try to get something like that into various language specs; I'd love to see you succeed with it.)
I feel this could be achieved better with Golang or Kotlin and a custom linter that enforces parentheses around each expression term to make precedence explicit, and enforces that each function has at least one test. Although I guess neither of those languages has free interop with C, they are close. And Go doesn’t have unions :’(
Really clean language where the design decisions have led to fewer traps (cond is a good choice).
It’s peculiar to see s-expressions mixed together with imperative style. I’ve been experimenting along similar lines - mixing s-expressions with ML style in the same dialect (for a project).
Having an agentic partner toiling away with the lexer/parser/implementation details is truly liberating. It frees the human to explore crazy ideas that would not have been feasible for a side/toy/hobby project earlier.
Opus is currently the only one that can code Rust, but if you give it symbol resolution there is quite literally nothing better. The type system in Rust is incredibly powerful and LLMs are great (just Opus for now) at utilizing it.
LLM Code Generation - Unambiguous syntax reduces AI errors
Testing Discipline - Mandatory tests improve code quality
Simple & Fast - Minimal syntax, native performance
Design Philosophy:
Minimal syntax (18 keywords vs 32 in C)
One obvious way to do things
Tests are part of the language, not an afterthought
Transpile to C for maximum compatibility
Ehh, I don't think the overhead of inventing a new language makes up for the lack of data around it. In fact, if you're close enough to Rust/C then LLMs are MORE likely to make up stuff from their training data and screw up your minimal language.
(pls argue against this, I want to be proven wrong)
Every new language pet project these days claims to be "designed for LLMs", lol. Don't read too much into it. The only language that's really designed for LLMs is COBOL, because it was written to read just like natural English and LLMs are trained by reading lots of English-language books.
It looks like a Frankenstein's abomination that has C-like function signatures and structs with S-expr function bodies, and this will anger some homoiconicity nerds. I love it.
https://x.com/danielvaughn/status/2011280491287364067?s=46
More terse the better.
> we might expect LLMs to actually do better with an existing language for which more training code is available
This isn't even true today. Source: heavy user of Claude Code and Gemini with Rust for almost 2 years now.
This isn't really true. LLMs understand grammars really really well. If you have a grammar for your language the LLM can one-shot perfect code.
What they don't know is the tooling around the language. But again, this is pretty easily fixed - they are good at exploring cli tools.
Getting the Doom sound working on it involved me sitting there typing "No I can't hear anything" over and over until it magically worked...
Maybe I should have written a helper program to listen using the microphone or something.
(Which is true - it's easy to prompt your LLM with the language grammar, have it generate code and then RL on that)
Easy in the sense of "it only takes having enough GPUs to RL a coding-capable LLM", anyway.
Summary:
- Co-created FreeBSD.
- Led UNIX technologies at Apple for 13 years
- iXsystems, led FreeNAS
- idk something about Uber
- Senior Director for GPU Compute Software at NVIDIA
For whatever it’s worth.
Interesting commit starting Ports 2.0. Three versions of bash, four versions of Emacs, plus jove.
Seems unlikely for an out-of-distribution language to be as effective as one that’s got all the training data in the world.
I might accidentally summon a certain person from Ork.
https://pyret.org/docs/latest/testing.html
This seems like a research dead end to me; the fundamentals are not there.
so like Go?
> Key Features; Prefix Notation
wow
NEXT!