· Marco Rapaccini · Research · 4 min read
We asked LLMs what Programming Language they would design
We ran an interesting experiment, asking major LLMs what kind of programming language they would design for machine-to-machine interaction

How It Started
The experiment started from this post ↗ from Michele Riva - Cofounder & CTO at Orama ↗ on LinkedIn:
If LLMs had to come up with a programming language that’s optimized for them (machines) rather than us (humans) they’d probably come up with an APL dialect
After reading Michele’s opinion, our initial guess was that this hypothetical programming language, which we’ve decided to call NextLang, would be:
- event-driven
- declarative
- based on a binary syntax
So, we thought:
Why don’t we ask LLMs?
The results are quite fascinating.
This article is a summary of our findings, but you can read the full responses from the major LLMs in this dedicated repo ↗.
Prompt
You're an experienced software architect in charge of designing from scratch a new programming language - called NextLang - for the next generation of LLMs.
NextLang enables LLMs/AI tools to work with other AI tools (aka other machines), write software, and orchestrate complex architectures, without any kind of human intervention.
NextLang needs to be efficient for machine-to-machine interaction.
Explain why you would make certain architectural decisions while designing NextLang.
Help me to understand the main aspects of NextLang by describing it with examples and by comparing it with pre-existing human-readable programming languages.
Major Findings
LLMs’ Shared Vision
Most LLMs seem to agree on the following key architectural decisions:
- Machine Efficiency
  - Machine-to-machine communication should be prioritised over human readability
- Declarative Approach
  - Specifying what to achieve matters more than how to achieve it
- Static Typing
  - Strong, explicit typing ensures communication reliability
- Concurrency
  - AI cooperation needs native support for async and parallel operations
- Semantic Precision
  - Ambiguity is not allowed: there is no margin for interpretation errors
- Self-describing Data Formats
  - Embedded metadata improves interoperability (see the sketch after this list)
- Security
  - Built-in security for autonomous AI interaction
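NextLang only exists as a set of model answers, so here is our own rough analogy, in Python rather than in any proposed syntax, of the kind of message most of the responses converge on: declarative intent, explicit types, and embedded, self-describing metadata. The `TaskIntent` name, its fields, and the `nextlang.task/v1` schema tag are illustrative assumptions, not taken from any of the LLM responses.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class TaskIntent:
    """A declarative, self-describing task: it states what is wanted,
    not how to compute it, and carries its own schema metadata."""
    goal: str                         # semantic intent, e.g. "summarise"
    inputs: dict[str, str]            # explicitly named and typed payload references
    constraints: dict[str, float]     # machine-checkable limits, e.g. latency budgets
    schema: str = "nextlang.task/v1"  # embedded metadata for interoperability

intent = TaskIntent(
    goal="summarise",
    inputs={"document_uri": "s3://bucket/report.txt"},
    constraints={"max_latency_s": 2.0},
)

# The wire format stays machine-first: no prose, every field named and typed.
print(json.dumps(asdict(intent)))
```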
Key Differentiators
- Claude focused on knowledge representation with ontologies and inference rules
- Grok proposed a minimalist syntax and AI-specific primitives for machine efficiency
- Gemini introduced a system allowing AIs to advertise capabilities to other AIs
- ChatGPT detailed orchestration primitives and binary optimisations (see the sketch after this list)
- Llama took a minimalist approach with a strong focus on integrating knowledge graphs
- DeepSeek emphasised system interoperability using native protocols and foreign function interfaces
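To make ChatGPT’s spawn/await/select idea a bit more tangible, here is a loose analogy using Python’s asyncio instead of any proposed NextLang syntax: `create_task` plays the role of spawn, `await` is await, and `asyncio.wait` with a timeout stands in for select with a fallback. The agent names and delays are invented for illustration.

```python
import asyncio

async def call_agent(name: str, delay: float) -> str:
    """Stand-in for a remote AI agent; a real system would call out over a protocol."""
    await asyncio.sleep(delay)
    return f"{name}: done"

async def orchestrate() -> None:
    # "spawn": start two agents concurrently
    fast = asyncio.create_task(call_agent("planner", 0.1))
    slow = asyncio.create_task(call_agent("verifier", 5.0))

    # "select" with timeout and fallback: take whatever finishes within the deadline
    done, pending = await asyncio.wait({fast, slow}, timeout=1.0)
    for task in pending:
        task.cancel()  # fallback path: drop the agent that missed the deadline

    # "await": collect the completed results
    print([task.result() for task in done])

asyncio.run(orchestrate())
```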
Comparative Table
Feature | Claude | DeepSeek | Gemini | Grok | Llama | OpenAI ChatGPT |
---|---|---|---|---|---|---|
Core Paradigm | Intent-based, declarative programming with semantic precision | JSON-like declarative syntax with strict schemas | S-expressions for structured, semantically rich data streams | Minimalist functional approach with compact syntax | Minimalist syntax with strong typing | Binary-optimized serialization with explicit orchestration |
Type System | Type system based on semantic meaning, beyond structural typing | Strict+dynamic hybrid with runtime flexibility | Strong, static typing with self-describing data | Strong typing with native data structures | Strong, static typing | Gradual typing with protocol contracts |
Control Flow | Context-aware execution, self-modifying capabilities | Auto-parallelism, stateful workflows | Declarative, event-driven with explicit state machines | First-class concurrency and asynchronous execution | Native support for asynchronous operations | Spawn, await, select primitives |
Knowledge Representation | Native knowledge representation (ontologies, relations) | Interoperability first with embedded foreign functions | Capability discovery and registration | AI-specific primitives | Integration with knowledge graphs | Composable DSL layers |
Error Handling | Built-in verification and reasoning | Auto-retry and rollback features | Standardized error codes and rich error reporting | Robust error handling with retries and fallbacks | Not extensively detailed | Select statement with timeout and fallback |
Security | Cross-domain translation layer | Security and autonomy by design | Zero-trust model with fine-grained permissions | Built-in security features | Not extensively detailed | Explicit effect annotations (@io) |
Execution Model | Hybrid with declarative, optimization, and runtime layers | Auto-optimizing compiler | Protocol negotiation for communication | Self-modification capabilities | Self-optimization | Binary-first tokenization |
Sample Syntax Example | S-expression-like with explicit constraints | YAML-like declarative structure | Deeply nested S-expressions with rich metadata | Functional notation (COND, ASYNC) | Fn-prefix, arrow syntax | Binary messages with explicit annotations |
Additional Thoughts
The analysis shows us that LLMs can share a vision and its related goals but, much like humans, they seem to differ on the actual implementation.
We were expecting the focus on machine efficiency and AI-tool interoperability; what really surprised us was the proposed integration of knowledge graphs and ontologies.
This suggests that LLMs are somehow aware that, without knowledge graphs, their understanding of the world is limited, because they would lack a unified view of existing information.
An interesting lesson for programmers comes from the fact that LLMs prefer strong static typing because it reduces communication errors. Generally speaking, programmers use strongly typed programming languages for early error detection (thank you, Mr Compiler), but machines seem to like them because they help improve communication between systems.
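As a small illustration of that point, here is a toy example in Python, where the early checking is done by an external tool such as mypy rather than by a compiler. The `Temperature` type and `send_reading` function are our own invention: the typed schema lets the sender reject a malformed message before it ever crosses the wire.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Temperature:
    celsius: float  # the unit lives in the type, not in free-form prose

def send_reading(sensor_id: str, reading: Temperature) -> None:
    """Stand-in for a transport layer; only well-typed readings get this far."""
    print(f"{sensor_id} -> {reading.celsius} °C")

send_reading("probe-7", Temperature(celsius=21.5))

# A static checker such as mypy rejects the next call before anything is sent,
# which is exactly the communication-reliability benefit the LLMs point at:
# send_reading("probe-7", 21.5)   # error: expected "Temperature", got "float"
```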
LLMs proposed native protocols for communication, echoing the AI community’s growing interest in MCP and A2A.
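To give a flavour of what native protocols and capability discovery could mean in practice, here is a deliberately tiny, in-process sketch in Python. It is not the MCP or A2A wire format, only the underlying advertise-and-discover pattern; the agent names and capability strings are made up.

```python
# Toy in-process registry: not the MCP or A2A wire format, just the underlying
# idea that agents advertise capabilities other agents can discover and call.
registry: dict[str, list[str]] = {}

def advertise(agent: str, capabilities: list[str]) -> None:
    registry[agent] = capabilities

def discover(capability: str) -> list[str]:
    return [agent for agent, caps in registry.items() if capability in caps]

advertise("summariser-01", ["summarise", "translate"])
advertise("planner-02", ["plan", "decompose"])

print(discover("summarise"))  # -> ['summariser-01']
```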
NextLang is just a thought experiment, of course.
So, a very important question remains:
Why should Artificial Intelligence design a programming language?
Well, we’ll leave answering that question to you!