
Dion Almaer

Software, Development, Products


AI Native Dev

English will become the most popular development language in 6 years

January 13, 2025

Simon Willison wrote up his 1, 3, and 6 year AI predictions from his chat with the lovely Oxide crew. This is always a fun thing to ponder as you start a new year, especially since 2025 feels like such a futuristic number.

I found myself reflecting on this and landing on a provocative thought. What if the most popular development language 6 years from now isn’t a back-and-forth between Python and JavaScript, or a “new” language such as Rust, but something much more radical: English?

When I say “English”, I am using it somewhat as shorthand for natural language, so it can actually include many other languages such as Spanish and Mandarin.

This seems somewhat outlandish, right?

Maybe it’s not so crazy, given that Karpathy said this a while back:

The hottest new programming language is English

— Andrej Karpathy (@karpathy) January 24, 2023

So, how could it come to pass?

  • Computers can now understand natural language (and other inputs that humans do, such as images and sound)
  • Computers can write code based on that understanding (including filling in the gaps)
  • Humans already use English for large parts of the development process (documentation, comments, issues, PR conversations, group chat conversations, etc etc)

Let’s dive in.

Computers are increasingly grokking us

When you look at the history of computing, you see a continuous rise in the level of abstraction at which we talk to computers. We started speaking at a very low level with punch cards, machine code, and assembly language. Only a few people had the capability and understanding to get computers to do things, and those things were fairly limited (yet still amazing!).

Over time, we taught computers abilities that allow humans to share intent at increasingly higher levels of abstraction. Why have humans track memory in the computer if we can make the computer do that work?

Part of the story is in building the abstractions, and the technology breakthroughs such as garbage collectors that are fast enough, or LLMs that can write code. Intertwined with this trend is the dramatic shift in raw numbers, as illustrated by Tailscale CEO Avery Pennarun in his piece on living in the numbers. The rate of change is hard to fathom. We are used to CPUs that get faster, GPUs that can multiply matrices increasingly quickly, more memory, and larger, faster disks, but OMG, when you see the actual magnitude of the change (computation is 200,000x faster than 20 years ago, for example), it sinks in that we can expect so much more from our systems.

It’s not just that fewer instances can handle a few more HTTP requests. Our computers can grok what we say, can see images and video, and can hear audio. How much improvement will there be in 6 years across all of these vectors? Today is the worst it will ever be, with repeated improvements coming. Mind blowing.

The many applications popping up with a chat sidebar are a signal that we are building natural-language bridges between computers and humans.

With development, we see this in almost all of the tools that we use. Coding assistants started with helpful autocomplete, but we want to communicate deeply with the Borg-connected pair programmers that we now have available 24/7. Chop chop.

Computers can write code

I mean, humans have always written code. Since the beginning of computing, we’ve created systems that translate human-written instructions from one format into another that computers can understand.

But now, we’ve evolved beyond that. We can use plain English as input and get not just code, but various other digital assets as output.

Writing code isn’t just about mastering syntax. It’s about understanding platform capabilities and technological ecosystems. LLMs possess a vast knowledge base of this information, and with deep context awareness, they can comprehend what truly matters – your code and all its dependencies.

Humans already use English

When you think about your development projects, you realize that a large amount of your time isn’t writing for loops, it’s understanding the requirements of your business and mapping them to the technology that needs to drive it.

You write requirements. You have conversations with your team in Linear and Slack and …. You have code comments. You chat with your coding assistant. We are already using English.

The difference in 6 years though, is that we will be able to switch to a spec-centric vs. code-centric way of development.

Guypo goes over this in his talk, What is AI Native Development:

Don’t think of this as a one-time moment where all of a sudden every developer does nothing but write specs.

It can, and will, happen gradually. This has always been the way with development.

Python rocked up with the ability to link to C libraries, and thus gained access to an evolved corpus of well-tested libraries that had been worked on for decades. The same can happen here. The wrapping will occur, and the lower level can evolve as makes sense. The great feature of this new approach is that the generation from English to implementation can always be re-run with the latest gains.

Compare the two paths:

Current: you open a chat with your favorite coding assistant and go back and forth to get some code for certain functionality. When you are happy with it, it ends up in your codebase. You lose the context of the chat and the English intent, and you now have a base built on the state of the art of that moment. You are left with the compiled output, so to speak.

Future: your English is the source, and as your computer systems improve, they can regenerate new and improved implementations. It behooves you to invest in testing and validation in this world, but that is something we really need anyway… we just sometimes get away without doing it.
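To make that future path concrete, here is a minimal sketch, with a hypothetical `generate_implementation` standing in for the model call. The English spec plus its tests are the durable source; the generated code is disposable output that can be regenerated as models improve.

```python
import re

# The English spec is the source of truth; everything below it is replaceable.
SPEC = """
Function: slugify(title)
- lowercase the title
- replace runs of non-alphanumeric characters with a single hyphen
- strip leading and trailing hyphens
"""

def generate_implementation(spec: str):
    # Stand-in for an LLM call; in the real workflow this gets re-run
    # whenever models improve, while SPEC and the tests stay fixed.
    def slugify(title: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slugify

def validate(impl) -> bool:
    # The tests encode the intent; any regenerated implementation must pass.
    return (
        impl("Hello, World!") == "hello-world"
        and impl("  AI Native Dev  ") == "ai-native-dev"
    )

slugify = generate_implementation(SPEC)
assert validate(slugify)
```

The point of the sketch is the shape, not the stub: regeneration becomes cheap, so the investment shifts to the spec and the validation suite.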

Once we work out how to get good at this level of abstraction, more amazing things can happen. Because LLMs can understand many languages, both natural and programming, we are now truly multi-lingual. The flexibility that comes with this can be transformational, but it is also very different. Being able to fill in the gaps isn’t the same as inheritance and composition. When do we need to be more explicit, and when does saying more stop making sense? How do we prompt humans for more information so they can elaborate? Do we sometimes want to use pseudocode to explain something?

Popular doesn’t mean solely!

Notice what is not being said in this prediction?

“You don’t mean that everyone will just be writing English only, right?”

No. Most popular doesn’t mean only. There will be room for traditional code. It’s also important to remember that when we use the term “developer”, the word is doing a lot of work. There is a massive spectrum of people doing very different things, with very different skill sets.

There are engineers still doing very low level work, and everyone on top of those layers gets to benefit without having to do so (e.g. on top of an operating system, or using standard libraries, etc). As we wire up new platform capabilities, that wiring may be done via platform APIs that are not in English.

So, while English will play a role for everyone, I think that it will best suit cases higher up the stack, with the app developers. It will be important to still understand the platforms you are building on top of, and programming languages. But this may become less important over time.

“Why did you say development language, not programming language?”

The broader term is purposeful: it represents usage beyond just the programming aspects of software development.

“Does this mean you think there won’t be the need for developers?”

The opposite. I think our appetite for software is insatiable, and I’m excited about how this can democratize development, and how the flexibility can allow the personalization of computing experiences for users. If there are fewer curly braces, that doesn’t mean that there isn’t any more development going on!

We have a lot to work on to make this all happen, and I will now set a calendar entry in Jan 2031 to re-read this and see how far off I am.

What are your predictions for how software development will change in 6 years?

Principles of Developer AI Product Development

December 3, 2024

How do you build products and platforms for developers in a world that contains probabilistic black boxes, ones that surprise you with what they can and can’t do, and when they decide to show you?

From my own trial and error, I have found that most of my mistakes are in not understanding the two pieces: developers and AI systems.

Instead of merging them, the key is to understand what makes each different and build with that in mind. Then the sum of the parts, bringing great UX together with smart LLMs, does the trick.

I think it is easy to anthropomorphize computers now that they seem to understand our language. Our written words. The images our eyes see. The sounds our ears hear. They have become the robots we have read about and watched on the big screen in Sci-Fi for ages!

Thus, if you are building a coding assistant, it should feel like another human that you are pairing with, right? With the same UX? No. We can do much better.

The Developer

First, let’s look at the developer, the human, and see how they operate, where they shine, and where they can use help.

Pat the developer:

  • Has a variable set of skills and knowledge when it comes to building software. They are proficient in some programming languages, databases, libraries, frameworks, platforms, and domains. Some of their knowledge has faded over time while other sets are fresh. It’s a unique mesh.
  • Only has a certain amount of time and energy to expend per day. They really don’t scale. Sometimes they feel sick, and other times they are in the zone and flowing.
  • Hates toil, can get bored, and prefers creative work.
  • Has some imposter syndrome.
  • Is forgetful, and makes random mistakes all the time.
  • Is able to deeply understand the context of their work environment, and the people around them.
  • Knows what winning is all about, and cares about the user and business problems that need to be solved.

This is just the tip of the iceberg, but you can already see how important it is for your solution to:

#1 Get the important context that is hidden in Pat’s brain out of their head and make it available to other team members and to the AI system itself. There is gold locked up there.

  • Help Pat expand and elaborate with the system. For example, if you have a chat interface in your solution, end with questions for Pat to get more information, and teach Pat to keep iterating this way!
  • Make sure that Pat can tell you important things such as “this is a golden PR, trust this way of doing things”, or “this part of the codebase is legacy, please don’t give me more like this… instead treat this other part of the codebase as The New Way ™”, etc.
  • Secret side note: if this is done well, it also means that if Pat leaves the project or the company, more of the knowledge is left behind and available!

#2 Make sure that Pat is always unblocked, and in that flow state as often as possible.

  • Many AI researchers I have worked with see a failure and their instinct is: “we will fix that in the model”. They run off and try to steer the model to solve that particular problem, and for it to always come out with the perfect answer. While it’s great to keep improving on real tasks, and building those datasets, there will ALWAYS be issues. This is an endless game that reminds me of Google Search and the whack-a-mole world of “search quality”. The answer is not to fight for perfection, but to have a forgiving UX for the developer. If I am stuck on a task, I get angry, and feel like the system has totally failed me and my only hope is to start from scratch. If instead there are threads for me to pull, and things for me to try, I am happy to fight to get to the solution!
  • I often think about the beauty of “the 10 blue links” with Google Search. If the best result is 3rd on the list, I don’t think of it as a failure at all… I am still very happy with Google. Contrast this with Google Assistant or Alexa… where there is one result. If it’s wrong, trust is eroded quickly. I spoke about this with Malte Ubl of Vercel, and how smart it was when the original v0 would show you multiple versions from a prompt so you could pick one. If 3 of the 4 were meh, but one was a solid starting point… great! Always have a next step for Pat.
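The “always have a next step for Pat” idea can be sketched as a fan-out: instead of one answer, generate several candidates in parallel and let Pat curate. `draft_solution` below is a hypothetical stand-in for a model call, not any real API.

```python
from concurrent.futures import ThreadPoolExecutor

def draft_solution(task: str, seed: int) -> str:
    # Placeholder for an LLM call; varying the seed (or temperature, or
    # model) yields a different candidate for the same task.
    return f"candidate {seed} for: {task}"

def fan_out(task: str, n: int = 4) -> list[str]:
    # Produce n drafts in parallel so Pat can pick a starting point to
    # iterate from, rather than accept or reject a single answer.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda seed: draft_solution(task, seed), range(n)))

candidates = fan_out("add retry logic to the upload endpoint")
assert len(candidates) == 4
```

If 3 of the 4 drafts are meh but one is a solid starting point, the UX still succeeded.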

#3 Take care of the toil so Pat can be doing the work that Pat can uniquely do best.

#4 Raise the level of abstraction: Let Pat talk to the system in a way that matches their skills, and allow translation. If Pat is an expert in Rust and the backend, build confidence that they can dive into parts of the monorepo that are built with TypeScript, because the guardrails are there and the details of syntax aren’t what is important here.

#5 Build trust with Pat. Show the sources and explain WHY the system is doing what it is doing and allow Pat the ability to jump around and learn more. Transparency is key. Let Pat change the context the AI has and re-run things so they can tinker and iterate to the best possible results.

The AI System

Now we have the AI system you are building. Broaden the view here and think of it as the overall computer system that happens to have AI components:

  • Think of this as somewhat infinitely scalable compute. You probably wouldn’t take an issue from your project tracker and farm it out to 6 developers on your team and then when they each send back a PR pick one you like to iterate from, but with AI you could decide to do that.
    • Now imagine how the UX of a system can change. It can go off and come back to Pat with multiple options and Pat can happily curate and pick a favored one to iterate from!
  • Some developers complain that current LLMs are “only junior developers”. Let’s say this is the case… but LLMs are trained on so many domains of computing that they are junior developers who know EVERY programming language, library, framework, platform, etc. This is amazing. Oh, and they won’t stay junior for long, to boot.
  • By default they have this incredible broad knowledge but it’s like they are showing up on their first day and they know nothing about your domain. Fix this by connecting them with all of the context they need!
  • AI has no feelings, and thus is happy to do toil. Do all the toil all the time.
  • AI has no ego, so won’t be a jerk to Pat and is a safe space with no judgement (unless you make the AI act like a jerk ofc! Don’t.)

With this acknowledgement, you can make sure that your solution:

#1 Eval-driven development: First, make it work, then make it fast and affordable.

Once you prove something out, you can use synthetic data and fine-tune models for particular tasks that are cheaper. Oh, and everything is getting cheaper month by month. There are new models all the time, so build a platform that can make use of multiple ones and run them against each other. You will always be surprised at which models are best for particular tasks. Don’t bet on one; bet on evolution and enjoy the ride.
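A minimal eval harness makes “run them against each other” concrete. The model callables here are hypothetical stand-ins for real model clients; the shape is what matters: which model is best for a task is measured, not guessed.

```python
def run_evals(models: dict, tasks: list) -> dict:
    # Score each model by the fraction of tasks it answers correctly.
    scores = {}
    for name, model in models.items():
        passed = sum(1 for prompt, expected in tasks if model(prompt) == expected)
        scores[name] = passed / len(tasks)
    return scores

# Toy task set and two stand-in "models" with different strengths.
tasks = [("2+2", "4"), ("capital of France", "Paris")]
models = {
    "model-a": lambda p: {"2+2": "4"}.get(p, "?"),
    "model-b": lambda p: {"2+2": "4", "capital of France": "Paris"}.get(p, "?"),
}

scores = run_evals(models, tasks)
assert scores == {"model-a": 0.5, "model-b": 1.0}
```

With a harness like this in place, swapping in next month’s model is a one-line change and a re-run, not a leap of faith.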

#2 Tools: Give this LLM “brain” tools to wield.

Don’t rely on the model to do deterministic things when it can just use tools. We are now seeing some of the SoTA LLMs do internal calculations to decide when to use tools vs. solve the problem directly. Great. But think about which tools are most useful and put them within reach of the LLM. Do the dance of working out when your system should be the meta-cognition agent vs. when to let the LLM do its thing. It’s a fun dance to learn.

Noam Brown, who worked on reasoning tokens and the system behind o1, has been talking about this for many years, such as in this talk, which discusses how neural nets without special pathways are vastly inferior. Computers really got good at chess (and then Go, etc.) when they added search and started playing themselves.
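One way to sketch the tool dance: the model (stubbed here) decides whether to dispatch to a deterministic tool or answer directly. The tool names and dispatch shape are illustrative, not any particular framework’s API.

```python
import math

# Deterministic tools the LLM "brain" can wield instead of guessing.
TOOLS = {
    "sqrt": lambda x: math.sqrt(float(x)),
    "upper": lambda s: s.upper(),
}

def model_decide(query: str) -> dict:
    # Stand-in for the LLM's tool-use decision; a real model would emit a
    # structured tool call when it judges a tool to be the better path.
    if query.startswith("sqrt "):
        return {"tool": "sqrt", "arg": query.split(" ", 1)[1]}
    return {"tool": None, "answer": "(direct model answer)"}

def answer(query: str):
    decision = model_decide(query)
    if decision["tool"]:
        # Deterministic work goes to the tool, not the model.
        return TOOLS[decision["tool"]](decision["arg"])
    return decision["answer"]

assert answer("sqrt 16") == 4.0
```

The dance is in `model_decide`: how much of that routing your system owns vs. how much the model owns is the design choice to tune.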


#3 Data: Use large LLMs to generate great synthetic data. Your system should be saving data to learn from and feed back into the system to improve the AI all the time. What your AI and Pat are doing is gold. Learn from it. You will be very surprised.


#4 Smart Context: With large context windows and smart retrieval, you can make sure the models have the best possible information to work with to get something done. Think about all of the signals you can give them… build output, runtime errors, you name it. And if you don’t have enough space to give them all of these signals, what can you do? Run multiple parallel versions that have different signals passed in and let Pat choose the best results… or another AI judge!

And now we are seeing SoTA models looking to integrate external data via protocols such as Anthropic’s Model Context Protocol. It’s fun to see the experiments here.
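Context assembly under a fixed budget can be sketched as greedy packing of ranked signals. Word count stands in for a real tokenizer here, and the scores are assumed to come from some upstream retrieval or ranking step.

```python
def pack_context(signals, budget: int) -> str:
    # Greedily pack the highest-scoring signals that fit within the budget.
    picked, used = [], 0
    for score, text in sorted(signals, reverse=True):
        cost = len(text.split())  # crude token proxy; use a real tokenizer
        if used + cost <= budget:
            picked.append(text)
            used += cost
    return "\n".join(picked)

# Illustrative signals: build output, runtime errors, and noise.
signals = [
    (0.9, "build output: error TS2345 in upload.ts line 40"),
    (0.7, "runtime error: fetch failed with 401"),
    (0.2, "unrelated lint warning about unused import"),
]

context = pack_context(signals, budget=14)
assert "TS2345" in context and "unused import" not in context
```

The signals that don’t fit are exactly what you can route to a parallel run with a different context mix.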


#5 Cheap Experiments: you need to be able to run experiments all the time. I remember talking to one of my favorite AI researchers who worked with Dario from Anthropic back when they were at Google, and something stuck with me:

  • “It wasn’t that Dario had the best ideas, although he had plenty… he just ran 10 to 100 times as many experiments as anyone else. That’s when I knew he would do amazing things.”
  • If you hear yourself or your team saying “oh, it will be hard to try that” take a step back. Invest in a platform that makes it easy to try things.
  • I’ve always been humbled to see the difference between how I *think* something will turn out vs. how it collides with reality. With LLMs, it’s even harder to know. Try things and allow emergence to be your friend. Empirical wins the race.
  • Don’t be precious with your prompts either. Over time I have seen that models have gotten better at understanding plain language vs. magic spells and incantations. Let people play with the prompts and have that be great, because your eval framework will allow for that nicely. Don’t gatekeep here.

So, here we are. We are building and iterating on systems that get the best out of developers and the massive scale of compute; the new technology of transformers, paired with the fluke of GPUs, gives us an epic opportunity to build amazing things.

Shall we?

Joining Tessl to become an AI native developer

November 14, 2024

tl;dr I’m thrilled to announce that I’ve joined Tessl, a new company that’s pioneering AI Native Software Development. Founded by Guypo and fresh off securing Series A funding, we’re on an exciting mission to revolutionize how software is built.

I’m especially excited to be reuniting with Ben as we focus on the opportunity to create a platform that makes software development not just more efficient, but genuinely fun and enjoyable for developers, at a time when everything has changed with the new AI tools in the toolbox!


It has been amazing to watch the progress that AI software developer tools have been making, and to be part of the rise of augmentation for developers. I have seen first-hand the impact on the lives of software developers as they are able to get so much more done with small teams, having much more fun to boot.

As I watch myself, and developers in general, use natural language to generate code… something has been bothering me a lil. It feels like the “source” of what you want to build gets lost as our tooling converts it expertly into our codebase.

What if we could keep this original intent, expand on it to specify what we want built… work hand in hand with our team and AI systems, and then regenerate assets as the LLMs get better at generating code? (which happens on a weekly basis rn!) What if we could separate concerns and have passes that make sure that the code is optimized for performance? Is accessible? Covers edge cases? Has rich debugging? Let me focus on clearly articulating what I want, and have systems that can do a lot of the toilsome yet important actions to make it work, make it right, and make it fast.

If we extrapolate on the improvements we have seen in the state of the art of coding models, and marry this work with systems that bring all of the guardrails and workflows that are needed to partner with the intuition that LLMs have, how far can we go?

I love coding, and when I think about why… it’s because I love thinking through building useful software. Back in the earliest days, after you thought things through, you started punching cards. Then we had machine code, and assemblers, and compilers, and linters, and IDEs, and all of the layers of tooling that we get to sit on top of today. We have always been changing the level of abstraction to allow developers to best specify, or take specifications from others perhaps, and generate working software.

Ben and Guypo

This is a huge challenge, and I got really excited about it after talking to Guypo and hearing his vision in this space. Ben and I were fortunate enough to invest in Snyk and see the sea-change in developer focused security that Snyk brought to the software world, so having the opportunity to jump on a rocket ship with something as large as the AI software revolution was a no-brainer.

The most fun I have ever had in my career has always been linked to partnering with Ben, and now we get to join forces with Guypo, and the incredible team he has assembled, to go change the world through leading AI Native Software Development.

It’s daunting and exciting, and the mission is larger than the products that Tessl will build itself. This is actually one reason why I was so compelled to work on this. I’m more drawn towards companies that are on a mission, where the company makes money in order to accomplish the mission rather than the other way around.

I hope that you join us in pushing the bar on what the future holds for software development. We have just begun, but with a great team and a war chest, we are ready to move mountains and do so fast. Time to build… with AI!

/fin
