
Dion Almaer

Software, Development, Products


Principles of Developer AI Product Development

December 3, 2024

How do you build products and platforms for developers in a world that contains probabilistic black boxes that surprise you with what they can and can’t do, and when they choose to show you?

From my own trial and error, I have found that most of my mistakes come from not understanding one of the two pieces: developers and AI systems.

Instead of merging them, the key is to understand what makes each different and to build with that in mind. Then the sum of the parts, bringing great UX to the party with smart LLMs, does the trick.

I think it is easy to anthropomorphize computers now that they seem to understand our language. Our written words. The images our eyes see. The sounds our ears hear. They have become the robots we have read about and watched on the big screen in Sci-Fi for ages!

Thus, if building a coding assistant, it should feel like another human that you are pairing with, right? With the same UX? No. We can do much better.

The Developer

First, let’s look at the developer, the human, and see how they operate, where they shine, and where they can use help.

Pat the developer:

  • Has a variable set of skills and knowledge when it comes to building software. They are proficient in some programming languages, databases, libraries, frameworks, platforms, and domains. Some of their knowledge has faded over time while other sets are fresh. It’s a unique mesh.
  • Only has a certain amount of time and energy to expend per day. They really don’t scale. Sometimes they feel sick, and other times they are in the zone and flowing.
  • Hates toil, can get bored, and prefers creative work.
  • Has some imposter syndrome.
  • Is forgetful, and makes random mistakes all the time.
  • Is able to deeply understand the context of their work environment, and the people around them.
  • Knows what winning is all about, and cares about the user and business problems that need to be solved.

This is just the tip of the iceberg, but you can already see how important it is for your solution to:

#1 Get the important context that is hidden in Pat’s brain out of their head and available to other team members and the AI system itself. There is gold locked up there.

  • Help Pat expand and elaborate with the system. For example, if you have a chat interface in your solution, end with questions for Pat to get more information, and teach Pat to keep iterating this way!
  • Make sure that Pat can tell you important things such as “this is a golden PR, trust this way of doing things”, or “this part of the codebase is legacy, please don’t give me more like this… instead treat this other part of the codebase as The New Way ™”, etc.
  • Secret side note: if this is done well, it also means that if Pat leaves the project or the company, more of the knowledge is left behind and available!
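Capturing this kind of knowledge can be as simple as structured annotations that ride along with the codebase and get rendered into prompts. A minimal sketch, and all the names here are hypothetical, not any particular product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A piece of tribal knowledge Pat records about the codebase."""
    target: str  # path, PR id, or module the note applies to
    kind: str    # e.g. "golden", "legacy", "preferred-pattern"
    note: str    # free-text guidance for humans and the AI

@dataclass
class KnowledgeBase:
    annotations: list = field(default_factory=list)

    def add(self, target, kind, note):
        self.annotations.append(Annotation(target, kind, note))

    def to_prompt_context(self):
        """Render the notes as lines an LLM prompt can include verbatim."""
        return "\n".join(
            f"[{a.kind}] {a.target}: {a.note}" for a in self.annotations
        )

kb = KnowledgeBase()
kb.add("PR #482", "golden", "trust this way of doing migrations")
kb.add("src/legacy/", "legacy", "do not generate more code in this style")
print(kb.to_prompt_context())
```

The nice property is that the knowledge survives in the repo even if Pat doesn’t.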

#2 Make sure that Pat is always unblocked, and in that flow state as often as possible.

  • Many AI researchers I have worked with see a failure and their instinct is: “we will fix that in the model”. They run off and try to steer the model to solve that particular problem, and for it to always come out with the perfect answer. While it’s great to keep improving on real tasks, and building those datasets, there will ALWAYS be issues. This is an endless game that reminds me of Google Search and the whack-a-mole world of “search quality”. The answer is not to fight for perfection, but to have a forgiving UX for the developer. If I am stuck on a task, I get angry, and feel like the system has totally failed me and my only hope is to start from scratch. If instead, there are threads for me to pull, and things for me to try, I am happy to fight to get to the solution!
  • I often think about the beauty of “the 10 blue links” with Google Search. If the best result is 3rd on the list, I don’t think of it as a failure at all… I am still very happy with Google. Contrast this with Google Assistant or Alexa… where there is one result. If it’s wrong, trust is eroded quickly. I spoke about this with Malte Ubl of Vercel, and how smart it was when the original v0 would show you multiple versions from a prompt so you could pick one. If 3 of the 4 were meh, but one was a solid starting point… great! Always have a next step for Pat.
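One way to guarantee a next step is to fan a prompt out to several candidates, the way the original v0 did. A toy sketch with a stubbed-out `generate` function (a hypothetical stand-in; a real system would call a model API here, perhaps varying temperature or context per candidate):

```python
def generate(prompt, temperature):
    # Stand-in for a real LLM call; a real system would hit a model API.
    return f"candidate for {prompt!r} (temp={temperature:.1f})"

def fan_out(prompt, n=4):
    """Produce several candidates so there is always a next step for Pat."""
    return [generate(prompt, temperature=0.5 + 0.1 * i) for i in range(n)]

options = fan_out("landing page hero component", n=4)
for i, opt in enumerate(options, 1):
    print(f"{i}. {opt}")
# Pat (or a judge model) picks one to iterate from, e.g. options[2].
```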

#3 Take care of the toil so Pat can be doing the work that Pat can uniquely do best.

#4 Raise the level of abstraction: Let Pat talk to the system in a way that matches their skills, and allow translation. If Pat is an expert in Rust and the backend, build confidence that they can dive into the parts of the monorepo that are built with TypeScript, because the guardrails are there and the details of syntax etc aren’t what is important here.

#5 Build trust with Pat. Show the sources and explain WHY the system is doing what it is doing and allow Pat the ability to jump around and learn more. Transparency is key. Let Pat change the context the AI has and re-run things so they can tinker and iterate to the best possible results.

The AI System

Now we have the AI system you are building. Broaden the view here and think of it as the overall computer system that happens to have AI components:

  • Think of this as somewhat infinitely scalable compute. You probably wouldn’t take an issue from your project tracker and farm it out to 6 developers on your team and then when they each send back a PR pick one you like to iterate from, but with AI you could decide to do that.
    • Now imagine how the UX of a system can change. It can go off and come back to Pat with multiple options and Pat can happily curate and pick a favored one to iterate from!
  • Some developers complain that current LLMs are “only junior developers”. Let’s say this is the case… but LLMs are trained on so many domains of computing that they are junior developers who know EVERY programming language, library, framework, platform, etc. This is amazing. Oh, and they ain’t no junior developers to boot.
  • By default they have this incredible broad knowledge but it’s like they are showing up on their first day and they know nothing about your domain. Fix this by connecting them with all of the context they need!
  • AI has no feelings, and thus is happy to do toil. Do all the toil all the time.
  • AI has no ego, so won’t be a jerk to Pat and is a safe space with no judgement (unless you make the AI act like a jerk ofc! Don’t.)

With this acknowledgement, you can make sure that your solution:

#1 Eval-driven development: First, make it work, then make it fast and affordable.

Once you prove something out you can use synthetic data and fine tune models for particular tasks that are cheaper. Oh, and everything is getting cheaper month by month. There are new models all the time, so build a platform that can make use of multiple ones and run them against each other. You will always be surprised at which models are best for particular tasks. Don’t bet on one, bet on evolution and enjoy the ride.
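Racing models against each other on the same tasks can be as lightweight as a scoring loop. A minimal sketch where the “models” are toy callables standing in for real API clients, and each task pairs a prompt with a pass/fail checker:

```python
def run_evals(models, tasks):
    """Score each model on each task; tasks pair a prompt with a checker."""
    scores = {}
    for name, model in models.items():
        passed = sum(check(model(prompt)) for prompt, check in tasks)
        scores[name] = passed / len(tasks)
    return scores

# Toy stand-ins: a real harness would call hosted models here.
models = {
    "big-model":   lambda p: p.upper(),
    "small-model": lambda p: p,
}
tasks = [
    ("shout", lambda out: out.isupper()),
    ("echo",  lambda out: len(out) > 0),
]
scores = run_evals(models, tasks)
print(scores)  # -> {'big-model': 1.0, 'small-model': 0.5}
```

Swapping a new model in is one dictionary entry, which is the point: make the comparison cheap and let the eval numbers surprise you.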

#2 Tools: Give this LLM “brain” tools to wield.

Don’t rely on the model to do deterministic things when it can just use tools. We are now seeing some of the SoTA LLMs do internal calculations to decide when to use tools vs. just solve the problem directly. Great. But think about what tools are most useful and put them in reach of the LLM. Do the dance of working out when your system should be the meta-cognition agent vs. when to let the LLM do its thing. It’s a fun dance to learn.
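The routing idea is simple: when a query matches something a deterministic tool can do exactly, call the tool; otherwise let the model handle it. A toy sketch with a whitelisted calculator (the `llm` function and the routing regex are hypothetical stand-ins, not any real API):

```python
import re

ARITHMETIC = r"[\d\s+\-*/().]+"

def calculator(expression):
    """Deterministic tool: evaluate simple arithmetic exactly."""
    if not re.fullmatch(ARITHMETIC, expression):
        raise ValueError("not a plain arithmetic expression")
    return eval(expression)  # acceptable here: input was whitelisted above

def llm(query):
    # Stand-in for a real model call.
    return f"(model answer for: {query})"

def answer(query):
    """Route to a tool when one clearly fits; otherwise fall back to the model."""
    match = re.fullmatch(r"(?i)what is (.+)\?", query.strip())
    if match and re.fullmatch(ARITHMETIC, match.group(1)):
        return str(calculator(match.group(1)))
    return llm(query)

print(answer("What is 17 * 23?"))  # -> 391
```

Real systems increasingly let the model itself decide when to call tools, but the division of labor is the same: deterministic work goes to deterministic code.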

Noam Brown, who worked on reasoning tokens and the system behind o1, talked about this for years, such as in this talk, which discusses how neural nets without special pathways are vastly inferior. Computers really got good at chess (and then Go, etc.) when they added search and started playing themselves.


#3 Data: Use large LLMs to generate great synthetic data. Your system should be saving data to learn from and feed back into the system to improve the AI all the time. What your AI and Pat are doing is gold. Learn from it. You will be very surprised.
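Saving that data doesn’t have to be fancy; even an append-only JSONL log of (prompt, response, outcome) gives you something to mine later for evals and fine-tuning. A minimal sketch (the field names are my own, not a standard):

```python
import json
import time

def log_interaction(path, prompt, response, accepted):
    """Append one (prompt, response, outcome) record for later training/evals."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "response": response,
        "accepted": accepted,  # did Pat keep the suggestion?
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("interactions.jsonl", "add retry logic", "def retry(): ...", True)
```

The “accepted” bit alone is a quiet, free eval signal from every interaction.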


#4 Smart Context: With large context windows and smart retrieval, you can make sure they have the best possible information to work with to get something done. Think about all of the signals you can give them… build output, runtime errors, you name it. And if you don’t have enough space to give them all of these signals what can you do? Run multiple parallel versions that have different signals passed in and let Pat choose the best results… or another AI judge!

And now we are seeing SoTA models looking to integrate external data via protocols such as Anthropic’s Model Context Protocol. It’s fun to see the experiments here.
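Packing signals into a finite context window is ultimately a budgeting problem: rank the signals, then take the highest-priority ones that fit. A minimal sketch, using word count as a crude stand-in for a real tokenizer:

```python
def build_context(signals, budget_tokens, tokens=lambda s: len(s.split())):
    """Pack the highest-priority signals that fit in the context budget."""
    chosen, used = [], 0
    for priority, text in sorted(signals, key=lambda s: s[0]):
        cost = tokens(text)
        if used + cost <= budget_tokens:
            chosen.append(text)
            used += cost
    return "\n".join(chosen)

signals = [
    (0, "runtime error: TypeError at app.py line 42"),
    (1, "failing test output: test_checkout expected 200 got 500"),
    (2, "full diff of the last commit " + "x " * 500),  # too big to fit
]
ctx = build_context(signals, budget_tokens=50)
print(ctx)
```

When the signals won’t all fit, that is exactly the moment to fan out parallel runs with different signal subsets and let Pat (or a judge) compare the results.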


#5 Cheap Experiments: you need to be able to run experiments all the time. I remember talking to one of my favorite AI researchers who worked with Dario from Anthropic back when they were at Google, and something stuck with me:

  • “It wasn’t that Dario had the best ideas, although he had plenty… he just ran 10 to 100 times as many experiments as anyone else. That’s when I knew he would do amazing things.”
  • If you hear yourself or your team saying “oh, it will be hard to try that” take a step back. Invest in a platform that makes it easy to try things.
  • I’ve always been humbled to see the difference between how I *think* something will turn out vs. how it collides with reality. With LLMs, it’s even harder to know. Try things and allow emergence to be your friend. Empiricism wins the race.
  • Don’t be precious with your prompts either. Over time I have seen that models have gotten better at understanding plain language vs. magic spells and incantations. Let people play with the prompts; your eval framework will allow for that nicely. Don’t gatekeep here.
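Making experiments cheap mostly means making them enumerable: sweep the variants, score each one with your evals, rank. A toy sketch (the eval here is fake, standing in for a real eval suite, and the variant names are made up):

```python
import itertools

def run_experiment(variants, eval_fn):
    """Try every combination of settings and rank them by eval score."""
    results = []
    for combo in itertools.product(*variants.values()):
        config = dict(zip(variants.keys(), combo))
        results.append((eval_fn(config), config))
    return sorted(results, key=lambda r: r[0], reverse=True)

variants = {
    "prompt_style": ["terse", "step-by-step"],
    "temperature": [0.2, 0.8],
}

def toy_eval(cfg):
    # Fake scorer: pretend step-by-step prompting at low temperature wins.
    return (cfg["prompt_style"] == "step-by-step") + (cfg["temperature"] == 0.2)

best_score, best_config = run_experiment(variants, toy_eval)[0]
print(best_score, best_config)
```

If adding a new variant to that dictionary is all it takes to try an idea, “oh, it will be hard to try that” stops being something your team says.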

So, here we are. We are building and iterating on systems that get the best out of developers and the massive scale of compute. The new technology of transformers, combined with the fluke of GPUs, gives us an epic opportunity to build amazing things.

Shall we?

Joining Tessl to become an AI native developer

November 14, 2024

tl;dr I’m thrilled to announce that I’ve joined Tessl, a new company that’s pioneering AI Native Software Development. Founded by Guypo and fresh off securing Series A funding, we’re on an exciting mission to revolutionize how software is built.

I’m especially excited to be reuniting with Ben as we focus on the opportunity to create a platform that makes software development not just more efficient, but genuinely fun and enjoyable for developers at a time where everything has changed with the new AI tools in the toolbox!


It has been amazing to watch the progress that AI software developer tools have been making, and to be part of the rise of augmentation for developers. I have seen firsthand the impact on the lives of software developers as they are able to get so much more done with small teams, having much more fun to boot.

As I watch myself, and developers in general, use natural language to generate code… something has been bothering me a lil. It feels like the “source” of what you want to build gets lost as our tooling converts it expertly into our codebase.

What if we could keep this original intent, expand on it to specify what we want built… work hand in hand with our team and AI systems, and then regenerate assets as the LLMs get better at generating code? (which happens on a weekly basis rn!) What if we could separate concerns and have passes that make sure that the code is optimized for performance? Is accessible? Covers edge cases? Has rich debugging? Let me focus on clearly articulating what I want, and have systems that can do a lot of the toilsome yet important actions to make it work, make it right, and make it fast.

If we extrapolate on the improvements we have seen in the state of the art of coding models, and marry this work with systems that bring all of the guardrails and workflows that are needed to partner with the intuition that LLMs have, how far can we go?

I love coding, and when I think about why… It’s because I love thinking through building useful software. Back in the earliest days, after you think things through you start punching cards. Then we had machine code, and assemblers, and compilers, and linters, and IDEs, and all of the layers of tooling that we get to sit on top of today. We have always been changing the level of abstraction to allow developers to best specify, or take specifications from others perhaps, and generate working software.

Ben and Guypo

This is a huge challenge, and I got really excited about it after talking to Guypo and hearing his vision in this space. Ben and I were fortunate enough to invest in Snyk and see the sea-change in developer focused security that Snyk brought to the software world, so having the opportunity to jump on a rocket ship with something as large as the AI software revolution was a no-brainer.

The most fun I have ever had in my career has always been linked to partnering with Ben, and now we get to join forces with Guypo, and the incredible team he has assembled, to go change the world through leading AI Native Software Development.

It’s daunting and exciting, and the mission is larger than the products that Tessl will build itself. This is actually one reason why I was so compelled to work on this. I’m more drawn towards companies that are on a mission, where the company makes money in order to accomplish the mission rather than the other way around.

I hope that you join us in pushing the bar on what the future holds for software development. We have just begun, but with a great team and a war chest, we are ready to move mountains and do so fast. Time to build… with AI!

/fin

Keeping your A-Team together with developer AI

September 30, 2024

When I look back at my career, the most fun and fulfilling times I have had have been tied to impactful projects with a team that is clicking. This has often happened at a startup, but it has also happened in some magical moments where the team is able to move fast and with agency within a larger company.

How often have you heard, or thought, the following?:

Remember when we were small? We moved so fast back then.

What if you could stay small and nimble, but get so much more done?

One core philosophy that DHH has often discussed as a key part of The Rails Way is how it scales from a single developer. From Hello World to IPO. The way to do that is to be able to do more with less, and keep as much of the system in your head as possible. Allow room for the important pieces, and keep complexity from taking up valuable space.

When we come up with new infrastructure that compresses the complexity, we see amazing scale such as what the WhatsApp team was able to build with a small team.

I believe AI native tools can help here in a slightly different way. For example, you can somewhat outsource some of the complexity to the system. The brain budget can be augmented, and some of it can be swapped in, in real time.

This is an area of AI that I am particularly excited about, and I am seeing it occur in practice every day with customers at Augment.

When talking to one customer that is startup sized, they said:

We are growing like a weed, and I was nervous that we would have to grow the team… which scared me. I love our tight-knit crew and how we have trust and minimal coordination issues. But since using Augment and other AI tools we are finding that our work isn’t scaling linearly… we are so productive that we don’t have to grow to meet feature demand. I hope this lasts as long as possible!

This resonates a lot! I have seen the coordination headwinds first hand, and anything that you can do to minimize them will result in HUGE productivity gains… and will also give you more joyful moments.

How are AI tools helping?

I think that the following properties are compounding:

More than “faster typing”

It’s easy to think about features such as code completions as a way to speed up typing. But speeding up typing is just the start. The next step is taking care of raw toil and tedium, and the real savings kick in when a suggestion brings you something you didn’t necessarily know how to write. I love it when this happens, especially when it teaches me something new about the codebase or another way to do something.

No more “Reading the docs”

The LLMs have read the docs for you, and much beyond. They have read your code, and that of all of your dependencies. They may have consumed knowledge from other sources (Linear tickets? Slack channels? PRs with comments?).

Instead of hunting down documentation, you can use features such as Chat to ask questions that map to your exact task at hand. You can personalize responses (maybe you want a terse reply, or the opposite?). And having help that maps to your context means you aren’t translating between the examples that happen to be in the docs.

Saving time not just for myself, but for my team

I hate interrupting my coworkers when I am stuck. Now the first line of defense allows me to stay unblocked by working with Augment. This can save a ton of “clock time” when my coworkers are busy… or on the other side of the world!

This doesn’t mean I don’t want time with my colleagues, but it can be focused on working together on more novel and creative problems.

Confidence working across unfamiliar codebases

Maybe you aren’t as experienced in Rust and have been nervous to touch that part of the codebase. You don’t have to worry as much about the idioms of the language, and you can use autocomplete and chat to learn as you go and work more declaratively.

This flexibility is being noticed, and “full stack” is morphing into the rise of the “product engineer”:

Many argue that front-end engineering is fading due to AI tools, but I see a convergence of roles.

Front-end devs can now generate schemas with tools like @supabase’s https://t.co/ZWMGf6cVj5, while back-end devs can scaffold UIs with @vercel’s @v0.

This is the rise of the…

— Kenneth Auchenberg 🛠 (@auchenberg) September 18, 2024

This also works when your team has to interact with another team at a larger company. You may not have to wait for the work to be done by them, and instead can dive in and collaborate to get something done!

From code completion to task completion

Code completions are still a favorite feature. I feel like I am dancing with my AI partner and quickly iterate and steer. But we are now seeing the ability to share your intent at a higher level, and have new UX that will quickly help you get a full task done. I’m very excited to share what Augment has been doing here.

Think you can keep your A-Team?

Now, I may be biased… but I think the best way to keep the A-Team together is to have a developer AI platform that has the deep codebase and external context awareness to act like you are working with the experience of your entire team vs. a knowledgeable engineer who knows the basics. The difference is night and day, and I get very happy thinking about smaller teams with super powers. I hope you do too!

And maybe you will have the type of outsized impact that 13 employees did at Instagram, or 55 at WhatsApp, or 50 at Mojang (Minecraft), or if you are truly lucky… Donald Knuth with TeX?

(I was thinking about TeX and Professor Knuth again when Matt Holden recently shipped TexSandbox, a tool I wish I had in my Math courses at Uni!)
