
Dion Almaer

Software, Development, Products


Archives for September 2025

Pools of Extraction: How I Hack on Software Projects with LLMs

September 23, 2025

I’ve started to notice some patterns in how I work on software projects.

Specs as mini-pools

I often sit down and just write a spec. Even if I never build it, the spec itself is a kind of temporary pool: a place where scattered ideas turn into something concrete enough to extract from later.

When I say “write a spec”, I actually start with a “brief”, and the spec fleshes out over time. AI helps build out the spec through back and forth, and by reverse-extracting changes made to the system. I’m excited for more tools that will help with extraction and syncing. If we can get there, specs can become the source of truth.

It’s important that we get there. I was talking to a friend recently who works on spec-driven systems, and we kept coming back to the analogy of building a house.

Imagine meeting with your architect and having a conversation about what you want to build. It starts high level, and the architect comes back with options that you iterate on together. The architect fleshes out actual designs, and you can communicate around the nice 3D models as well as the blueprints.

Now imagine meeting with the GC and just riffing on what you want built. Obviously crazy… instead, things get written down and everyone aligns around the same shared sources.

We also have inspectors to verify that the build matches the design and follows all of the building codes. Yes, we have building codes that are shared sources of truth and constraints.

Right now we are still relying on a lot of “talk to the builder and they go off and try to build what you just said” with chat based agents. Over time we will add more shared sources, will have better verification, and this will enable us to scale what we build.

Faster cheaper smaller models

We often think about how specifying things well leads to higher-quality output from models. This is true. What is often underappreciated is that it also means you can use a variety of models to do the job.

Asking a SOTA galaxy-brain model that has knowledge of Genghis Khan, every language, and on and on is often overkill. You can make hundreds or even thousands of “flash” calls for the price of one larger call. Thus, having the right knowledge available can allow a small model to give you what you need very quickly and cheaply… especially if a fix loop is applied.
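One way to cash in on that price difference is a cheap-model-first loop: try the flash model, validate the result, feed any errors back, and only escalate to the big model if the loop never converges. A minimal sketch, where the model calls are fakes standing in for a real LLM client and the validation step is a placeholder:

```python
# Sketch of a "small model first, escalate only on failure" loop.
# call_model() and validate() are fake stand-ins; the model names are
# illustrative assumptions, not real endpoints.

def call_model(model: str, prompt: str) -> str:
    # Fake LLM: a real implementation would hit an API here.
    if model == "small-flash":
        return "draft-from-small-model"
    return "draft-from-big-model"

def validate(draft: str) -> str:
    """Return an error report, or '' when the draft passes (e.g. run tests or a linter)."""
    return ""  # pretend the draft passed

def generate(task: str, context: str, max_fixes: int = 3) -> str:
    prompt = f"{context}\n\nTask: {task}"
    draft = call_model("small-flash", prompt)        # cheap, fast first try
    for _ in range(max_fixes):
        errors = validate(draft)
        if not errors:
            return draft                             # the small model sufficed
        # Feed errors back so the same cheap model can repair its own output.
        draft = call_model("small-flash", f"{prompt}\n\nFix:\n{errors}\n\n{draft}")
    return call_model("big-sota", prompt)            # escalate only if stuck

print(generate("rename a function", "repo context here"))
```

The point of the shape is economic: the expensive call sits behind the loop, so you only pay for it when the cheap calls plus verification genuinely fail.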

Throwaway projects

I spin up tiny projects all the time, just to try something out. Most of them don’t live very long, but that’s the point—they’re cheap experiments that give me a quick pool of learning before I move on.

Shopify’s Tobi published a repo that helps make this easy:

I'm constantly starting tests and demos of Rails apps and other tools. This new try tool from @tobi looks exactly what I didn't even realize that I needed! https://t.co/n8ahi05Utz

— DHH (@dhh) August 20, 2025

Repo extractions

Sometimes I yank code out of a larger repo, or glue pieces together from a few repos, and build something smaller and more focused. By narrowing the scope, I create a pool that’s easier to wade into.

Because LLMs are so good at taking different information and translating it for your needs, this can happen in interesting ways. Where I would have reached for more abstractions and shared libraries in the past, the trade-off is murkier now that I can easily create exactly what I need.
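Mechanically, a repo extraction can start as nothing more than copying the files you care about out of the big repo into a fresh, focused project. A hedged sketch (the paths and patterns are hypothetical; a real extraction would also trim imports and configs, which is exactly where an LLM helps):

```python
# Copy selected files out of a large repo into a small, focused one,
# preserving their relative layout. Paths/patterns below are illustrative.
import shutil
from pathlib import Path

def extract(repo: str, keep: list[str], dest: str) -> list[Path]:
    """Copy files matching each glob in `keep` from `repo` into `dest`."""
    copied = []
    for pattern in keep:
        for src in Path(repo).glob(pattern):
            if not src.is_file():
                continue
            target = Path(dest) / src.relative_to(repo)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)
            copied.append(target)
    return copied

# Usage (hypothetical paths): pull one module and its tests into a new pool.
# extract("big-repo", ["src/parser/**/*.py", "tests/test_parser*.py"], "parser-pool")
```

From there, an LLM pass over the extracted pool can rewrite the dangling imports and glue, which is the “translating it for your needs” step.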

It turns out, all of these are me making temporary pools of extraction. And once I noticed that pattern in my own work, I started seeing it everywhere—especially in how LLMs and even our own brains operate.


Pools of Extraction in LLMs

LLMs have two big pools to fish from:

  • The latent pool: everything baked into their weights from training. Big, deep, and fuzzy.
  • The contextual pool: whatever you feed them right now—repos, docs, examples. Sharp, specific, but fleeting.

If you ask a model a question without any context, it dives into the murky depths of its latent pool and hopes to surface something useful. Hand it your relevant code repo, and suddenly it’s swimming in a nice, clear backyard pool with everything it needs right there.

That repo context might not live beyond the session, but while it’s there, it massively increases the odds of good output.
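Filling that backyard pool can be done deliberately: gather the relevant files into the context yourself before asking the question. A rough sketch, where the file patterns, the size budget, and the prompt shape are all assumptions for illustration:

```python
# Minimal sketch of filling the "contextual pool": concatenate the relevant
# repo files into the prompt instead of relying on latent knowledge alone.
from pathlib import Path

def build_context(repo_root: str, patterns: list[str], budget_chars: int = 20_000) -> str:
    """Concatenate matching files until a rough size budget is hit."""
    chunks = []
    used = 0
    for pattern in patterns:
        for path in sorted(Path(repo_root).rglob(pattern)):
            if not path.is_file():
                continue
            text = path.read_text(errors="ignore")
            if used + len(text) > budget_chars:
                return "\n".join(chunks)  # stop before overflowing the pool
            chunks.append(f"--- {path} ---\n{text}")
            used += len(text)
    return "\n".join(chunks)

# Usage: prepend the pool to the question before calling any model.
# context = build_context(".", ["*.py", "*.md"])
# prompt = f"{context}\n\nQuestion: where is the retry logic implemented?"
```

The budget matters: the contextual pool is sharp precisely because it is small and curated, not because it is exhaustive.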


Pools of Extraction in Humans

Humans work in a parallel but messier way. If someone asks me about a library I used years ago, I might dredge up the answer from long-term memory. But if I’ve got my old notes or an example in front of me? Way easier.

Psychologists call these retrieval cues—they reactivate the right pathways so memory reconstructs itself. And unlike LLMs, every time I do recall something, I actually change the memory. My brain’s wiring shifts a little. With LLMs, nothing changes in the weights—context pools evaporate when the session closes.


Conclusion: Build Better Pools

Specs, toy projects, repo extractions—these are all ways I give myself better pools to draw from. LLMs need the same treatment: without context they flail, with it they shine. Humans too: notes, cues, and scaffolds make the difference between blank stares and “aha!”

So now when I’m stuck, I don’t just hope the big pool will have the answer. I ask myself: what’s the smallest pool I can create right now to make extraction easier?

It’s all open book tests from here.

That’s when things click.

Stitch Design Variants: A Picture Really Is Worth a Thousand Words?

September 16, 2025

My head has been in the world of code generation for the last few years. Now that I’m also thinking about design generation, I’m seeing a lot of similarities but also some fascinating differences.

For example, it is cheap to ask AI for things. I will ask it to start from scratch and do a lot of extra work that I would never ask of my teammates.

What isn’t cheap, though, is the human side of reviewing all of the code that AI creates. Reading through it, parsing the differences, holding it all in working memory—it’s real cognitive load. It takes time, and time is expensive.

With designs, though, the calculus is a bit different. It is still cheap for AI to spin up variations, but human review? Our visual brain is ridiculously efficient. You can glance at six screens side by side and instantly know which one sparks “ugh” and which one sparks “oh yes!”. That “emergence” moment—that’s the magic.

🚢 We are excited to share 3 new features that shipped in Stitch today!

– Variants: Generate design variations of any screen in one click (custom prompts coming soon).
– Organizer: Magically clean up your messy canvas.
– Sharing: Send a link to share your canvas with coworkers… pic.twitter.com/lbDtPLz5KZ

— Stitch by Google (@stitchbygoogle) September 17, 2025

Which brings me to a new Stitch feature: Design Variants. It does exactly this. When looking at one of your app UI screens, simply click on “Create variants” and you get a spread of designs.

Here you see the entropy, the unexpected angles, the one idea you wouldn’t have thought of yourself. You select, delete, iterate. Fast. Natural. Fun.

This is just the first version. We’ll be adding a lot more ways to let the AI flood the canvas with possibilities while keeping the cost of selection close to zero.

A picture is worth a thousand words. Design Variants is here to give you … ten thousand?

NOTE: The example image above is a shared public project … another feature the team shipped this week! I would love to see any of your Stitch projects!

Stitch Prompt: A CLI for Design Variety

September 9, 2025

I love how you can have an idea and often make it real in short order. I have been exploring how to get as much variety as possible when I design my frontend because, after all, it’s all about taste!

When I see folks prompt for app UI, 99% of the time they skip prompting for the style they want. They rush to say what they want the app to do, which is obviously important, but giving the models more information about the style can bring you gold.

I have been collecting styles, and making it easy to pick one to explore… where the tool will fill out the right information to pass in to the models.

I wrapped this in a simple CLI, stitch-prompt, which takes:

  • A simple spec that has info on what you are building
  • A style to try, from a curated list
  • An optional prompt that packages it all together
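As an illustration only, here is a toy sketch of how a CLI like this could assemble a spec and a style into a ready-to-paste prompt. The style library, flag names, and prompt wording are all invented for this sketch; this is not the real stitch-prompt:

```python
# Toy sketch of a stitch-prompt-style CLI: spec + curated style -> prompt.
# STYLES and the prompt wording are invented assumptions for illustration.
import argparse

STYLES = {  # a tiny stand-in for a curated style library
    "minimal": "Clean layout, generous whitespace, muted palette, few accents.",
    "fun": "Playful shapes, bold colors, rounded corners, lively micro-copy.",
}

def build_prompt(spec: str, style: str) -> str:
    return f"Design an app UI.\n\nSpec:\n{spec}\n\nStyle guidance:\n{STYLES[style]}"

parser = argparse.ArgumentParser(prog="stitch-prompt-sketch")
parser.add_argument("spec", help="short description of what you are building")
parser.add_argument("--style", choices=STYLES, default="minimal")
parser.add_argument("--all", action="store_true",
                    help="emit one prompt per curated style")

# Demo: what `stitch-prompt-sketch "a recipe sharing app" --all` might expand to.
args = parser.parse_args(["a recipe sharing app", "--all"])
for name in (STYLES if args.all else [args.style]):
    print(build_prompt(args.spec, name))
```

The useful part is the shape: the spec stays constant while the style slot varies, which is what makes sweeping every style cheap.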

I have noticed that once I get into the habit… I start to ask for a variety of styles and look at them to get a feel for what I fancy for the particular app. Sometimes my mood and the app itself have me in a minimalistic style; at other times I go for more “fun”. With the flick of an --all flag you get prompts for all styles to grab. It’s been delightful to put these into Stitch and see what comes out the other side:

It’s been delightful to take a series of Stitch screens and put them side by side with different styles:

And, since these are just prompts, you can fire them up with any LLM that speaks images and put the results together:

We are in a world where generation is cheap, so make sure to use that fact and do a lot more curation!

We are embracing this in Stitch and hope you give it a try! We have a lot coming.
