
Dion Almaer

Software, Development, Products



ai-hints.json: how the ecosystem will help the bots

June 13, 2023

As I use LLMs to help me build software, I keep running into a missing piece, a leverage point that, if injected, would dramatically raise the quality of creation: subject-expert turtles.

[Image: Triangle showing how app devs, experts, and bots come together]

Much of this formed when creating mock.shop and seeing what it takes to go from a demo to production, but let me explain via another recent experience: a web app framework migration.

Framework migration: switching between Next.js and Remix

[Image: Next and Remix merging]

LLMs are helpful for tasks such as porting. I have used this often, especially working between Python and JavaScript on some recent AI projects. I wanted to explore taking a web application built with Next.js and porting it to Remix, or vice versa.

Out of the box, ChatGPT would get some of the high level changes correct, but it would be very surface level. For example, actions and loaders might be created, but imports would keep the form of `@/components/foo` and `next/image`.

Our LLM friend has a galaxy of information, but we don’t know what’s actually in there, and software keeps evolving, so whatever is in there is probably outdated to some degree. This is where we humans come in. We can use that juicy context window to share the following (a sketch follows the list):

  • The latest information from documentation that matches our versions. Results from querying embeddings over this content can be plucked into context.
  • Rules and reasoning for translations. What are the steps that someone knowledgeable of both frameworks would write?
  • Quality examples of before and after. If you go through these steps, what are solid mappings from which patterns can be learned?
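
Here is a minimal sketch of that kind of context assembly. The `searchDocs` (embeddings lookup) and `complete` (LLM call) helpers are illustrative assumptions, not real APIs:

```ts
// Sketch: assemble a migration prompt from the three ingredients above:
// version-matched docs, translation rules, and before/after examples.
// `searchDocs` and `complete` are assumed helpers, not real APIs.
const EXAMPLE_PAIRS = `
// before (Next.js): import Image from "next/image";
// after  (Remix):   a plain <img>, or your image component of choice
`;

async function migrateFile(
  source: string,
  searchDocs: (query: string, k: number) => Promise<string[]>,
  complete: (prompt: string) => Promise<string>,
): Promise<string> {
  // 1. Version-matched documentation, plucked into context via embeddings.
  const docs = await searchDocs(`Remix equivalents for: ${source.slice(0, 500)}`, 5);

  // 2 & 3. Rules an expert would write, plus quality before/after examples.
  const prompt = [
    "You are migrating a Next.js app to Remix.",
    "Rules: replace getServerSideProps with a loader; replace API routes",
    "with actions; rewrite next/link and next/image imports.",
    "Version-matched docs:",
    ...docs,
    "Example mappings:",
    EXAMPLE_PAIRS,
    "Now migrate this file:",
    source,
  ].join("\n\n");

  return complete(prompt);
}
```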

Depending on the quality of this work, you will see a massive upgrade in the results. They go from “some nice hints but wow so much is wrong” to “this is kinda usable out of the box!”

At the end of this process, “What are the steps that someone knowledgeable would write?” stuck with me. Someone else was going to go through the same migration, and it doesn’t make sense for them to have to build out all of the mappings. This is a waste of effort!

Time for the knowledgeable turtles, already!

I have some knowledge of Next.js and Remix, but I am hardly The Expert. What if true experts (core team, folks from the community, etc) were the ones to package the relevant information about their frameworks?

Gonna live stream at 4pm PT (in 2 hours) and migrate an older Next.js application over to the App Router.

Will be just coding and playing tunes (strictly bangers) if you wanna hang out. https://t.co/LGZDMJiYzw

— Lee Robinson (@leeerob) June 7, 2023
Lee does great streams like this!

Lee does a great job showing a conversion from one version of Next.js to another (to App Router land). This knowledge can be codified for anyone else making the same move.

Picture an app developer creating a new project and installing all of their dependencies, and this time each one of them comes with hints from the project itself. And it’s turtles all the way down, as each dependency brings its own dependencies.

In this world you are building with a world of expertise funneling information into the amazing reasoning engine that is AI via LLMs.
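
A minimal Node sketch of that picture, assuming each package ships the hypothetical ai-hints.json file described next:

```ts
import { readFile } from "node:fs/promises";
import path from "node:path";

// Sketch: collect the (hypothetical) ai-hints.json that each installed
// dependency might ship, so tooling can feed them to an LLM.
async function collectHints(projectDir: string) {
  const pkg = JSON.parse(
    await readFile(path.join(projectDir, "package.json"), "utf8"),
  );
  const hints: Array<{ dep: string; [key: string]: unknown }> = [];

  for (const dep of Object.keys(pkg.dependencies ?? {})) {
    try {
      const file = path.join(projectDir, "node_modules", dep, "ai-hints.json");
      hints.push({ dep, ...JSON.parse(await readFile(file, "utf8")) });
    } catch {
      // No hints published for this dependency (today: all of them).
    }
  }
  return hints;
}
```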

What knowledge can we funnel?

A `.chat` file in every repo prompting AI assistant (e.g., Ghostwriter) to be most helpful in this project.

— Amjad Masad (@amasad) June 5, 2023
Others are talking about this

Each project has an ai-hints.json file, which acts as the router to correct information. It is a simple configuration that links out to, or inlines, information for the given project.

It contains items such as the following (a hypothetical example follows the list):

  • URL to the source of the library (e.g. GitHub URL)
  • URL to the home page of the library
  • Description for the library
  • URL to the issue tracker of the library. Given the variable quality in there, pinches of salt are included; hints can point to answers from trusted folks or highly voted responses
  • URL to forums (e.g. StackOverflow / tags)
  • URL to documentation site(s)
  • URL to high quality community content (e.g. great blogs, YouTube, etc)
    • Popular libraries often bring in examples and other projects that use them, and run their test suites, as a great way to catch regressions that consumers will run into. We can follow this pattern to get wisdom from customers, not just official content
  • Versioning scheme:
    • One current issue is that LLMs aren’t aware of the differences between versions and thus you sometimes get feedback that is tied to an old version, which is frustrating!
  • URL or direct inline prompts that can be used to generate great tests
  • URL, or inline docs, to prompts and reasoning
    • This can become a store of knowledge. E.g. it can be where conversion knowledge goes
  • URL to project settings, such as package.json in the Node/JS ecosystem, to start finding all of the turtles
  • Polymath services: URL(s) to polymath services that have knowledge of the project
  • Embedding stores: URL to a store, or a local placement of embeddings that can be used and aggregated
    • This way we can share embeddings versus recreating them time and time again
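
Pulling the list together, a hypothetical ai-hints.json might look something like this (the field names and the example.com URLs are illustrative, not a spec):

```json
{
  "name": "remix",
  "description": "A full stack web framework focused on web standards",
  "source": "https://github.com/remix-run/remix",
  "homepage": "https://remix.run",
  "issues": {
    "url": "https://github.com/remix-run/remix/issues",
    "trust": "prefer answers from maintainers or highly voted responses"
  },
  "forums": ["https://stackoverflow.com/questions/tagged/remix.run"],
  "docs": ["https://remix.run/docs"],
  "communityContent": ["https://example.com/great-remix-blogs"],
  "versioning": { "scheme": "semver", "current": "1.16.x" },
  "prompts": {
    "tests": "https://example.com/remix/test-generation-prompts.md",
    "migrations": {
      "from-nextjs": "https://example.com/remix/next-to-remix-reasoning.md"
    }
  },
  "projectSettings": "./package.json",
  "polymath": ["https://example.com/remix-polymath/api"],
  "embeddings": ["https://example.com/remix/embeddings.bin"]
}
```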

Speaking your full language

There have been some moments where my AI pair has been a true partner. A pattern in most of the best moments has been how the back and forth can be so much more concrete. Often, if you are building something for your own application, you end up following a path of translation.

You want to do concrete thing X using library Y. This would often mean finding the documentation in various places, learning the abstract thing closest to X (which may not even be easy to find!), and then working out how to translate that information into something that works for the concrete task.

Now, you can *explain* the concrete task, explain that you want to do it with your set of tools, and the initial answers speak in that language. You may not even know which library to use, and you can ask for thoughts, and for implementations that go with those thoughts, too!

Being able to aggregate the dependencies is huge.

One of the reasons I enjoy working on Polymath is its federated nature. If I am working on a project that uses Remix with Preact, I can write a query that asks for information from both the Remix polymath and the one for Preact.

More knowledge? More context

Meet LTM-1: LLM with *5,000,000 prompt tokens*

That's ~500k lines of code or ~5k files, enough to fully cover most repositories.

LTM-1 is a prototype of a neural network architecture we designed for giant context windows. pic.twitter.com/neNIfTVipt

— Magic.dev (@magicailabs) June 6, 2023
Billions of tokens!

As we build out larger knowledge sets, we need new ways to feed our AI’s creativity. Fortunately, we are seeing various models get significantly larger capacity for prompt tokens, including updates today from OpenAI:

“gpt-3.5-turbo-16k offers 4 times the context length of gpt-3.5-turbo at twice the price: $0.003 per 1K input tokens and $0.004 per 1K output tokens. 16k context means the model can now support ~20 pages of text in a single request.”

OpenAI announcement on June 13th 2023

We are also getting smarter at chaining reasoning together. Instead of firing off a single prompt as one shot, you can run multiple prompts and ask various questions to drastically improve quality too. For example (a sketch follows this list):

  • Ask questions differently in parallel
    • Use different prompts
    • With different settings (e.g. multiple temperature values)
    • With different context
    • And even different models entirely
  • Using the output from above, ask for a critique
  • Feed the critique AND the options from above, and ask for a unified solution.
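
A sketch of that chain, again with an assumed `complete()` helper standing in for whatever model API you use:

```ts
// Sketch of the fan-out -> critique -> synthesize chain described above.
// `complete` is an assumed helper wrapping your LLM API of choice.
type Complete = (prompt: string, opts?: { temperature?: number }) => Promise<string>;

async function unifiedAnswer(task: string, complete: Complete): Promise<string> {
  // 1. Ask in parallel with different prompts and settings.
  const options = await Promise.all([
    complete(`Think step by step, then answer:\n${task}`, { temperature: 0.2 }),
    complete(`You are a senior framework maintainer. ${task}`, { temperature: 0.7 }),
    complete(task, { temperature: 1.0 }),
  ]);

  // 2. Ask for a critique of all of the options.
  const critique = await complete(
    `Critique these candidate solutions:\n\n${options.join("\n---\n")}`,
  );

  // 3. Feed the critique AND the options back, and ask for one unified solution.
  return complete(
    `Candidates:\n${options.join("\n---\n")}\n\nCritique:\n${critique}\n\n` +
      `Produce a single, unified solution.`,
  );
}
```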

Ecosystem scratches its own back

Trying out @sourcegraph's Batch Changes to sunset a GitLab CI configuration across bunch of repositories at once. This is trully magic! pic.twitter.com/aWtm72TlIA

— Gvntr 零 (@47px) June 8, 2023
Large refactoring that works

By coming together and curating the information, we not only scratch the backs of all developers using our products, but in turn we are helping ecosystem health.

If you have worked on a platform, you know that one of the most important things to do is set up incentives that keep the platform evolving and healthy.

This is hard to do, and often goes wrong. You have probably worked with tools where updates break things more often than not. What does that teach you to do? Lock in to a particular version that is working, and only do upgrades when you have the time to deal with it.

If, instead, upgrades work well? Then you should be game to continuously keep up to date. Doing this well looks like:

  • Clear understanding of what’s in the upgrade
  • Codemods that can run to help you update. We will soon be in a world where our nanobots see an update, create a PR, and run all of our tests, leaving you with a strong starting point, or maybe even more. I’m ready for my nanobots to clean things up for me: handling updates, making performance improvements, checking for security, for accessibility, and so on.
    • We will just need great UX so we don’t feel swamped with this work. I don’t want a poor open source maintainer to be slammed with PRs in a way that feels like spam. Categorization and the like on GitHub will be a big winner here 🙂

An entire revolution is happening as the world of developers and shared knowledge combines with the new world of LLMs.

We don’t want LLMs to lean on the overall corpus of code out there: when the surface of the code is small, it may not hold any great answers, and when the commons is huge, you can end up with a lowest common denominator such as:

Copilot always knows exactly what I'm about to do. pic.twitter.com/iewWWbg0kq

— antony  (@antony) June 9, 2023
The commons sometimes catches div-itis and worse

When it comes to code, hallucination is the enemy not just because it is very unhelpful, but also due to side effects such as security:

* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves https://t.co/Va9w18RpWu

— LLM Security (@llm_sec) June 10, 2023
Humans write a lot of bad code too, so let’s be more vigilant for all?

It’s not just about the quality of the code and the ease of use. It’s also about the speed at which we can learn, adapt, and grow as developers. With a wealth of knowledge at our fingertips, we can quickly understand new technologies, make informed decisions about the tools we use, and stay ahead of the curve in an ever-evolving industry. This revolution in software development is empowering us to build better, more efficient, and more innovative applications than ever before.

I’m very much here for it, and Yet Another robots.txt^H^H^Hjson.

/fin

ps. In some time some of the expert turtles will become robot turtles 😉

Building a modern design system in layers

May 15, 2023

So often, when building a design system, we end up building something rigid that we will struggle with as time goes by. Once that happens, the best we can do is try to evolve it well, and make it so good that developers are somewhat happy with it even if they don’t like the rigid choices that were made.

This happens in all eras. Most recently, you will find many design systems that are React design systems vs. Web design systems that offer idiomatic React as an awesome option. As soon as you have made that choice you have locked in an audience and a lot of option value is taken off of the table.

Let’s say you are building a design system at a company that is on the path to becoming a 100 year company, where you aspire to think long term. I contend that it makes sense to build your design system in layers that:

  • Have the wiggle room to move independently
  • Can even be replaced entirely
  • Can be swapped out by developers, especially the layers higher up in the stack

If I were holding a React design system today, and I was offered the opportunity to go back in time, I would swap it out for a layered Web design system.

What would this look like?

Let’s quickly talk about these layers.

Design foundational layer

Modern CSS can do so much these days. Start building out as much of the design system as possible with HTML and CSS. Components that used to require complex nested markup can now be a single element with some sprinkles of CSS. If you want some inspiration, check out the fine work of folks such as Adam Argyle, Una, Jhey, and Josh W. Comeau.

Here you create your helpful guardrails via design tokens, your low level primitives, and your higher level components. I would consider using something like Adam’s Open Props as a strong foundation.

With some exploration, you will probably find that this layer gets you quite far these days, and with a nice Storybook playground, developers will love to tinker as they learn it.

Interactivity layer

Next up, you can loosely wire up the pieces via custom elements. This should be a relatively thin layer that brings the markup to full life. You can choose a helpful tool such as Lit or Stencil to make it even easier.

These components run on top of the Web Platform, and are thus incredibly future proof.
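
For example, a thin Lit element over the CSS foundation might look like this sketch (the tag name and the --ds-* design tokens are invented for illustration):

```ts
import { LitElement, html, css } from "lit";

// Sketch of the interactivity layer: a thin custom element
// wiring up the HTML/CSS foundation below it.
export class DsButton extends LitElement {
  static styles = css`
    button {
      background: var(--ds-button-bg, rebeccapurple);
      color: var(--ds-button-fg, white);
      border-radius: var(--ds-radius, 4px);
      padding: 0.5rem 1rem;
      border: none;
    }
  `;

  render() {
    return html`<button><slot></slot></button>`;
  }
}

customElements.define("ds-button", DsButton);
```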

Framework layer

Some developers will happily take the custom elements and use them directly, but most will probably want bindings that feel idiomatic in the framework of their choice. There is no need for a holy war of “Framework vs. Web Components!” They can happily work together these days. In fact, tools such as Lit have wrappers that make it easy to take your components and vend them as idiomatic framework components, React included.
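
As a sketch of that wrapping, using Lit’s React connector (@lit-labs/react at the time of writing; the component names are illustrative):

```ts
import * as React from "react";
import { createComponent } from "@lit-labs/react";
import { DsButton } from "./ds-button"; // the custom element from the layer below

// Sketch: vend the custom element as an idiomatic React component.
export const Button = createComponent({
  react: React,
  tagName: "ds-button",
  elementClass: DsButton,
});

// Usage: <Button onClick={...}>Buy</Button>, like any other React component.
```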

Reach, Value and Future Proofing

With this approach you have set yourself on a solid long term path. Your work can now reach web developers that are choosing a variety of frameworks. If there is one thing we know about the web, it’s that there is healthy innovation and evolution on this layer. We can’t predict the future, but we both know that there will be new frameworks with significant developer share AND there will be a ton of apps running React and jQuery and … for some time. Both are true, so why not support both?

Now, you may be thinking: “We aren’t resourced to support all of the items in the framework layer!” This is often true; however, you don’t have to support them all. You can choose levels of support, such as:

  • First class / Well lit path: you make sure yourself that everything fully works end to end using code that you write and maintain.
  • Community support: with a well lit path or two out there, the community can take a look at the end to end solution, along with the layer below that it relies on, and create their own idiomatic bindings. The more you document the first class stack, the easier it will be for the community to take high quality code, with tests, and a spec of sorts and build something of high quality themselves. Make sure to elevate the work and effort that they put into it!
  • Individual usage: if there isn’t a library itself, a developer using their framework of choice can just use the custom elements to build on. Chances are one of these will jump up to the level of community support… especially if you incentivize and foster this.

I don’t know about you, but it feels like we are in a frothy time for the Web framework space. React has an army of developers, but there is some confusion on which direction to go. Will RSC fully pan out? When should you use Next.js or Remix?

This shines through when you see videos putting forth points of view such as “always bet on React!” and “I don’t hate React, I’m just moving on.” It’s a time of change, right when there are amazing non-React options such as Solid, Svelte, Vue, Preact, and more. This is healthy, and having written web applications with more frameworks than I’ve had hot dinners, I can say that they can all help you deliver something great for users. So it’s kinda win-win.

It does make you think about…

Learn in Layers

Some wanna-be gatekeepers have poo-poo’d developers who come in and learn React first, often skimming some of the knowledge of the Web platform. There’s no need for the gatekeeping; this can be a great starting point.

That being said, I have always been a believer in Glen Vanderburg’s philosophy that it’s very much worth your time to understand one layer of abstraction below and potentially above you.

This means that you should have a solid understanding of the Web Platform APIs, as well as the core technologies of JavaScript, CSS, and HTML. Often this naturally bleeds through, and although we are sometimes taught that a good abstraction doesn’t leak, some of the best abstractions are known as onion-skin APIs, where “leaking” becomes a feature, an escape hatch.

ActiveRecord is a great example, where SQL isn’t hidden from you. Git has long built layers where you have porcelain and plumbing.

My good friend Dimitri has recently written about porcelains in the context of how we changed the API of Polymath with respect to talking to OpenAI. Instead of abstracting away all of the fetch calls, we embrace the fact that developers probably know fetch well and may want to use its advanced features. We instead vend an API that understands the service via a request and a response, so you end up with something such as the sketch below (names invented for illustration; see Dimitri’s post for the real API):
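
```ts
// Porcelain helpers (invented for illustration, not the actual Polymath API):
// the library shapes the request and interprets the response.
function completionRequest(opts: { prompt: string }) {
  return {
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: opts.prompt }],
    }),
  };
}

function completionResponse(json: any): string {
  return json.choices[0].message.content;
}

// The fetch() call stays in your hands, so auth, retries, or streaming
// are the fetch you already know, not a new abstraction to learn.
const req = completionRequest({ prompt: "How do I add a loader to a Remix route?" });
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: req.body,
});
const answer = completionResponse(await res.json());
```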

While there has been a lot written in the form of “Web Components vs. $FRAMEWORK”, you find that this is totally the wrong frame. There are a variety of Web Platform APIs in the umbrella of Web Components, such as Custom Elements and Shadow DOM. If you take the time to learn this layer, you may find reason to use it with your web framework of choice. And if you do so, this knowledge will be durable no matter what other frameworks you use now and in the future. The browser moves slowly, and these APIs are here ~forever.

I recently worked with a team that delivers a high quality design system that is tied to React. If I could go back in time, I would switch to this layered approach in a second. It pained me to talk to developers who built on the platform the design system was made for but hadn’t chosen React. They want to deliver the same look and feel for users, so what do they end up doing? Many would view source, copy the HTML and CSS, and add interactivity themselves. That’s a LOT of toil, and they have to keep up with changes in a painful manner, with lots of diffing. If they could grab the lowest layer, or the custom elements along with it, they would be off to the races in a sustainable manner.

Others felt they had to use React for these pieces, and hired consultants to do that work. This ate into their profits, and in dire situations could change the entire ROI of their solution (for small apps with a one or two person team).

I believe this design system will iterate and change over the lifetime of this company, that aims to be a 100 year one. You could argue that they are big enough to always make sure the React version is solid and updated and that developers have resources to keep with it.

Or, the evolution could happen at each level of the stack. Long-time developers would understand the lower levels, and as the highest framework level changed, they could reuse that knowledge, use a community layer, or, if the company has changed its first class framework, use that solution.

If you have a modern design system, learn from my mistakes, and build it in layers.

That way, a developer who chooses a different graph of tools from the set of options (as Kent shows here), versus your exact path (which you too will change in the future), can play too.

/fin

Help Mario Reach The Next Platform!

March 28, 2023

Don’t leave him behind, or with a jump that’s just too far!

Nothing is static. The world is moving, and it’s the job of a platform to help an ecosystem evolve at the right pace.

When the pace is good, as the screen moves right, Mario sees where he needs to jump next and can time it well. It’s fun to be in the flow jumping from improvement to improvement!

When the pace is too fast, Mario feels stuck and either disappears off screen, or does a Hail Mary jump without a real chance to land on the next platform, falling into the fire in Bowser’s castle.

Platforms need to treat the time that developers have to spend on evolving alongside us as precious. They should strive to minimize their toil, keeping a high level of trust with the developer community.

What are the keys to success here? How should we, as platform owners, drive things? This post will detail:

  • Understanding the use cases
  • Building enough of the new platform
  • Having everything we can to help you get there
  • Sharing pieces early
  • Starting the deprecation clock appropriately
  • Staying close to the platform
  • Do you need a new platform?

[Image: Researching use cases at the library]

Understanding the use cases

We need to understand Mario’s needs and why it will be better for him to be on the new platform

When bringing up a v.next of a service, the platform needs an understanding of what the current version is being used for. A new version is shipping for a reason, and there should be clarity on basic questions such as:

  • How will developers be able to deliver the functionality they are offering?
  • Are there any capabilities that are not offered in the new version yet and what is the impact?
    • When options are restricted, a different set of timings obviously needs to be considered
  • When will replacement capabilities show up (in the cases when they do)?
  • What are the new capabilities that we will be bringing to developers and what will they unlock?

Seems blindingly obvious, but deep knowledge here is far from universal, and past decisions are often lost, leaving us unable to apply Chesterton’s Fence.


[Image: Gymnast landing on a platform]

Building enough of the new platform

We want the new platform wide enough that Mario can stick the landing

With a strong understanding, the new platform starts the journey of getting built and iterating. It’s vital to make sure that we have an appropriate amount of it built out before sharing it with the developer community. With a minimal surface area, you are at risk of not finding enough information and thus ending up making large changes in the future, and developers are left touching a small part of the elephant and extrapolating the rest.

The more we can get the new version in close range, the better the chance we have of enticing Mario, and having him get across to the other side with a cheer.

When building the new platform, we should also make sure to do a good job with our layering. As we do this, Mario will not have to learn new things for each part of his journey, and will instead accrue understanding. Great layering also means that we will be able to compose our solutions better, resulting in less churn as we make changes. This should result in fewer massive migrations.


[Image: Mario with a jetpack]

Having everything in place to help evolution

We want to give Mario jet packs and tools to make the leap

When making these platform changes, we are often placing toil on developers. Hopefully there is much value too, but there will be times when the changes we impose have strong overall ecosystem value, yet not the same value for the individual developer. The tragedy of the commons is real, and we can recognize this by going above and beyond with our help for developers.

What does the jet pack look like?

World class documentation on the why and the how. This is foundational, and includes great reference docs, tutorials & workshops, and samples & solutions.

World class tooling, where developers live all day long: linters and codemods that give clear guidance and nudges on what changes are needed. With everything that is changing in development right now (e.g. AI copilots), imagine how far we could take this. Why can’t we have a future where the platform helps inside developers’ code editors, giving suggestions and sending PRs to GitHub with changes that keep their projects up to date? If we did this right, we could change the feeling from “ugh, I feel like I am constantly being nagged to make some change! Doesn’t $PLATFORM understand that I have features to write and a business to run?!” to “Wow, $PLATFORM is helping me keep up to date and improving my app! I can see the improvements, and merchants are loving it!”


[Image: Mario looking into the future]

Sharing early

Show Mario a glimpse into what’s coming up in the level so he can prepare

The window can be pretty small for Mario, and it can be helpful to offer a view of what’s coming, as long as we aren’t flooding him with information (see: building enough of the platform!).

Depending on what kind of changes we are doing, we may be able to allow developers to play with the future pieces early.

Remix does a great job of this. It allows you to opt into future flags, and then when the future becomes the present, you are ready for it!

How does that work? Let’s look at an example. Remix 2 is coming out soon, but the changes and new features are coming online in a way that you can opt in your Remix 1.* application today. The way that you name routes and their mapping with the file system is changing from this to a new system that includes flat routes.

Instead of waiting for Remix 2, I can update my app today with a couple of simple steps (sketched after the list):

  1. Tell my v1 app that I am ready to use the new feature via a simple declaration in my remix.config.js:

    future: { v2_routeConvention: true }
  2. Update my directories and files to map to the new system
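
Concretely, those two steps look something like this (the route paths are illustrative):

```js
// remix.config.js: opt in to the v2 route convention while still on Remix 1.x
module.exports = {
  future: {
    v2_routeConvention: true,
  },
};

// Step 2: route files move from nested folders to flat, dot-delimited names:
//   app/routes/concerts/index.jsx     ->  app/routes/concerts._index.jsx
//   app/routes/concerts/trending.jsx  ->  app/routes/concerts.trending.jsx
```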

When the feature is ready, communication can ramp up over time. You start small, and a few early adopters find it and offer feedback. Recently the dev server started to console.log that the flag is ready, reaching more developers, so more feedback can flow in.

When Remix 2 ships, those who opted in will be able to delete the future flags and everything will just work. Now picture this for a larger number of flags. I can opt in and then I will be fully ready, finding myself on the next platform without even jumping… I just kept walking and got there.


[Image: The two roads of deprecation]

Starting the deprecation clock appropriately

Don’t disintegrate the current platform too early!

You know those blocks that fall away when Mario has been stood on them for a bit? I hate those. They mean I have to think really quickly, anxiety rises, and I make mistakes and fall to my doom.

While it can be great to share information on the new platform early, as discussed, we should be careful when choosing when the clock starts ticking on deprecating the existing platform.

NOTE: A recent example of this was OpenAI deprecating the Codex API where the team maybe didn’t quite appreciate that although other APIs had somewhat transcended it, the work to make changes is real. To their credit, they got feedback and changed the deprecation by at least allowing longer access in the research program.

The clock shouldn’t start until the entire new platform is built, and we have the jetpacks ready. There have been times in which we build piece A of the new and deprecate the equivalent piece on the old. The problem is that the developer can’t actually migrate everything over, and they can become stuck.

In general we should cluster our changes and have clear times for most of our developers to do upgrades (the ones who aren’t jumping early to changes).

And we should be very careful not to end up in the situation the meme above shows, where the new platform isn’t ready and the old one is deprecated. It’s common enough to almost be the norm, and we need to fight entropy to change this.

I always somewhat appreciated the fact that I could schedule time after a major iOS SDK release to update our apps. The business understood this, and we had the space to make this happen in one chunk, and get our new app into the app store ready for the consumer releases. Contrast this with a drip, drip, drip of being asked to make small changes constantly.


[Image: Very thin bricks]

Staying close to the platform

Favor the lightest abstractions that aren’t proprietary. Developers should be spending their time learning the platform, not new technology for the sake of it. For example, before creating a proprietary layout system that every developer has to learn, can we use CSS, with our own variables and styles and a sandboxed container to limit it? Or instead of creating a custom abstraction to fetch content, how about using the standard fetch(), even if you have to monkey-patch it to add specific auth; just make sure it’s well tested! No uncanny valley here, please. Let developers bring their skills, and StackOverflow and ChatGPT, along with them.
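
A sketch of that fetch() idea, where the only addition is auth (PLATFORM_TOKEN is an invented name):

```ts
// Sketch: stay close to the platform by keeping the standard fetch() shape
// and layering auth on top of it.
const platformFetch: typeof fetch = (input, init = {}) => {
  const headers = new Headers(init.headers);
  headers.set("Authorization", `Bearer ${process.env.PLATFORM_TOKEN}`);
  return fetch(input, { ...init, headers });
};

// Developers keep everything they already know about fetch:
const res = await platformFetch("https://api.example.com/content/home-page");
```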


[Image: Mario on a relaxing walk]

Do you need a new platform?

Mario would be happy to walk along and maybe take some stairs?

Before falling for second system syndrome, double and triple check that the right path forward is a new platform at all, or if there are smaller steps that can be made that over time will get Mario where he needs to be.

Respecting developers’ time

Let’s treat the time that developers have as a precious commodity. It actually takes work to stand still; there is no such thing as stable. Browsers are changing, libraries and SDKs and tools are changing, and we add to that. The more we minimize that churn by putting in work on our side, the more leverage we get across the ecosystem. We want developers spending as much time as possible on amazing features for our merchants, jumping their way to success along the way.

🍄 Let’s a go! 🍄

/fin
