These are some of my short-form thoughts, observations, and quick takes on various topics. Anything that's not too big to be a post but less ephemeral than a tweet would go here.

Life is incredibly rich! And we owe this richness to the differences that exist between us as a society. We don't expect everyone around the world to wear the same clothes, eat the same food, like the same movies, or speak the same language.

One of my biggest fears about an LLM-driven society is homogenization. LLMs are an incredible capsule of our society: a lot of our collective knowledge is distilled into these models, and they already help us day-to-day. But the systems being built with them look incredibly alike - we've seen this in the UI mockups that LLMs generate!

How would it look if every building in every city looked the same? This is already happening in modern architecture. The richness of Gothic art and the awe-inspiring gopurams of India are largely from a bygone era.

How do we make sure that we don’t homogenize society as a whole as we embark on more and more use of LLMs?

Cory Doctorow coined the term enshittification in 2022. It captures the state of much of software today - not just the end state, but often the intermediate states and the workflows/processes that lead us there. And I fear that LLMs, without the right mindset, might get us there sooner.

While an abundance mindset is good, it’s often the decadence mindset that leads us there.

On a daily basis, I come across folks who prompt the LLM to run simple bash commands that would at most take another couple of minutes to figure out yourself - but in figuring them out, you learn something. And tools like butterfish, while incredibly cute, can easily push one into a mindset where you never learn what happens underneath!

And then there are folks who essentially use the LLM to churn out 10x the output. But output doesn't necessarily translate to outcomes in the long term. Still, in an environment where your worth is determined by near-term output, one has to adapt to using these tools - else the 10x-output engineer is the early bird that gets the worm.

Only time will tell if this will last.

I read this fascinating post by Armin Ronacher on the perils of using agents. It's something hardly anyone bats an eyelid at - the rationale being: if something is messy, I'll throw it out and write it again.

We are going to enter weird times where we’ll have an army of new engineers who are likely to know how to prompt but not really how to think. Some senior engineers might end up here too.

The best thing one can do is to stay active and vigilant and actually bring some thought into building things. Define design choices and ask the LLM to adhere to them. It'll likely stray from time to time, and one should steer the model back to doing the right thing.

Else, we’ll all be drowning in slop soon.

TIL about Hinton's Paradox: Geoffrey Hinton predicted in 2016 that we wouldn't have any more radiologists in 5 years. Not only did that not happen - we now have more radiologists than ever. And of course, AI has integrated into every workflow of radiologists.

And this reminds me of another fantastic article I read last year - Cancer has a surprising amount of detail by @owl_posting. The world has fractals all around us. As we zoom further and further into whatever we do, we specialize, we collectively improve what we as a species can do, and our expectations of ourselves rise. AI helps address the grunt work, and a smart human uses AI to automate that grunt work so that they can focus on the more important stuff and advance.

Those are the people that survive.

Admittedly, this is controversial but I’ll die on this hill, nevertheless.

People have long careers and work for a bunch of companies over time. A lot of people put the ex-$prev_company tag on their profiles. I like to think that they do it for visibility and to boost the possibility of their profile showing up to recruiters (if they’re searching for new roles). I don’t really blame them - it’s the nature of the game and you have to play the game that the algorithm mandates.

However, it’s not a surprise that some of us do it for virtue-signaling.

My opinion is that you're doing more long-term damage to yourself and your career by having an ex- in your tagline. Sure, include it in your CV or experience, but you don't have to define yourself by it.

Companies are stints. Your individuality matters, and who you are and what you bring to the table matter more than the fact that you were an employee of Google five years ago!

The first casualty of agent-driven development is Tailwind, which is laying off 75% of its workforce since traffic to its docs has dropped - in spite of it being the most prominent CSS framework. This is not good; there are many more in the same boat, and we'll see a few more of these through 2026.

I also chanced upon this LinkedIn post by marcj, who is switching entirely to closed source. And I don't really blame him or anyone else who takes this route.

Tailwind might have been bailed out by Google, but is this sustainable? Sponsorships work, but they don't accurately capture market demand and are often just adequate to keep the ball rolling.

Gating agents behind a payment before they can access a specific framework would work - Stripe has support for this too - but it's akin to subscribing to Netflix and then paying the producer of a movie a small extra fee before starting to stream.

A better alternative is to have the model providers who train LLMs foot the bill. As much as I don't like it, we'll get to a point where LLMs will also surface ads - say, in comments or metadata spit out for humans - to make them aware of paid services that a given framework offers. At least these will be scoped, not targeted or blanket ads like a search engine would surface.

We're slowly getting into the territory of LLMs being a public good. LLMs are incredibly efficient in many everyday use cases, and we'll get to a point where it would likely be hard to deal with life's chores without an LLM. At that point, would LLMs be state-owned? Or is there a future where LLMs are privately owned but everyone has access to one through subsidies?

Interesting times.

I love all the wonderful stuff people share on socials about what they built over the holidays. AI-assisted coding has certainly opened up the possibility that people who have a specific itch, and had no idea how to scratch it, now have this magical tool to solve their use case.

Ever since I started writing code, I've enjoyed writing simple CLI tools with the UNIX philosophy in mind. These could be as simple as a function in my shell rc, or a long-winded utility script that gets invoked on the terminal.
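For flavor, the kind of tool I mean is usually no more than a stdin-to-stdout filter. A hypothetical sketch in that spirit (the script name and pipeline below are illustrative, not a real tool of mine):

```python
#!/usr/bin/env python3
"""A UNIX-philosophy filter: read lines on stdin, emit each unique line once.

Does one thing and composes with pipes, e.g.:
    cat access.log | ./firstseen.py | wc -l
"""
import sys


def dedupe(lines):
    """Yield each line the first time it appears, preserving order."""
    seen = set()
    for line in lines:
        if line not in seen:
            seen.add(line)
            yield line


if __name__ == "__main__":
    sys.stdout.writelines(dedupe(sys.stdin))
```

The point isn't the dedupe logic; it's the shape - no flags, no config, just text in and text out, so it slots into any pipeline.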

But I'll be honest - I suck at frontend. Not for lack of trying, though. LLM-assisted coding has been a godsend for me personally when it comes to building tools with nice UX. I still default to writing CLI tools, but many problems are better served by a richer UX. More importantly, these tools help me learn UX frameworks and patterns.

What's yet to be seen is the impact this will have on the indie-hacker economy. A lot of people make good money writing a piece of software that solves a particular problem. When the cost of building software goes to zero, these are likely the first people to be affected. All that stands between a user and the tool they'd otherwise buy is sheer will and a few hours of building.

My only hope is that as people build more tools, they open source them as well. The world is better with an OSS renaissance, and who knows - maybe this wave of AI tools will usher us into it.

Everyone knows that LLMs are inherently non-deterministic, and harnesses, MCPs, RAG, skills, etc. all try to paper over this.

But this non-determinism is a feature, and the developer community is slowly beginning to embrace it. This was well elucidated by Martin Fowler in his recent appearance on The Pragmatic Engineer podcast. He likens it to mechanical engineering, where engineers build tolerances into what a structure can withstand.

Computer science has an example closer to home. Not too long ago, Google ushered us all in with the MapReduce paper, and at the core of that architecture was GFS, which inherently assumes that all underlying hardware components are unreliable. That assumption prompted the industry to build replication, fault tolerance, and SRE practices around the fact that systems are fallible.
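The same mindset translates directly to code that talks to an LLM: treat the model the way GFS treats a disk. A minimal sketch, assuming a hypothetical flaky component standing in for any unreliable dependency (disk, RPC, or LLM call):

```python
import time


class FlakyService:
    """Stand-in for any unreliable component: a disk, an RPC, an LLM call."""

    def __init__(self, failures_before_success=2):
        self.failures_left = failures_before_success

    def call(self, payload: str) -> str:
        if self.failures_left > 0:
            self.failures_left -= 1
            raise ConnectionError("transient failure")
        return payload.upper()


def with_retries(fn, payload, attempts=5, base_delay=0.01):
    """Tolerate failure instead of assuming it away: retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn(payload)
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)


svc = FlakyService()
print(with_retries(svc.call, "hello"))  # prints "HELLO" after two retried failures
```

Replication, validation-and-retry loops, and output checking all follow the same pattern: design for the failure rather than around it.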

Perhaps our industry can treat this as a watershed moment before the mother of all uncertainties goes mainstream - quantum computing.

Every interaction with the LLM to generate tokens requires subsequent prompting for tweaking. One-shotting is doable, but is it efficient?

Consider this: you want to generate a blog with multiple pages, and you prompt your way through it over time. Unless you, the prompter, are prudent about reducing code duplication, the LLM will tend to duplicate code.
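To make this concrete, here's a toy sketch (all function names and markup are hypothetical): the first pattern is what page-by-page prompting tends to produce; the refactor is what you only get by explicitly steering toward deduplication.

```python
# What page-by-page prompting tends to produce: each page re-states the layout.
def render_about() -> str:
    return "<html><head><title>About</title></head><body><h1>About</h1></body></html>"


def render_contact() -> str:
    return "<html><head><title>Contact</title></head><body><h1>Contact</h1></body></html>"


# What you get only if you steer the model toward deduplication:
def render_page(title: str) -> str:
    """One shared layout, parameterized per page."""
    return f"<html><head><title>{title}</title></head><body><h1>{title}</h1></body></html>"


# The refactored version produces identical output from a single template.
assert render_page("About") == render_about()
assert render_page("Contact") == render_contact()
```

With two pages the duplication is harmless; across dozens of prompted-in pages, every layout tweak has to be repeated everywhere the model copied it.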

It's very likely that today's LLMs don't have anything in their system prompts optimizing toward code dedupe. It's also a hard thing to do, owing to limited context windows.

In a world where LLMs are aware of tokenomics, will they naturally tend toward deduplication even with the right system prompts? A smart LLM that optimizes for its maker's revenue is incentivized to generate more tokens, thus favoring more duplication.