These are some of my short-form thoughts, observations, and quick takes on various topics. Anything too small to be a full post but less ephemeral than a tweet goes here.

TIL about Hinton’s Paradox: Geoffrey Hinton predicted in 2016 that we wouldn’t have any radiologists in 5 years. Not only did that not come true, we now have more radiologists than ever. And of course, AI has been integrated into every radiology workflow.

And this reminds me of another fantastic article I read last year, Cancer has a surprising amount of detail by @owl_posting. The world is full of fractals. As we zoom in further and further on whatever we do, we specialize, and collectively we raise both what we as a species can do and what we expect of ourselves. AI handles the grunt work, and a smart human uses it to automate that grunt so they can focus on the more important stuff and advance ahead.

Those are the people that survive.

Admittedly, this is controversial but I’ll die on this hill, nevertheless.

People have long careers and work for a bunch of companies over time. A lot of people put the ex-$prev_company tag on their profiles. I like to think that they do it for visibility and to boost the possibility of their profile showing up to recruiters (if they’re searching for new roles). I don’t really blame them - it’s the nature of the game and you have to play the game that the algorithm mandates.

However, it’s not a surprise that some of us do it for virtue-signaling.

My opinion: you’re doing long-term damage to yourself and your career by keeping an ex- in your tagline. Sure, include it in your CV or experience section, but you don’t have to define yourself by it.

Companies are stints. Your individualism matters, and who you are and what you bring to the table matter more than the fact that you were an employee of Google 5 years ago!

The first casualty of agent-driven development is Tailwind, which is laying off 75% of its workforce since traffic to its docs has plummeted, despite it being the most prominent CSS framework. This is not good; there are many more in the same boat, and we’ll see more of this through 2026.

I also chanced upon this LinkedIn post by marcj, who is switching entirely to closed source. And I don’t really blame him or anyone else who takes this route.

Tailwind might have been bailed out by Google, but is this sustainable? Sponsorships work, but they don’t accurately capture market demand and are often just adequate to keep the ball rolling.

Requiring agents to pay before accessing a specific framework would work, and Stripe has support for this too, but it’s akin to getting a Netflix subscription and then paying the producer of each movie a small fee before streaming it.

A better alternative is to have the model providers who train LLMs foot the bill. As much as I don’t like it, we’ll get to a point where LLMs will also surface ads, say in comments or metadata spit out for humans, to make them aware of paid services a given framework offers. At least these would be scoped, not targeted or blanket ads like a search engine would surface.

We’re slowly getting into the territory of LLMs being a public good. LLMs are incredibly efficient at many everyday use cases, and we’ll get to a point where it would likely be hard to deal with life’s chores without one. At that point, would LLMs be state-owned? Or is there a future where LLMs remain privately owned but everyone has access to one through subsidies?

Interesting times.

I love all the wonderful stuff people share on socials about what they built over the holidays. AI-assisted coding has opened up a real possibility: people who had a specific itch but no idea how to scratch it now have a magical tool to solve their use case.

I’ve enjoyed writing simple CLI tools with the UNIX philosophy in mind ever since I started writing code. These could be as simple as a function in my shell rc, or a long-winded utility script that gets invoked on the terminal.

But I’ll be honest - I suck at frontend. Not for lack of trying, though. LLM-assisted coding has been a godsend for me personally when it comes to building tools with nice UX. I still default to writing CLI tools, but many problems are better served by a richer UX. More importantly, these tools help me learn UX frameworks and patterns.

What’s yet to be seen is the impact this will have on the indie hacker economy. A lot of people make good money writing a piece of software that solves a particular problem. When the cost of building software goes to zero, they’re likely the first to be affected: all that stands between a user and a paid tool is sheer will and a few hours to build it themselves.

My only hope is that as people build more tools, they open source them as well. The world is better with an OSS renaissance, and who knows, maybe this wave of AI tools will usher us into one.

Everyone knows that LLMs are inherently non-deterministic, and harnesses, MCPs, RAG, skills, etc. all try to paper over this.

But this non-determinism is a feature, and the developer community is slowly beginning to embrace it. This was well elucidated by Martin Fowler in his recent episode of The Pragmatic Engineer podcast. He likens it to mechanical engineering, where engineers build tolerances into what a structure can withstand.

Computer Science engineers actually have an example closer to home. Not too long ago, Google ushered us all in with the MapReduce paper, and at the core of that architecture was GFS, which inherently assumed that all underlying hardware components are unreliable. That assumption prompted the industry to build replication, fault tolerance, and SRE practices around the fact that systems are fallible.
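As a toy illustration of the same instinct (a sketch of mine, not anything from the GFS paper): assume every call can fail, and design the caller around that assumption instead of trusting any single attempt.

```python
import random

def unreliable_fetch() -> str:
    """A stand-in for any fallible component: a disk, a network hop, a replica."""
    if random.random() < 0.5:  # fails half the time, by design
        raise ConnectionError("replica unavailable")
    return "payload"

def fetch_with_tolerance(attempts: int = 5) -> str:
    """Treat failure as normal: retry against replicas rather than trusting one call."""
    last_err: Exception | None = None
    for _ in range(attempts):
        try:
            return unreliable_fetch()
        except ConnectionError as err:
            last_err = err  # this replica failed; try another
    raise last_err  # all replicas exhausted
```

The caller never assumes a healthy component; it budgets for a failure rate, which is exactly the tolerance-engineering mindset above.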

Perhaps our industry can treat this as a watershed moment before the mother of all uncertainties, Quantum Computing, goes mainstream.

This tweet from Andrej Karpathy has been doing the rounds, and I felt the quoted tweet from rahulgs was on-point based on my experience working with LLMs. The frontier is moving quickly, for writing code at the least: the more harnesses and tools you provide your engineers and the more context you provide your models, the better it gets over time.

Security is still a very important and sadly overlooked part at this point.

Every interaction with an LLM to generate tokens requires subsequent prompting for tweaking. One-shotting is doable, but is it efficient?

Consider this: you want to generate a blog with multiple pages, and you prompt your way through over time. Unless you, the prompter, are prudent about reducing code duplication, the LLM will always tend to duplicate code.
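A toy sketch of the kind of duplication I mean (the page helpers and their names are hypothetical, not from any real codebase): prompt-by-prompt generation tends to produce the first shape, while a prudent prompter steers toward the second.

```python
# Shape 1: what iterative prompting often yields -- one near-identical helper per page.
def render_about_page(title: str, body: str) -> str:
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

def render_contact_page(title: str, body: str) -> str:
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

# Shape 2: what you get by explicitly asking for a shared template.
def render_page(title: str, body: str) -> str:
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

def render_about(body: str) -> str:
    return render_page("About", body)

def render_contact(body: str) -> str:
    return render_page("Contact", body)
```

Both shapes produce identical output; the difference only shows up later, when the shared markup needs to change in one place instead of N.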

It’s very likely that today’s LLMs don’t have anything in their system prompts optimizing for code dedupe. It would also be hard, owing to limited context windows.

In a world where LLMs are aware of tokenomics, even with the right system prompts, will they naturally tend toward deduplication? A smart LLM that optimizes for its maker’s revenue is incentivized to generate more tokens, thus favoring more duplication.

Of course, everyone is talking about this tweet by Andrej. I do feel this at times - there’s just a lot to keep up with. Honestly, the best way I’m making progress is learning a thing or two every day, accepting that I can’t know everything in one go, and trusting my future self to figure it out.

Having a bunch of pet projects/problems to work on, and putting these tools to use on them, helps over time.

I’m proud to admit that I’ve been bitten by the Rich Hickey bug. As many on Reddit convey, I started watching a video or two during my commute to work, and so much of it made sense that I decided to give Clojure a try. Of course, I’m not yet at the level where I want to run my fingers through Rich’s Bob Ross-like hair. :P

What are my first impressions? There are many positives already. I’m not a Lisp aficionado and have always despised the need for many, many parentheses in a program. But Clojure, although it has a healthy dose of parentheses, differs from other Lisps in many regards. It’s much simpler and more consistent than Common Lisp, I’ve heard.

But the striking feature of Clojure for me so far is programming to abstractions. Most OO and C programs program not to abstractions but to implementations. Java took a different spin on this by introducing interfaces, which let one program against a contract rather than a concrete type; that gave us Java’s extremely rich and elaborate Collections library. Clojure takes it to the next level: it converts collections into intermediary structures of a common type (say, seq) and relies completely on that abstraction.
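Python has a rough analogue in its iterator protocol. A sketch (mine, not Clojure’s actual seq machinery): a function that works on any iterable at all, because it programs to the iteration abstraction rather than a concrete collection type.

```python
from itertools import islice

def first_evens(xs, n=3):
    """Return the first n even numbers from any iterable.

    Works on lists, ranges, generators, files, etc., because it only
    relies on the iteration abstraction -- never on a concrete type.
    """
    return list(islice((x for x in xs if x % 2 == 0), n))

first_evens([1, 2, 3, 4, 5, 6, 7, 8])   # a concrete list -> [2, 4, 6]
first_evens(range(10**9))               # a lazy range, never materialized -> [0, 2, 4]
first_evens(x * 2 for x in range(100))  # a generator expression -> [0, 2, 4]
```

One function, three completely different source types: that is the payoff of programming to the abstraction.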

Python programmers have a different name for this; I believe it originated in the Ruby world - Duck Typing. Duck typing dictates that if something can quack, it’s a duck. Of course, IRL this is a blatant lie: I can quack too, but I’m not a duck, and a car has a steering wheel but it’s not a cruise boat, or vice versa. While this appears to be an issue, it’s not that big a deal in an environment that strongly promotes writing small, composable libraries.
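A minimal sketch of duck typing in Python (the class names are hypothetical):

```python
class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):  # I can quack too, but I'm not a duck
        return "quack"

def make_it_quack(thing):
    # No isinstance check: anything with a .quack() is duck enough.
    return thing.quack()

make_it_quack(Duck())    # "quack"
make_it_quack(Person())  # "quack" -- the type is never checked
```

`make_it_quack` never asks what `thing` is, only what it can do, which is both the power and the “blatant lie” of the metaphor.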