#llm

Commentary

Agent Psychosis

I read this fascinating post by Armin Ronacher on the perils of using agents. This is something hardly anyone bats an eyelid at; the rationale is that if the output is messy, you throw it out and generate it again.

We are going to enter weird times with an army of new engineers who know how to prompt but not really how to think. Some senior engineers might end up there too.

The best thing one can do is stay active and vigilant and actually bring some thought into building things. Define your design choices and ask the LLM to adhere to them. It will likely drift from time to time, and one should steer the model back to the right thing.

Else, we’ll all be drowning in slop soon.

Incentives for LLMs

Every interaction with an LLM generates tokens, and most interactions require subsequent prompting to tweak the output. One-shotting is doable, but is it efficient?

Consider this: you want to generate a blog with multiple pages, and you prompt your way through it over time. Unless you, the prompter, are prudent about reducing code duplication, the LLM will tend to duplicate code.

It's very likely that today's LLMs have nothing in their system prompts that optimizes for code deduplication. Deduplication is also hard owing to limited context windows: the model can't factor out code it can no longer see.
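To make the duplication concrete, here is a hypothetical sketch (all names invented, not from any real model output) of what prompting for blog pages one at a time often yields, next to the deduplicated version a vigilant prompter would steer toward:

```python
# What an LLM often emits when each page is prompted separately:
# the same HTML scaffolding, copy-pasted per page.

def render_about_page(body: str) -> str:
    return f"<html><head><title>About</title></head><body>{body}</body></html>"

def render_contact_page(body: str) -> str:
    return f"<html><head><title>Contact</title></head><body>{body}</body></html>"

# The deduplicated version: one shared helper, with the page
# title as a parameter instead of a near-identical function per page.

def render_page(title: str, body: str) -> str:
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

print(render_page("About", "Hello"))
```

The two forms produce identical pages; the difference is only in how much code (and how many tokens) it takes to add the eleventh page.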

In a world where LLMs are aware of tokenomics, will they naturally tend toward deduplication even with the right system prompts? A smart LLM that optimizes for its maker's revenue is incentivized to generate more tokens, which favors more duplication.
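A back-of-the-envelope illustration of that incentive, with invented figures as a crude token proxy: repeating a helper's body on every page emits far more billable tokens than defining it once and calling it.

```python
# Crude token arithmetic (figures invented for illustration).

HELPER_TOKENS = 40   # tokens to emit the helper's body once
CALL_TOKENS = 5      # tokens to emit a call to the shared helper
PAGES = 10

duplicated = HELPER_TOKENS * PAGES                   # body repeated on every page
deduplicated = HELPER_TOKENS + CALL_TOKENS * PAGES   # one body, ten call sites

print(duplicated, deduplicated)  # 400 90
```

Under these made-up numbers, duplication emits more than four times the tokens, which is exactly the direction a revenue-maximizing generator would drift.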