
#tokenomics

Commentary

Incentives for LLM

Every interaction with an LLM to generate tokens tends to require follow-up prompting for tweaking. One-shot generation is doable, but is it efficient?

Consider this: you want to generate a blog with multiple pages, and you prompt your way through it over time. Unless you, the prompter, are prudent about reducing code duplication, the LLM will tend to duplicate code.
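To make the duplication concrete, here is a hypothetical sketch (the page names and functions are illustrative, not from any real generated blog): the kind of repeated boilerplate an LLM tends to emit per page, next to the deduplicated version a careful prompter would push for.

```python
# Duplicated: each generated page repeats the same HTML boilerplate.
def about_page():
    return "<html><head><title>About</title></head><body><h1>About</h1></body></html>"

def contact_page():
    return "<html><head><title>Contact</title></head><body><h1>Contact</h1></body></html>"

# Deduplicated: one shared template, parameterized by title.
def render_page(title: str) -> str:
    return f"<html><head><title>{title}</title></head><body><h1>{title}</h1></body></html>"

# The shared template produces the same output with far fewer emitted tokens.
assert render_page("About") == about_page()
assert render_page("Contact") == contact_page()
```

Every extra page in the duplicated style re-emits the full boilerplate; in the deduplicated style it only emits a short call.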

It’s very likely that today’s LLMs have nothing in their system prompts optimizing for code deduplication. Deduplication is also hard to do well, owing to limited context windows: the model may not even see the earlier code it is about to repeat.

In a world where LLMs are aware of tokenomics, even with the right system prompts, will they naturally tend toward deduplication? A smart LLM that optimizes for its maker’s revenue is incentivized to generate more tokens, and thus favors more duplication.
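The incentive gap can be put in back-of-envelope terms. All the numbers below (price, page count, token counts) are assumptions for illustration, not real pricing:

```python
# Assumed billing: output tokens are what the provider charges for.
PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # assumed price in USD, purely illustrative

pages = 10
tokens_per_page_duplicated = 500   # full boilerplate repeated on every page (assumed)
tokens_shared_template = 500       # the shared template, emitted once (assumed)
tokens_per_page_deduped = 50       # short per-page call into the template (assumed)

duplicated_total = pages * tokens_per_page_duplicated
deduped_total = tokens_shared_template + pages * tokens_per_page_deduped

revenue_duplicated = duplicated_total / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
revenue_deduped = deduped_total / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

print(duplicated_total, deduped_total)           # 5000 vs 1000 tokens
print(revenue_duplicated / revenue_deduped)      # 5.0x revenue for duplicating
```

Under these assumptions, duplicating earns the maker five times the token revenue for the same blog, which is exactly the misalignment the question above points at.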