
Tokenmaxxing and ToS

Why tokenmaxxing makes AI worse for everyone, and how to maximize your usage while staying ToS compliant

Recently I saw a post about the "tokenmaxxing" phenomenon. Someone shared a script whose sole purpose was to send heartbeat messages to a Claude AI client at set times throughout the day, starting the usage windows before the user actually began using the service. The user benefits because less of each window is wasted: by the time they sit down to work, the window is already partly elapsed and the next one arrives sooner. However, using these scripts, and helping others use them, violates the ToS, so one shouldn't proceed. While programmatic communication with the AI through anything other than the API is prohibited, the outcome itself is desirable, and it can be achieved without breaking the ToS.

ToS compliant solution

Instead of using automation, set up a Project in Claude AI that is instructed to gather information from your email, calendar and Jira, and present it as a summary. When you wake up before work, send a message to this Project requesting a day summary. This way you only use resources when you actually need them, and benefit from doing so, all without breaching the ToS.

By the time you get to work, the usage window has already started. Once the next window becomes available, send another message requesting an afternoon summary, and your second window of the day begins, again without breaching the ToS.
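As a concrete illustration, the Project's custom instructions could look something like the sketch below. The wording, the connected data sources and the word limit are all my assumptions, not a recipe from Anthropic; adapt them to whatever integrations you actually have connected.

```
You are my morning briefing assistant. When I ask for a day summary:
1. Check my connected email for unread messages since yesterday
   evening and list anything that needs a reply.
2. Pull today's events from my calendar, with times and locations.
3. List my open Jira tickets sorted by priority, flagging anything
   due today.
Keep the whole briefing under 200 words.
```

One message against these instructions replaces the scripted heartbeat: the window starts because you asked for something you genuinely wanted.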

The act matters

One could argue that it's essentially the same outcome, so why would anyone care? But even if the outcome is the same, and even if the intent is benign, the ToS explicitly prohibit access through "any kind of automated or non-human means, whether through a bot, script or otherwise". The user risks suspension or immediate termination of the contract, with no chance of a refund, if Anthropic "believes you have materially breached the terms". Imagine you had just paid for a year of the $200/month subscription and then decided to breach the ToS. There goes $2,400.

Why limit it in the first place

Why a five-hour window, then? When I started using Claude, there were no time-based limitations beyond the Acceptable Use Policy and whatever allowance the plan provided. Then some entities began abusing the Claude client by "using 100% of their allowance all the time". This led to the five-hour limits and a weekly limit.

There's nothing inherently bad about a five-hour window. However, one could reasonably ask "why not an eight-hour window?", and the question is fair. If every user were human and used Claude per the ToS, an eight-hour window would cause no problems. But since certain entities don't want to follow the rules, five hours makes abusive usage a little harder. A 24-hour day divides evenly into three eight-hour periods, so an abusive process could be scheduled to start at the same wall-clock time every single day and always land at the same point in the cycle. A good way to make abuse harder is to bend the concept of time: five five-hour windows span 25 hours, so a fixed daily schedule drifts against the window cycle and eventually fires at the wrong time, using either too little or too much of each window's resources. Whether this was Anthropic's actual design intent remains unconfirmed, but the arithmetic favors abuse resistance.
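The drift can be sketched numerically. The toy model below assumes, purely for illustration, that windows repeat on a fixed global cycle; real Claude windows start when you send a message, and the function name and parameters are mine, not Anthropic's.

```python
# Toy model: a daily job firing at the same wall-clock hour keeps a
# constant position inside an 8-hour cycle (24 % 8 == 0), but drifts
# through a 5-hour cycle by one hour per day (since 24 % 5 == 4).

def offsets(window_hours, start_hour=9, days=6):
    """Hours into the current window cycle at which a job firing
    daily at `start_hour` lands, for each successive day."""
    return [(start_hour + day * 24) % window_hours for day in range(days)]

print(offsets(8))  # [1, 1, 1, 1, 1, 1] -> perfectly predictable
print(offsets(5))  # [4, 3, 2, 1, 0, 4] -> shifts every day
```

With eight-hour windows the job is trivially predictable; with five-hour windows an automated schedule must track the drift explicitly or accept wasted allowance.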

A 25-hour cycle isn't a big deal for a human user, but for an automated system it is, hopefully, painful enough that timing the jobs to extract maximum benefit from the AI isn't worth the effort.

Looking forward

As of writing this on the 8th of January, 2026, Anthropic hasn't yet come up with a solution that restricts programmatic manipulation of time windows. The patterns have probably already been flagged, and we're likely facing yet more usage restrictions in the future. Until then, don't break the rules, and be friends with the AI.

Blog photo by Nano Banana
