GitHub's Token Tracker: Monitoring Token Usage in Local AI Agents

Stormzhang's Token Tracker on GitHub provides a CLI dashboard to track token usage for AI agents like Claude Code and Codex, helping optimize costs and rate limits in AI projects.


Overview of Token Tracker

According to GitHub Trending, stormzhang released token-tracker, a CLI tool for tracking token usage in local AI agents like Claude Code and Codex. It offers a dashboard with cost analysis, rate limit monitoring, and session tracking, helping developers manage resources without manual logging. This utility, built with Python, simplifies oversight for AI workflows in everyday coding.

Key Features and Implementation

token-tracker stands out for its practical features tailored to AI development. The tool integrates a StatusLine status bar that automatically configures itself for Claude Code and Codex, showing metrics like 5-hour and 7-day quota progress bars, context window usage, and token counts. For instance, running tt setup initializes this integration and updates the scripts seamlessly during upgrades.
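The quota progress bars shown in the status line boil down to rendering a usage ratio as fixed-width text. As a minimal sketch (not token-tracker's actual code; the function name and quota numbers are illustrative), something like this produces the effect:

```python
# Illustrative sketch of a quota progress bar, similar in spirit to the
# StatusLine metrics; render_bar and the token figures are hypothetical,
# not token-tracker's real implementation.

def render_bar(used: int, limit: int, width: int = 20) -> str:
    """Render token usage as a fixed-width text progress bar."""
    ratio = min(used / limit, 1.0) if limit else 0.0
    filled = round(ratio * width)
    return f"[{'#' * filled}{'-' * (width - filled)}] {ratio:.0%}"

# Example: 150k of 500k tokens used in the current 5-hour window.
print(render_bar(150_000, 500_000))  # [######--------------] 30%
```

The real tool renders richer output via the Rich library, but the underlying arithmetic is this simple.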

The CLI dashboard supports multi-agent tracking, letting users switch between agents with the arrow keys. Commands like tt claude display Claude Code stats, while tt codex focuses on Codex data. It also generates reports with tt daily, tt weekly, or tt monthly, grouped by session costs and token usage. Privacy is a key aspect: all data stays local, with no uploads. Installation via curl -sSL https://raw.githubusercontent.com/stormzhang/token-tracker/master/install.sh | bash or pip install token-tracker keeps dependencies minimal, requiring only Python 3.11+ and the Rich library.
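The grouping that tt daily and its siblings perform amounts to aggregating local session records by date. The record shape and cost figures below are assumptions for illustration, since the tool's on-disk format isn't documented here:

```python
from collections import defaultdict
from datetime import date

# Hypothetical session records; token-tracker's real storage format is
# not documented in this article, so this shape is an assumption.
sessions = [
    {"day": date(2024, 6, 1), "tokens": 12_000, "cost": 0.18},
    {"day": date(2024, 6, 1), "tokens": 8_500,  "cost": 0.13},
    {"day": date(2024, 6, 2), "tokens": 20_000, "cost": 0.30},
]

def daily_report(records):
    """Group sessions by day, summing tokens and cost (as tt daily might)."""
    totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
    for r in records:
        totals[r["day"]]["tokens"] += r["tokens"]
        totals[r["day"]]["cost"] += r["cost"]
    return dict(totals)

report = daily_report(sessions)
print(report[date(2024, 6, 1)]["tokens"])  # 20500
```

Weekly and monthly reports follow the same pattern with a coarser grouping key.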

This setup enables real-time monitoring of rate limits, including reset timers, and cost analysis per session, day, week, or month. Developers can view details like project names, model types, and message counts without extra configuration, as it detects installed agents automatically. The MIT-licensed code, mostly in Python, keeps the tool lightweight at under 100 lines in key files.
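The reset timers for rolling quota windows reduce to plain datetime arithmetic. A minimal sketch, assuming a fixed-length window (the 5-hour figure matches the quota described above; the function itself is illustrative, not the project's code):

```python
from datetime import datetime, timedelta

def time_until_reset(window_start: datetime, window: timedelta,
                     now: datetime) -> timedelta:
    """Time remaining until a fixed-length quota window resets.

    Illustrative only: real providers may use rolling rather than
    fixed windows, which would need a different calculation.
    """
    remaining = window - (now - window_start)
    return max(remaining, timedelta(0))

start = datetime(2024, 6, 1, 9, 0)
now = datetime(2024, 6, 1, 11, 30)
print(time_until_reset(start, timedelta(hours=5), now))  # 2:30:00
```

Clamping to zero keeps the display sane once a window has already elapsed.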

Why It Matters for AI and Web Development

For developers like me working on AI automation, token-tracker addresses a common pain point: tracking token expenses and limits in projects involving local agents. It prevents overruns on APIs like those from Anthropic or OpenAI, which can lead to downtime or unexpected costs. I find its session tracking useful for optimizing prompts in tools like Next.js apps that integrate AI, as it provides granular data without cloud dependencies.

The tool's zero-config approach saves time compared to building custom scripts in Node.js or Python. However, it's best for prototyping or personal use, given its focus on local setups. Drawbacks include limited agent support—currently only Claude Code and Codex—so it might not cover broader ecosystems like LangChain integrations. Still, its efficiency in cost analysis makes it a solid addition for anyone monitoring AI usage in web apps.

Technical Considerations and Drawbacks

While token-tracker is straightforward, certain trade-offs affect its adoption. On the technical side, it relies on Python's Rich library for the dashboard, which handles formatting but adds a dependency that could bloat smaller projects. The architecture uses shell scripts for installation and Python modules for data parsing, making it easy to extend, as hinted in the TODO for more reports.

One limitation is its CLI-only interface, which lacks the web integration that React or Next.js users might prefer for visual dashboards. Accuracy also depends on the local data the agents emit, so inconsistencies could arise if an agent changes its logging format. The Python 3.11+ requirement means older environments are excluded. Despite this, the tool's lightweight design (the repository is 99.7% Python) ensures fast execution, though it doesn't yet handle distributed systems or remote agents.

In my view, the benefits outweigh the cons for local AI work, but developers should weigh the learning curve against custom solutions in Rails or Node.js for more complex needs.

Frequently Asked Questions

What does Token Tracker primarily monitor? It tracks token usage, costs, and rate limits for local AI agents like Claude Code and Codex, providing real-time data through a CLI dashboard to help manage resources effectively.

How do I get started with it? Install via curl script or pip, then run tt setup to configure; this sets up monitoring for supported agents without further manual steps.

Is it suitable for production environments? It's ideal for development and testing due to its local focus and privacy, but for production, consider its limitations in scalability and agent support before integration.

---

Need a consultation?

I help companies and startups build software, automate workflows, and integrate AI. Let's talk.
