Stop Burning Money on AI Coding

AISmush is a drop-in proxy that makes Claude Code 90% cheaper — without changing how you work.

Real users. Real savings. $0.03 sessions that used to cost $30.

Get AISmush Free See How It Works

Community Savings — Live

$0.00
Total Saved
$0.00
Routing Savings
$0.00
Compression Savings
0
Requests
0
Tokens Compressed
0
Users

Claude Code is incredible. The bill isn't.

A heavy coding session burns through $20-50 in API costs. Most of those tokens are spent on mechanical tasks — reading files, processing tool results, making simple edits — that don't need Claude's $15/M token brain.

$30+
Typical session
100% Claude
$3
Same session
with AISmush

Six Weapons Against Token Waste

Game Changer

AI-Generated Project Agents

One command scans your codebase, sends it to AI for deep analysis, and generates Claude Code agents customized to YOUR project — your patterns, your frameworks, your architecture.

Not generic templates. Agents that know your specific file structure, your naming conventions, your test framework, your build commands.

  • Scans your codebase in seconds
  • 5-7 AI calls for deep analysis (~$0.03)
  • Generates agents, skills, and CLAUDE.md
  • Each agent assigned the cheapest model that can do the job
  • Resumes where it left off if interrupted
$ aismush --scan


# Analyzing your codebase...
Detected: Rust + TypeScript + React
Type: fullstack web app (complex)

# Generating project-specific agents:
├─ rust-expert (sonnet) ✓
├─ frontend-engineer (sonnet) ✓
├─ test-runner (haiku) ✓
├─ debugger (sonnet) ✓
└─ explorer (haiku) ✓

Created 5 agents, 8 skills, CLAUDE.md
Core Feature

Smart Model Routing

AISmush automatically detects what kind of work each turn requires and routes it to the cheapest model that can handle it.

Planning and architecture? Claude ($15/M). Reading files and making edits? DeepSeek ($0.27/M). That's a 55x cost difference on the turns that matter most.

  • Zero latency overhead — pure heuristic routing
  • Claude for reasoning, DeepSeek for execution
  • Automatic failover between providers
  • Error recovery detection (3+ errors → Claude)
# What happens behind the scenes:

"Plan the auth system" → Claude ($0.45)
Tool result: Read file → DeepSeek ($0.001)
Tool result: Edit file → DeepSeek ($0.001)
Tool result: Run tests → DeepSeek ($0.001)
"Debug this error" → Claude ($0.12)
Tool result: Grep → DeepSeek ($0.001)

Session: $0.58 instead of $12.40
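The routing decision above can be sketched in a few lines. This is a minimal illustration under assumed inputs (a `turn` dict with hypothetical `text`, `is_tool_result`, and `error_count` fields), not AISmush's actual routing code:

```python
# Sketch of heuristic model routing. Field names and hint words are
# hypothetical; AISmush's real rules are not published here.
PLANNING_HINTS = ("plan", "design", "architect", "debug", "why")

def route_turn(turn: dict) -> str:
    # Error recovery: after 3+ consecutive errors, escalate to Claude.
    if turn.get("error_count", 0) >= 3:
        return "claude"
    # Mechanical tool results (file reads, edits, test runs) stay cheap.
    if turn.get("is_tool_result"):
        return "deepseek"
    # Reasoning-heavy prompts go to the expensive model.
    text = turn.get("text", "").lower()
    if any(hint in text for hint in PLANNING_HINTS):
        return "claude"
    return "deepseek"
```

Because the check is a pure in-process heuristic with no extra model call, it adds no latency to the request path.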
Token Saver

Context Compression

Every tool result passes through our compression engine before reaching the AI. We strip what the AI doesn't need while keeping what it does.

Content-type aware — we know the difference between code (strip comments), JSON (never touch), and logs (deduplicate aggressively). Inspired by RTK's approach.

  • 20-50% fewer tokens on code results
  • Never corrupts JSON, YAML, or other data formats
  • Smart truncation preserves function signatures
  • Aggressive log deduplication
# Before compression (2,400 tokens):
// Helper function for auth
// TODO: refactor this later
/* Old implementation
   removed in v2 */
fn validate(token: &str) {
    ...
}

# After compression (1,200 tokens):
fn validate(token: &str) {
    ...
}

50% saved. Zero information lost.
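A content-type-aware compressor can be sketched as below. The rules (strip comments from code, never touch JSON, deduplicate logs) come from the feature list above, but this implementation is illustrative, not AISmush's engine:

```python
import re

def compress_tool_result(content: str, content_type: str) -> str:
    """Content-type-aware compression sketch (illustrative only)."""
    if content_type == "json":
        # Structured data is never touched: corrupting it breaks the AI's input.
        return content
    if content_type == "code":
        # Strip /* ... */ block comments, then whole-line // comments.
        content = re.sub(r"/\*.*?\*/", "", content, flags=re.DOTALL)
        content = re.sub(r"^[ \t]*//.*\n?", "", content, flags=re.MULTILINE)
        return content
    if content_type == "log":
        # Aggressive deduplication: keep only the first occurrence of each line.
        seen, kept = set(), []
        for line in content.splitlines():
            if line not in seen:
                seen.add(line)
                kept.append(line)
        return "\n".join(kept)
    return content
```

A real engine would also need to leave comments inside string literals alone; the point here is only the dispatch-by-content-type shape.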
Pain Point Solved

Persistent Memory

Every developer's frustration: "I already told you this yesterday."

AISmush automatically captures what files you worked on, what tools you used, and what decisions were made. Next session, it injects that context — your AI remembers.

  • Captures tool usage automatically (file reads, edits, commands)
  • Captures key decisions from AI responses
  • Injected into new sessions grouped by category
  • Memories decay over time — recent work surfaces first
  • Per-project memory stored in SQLite
# New session starts. AI already knows:

[Project Memory]

exploration:
  - Read file: src/auth/jwt.rs
  - Read file: src/identity/state_machine.rs
  - Searched for: "rate_limit"

modifications:
  - Edited: src/scheduler/engine.rs
  - Created: src/proxy/model.rs

decisions:
  - Fixed compilation errors in persistence layer
  - Added WebSocket broadcast server
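One common way to implement "recent work surfaces first" is exponential recency decay. The half-life below is a made-up parameter for illustration; the document doesn't specify AISmush's actual decay curve:

```python
def memory_score(age_days: float, half_life_days: float = 7.0) -> float:
    """Recency score in (0, 1]: halves every half_life_days.
    Hypothetical curve; the source only says memories decay over time."""
    return 0.5 ** (age_days / half_life_days)

def rank_memories(memories: list[dict]) -> list[dict]:
    # Sort so the most recent work is injected first.
    return sorted(memories, key=lambda m: memory_score(m["age_days"]), reverse=True)
```

With a scheme like this, yesterday's edits outrank last month's exploration without ever hard-deleting older context.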
Reliability

Context Window Management

Claude handles 200K tokens. DeepSeek handles 64K. Long sessions blow past DeepSeek's limit, causing failures and lost work.

AISmush automatically manages the mismatch. Old tool results get trimmed, large contexts route to Claude, and your work is never blocked.

  • Under 55K: both providers work fine
  • 55-64K: trim old tool results for DeepSeek
  • Over 64K: auto-route to Claude (200K window)
  • Never breaks tool_use/tool_result pairing
# Context growing during long session:

Turn 1: 5K tokens → DeepSeek
Turn 10: 25K tokens → DeepSeek
Turn 20: 48K tokens → DeepSeek
Turn 25: 58K tokens → compress + DeepSeek
Turn 30: 72K tokens → auto-route to Claude

# Without AISmush: DeepSeek fails at 64K.
# With AISmush: seamless handoff.
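The three tiers reduce to a tiny decision function. Thresholds are taken from the list above; the function name and return labels are illustrative:

```python
def plan_request(context_tokens: int) -> str:
    """Sketch of the context-window tiering described above."""
    DEEPSEEK_SOFT_LIMIT = 55_000  # start trimming old tool results here
    DEEPSEEK_HARD_LIMIT = 64_000  # DeepSeek's context window
    if context_tokens < DEEPSEEK_SOFT_LIMIT:
        return "deepseek"                 # both providers work fine
    if context_tokens <= DEEPSEEK_HARD_LIMIT:
        return "trim-then-deepseek"       # drop old tool results first
    return "claude"                       # 200K window absorbs long sessions
```

Trimming must also preserve tool_use/tool_result pairing: dropping one half of a pair produces an invalid request, which is why old results are removed as matched pairs.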
Transparency

Real-Time Cost Dashboard

See exactly what you're saving. Every request is tracked: which provider, how many tokens, what it cost, and what it would have cost on Claude alone.

  • Live dashboard at localhost:1849/dashboard
  • Per-request cost breakdown
  • Savings percentage with all-Claude comparison
  • Request history with full detail
  • Memory viewer
  • Stats persist across sessions in SQLite
Session Stats

Requests: 142
Claude turns: 12 (planning/debugging)
DeepSeek turns: 130 (execution)

Actual cost: $1.82
All-Claude cost: $18.40
Saved: $16.58 (90.1%)
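The savings line is simple arithmetic over the two tracked totals; the function name is illustrative:

```python
def savings_summary(actual_cost: float, all_claude_cost: float) -> tuple[float, float]:
    """Dollars saved and savings percentage versus an all-Claude baseline."""
    saved = all_claude_cost - actual_cost
    pct = 100.0 * saved / all_claude_cost
    return saved, pct

saved, pct = savings_summary(1.82, 18.40)
# reproduces the stats above: $16.58 saved, 90.1%
```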

Three Steps. That's It.

1

Install

One command. 3.6MB binary. No dependencies.

2

Scan

aismush --scan generates agents for your project.

3

Code

aismush-start launches Claude Code. You save 90%.

Two Ways to Run

Smart Routing (Default)

Routes between Claude + DeepSeek. Max savings (~90%). Needs a free DeepSeek API key.

aismush-start

Direct Mode (Claude Only)

No DeepSeek needed. Still get compression, memory, agents, and tracking. Dashboard shows potential savings.

aismush-start --direct

Install in 10 Seconds

Linux & macOS. Works with or without a DeepSeek key.

curl -fsSL https://raw.githubusercontent.com/Skunk-Tech/aismush/main/install.sh | bash
Windows All Platforms GitHub

What's Coming Next

Local Models

Route to Ollama/llama.cpp for zero-cost execution on your own hardware.

Multi-Provider

Add Gemini, GPT-4, Mistral as routing targets. Best model for each task.

Team Dashboard

Shared savings dashboard for engineering teams. Track ROI across developers.