# Troubleshooting
## `commitbee doctor`
Your first stop for diagnosing issues. It checks:
- Config file location and existence
- Provider connectivity (can CommitBee reach Ollama/OpenAI/Anthropic?)
- Model availability (is the configured model actually pulled?)
- Git repository detection
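If you want to verify the doctor's findings by hand, the same checks can be approximated with ordinary tools. This is a sketch only: the config path and Ollama URL below are common defaults, not guaranteed to match your setup.

```shell
# Manual approximations of the `commitbee doctor` checks.
# Config path and Ollama URL are illustrative defaults, not guaranteed.
[ -f "$HOME/.config/commitbee/config.toml" ] \
  && echo "config file found" || echo "config file missing"

git rev-parse --is-inside-work-tree >/dev/null 2>&1 \
  && echo "inside a git repository" || echo "not inside a git repository"

if command -v curl >/dev/null 2>&1; then
  curl -sf --max-time 2 http://localhost:11434/api/tags >/dev/null \
    && echo "Ollama reachable" || echo "cannot connect to Ollama"
fi

done_msg="manual checks complete"
echo "$done_msg"
```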
## Common Issues
### "Empty response from LLM"
The model ran out of tokens before producing output. Usually caused by thinking models consuming the token budget with thinking blocks.
Fix: either switch to qwen3.5:4b (the default, which has no thinking overhead) or keep thinking enabled and increase `num_predict`:

```toml
num_predict = 8192
think = true
```
### "First line is X chars (max 72)"
The LLM generated a subject line that's too long. CommitBee will retry up to 3 times with correction instructions.
If it still fails, the error tells you exactly how long the line was. This is rare with the default model.
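The length rule itself is easy to re-check by hand. A minimal sketch of the same 72-character test, using a made-up sample subject:

```shell
# Re-checking the 72-character subject rule on a sample first line.
subject="feat: add provider timeout handling"   # made-up example subject
if [ "${#subject}" -gt 72 ]; then
  echo "First line is ${#subject} chars (max 72)"
else
  echo "subject line fits (${#subject} chars)"
fi
```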
### "No staged changes found"
You need to `git add` files before running CommitBee.
### "Cannot connect to Ollama"
Ollama isn't running. Start it with `ollama serve` or check that the configured `ollama_host` is correct.
### "Model not found"
The configured model isn't pulled. Run `ollama pull qwen3.5:4b` (or whichever model you've configured).
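To confirm the model is present before retrying, you can check what Ollama has pulled. A sketch assuming the default `ollama` CLI is on your PATH:

```shell
# Check whether the configured model is already pulled.
if ollama list 2>/dev/null | grep -q 'qwen3.5:4b'; then
  msg="model present"
else
  msg="model not pulled; run: ollama pull qwen3.5:4b"
fi
echo "$msg"
```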
### "Potential secrets detected"
CommitBee found something that looks like an API key or credential in your staged changes. If it's a false positive and you're using Ollama (local), use `--allow-secrets`. For cloud providers this is a hard block; remove the secret from your staged changes.
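As a rough illustration of what can trigger this check, here is a grep over a fabricated diff using two common key shapes. The patterns and the diff are invented for illustration; CommitBee's real detector and pattern set may differ:

```shell
# Fabricated staged diff containing something shaped like an API key.
diff='+API_KEY = "sk-proj-abcdef1234567890abcdef1234567890"
+print("hello")'

# Two illustrative patterns: OpenAI-style "sk-..." keys, AWS access key IDs.
if printf '%s\n' "$diff" | grep -qE 'sk-[A-Za-z0-9_-]{16,}|AKIA[0-9A-Z]{16}'; then
  found=1
  echo "Potential secrets detected"
else
  found=0
  echo "no secrets found"
fi
```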
## Debug Mode
For deep debugging, use `--show-prompt` to see the exact prompt sent to the LLM:

```shell
commitbee --dry-run --show-prompt
```
This prints the full prompt including the diff, evidence flags, constraints, symbol list, and character budget. Very useful for understanding why the LLM made a particular choice.
For internal tracing:

```shell
COMMITBEE_LOG=debug commitbee --dry-run
```
This shows config loading, symbol counts, sanitizer steps, validation violations, and retry attempts.