AI-powered command-line workflow tool for developers. Without telemetry.
Installation • Quick Start • Features • Documentation • Contributing
Sceni Code is a fork of Qwen Code (itself a fork of Gemini CLI) that we maintain for use within our company. The main differences from upstream:
- No telemetry - We manually review all upstream code for telemetry and strip it out
- Less flexibility - We stripped all authentication and model integrations that we do not use; only the OpenAI-compatible provider remains
- No token limit - Upstream warns at 32K tokens; this fork does not
Ensure you have Node.js version 20 or higher installed.
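You can confirm the prerequisite before installing; any `v20.x` or later output is fine:

```bash
# Check the installed Node.js version; Sceni Code needs 20 or higher.
node --version
```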
Install from npm:

```bash
curl -qL https://www.npmjs.com/install.sh | sh
npm install -g @scenius/sceni-code@latest
sceni --version
```

Or install from source:

```bash
git clone https://github.com/scenius-software/sceni-code.git
cd sceni-code
npm install
npm install -g .
```

To run with Docker, mount both the current directory and your user settings:
Unix/Linux/macOS:
```bash
docker run -it \
  -v "$(pwd):/workspace" \
  -v "$HOME/.sceni:/home/node/.sceni" \
  nickheskes/sceni-code
```

PowerShell (Windows):
```powershell
docker run -it `
  -v "${PWD}:/workspace" `
  -v "$env:USERPROFILE\.sceni:/home/node/.sceni" `
  nickheskes/sceni-code
```
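Because only the OpenAI-compatible provider remains, you will likely also want your endpoint and API key available inside the container. The sketch below uses Docker's standard `-e` flags; the variable names `OPENAI_API_KEY` and `OPENAI_BASE_URL` are assumptions rather than names confirmed by this README, so adjust them to whatever your `~/.sceni` settings actually expect:

```bash
# Sketch only: OPENAI_API_KEY and OPENAI_BASE_URL are assumed variable names,
# not confirmed by this README; your ~/.sceni configuration may use different ones.
docker run -it \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e OPENAI_BASE_URL="$OPENAI_BASE_URL" \
  -v "$(pwd):/workspace" \
  -v "$HOME/.sceni:/home/node/.sceni" \
  nickheskes/sceni-code
```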
```bash
# Start Sceni Code
sceni

# Example commands
> Explain this codebase structure
> Help me refactor this function
> Generate unit tests for this module
```

Session management:

- `/compress` - Compress conversation history
- `/clear` - Clear all conversation history and start fresh
- `/status` - Check current token usage
Commands:

- `/help` - Display available commands
- `/clear` - Clear conversation history
- `/compress` - Compress history to save tokens
- `/status` - Show current session information
- `/exit` or `/quit` - Exit Sceni Code
Keyboard shortcuts:

- `Ctrl+C` - Cancel current operation
- `Ctrl+D` - Exit (on empty line)
- `Up`/`Down` - Navigate command history
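Since this fork removes the 32K-token warning, it is worth watching context growth yourself. One way to combine the session commands above (illustrative only; no output shown):

```bash
> /status     # check current token usage
> /compress   # compress older history to save tokens
> /clear      # or wipe the conversation and start fresh
```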
If you encounter issues, check the troubleshooting guide.
This project is based on Qwen Code, our heroes of LLM development and tooling, which is in turn based on Google Gemini CLI. We acknowledge and appreciate the excellent work of the Gemini CLI team. Our main contribution focuses on parser-level adaptations to better support Qwen-Coder models.
