Security for AI coding tools
Rye sits between your developers' AI coding tools (Cursor, Windsurf, Claude Code) and LLM providers. Review every prompt, block secrets from leaking, enforce policies, and keep audit trails for compliance.
Architecture
Features
See what your developers are prompting, stop sensitive code from reaching LLM providers, and maintain the audit trail your compliance team needs.
Prompt visibility
Full-text search across all AI coding interactions. Filter by developer, tool, model, repository, or risk score. Know exactly what code context is leaving your org.
Policy engine
Block API keys, credentials, and sensitive source files from reaching LLM providers. Enforce model allow-lists and scope what each team can access.
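The core of a rule like this is a pattern scan over outbound prompts before they leave the machine. The sketch below is illustrative only — the pattern set and the `scan_prompt` helper are hypothetical, not Rye's actual policy format.

```python
import re

# Hypothetical secret patterns a blocking policy might apply.
# A real engine would ship a much larger, tuned pattern set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in the prompt text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

# A prompt containing an AWS-style access key would be flagged and blocked;
# a clean prompt passes through.
hits = scan_prompt("key = 'AKIA" + "A" * 16 + "'")
```

Pattern scanning catches well-known credential formats cheaply; anything flagged never reaches the provider.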
Device authorization
Register developer laptops and CI runners that can access AI coding tools. Revoke access instantly when someone leaves. See every active device.
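Conceptually, this is a registry keyed by device, where revocation flips a flag that every subsequent request is checked against. A minimal in-memory sketch, assuming a hypothetical `DeviceRegistry` shape (Rye's real store and API will differ):

```python
from datetime import datetime, timezone

class DeviceRegistry:
    """Toy device registry: enroll, revoke, and check devices."""

    def __init__(self):
        self._devices = {}  # device_id -> enrollment record

    def register(self, device_id: str, owner: str) -> None:
        self._devices[device_id] = {
            "owner": owner,
            "enrolled_at": datetime.now(timezone.utc),
            "active": True,
        }

    def revoke(self, device_id: str) -> None:
        # Deactivate immediately; later requests from this device are denied.
        self._devices[device_id]["active"] = False

    def is_authorized(self, device_id: str) -> bool:
        rec = self._devices.get(device_id)
        return bool(rec and rec["active"])
```

Because every request is gated on `is_authorized`, revoking a departing employee's laptop takes effect on their very next request.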
Audit trail
Every AI interaction logged with developer identity, device, tool, model, policy evaluation, and full prompt/response. Export to your SIEM or pull via API.
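The shape of one such record, and how it might be flattened into a line your SIEM can ingest, can be sketched as follows. The field names here are assumptions for illustration, not Rye's documented export schema.

```python
import json

# Illustrative audit event; field names are assumed, not the real schema.
event = {
    "developer": "jane@example.com",
    "device_id": "mbp-4821",
    "tool": "cursor",
    "model": "claude-sonnet-4",
    "policy_result": "allowed",
    "prompt": "Refactor the auth middleware",
    "response_tokens": 512,
}

def to_siem_line(ev: dict) -> str:
    """Serialize one audit event as a single JSON line (JSONL) for SIEM ingestion."""
    return json.dumps(ev, sort_keys=True)

line = to_siem_line(event)
```

Newline-delimited JSON is a common ingestion format for SIEMs, which is why one-event-per-line serialization is sketched here.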
How it works
Install the Rye agent on developer machines. It sits between your AI coding tools and LLM providers, capturing every prompt and completion without changing your workflow.
Define what developers can send to LLMs. Block secrets, credentials, and proprietary code from leaving your network. Set model allow-lists and token budgets per team.
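A per-team allow-list and budget check reduces to a simple gate evaluated on every request. This is a minimal sketch under assumed names (`POLICY`, `check_request`), not Rye's actual policy syntax:

```python
# Hypothetical per-team policy; Rye's real policy definition will differ.
POLICY = {
    "platform-team": {
        "allowed_models": {"claude-sonnet-4", "gpt-4o"},
        "monthly_token_budget": 2_000_000,
    },
}

def check_request(team: str, model: str, tokens_used: int, tokens_requested: int) -> bool:
    """Allow a request only if the model is on the team's allow-list and
    the request fits within the team's remaining monthly token budget."""
    policy = POLICY.get(team)
    if policy is None:
        return False  # unknown teams are denied by default
    if model not in policy["allowed_models"]:
        return False
    return tokens_used + tokens_requested <= policy["monthly_token_budget"]
```

Deny-by-default for unknown teams and models is the safer design here: a misconfigured team fails closed rather than leaking traffic to an unapproved provider.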
See what every developer is prompting in real time. Trace code suggestions back to the original prompt. Export audit trails for SOC 2, ISO 27001, or incident response.
Free for up to 1,000 requests per month. No credit card required. Installs in under five minutes.