Managing AI Keys

Set up AI providers (optional for trials).

New users can start immediately using our Sandbox Mode (powered by our system keys).

For production workloads, unlimited usage, or custom model selection, connect your own API key from OpenAI, Anthropic, or Google Gemini.
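If you script your own key management outside Velocibid, a quick sanity check is the key's prefix: each provider's keys start with a recognizable string. This is an illustrative sketch, not part of Velocibid; the function name is ours, and the prefixes reflect commonly documented formats at the time of writing (they can change):

```python
def detect_provider(api_key: str) -> str:
    """Guess which provider issued a key from its prefix.

    Convenience check only -- not real validation. Prefix conventions
    (as commonly documented): Anthropic "sk-ant-", OpenAI "sk-",
    Google API keys "AIza".
    """
    key = api_key.strip()
    if key.startswith("sk-ant-"):   # check Anthropic before the shorter "sk-"
        return "anthropic"
    if key.startswith("sk-"):
        return "openai"
    if key.startswith("AIza"):
        return "google"
    return "unknown"
```

Note the ordering: the Anthropic prefix must be tested before OpenAI's, since `sk-ant-` also matches `sk-`.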

OpenAI (Recommended)

Best overall performance. Supports GPT-4o for complex reasoning.

Get OpenAI Key

Anthropic

Excellent for nuanced drafting and large-context work. Supports Claude 3.5 Sonnet.

Get Anthropic Key

Google Gemini

Budget-friendly, with a massive context window (1M+ tokens).

Get Gemini Key

Model Architecture Configuration

Once you connect a key, you will unlock the Model Architecture panel. This allows you to assign specific models to specific tasks within the platform.

Auto-Discovery

Velocibid automatically detects all models available to your API key. If OpenAI releases "GPT-5" tomorrow, it will instantly appear in your dropdown menu without any software updates required.

Task Optimization

You can mix and match providers! For example, use Claude 3.5 Sonnet for writing proposals (better tone) and GPT-4o Mini for compliance checks (faster/cheaper).

Engine Task         | Recommended Model
Complex Drafting    | GPT-4o or Claude 3.5 Sonnet (High Intelligence)
Compliance Auditing | GPT-4o Mini or Gemini Flash (Speed & Cost)
SME Interviews      | GPT-4o Mini (Conversational Speed)
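One way to picture the table above is as a task-to-model routing map. The sketch below is illustrative only; the task keys and model identifiers are our examples, not Velocibid's internal names:

```python
# Illustrative task-to-model routing mirroring the recommendations above.
# (provider, model) pairs and task keys are examples, not Velocibid internals.
TASK_MODELS = {
    "complex_drafting":    ("anthropic", "claude-3-5-sonnet"),  # high intelligence
    "compliance_auditing": ("openai",    "gpt-4o-mini"),        # speed & cost
    "sme_interviews":      ("openai",    "gpt-4o-mini"),        # conversational speed
}

def route(task: str) -> tuple[str, str]:
    """Return (provider, model) for a task; fall back to the drafting model."""
    return TASK_MODELS.get(task, TASK_MODELS["complex_drafting"])
```

A per-task map like this is what makes mixing providers cheap: changing the model for one task is a one-line edit that leaves every other task untouched.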

Practical Implementation Notes

This guide helps proposal, security, and operations teams manage AI keys consistently inside Velocibid. When onboarding new contributors, use this page as a standard operating reference alongside related documentation so the whole team follows one repeatable workflow.

For stronger outcomes, pair this page with the adjacent guides in the same lifecycle stage (discovery, response, compliance, and delivery). Cross-functional teams that align their process across these modules typically reduce rework and improve proposal quality.