Available models
GPT-4o
Context: 128K tokens
Strengths: Balanced performance, multimodal, fast
Best general-purpose model for most tasks
GPT-4o-mini
Context: 128K tokens
Strengths: Speed, cost-efficiency, large context
Ideal for quick tasks and high-volume use
o1 / o3
Context: 200K tokens
Strengths: Advanced reasoning, problem-solving
Specialized for complex analysis and coding
GPT-4-turbo
Context: 128K tokens
Strengths: Enhanced capabilities, vision
Previous generation flagship model
Model details
GPT-4o (Omni)
OpenAI’s flagship multimodal model with enhanced capabilities (a usage sketch follows the lists below):
- Large context for processing extensive documents and long conversations
- Text generation and analysis
- Code generation and debugging
- Image understanding (vision)
- Function calling and tool use
- JSON mode for structured outputs
- Multimodal reasoning
Best for:
- Complex coding tasks
- Document analysis and summarization
- Multimodal tasks (text + images)
- API integrations with function calling
- General-purpose chat and assistance
When to use:
- You need the best balance of capability and speed
- Working with images and text together
- Building applications with function calling
- Processing large documents
- Most production use cases
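If you call the models directly through the OpenAI Python SDK rather than through the ZeroTwo chat interface, a minimal sketch of a multimodal GPT-4o request might look like the following (the image URL and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# GPT-4o accepts text and images in the same message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize what this screenshot shows."},
                {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```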
GPT-4o-mini
Cost-effective model with impressive capabilities (a usage sketch follows the lists below):
- Same large context as GPT-4o
- Fast text generation
- Code completion and basic debugging
- Document processing
- Function calling
- JSON mode
- Efficient for high-volume use
Best for:
- Quick queries and simple tasks
- High-frequency API calls
- Chat applications
- Content moderation
- Data extraction and transformation
- Cost-sensitive applications
When to use:
- Speed is more important than maximum capability
- Processing many simple requests
- Budget constraints
- Real-time applications
- Simple code generation
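As an illustration of high-volume, cost-sensitive use, the sketch below (direct OpenAI SDK calls; the snippets and prompt are made up) runs a simple extraction task with gpt-4o-mini:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical high-volume task: pull the email address out of many short texts.
snippets = [
    "Contact us at support@example.com for help.",
    "Reach Jane at jane@example.org with questions.",
]

for snippet in snippets:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # cheaper and faster than gpt-4o for simple work
        messages=[
            {"role": "system", "content": "Return only the email address found in the text."},
            {"role": "user", "content": snippet},
        ],
    )
    print(response.choices[0].message.content)
```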
o1 and o3 (Reasoning models)
Advanced reasoning models for complex problem-solving:
- Extended context for comprehensive analysis
- Extended reasoning and chain-of-thought
- Complex problem decomposition
- Advanced mathematical reasoning
- Multi-step logical analysis
- Code optimization and refactoring
Best for:
- Research and analysis tasks
- Complex algorithmic problems
- Mathematical proofs and analysis
- Advanced code architecture decisions
- Research paper analysis
- Multi-step reasoning tasks
- Debugging complex issues
When to use:
- The problem requires deep reasoning
- Multiple steps of logical thinking needed
- Complex code refactoring or optimization
- Mathematical or scientific analysis
- Accuracy is more important than speed
Reasoning models take longer to respond as they perform extended internal reasoning before generating output. Use them when the quality of reasoning justifies the wait.
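For direct API use, a reasoning-model request looks like any other chat completion; only the model name changes. The prompt below is illustrative, and some sampling parameters such as temperature may not be accepted by these models, so check the current API reference:

```python
from openai import OpenAI

client = OpenAI()

# Reasoning models think through the problem internally before answering,
# so expect noticeably higher latency than gpt-4o or gpt-4o-mini.
response = client.chat.completions.create(
    model="o1",
    messages=[
        {
            "role": "user",
            "content": (
                "Refactor this recursive Fibonacci function to run in O(n) time and "
                "explain the trade-offs: def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```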
GPT-4-turbo
Previous generation model, still highly capable:
- Large context support
- Strong text and code generation
- Vision capabilities
- Function calling
- JSON mode
- Reliable performance
When to use:
- Maintaining legacy applications
- You need proven performance for an existing use case
- GPT-4o is unavailable (see the fallback sketch below)
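One common pattern for the last case is a simple fallback: try gpt-4o first and retry with gpt-4-turbo if the request fails. A minimal sketch with the OpenAI Python SDK (the error handling here is deliberately coarse):

```python
from openai import OpenAI, OpenAIError

client = OpenAI()

def complete(prompt: str) -> str:
    """Try GPT-4o first, then fall back to GPT-4-turbo if the request fails."""
    for model in ("gpt-4o", "gpt-4-turbo"):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except OpenAIError:
            continue  # try the next model
    raise RuntimeError("All fallback models failed")

print(complete("Write a one-line summary of what a fallback is."))
```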
Capabilities comparison
| Capability | GPT-4o | GPT-4o-mini | o1 / o3 | GPT-4-turbo |
|---|---|---|---|---|
| Text generation | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Code generation | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Reasoning | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Speed | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
| Vision | ✅ | ✅ | ❌ | ✅ |
| Function calling | ✅ | ✅ | ✅ | ✅ |
| Cost efficiency | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
Use cases by model
GPT-4o use cases
Software development
- Full-stack application development
- API design and implementation
- Code review and optimization
- Architecture planning
- Debugging complex issues
Content creation
- Blog posts and articles
- Marketing copy
- Technical documentation
- Creative writing
- Social media content
Data analysis
- Document summarization
- Trend analysis
- Report generation
- Data extraction from text
- Insight generation
Multimodal tasks
- Image description and analysis
- OCR and text extraction from images
- Visual content understanding
- Screenshot analysis
- Diagram interpretation
GPT-4o-mini use cases
- Quick code completions
- Simple API integrations
- Chat and customer support
- Content moderation
- Simple data transformations
- Rapid prototyping
- High-volume processing
o1 / o3 use cases
- Complex algorithm design
- Mathematical proofs
- Research paper analysis
- Advanced debugging
- System architecture design
- Security vulnerability analysis
- Complex refactoring
Tips for using OpenAI models
1. Choose the right model
Start with GPT-4o for general use. Use GPT-4o-mini for simple tasks or when speed matters. Use o1/o3 for complex reasoning.
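One way to encode this rule of thumb in an application is a small routing table; the names below are illustrative, not part of any ZeroTwo or OpenAI API:

```python
# Hypothetical routing table mapping task types to model names.
MODEL_BY_TASK = {
    "general": "gpt-4o",           # best balance of capability and speed
    "high_volume": "gpt-4o-mini",  # simple, fast, cost-sensitive work
    "reasoning": "o1",             # deep multi-step analysis, slower responses
}

def pick_model(task_type: str) -> str:
    """Fall back to the general-purpose model for unknown task types."""
    return MODEL_BY_TASK.get(task_type, "gpt-4o")
```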
2. Leverage function calling
OpenAI models excel at function calling. Use this for API integrations and structured outputs.
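A minimal function-calling sketch with the OpenAI Python SDK (the tool definition is hypothetical; the model returns structured arguments you can pass to your own code):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool: the model decides whether to call it and with what arguments.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the shipping status of an order",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "The order identifier"},
                },
                "required": ["order_id"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Where is order 12345?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured arguments are available here.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```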
3. Use JSON mode for structured output
When you need consistent JSON responses, enable JSON mode in your prompts.
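With the OpenAI API this is the response_format parameter; note that JSON mode requires the word JSON to appear somewhere in your messages. A short sketch:

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # ask for a well-formed JSON object
    messages=[
        {"role": "system", "content": "Reply with a JSON object with keys 'title' and 'summary'."},
        {"role": "user", "content": "Summarize: OpenAI models are available in ZeroTwo."},
    ],
)

data = json.loads(response.choices[0].message.content)
print(data["title"])
```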
4. Optimize context usage
- Place most important information early in prompts
- Remove unnecessary context to save tokens
- Use system messages for persistent instructions (all three points are combined in the sketch below)
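Putting the three points together, a request might be structured like this (the content is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Persistent instructions live in the system message; the most important
# material leads the user message, and unneeded boilerplate is trimmed.
messages = [
    {"role": "system", "content": "You are a concise release-notes writer."},
    {
        "role": "user",
        "content": (
            "Key changes (most important first):\n"
            "1. Fixed crash on startup\n"
            "2. Added dark mode\n\n"
            "Write a short release note."
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```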
Limitations and considerations
Pricing tiers
OpenAI model availability and rate limits in ZeroTwo vary by subscription tier:
- Free tier: Limited access to GPT-4o-mini
- Pro tier: Full access to GPT-4o, GPT-4o-mini, o1
- Enterprise tier: Enhanced limits, priority access, o3 access
Switching models
In ZeroTwo, you can switch between OpenAI models mid-conversation:
1. Open model selector
Click the model name in the chat header or use the keyboard shortcut.
2. Select OpenAI model
Choose from available OpenAI models based on your subscription.
3. Continue conversation
The conversation continues with the new model, maintaining context.

