OpenAI provides some of the most advanced and widely used language models available in ZeroTwo, from the versatile GPT-4o series to specialized reasoning models like o1 and o3.

Available models

GPT-4o

Context: 128K tokens
Strengths: Balanced performance, multimodal, fast
Best general-purpose model for most tasks

GPT-4o-mini

Context: 128K tokens
Strengths: Speed, cost-efficiency, large context
Ideal for quick tasks and high-volume use

o1 / o3

Context: 200K tokens
Strengths: Advanced reasoning, problem-solving
Specialized for complex analysis and coding

GPT-4-turbo

Context: 128K tokens
Strengths: Enhanced capabilities, vision
Previous generation flagship model

Model details

GPT-4o (Omni)

OpenAI’s flagship multimodal model with enhanced capabilities:
Context window: 128,000 tokens. Large context for processing extensive documents and long conversations.
Capabilities
  • Text generation and analysis
  • Code generation and debugging
  • Image understanding (vision)
  • Function calling and tool use
  • JSON mode for structured outputs
  • Multimodal reasoning
Best for
  • Complex coding tasks
  • Document analysis and summarization
  • Multimodal tasks (text + images)
  • API integrations with function calling
  • General-purpose chat and assistance
When to use GPT-4o:
  • You need the best balance of capability and speed
  • Working with images and text together
  • Building applications with function calling
  • Processing large documents
  • Most production use cases
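The snippet below is a minimal sketch of what a multimodal request can look like when GPT-4o is called through the OpenAI Chat Completions API directly, rather than through the ZeroTwo chat UI; the prompt and image URL are placeholders.
// Example (illustrative): mixing text and an image in one request
{
  "model": "gpt-4o",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "text", "text": "What error is shown in this screenshot?"},
      {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}}
    ]
  }]
}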

GPT-4o-mini

Cost-effective model with impressive capabilities:
Context window: 128,000 tokens. Same large context as GPT-4o.
Capabilities
  • Fast text generation
  • Code completion and basic debugging
  • Document processing
  • Function calling
  • JSON mode
  • Efficient for high-volume use
Best for
  • Quick queries and simple tasks
  • High-frequency API calls
  • Chat applications
  • Content moderation
  • Data extraction and transformation
  • Cost-sensitive applications
When to use GPT-4o-mini:
  • Speed is more important than maximum capability
  • Processing many simple requests
  • Budget constraints
  • Real-time applications
  • Simple code generation

o1 and o3 (Reasoning models)

Advanced reasoning models for complex problem-solving:
Context window: 200,000 tokens. Extended context for comprehensive analysis.
Capabilities
  • Extended reasoning and chain-of-thought
  • Complex problem decomposition
  • Advanced mathematical reasoning
  • Multi-step logical analysis
  • Code optimization and refactoring
  • Research and analysis tasks
Best for
  • Complex algorithmic problems
  • Mathematical proofs and analysis
  • Advanced code architecture decisions
  • Research paper analysis
  • Multi-step reasoning tasks
  • Debugging complex issues
When to use o1/o3:
  • Problem requires deep reasoning
  • Multiple steps of logical thinking needed
  • Complex code refactoring or optimization
  • Mathematical or scientific analysis
  • When accuracy is more important than speed
Reasoning models take longer to respond as they perform extended internal reasoning before generating output. Use them when the quality of reasoning justifies the wait.

GPT-4-turbo

Previous generation model, still highly capable:
Context window: 128,000 tokens. Large context support.
Capabilities
  • Strong text and code generation
  • Vision capabilities
  • Function calling
  • JSON mode
  • Reliable performance
When to use GPT-4-turbo:
  • Legacy applications
  • Proven performance for specific use cases
  • When GPT-4o is unavailable

Capabilities comparison

Capability | GPT-4o | GPT-4o-mini | o1 / o3 | GPT-4-turbo
Text generation | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐
Code generation | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐
Reasoning | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐
Speed | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐
Vision | ✓ |  |  | ✓
Function calling | ✓ | ✓ |  | ✓
Cost efficiency | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐

Use cases by model

GPT-4o use cases

  • Full-stack application development
  • API design and implementation
  • Code review and optimization
  • Architecture planning
  • Debugging complex issues
  • Blog posts and articles
  • Marketing copy
  • Technical documentation
  • Creative writing
  • Social media content
  • Document summarization
  • Trend analysis
  • Report generation
  • Data extraction from text
  • Insight generation
  • Image description and analysis
  • OCR and text extraction from images
  • Visual content understanding
  • Screenshot analysis
  • Diagram interpretation

GPT-4o-mini use cases

  • Quick code completions
  • Simple API integrations
  • Chat and customer support
  • Content moderation
  • Simple data transformations
  • Rapid prototyping
  • High-volume processing

o1 / o3 use cases

  • Complex algorithm design
  • Mathematical proofs
  • Research paper analysis
  • Advanced debugging
  • System architecture design
  • Security vulnerability analysis
  • Complex refactoring

Tips for using OpenAI models

1. Choose the right model

Start with GPT-4o for general use. Use GPT-4o-mini for simple tasks or when speed matters. Use o1/o3 for complex reasoning.
2. Leverage function calling

OpenAI models excel at function calling. Use this for API integrations and structured outputs.
// Example: Function calling for structured data
{
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string"},
          "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
        }
      }
    }
  }]
}
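When the model chooses to use a tool, its reply contains a tool call with JSON-encoded arguments instead of plain text; your code runs the function and returns the result in a follow-up message with role "tool". The payloads below are an illustrative sketch of that round trip; the ID and values are placeholders.
// Illustrative model response: the assistant requests a get_weather call
{
  "role": "assistant",
  "tool_calls": [{
    "id": "call_123",
    "type": "function",
    "function": {
      "name": "get_weather",
      "arguments": "{\"location\": \"Paris\", \"unit\": \"celsius\"}"
    }
  }]
}
// Illustrative follow-up message carrying the function's result back to the model
{
  "role": "tool",
  "tool_call_id": "call_123",
  "content": "{\"temp\": 18, \"unit\": \"celsius\"}"
}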
3. Use JSON mode for structured output

When you need consistent JSON responses, enable JSON mode in your prompts.
"Return the result as JSON with fields: title, summary, keyPoints"
4. Optimize context usage

  • Place most important information early in prompts
  • Remove unnecessary context to save tokens
  • Use system messages for persistent instructions (see the sketch below)
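As a rough sketch of that ordering (the message contents here are placeholders): persistent instructions go in the system message, and the material the model must act on comes before any long supporting context.
// Illustrative message ordering: instructions first, key content early
{
  "messages": [
    {"role": "system", "content": "You are a code reviewer. Respond with concise bullet points."},
    {"role": "user", "content": "Review this function for bugs:\n<function under review>\n\nSupporting context, only if needed:\n<long excerpt from the rest of the file>"}
  ]
}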
Pro tips:
  • GPT-4o works great with images—upload screenshots for debugging
  • o1/o3 models benefit from letting them “think”—don’t interrupt reasoning
  • GPT-4o-mini is surprisingly capable for its speed and cost
  • Use temperature=0 for consistent, near-deterministic outputs

Limitations and considerations

Be aware of:
  • Knowledge cutoff: Models have training data up to a specific date (varies by model). Use web search for current information.
  • Hallucinations: Models may generate plausible but incorrect information. Verify critical facts.
  • Context limits: Even with 128K tokens, extremely long contexts may degrade performance.
  • Reasoning model latency: o1/o3 are slower due to extended reasoning; not suitable for real-time use.
  • Vision limitations: Image understanding is advanced but not perfect; verify critical visual information.

Pricing tiers

OpenAI models are available on ZeroTwo Pro and Enterprise plans. Model access and rate limits vary by subscription tier.
  • Free tier: Limited access to GPT-4o-mini
  • Pro tier: Full access to GPT-4o, GPT-4o-mini, o1
  • Enterprise tier: Enhanced limits, priority access, o3 access
See subscriptions for detailed pricing.

Switching models

In ZeroTwo, you can switch between OpenAI models mid-conversation:
1. Open model selector

Click the model name in the chat header or use the keyboard shortcut.
2. Select OpenAI model

Choose from available OpenAI models based on your subscription.
3. Continue conversation

The conversation continues with the new model, maintaining context.
Learn more about switching models.