Prompt engineering is the art and science of crafting effective instructions for AI models. In ZeroTwo, well-designed prompts help you achieve more accurate, relevant, and useful responses across all supported models.

What is prompt engineering?

Prompt engineering involves structuring your requests to AI models in ways that optimize their responses. Rather than treating the AI as a search engine, effective prompting treats it as a collaborative partner that needs clear context, goals, and constraints.
The quality of AI responses directly correlates with the quality of your prompts. Investing time in prompt engineering pays dividends in productivity and output quality.

Core principles of effective prompts

Be specific and clear

Vague prompts lead to vague responses. Provide concrete details about what you need.
Too vague: Write code for a website.
More specific: Build a responsive landing page in HTML and CSS for a small bakery, including a hero section, a menu, and a contact form.

Provide context

Help the AI understand the broader situation and constraints.
Example with context
I'm building a SaaS application for project management. I need to design a database schema for tracking tasks, projects, team members, and their assignments. The system should support multiple workspaces per user and role-based permissions. Can you suggest a PostgreSQL schema?

Specify the desired format

Tell the AI how you want the response structured.
Generate a Python function that validates email addresses. Return only the function code with docstring comments.
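
A response to a format-constrained prompt like this might look roughly like the sketch below; the function name and regular expression are illustrative, not a guaranteed output.

import re

def is_valid_email(email: str) -> bool:
    """Check whether a string looks like a valid email address.

    Performs a simple pattern check only; it does not verify that the
    address actually exists or can receive mail.
    """
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"
    return re.fullmatch(pattern, email) is not None

Because the prompt constrains the format, the answer is just the function and its docstring rather than a tutorial-style explanation.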

Set the appropriate tone

Indicate the expertise level and style you need.
Explain async/await in JavaScript as if I'm a junior developer who just learned callbacks. Use simple analogies and provide code examples with detailed comments.

Prompt components in ZeroTwo

ZeroTwo constructs the full prompt sent to AI models by combining several elements; a simplified sketch of how they fit together follows this list:
1. System prompt
The base instructions that define the AI’s behavior, capabilities, and constraints. This is set automatically but can be customized through custom instructions or assistants.

2. Custom instructions
Your personal or project-level instructions that persist across conversations. These augment the system prompt with your preferences.

3. Conversation history
Recent messages that provide context for continuing the conversation coherently.

4. Your current message
The specific request or question you’re asking right now.

5. Tool and file context
Additional context from attached files, images, or enabled tools like web search or code interpreter.
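
As a rough illustration, the assembly order looks something like the sketch below. The function and field names are hypothetical and simplified; this is not ZeroTwo's actual implementation.

# Simplified, hypothetical sketch of how the prompt pieces are combined.
def build_prompt(system_prompt, custom_instructions, history, user_message, tool_context=None):
    """Assemble the message list sent to the model, in the order described above."""
    messages = [{"role": "system", "content": f"{system_prompt}\n\n{custom_instructions}"}]
    messages.extend(history)  # recent conversation turns, oldest first
    if tool_context:
        # attached files, web search results, code interpreter output, etc.
        messages.append({"role": "system", "content": f"Additional context:\n{tool_context}"})
    messages.append({"role": "user", "content": user_message})
    return messages

Everything in this list counts toward the model's context window, which is why long conversations and large attachments can crowd out your current message.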

Prompt strategies by task type

For code generation

Always mention the programming language, framework versions, and any relevant libraries.
Create a Next.js 14 server action using TypeScript that handles user registration. Use Zod for validation and return properly typed responses.
Write a Python function to fetch data from an API endpoint. Include proper error handling for network timeouts, 404 errors, and invalid JSON responses. Use the requests library.
Refactor this function to use functional programming principles. Avoid mutations, use pure functions, and prefer map/filter/reduce over loops.
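
For instance, the API-fetch prompt above names the library and the exact failure cases to handle, which steers the model toward something like the following sketch (illustrative, not a guaranteed output):

import requests

def fetch_json(url: str, timeout: float = 10.0) -> dict:
    """Fetch JSON from an API endpoint, handling timeouts, 404s, and bad JSON."""
    try:
        response = requests.get(url, timeout=timeout)
    except requests.exceptions.Timeout:
        raise TimeoutError(f"Request to {url} timed out after {timeout} seconds")

    if response.status_code == 404:
        raise LookupError(f"Resource not found: {url}")
    response.raise_for_status()  # surface any other HTTP error status

    try:
        return response.json()
    except ValueError as exc:  # invalid or empty JSON body
        raise ValueError(f"Response from {url} is not valid JSON") from exc

A vaguer prompt ("write a function to call an API") would likely omit the timeout and error handling you actually need.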

For analysis and research

Use the Deep Research tool for comprehensive research, or structure your prompt for focused analysis:
Analyze the pros and cons of microservices architecture for a startup with 5 developers. Consider factors like development velocity, operational complexity, and scalability needs. Provide specific recommendations.

For creative content

Provide style references, target audience, and key messaging:
Write a product announcement blog post for our new AI code assistant feature. Target audience is professional developers. Tone should be informative but exciting. Include a section on key benefits and use cases. Aim for 500-700 words.

For documentation

Specify documentation standards and audience:
Document this API endpoint using OpenAPI 3.0 specification. Include request/response examples, all possible error codes, and rate limiting information. Assume the audience is experienced API consumers.

Advanced prompting techniques

Chain of thought

Ask the AI to show its reasoning process:
Let's debug this React component step by step. First, identify potential issues. Then, explain why each is problematic. Finally, provide the corrected code with explanations for each change.

[paste code here]

Role-based prompting

Assign the AI a specific expertise role:
Act as a senior database architect. Review this SQL query for performance issues and security vulnerabilities. Provide optimization recommendations with explanations.

[paste query here]

Iterative refinement

Build on previous responses:
Take the component you just created and:
1. Add TypeScript types
2. Implement error boundaries
3. Add unit tests using Jest and React Testing Library

Model-specific considerations

Different models excel at different tasks. ZeroTwo supports multiple AI providers with varying strengths.

OpenAI GPT-4

Excellent for complex reasoning, code generation, and instruction following. Works well with structured prompts.

Claude (Anthropic)

Superior for long-form content, analysis, and nuanced understanding. Responds well to conversational prompts.

Gemini (Google)

Strong multimodal capabilities. Excels at tasks involving images, videos, and code analysis.

Specialized models

Use reasoning models (o1, o3) for complex problem-solving. Use fast models (GPT-4o-mini, Claude Haiku) for simple tasks.

Common pitfalls to avoid

Avoid these common mistakes:
  • Overly broad requests: “Tell me about programming” is too vague
  • Missing constraints: Not specifying requirements, limitations, or edge cases
  • Assuming context: The AI doesn’t remember previous sessions unless using Memory
  • Ignoring token limits: Very long prompts may be truncated. See tokens and limits
  • Not iterating: First responses may not be perfect—refine and clarify

Testing and refining prompts

1. Start with a clear goal
Define exactly what you want to achieve before writing your prompt.

2. Try your prompt
Submit your prompt and evaluate the response quality.

3. Identify gaps
What’s missing? What’s unclear? What’s incorrect?

4. Refine and retry
Adjust your prompt based on the gaps you identified. Add more context, constraints, or examples.

5. Save successful patterns
Use custom instructions or create an assistant to preserve effective prompt patterns.

Additional resources

For assistant-specific prompting, see Assistant Behavior and Tone. For setting persistent preferences, explore Custom Instructions.