Prompt Engineering Guide

Master the art of crafting effective prompts for LLMs.

Understanding Prompts

A prompt is more than just a question—it's a carefully crafted instruction that guides an LLM to produce desired outputs. Good prompts can dramatically improve the quality, reliability, and usefulness of LLM responses.

Core Principles

1. Clarity and Specificity

Always be clear and specific about what you want. Compare these prompts:

❌ Poor: "Tell me about the code"

✅ Better: "Analyze this Python function for potential performance bottlenecks and suggest specific optimizations"

2. Context Setting

Provide relevant context to help the model understand its role and constraints:

```python
system_message = """
You are an expert Python developer specializing in performance optimization.
Focus on:
- Time complexity analysis
- Memory usage
- Algorithmic improvements
- Code readability
"""

user_message = "Review this sorting implementation..."
```

3. Structured Output Format

Specify the desired output format when needed:

```python
prompt = """
Analyze this code and provide feedback in the following format:

1. Performance Issues:
   - [List specific issues]
2. Optimization Suggestions:
   - [List actionable improvements]
3. Code Examples:
   - [Show optimized versions]
"""
```

Advanced Techniques

1. Role-Based Prompting

Assign specific roles to guide behavior:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.brilliantai.co",
    api_key="YOUR_API_KEY",
)

messages = [
    {
        "role": "system",
        "content": """
You are a senior software architect with expertise in:
- Distributed systems
- Scalable architectures
- Security best practices

Provide detailed, technical responses with concrete examples.
""",
    },
    {
        "role": "user",
        "content": "How should we design a scalable microservices architecture?",
    },
]

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=messages,
    temperature=0.7,
)
```

2. Few-Shot Learning

Demonstrate patterns through examples:

```python
prompt = """
Convert these requirements into user stories:

Example 1:
Requirement: "Users need to reset their passwords"
User Story: "As a registered user, I want to reset my password so that I can regain access to my account if I forget it"

Example 2:
Requirement: "System should send notifications"
User Story: "As a user, I want to receive notifications so that I stay informed about important updates"

Now convert this:
Requirement: "Admins should be able to ban users"
"""
```
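When the example pairs live in data rather than a hand-written string, a few-shot prompt like the one above can be assembled programmatically. A minimal sketch (the helper name is illustrative, not a library function):

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt from (requirement, user story) example pairs."""
    parts = [task, ""]
    for i, (requirement, story) in enumerate(examples, start=1):
        parts += [
            f"Example {i}:",
            f'Requirement: "{requirement}"',
            f'User Story: "{story}"',
            "",
        ]
    parts += ["Now convert this:", f'Requirement: "{query}"']
    return "\n".join(parts)
```

Keeping examples as data makes it easy to swap them per task or to experiment with how many shots the model needs.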

3. Chain-of-Thought Prompting

Guide the model through logical steps:

```python
prompt = """
Let's solve this programming challenge step by step:

1. First, understand the requirements:
   - What are the inputs?
   - What is the expected output?
   - What are the constraints?

2. Then, plan the solution:
   - What data structures should we use?
   - What algorithms would be most efficient?
   - How can we handle edge cases?

3. Finally, implement the solution:
   - Write the code
   - Add error handling
   - Include comments for clarity

Problem: Implement a function to find the longest palindromic substring.
"""
```
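For reference, one solution a model guided through these steps might arrive at is the classic expand-around-center approach, which runs in O(n²) time with O(1) extra space:

```python
def longest_palindromic_substring(s: str) -> str:
    """Return the longest palindromic substring of s (any one, if ties)."""
    if not s:
        return ""
    start, end = 0, 0

    def expand(left: int, right: int) -> tuple[int, int]:
        # Grow outward while the characters at both ends match
        while left >= 0 and right < len(s) and s[left] == s[right]:
            left -= 1
            right += 1
        return left + 1, right - 1

    for i in range(len(s)):
        # Odd-length centers (i) and even-length centers (i, i+1)
        for lo, hi in (expand(i, i), expand(i, i + 1)):
            if hi - lo > end - start:
                start, end = lo, hi
    return s[start:end + 1]
```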

Task-Specific Techniques

1. Code Generation

```python
prompt = """
Generate a Python class for a REST API client with these requirements:

Specifications:
- Handle GET, POST, PUT, DELETE methods
- Automatic retry with exponential backoff
- JSON request/response handling
- Authentication header management

Include:
- Type hints
- Docstrings
- Error handling
- Usage examples
"""
```
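Retry with exponential backoff is the least obvious item on that list, so it helps to know what a correct answer should contain. A minimal sketch of the core mechanism (illustrative only, not a complete client):

```python
import time


def retry_with_backoff(func, max_retries: int = 3, base_delay: float = 1.0):
    """Call func(), retrying on exception with exponentially growing delays.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts,
    and re-raises the last exception once max_retries is exhausted.
    """
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

A production version would typically also cap the delay, add jitter, and retry only on transient errors (e.g. HTTP 429/5xx), which is exactly the kind of detail worth spelling out in the prompt.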

2. Code Review

```python
prompt = """
Review this code focusing on:

1. Security:
   - Input validation
   - Authentication/Authorization
   - Data protection

2. Performance:
   - Algorithm efficiency
   - Resource usage
   - Caching opportunities

3. Maintainability:
   - Code organization
   - Documentation
   - Test coverage

Provide specific recommendations with code examples.
"""
```

3. Technical Documentation

```python
prompt = """
Create documentation for this API endpoint:

Template:
## Endpoint Name
- **Method**: [HTTP Method]
- **Path**: [URL Path]
- **Description**: [Clear explanation]

### Request
- Headers:
  - [Required headers]
- Parameters:
  - [Query/Path parameters]
- Body:
  - [Request body schema]

### Response
- Success (200):
  - [Response schema]
- Errors:
  - [Possible error codes]

### Examples
- [Request/Response examples]
"""
```

Best Practices

1. Temperature Control

Adjust temperature based on the task:

  • Low (0.1-0.3): Factual responses, code generation
  • Medium (0.4-0.7): Creative suggestions, brainstorming
  • High (0.8-1.0): Creative writing, idea generation

```python
# Factual code review
response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[...],
    temperature=0.2,  # Low for precision
)

# Creative solution brainstorming
response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[...],
    temperature=0.7,  # Higher for creativity
)
```

2. Iterative Refinement

Break complex tasks into steps:

```python
# Step 1: High-level design
design_prompt = """
Provide a high-level system design for a real-time chat application.
Focus on main components and their interactions.
"""

# Step 2: Detailed component design
component_prompt = """
Based on the high-level design, detail the implementation of the
message handling service, including:
1. Data structures
2. API endpoints
3. Database schema
4. Error handling
"""

# Step 3: Implementation details
implementation_prompt = """
Provide the implementation for the message processing pipeline,
including code for:
1. Message validation
2. Persistence
3. Real-time delivery
4. Error recovery
"""
```
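Chaining these prompts means threading each step's output into the next request. A rough sketch, where `ask` stands in for whatever function sends a single prompt to the model and returns its reply:

```python
def run_pipeline(prompts: list[str], ask) -> list[str]:
    """Run prompts in sequence, feeding each answer into the next prompt as context."""
    context = ""
    outputs = []
    for prompt in prompts:
        # Prepend the previous step's answer so the model builds on it
        full_prompt = (context + "\n\n" + prompt).strip()
        answer = ask(full_prompt)
        outputs.append(answer)
        context = answer
    return outputs
```

In practice you might keep the full conversation history in a `messages` list instead of concatenating strings, but the principle is the same: each step sees the result of the one before it.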

3. Validation and Constraints

Include validation requirements:

```python
prompt = """
Generate a Python function that:
1. Takes a list of integers as input
2. MUST validate:
   - Input is non-empty
   - All elements are integers
   - Values are within range [-1000, 1000]
3. MUST raise ValueError with specific messages
4. MUST include type hints
5. MUST have comprehensive docstring
"""
```
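A function satisfying these constraints might look like the following. Since the prompt leaves the actual computation open, this sketch simply returns the validated list:

```python
def process_values(values: list[int]) -> list[int]:
    """Validate and return a list of integers.

    Raises:
        ValueError: if the list is empty, contains non-integer elements,
            or contains values outside the range [-1000, 1000].
    """
    if not values:
        raise ValueError("Input list must be non-empty")
    for v in values:
        # bool is a subclass of int in Python, so exclude it explicitly
        if not isinstance(v, int) or isinstance(v, bool):
            raise ValueError(f"All elements must be integers, got {v!r}")
        if not -1000 <= v <= 1000:
            raise ValueError(f"Value {v} is outside the range [-1000, 1000]")
    return list(values)
```

Having a reference implementation like this in mind makes it easier to judge whether the model's output actually honors every MUST in the prompt.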

Common Pitfalls

  1. Ambiguous Instructions

    • ❌ "Make it better"
    • ✅ "Optimize the function for memory usage by using generators instead of lists"
  2. Missing Context

    • ❌ "Fix the bug"
    • ✅ "Fix the race condition in the user authentication flow where..."
  3. Overloaded Prompts

    • ❌ Asking for too many things at once
    • ✅ Breaking down complex tasks into smaller, focused prompts
  4. Lack of Constraints

    • ❌ "Generate a sorting algorithm"
    • ✅ "Generate a sorting algorithm that must run in O(n log n) time and O(1) space"

Prompt Templates

1. Code Review Template

```python
code_review_template = """
Review the following code with these criteria:

1. Security
   - SQL injection risks
   - XSS vulnerabilities
   - Authentication checks
   - Input validation

2. Performance
   - Time complexity
   - Space complexity
   - Database query optimization
   - Caching opportunities

3. Maintainability
   - Code organization
   - Variable/function naming
   - Documentation
   - Test coverage

4. Best Practices
   - Design patterns
   - Error handling
   - Logging
   - Configuration management

Provide specific recommendations with code examples.

Code to review:
[CODE_TO_REVIEW]
"""
```
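Templates with `[PLACEHOLDER]` slots like this can be filled programmatically at call time. A minimal helper (hypothetical, not part of any library) might look like:

```python
def fill_template(template: str, **slots: str) -> str:
    """Substitute [NAME] placeholders in a prompt template with concrete values."""
    for name, value in slots.items():
        template = template.replace(f"[{name}]", value)
    return template


# Fill the review template with the code under review
review_prompt = fill_template(
    "Review the following code:\n[CODE_TO_REVIEW]",
    CODE_TO_REVIEW="def f():\n    return 1",
)
```

Plain `str.replace` keeps the template readable; for anything more elaborate (defaults, validation of unfilled slots), Python's `string.Template` is a reasonable next step.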

2. API Documentation Template

```python
api_doc_template = """
Create comprehensive API documentation:

# [API_ENDPOINT]

## Overview
- **Method**: [HTTP_METHOD]
- **Path**: [PATH]
- **Description**: [DESCRIPTION]

## Authentication
- Required: [YES/NO]
- Type: [AUTH_TYPE]

## Request
### Headers
[HEADERS_LIST]

### Parameters
[PARAMETERS_LIST]

### Body Schema
[REQUEST_BODY_SCHEMA]

## Response
### Success Response (200)
[SUCCESS_RESPONSE_SCHEMA]

### Error Responses
[ERROR_CODES_AND_DESCRIPTIONS]

## Examples
### Example Request
[REQUEST_EXAMPLE]

### Example Response
[RESPONSE_EXAMPLE]

## Notes
- Rate limits
- Caching behavior
- Special considerations
"""
```

3. Technical Design Template

```python
design_template = """
# [SYSTEM_NAME] Technical Design

## Overview
- Purpose
- Scope
- Assumptions
- Constraints

## System Architecture
- Components
- Interactions
- Data flow
- APIs

## Data Model
- Entities
- Relationships
- Schema

## Implementation Details
- Technologies
- Libraries
- Frameworks
- Development practices

## Security Considerations
- Authentication
- Authorization
- Data protection
- Compliance

## Performance Considerations
- Scalability
- Reliability
- Monitoring
- Metrics

## Deployment
- Infrastructure
- Configuration
- CI/CD
- Monitoring

## Timeline
- Phases
- Milestones
- Dependencies
"""
```

Testing and Iteration

Always test prompts with different inputs and refine based on:

  1. Output quality
  2. Consistency
  3. Error handling
  4. Edge cases
  5. Performance

Use this feedback loop to continuously improve your prompts:

```python
# Assumes `client` is configured as shown earlier and that
# `evaluate_response` (a response-vs-expectation checker) is defined elsewhere.
def test_prompt(prompt: str, test_cases: list) -> list:
    """Run a system prompt against multiple test cases and collect results."""
    results = []
    for test in test_cases:
        response = client.chat.completions.create(
            model="llama-3.3-70b",
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": test["input"]},
            ],
            temperature=test.get("temperature", 0.7),
        )
        content = response.choices[0].message.content
        results.append({
            "test_case": test["input"],
            "expected": test["expected"],
            "actual": content,
            "success": evaluate_response(content, test["expected"]),
        })
    return results
```
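The harness above calls an `evaluate_response` function that is not defined in this guide. A minimal stand-in, assuming each test case's `expected` value is a list of keywords the response must contain, could be:

```python
def evaluate_response(actual: str, expected: list[str]) -> bool:
    """Naive pass/fail check: the response must mention every required keyword."""
    return all(keyword.lower() in actual.lower() for keyword in expected)
```

Keyword matching is a crude metric; for serious evaluation you would compare against structured expectations or use a second model as a grader, but even this simple check catches regressions when you edit a prompt.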

Conclusion

Effective prompt engineering is a crucial skill for working with LLMs. Remember:

  • Be clear and specific
  • Provide relevant context
  • Use appropriate techniques for the task
  • Test and iterate
  • Consider edge cases
  • Document successful patterns