Language Models Overview
BrilliantAI offers a range of state-of-the-art language models for different use cases, balancing output quality against cost and latency.
Available Models
DeepSeek-R1
- Parameters: 671B (Mixture-of-Experts; roughly 37B active per token)
- Context Length: 128K tokens
- Pricing: $0.00 per 1M tokens
- Best for: Advanced reasoning, complex problem-solving, and tasks requiring deep analytical thinking
llama-3.3-70b
- Parameters: 70B
- Context Length: 8K tokens
- Best for: Production applications requiring high-quality outputs and strong reasoning capabilities
llama-3.2-8b
- Parameters: 8B
- Context Length: 8K tokens
- Best for: Cost-effective deployments with good performance on general tasks
mistral-nemo
- Parameters: 12B
- Context Length: 8K tokens
- Best for: General-purpose text generation with excellent quality-to-cost ratio
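Models are typically called through a chat-style completions API. The snippet below is a minimal sketch, assuming BrilliantAI exposes an OpenAI-compatible endpoint; the base URL and the environment variable name are placeholders for illustration, so substitute the values from your account settings.

```python
# Minimal sketch: calling a hosted model through an assumed
# OpenAI-compatible chat completions endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.brilliantai.example/v1",   # hypothetical base URL
    api_key=os.environ["BRILLIANTAI_API_KEY"],       # hypothetical env var name
)

response = client.chat.completions.create(
    model="llama-3.2-8b",  # any model name listed above
    messages=[
        {"role": "user", "content": "Summarize the trade-offs between large and small models."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

Swapping models is just a matter of changing the `model` string, which makes it easy to benchmark several of the options above against your own workload before committing to one.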
Model Selection Guide
When choosing a model, consider the following (a simple selection sketch follows the list):
- Task Complexity: More complex tasks generally benefit from larger models
- Budget Constraints: Smaller models offer lower costs with reasonable quality
- Latency Requirements: Smaller models typically have lower inference latency
- Context Length Needs: Consider how much context your application requires
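These trade-offs can be reduced to a rough rule of thumb. The helper below is purely illustrative: the thresholds and the mapping from requirements to model names are assumptions based on the specifications listed above, not official BrilliantAI guidance.

```python
# Illustrative helper mapping rough requirements onto the models listed above.
# Thresholds and choices are assumptions, not official recommendations.
def pick_model(task_complexity: str, max_context_tokens: int, latency_sensitive: bool) -> str:
    if max_context_tokens > 8_000:
        # Only DeepSeek-R1 in this lineup offers a 128K context window.
        return "DeepSeek-R1"
    if task_complexity == "high":
        # Prefer the largest reasoning model unless latency matters.
        return "llama-3.3-70b" if latency_sensitive else "DeepSeek-R1"
    if latency_sensitive:
        # Smallest model for the lowest inference latency.
        return "llama-3.2-8b"
    # Good default for general-purpose generation on a budget.
    return "mistral-nemo"


print(pick_model("high", 4_000, latency_sensitive=True))  # -> llama-3.3-70b
```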
For more details on specific use cases, see the LLM Inference documentation.
Getting Started
To start using our language models, check out: