| Criterion | OpenAI | DeepSeek |
| --- | --- | --- |
| Token Limitations | - GPT-3.5: 4,096 tokens (input + output).<br>- GPT-4: 8k tokens (input + output) for the standard version and 32k tokens for the "Extended" version.<br>- Limits cover input and output tokens combined (see the token-counting sketch after the table). | - Most DeepSeek models have a 2,048-token limit (input + output).<br>- Some models support up to 4,096 tokens, which is still lower than OpenAI's limits. |
| Hallucination | - Frequent, especially when the information is not explicitly present in the training data.<br>- OpenAI regularly updates its models to reduce hallucinations, but they remain an issue, especially on lesser-known topics. | - Also present, especially on non-factual or ambiguous tasks.<br>- DeepSeek is working on models designed to be more factual, but 100% accuracy is not guaranteed; hallucinations can be more frequent in lower-capacity models. |
| API Batching | - Full support for batching via the API.<br>- Multiple requests can be sent simultaneously, optimizing performance for users with a high volume of requests (see the concurrent-request sketch after the table).<br>- The API can process multiple calls in a single request, reducing latency costs and improving efficiency. | - Batching is supported, but less flexible than OpenAI's; batch-management capabilities may depend on the model or subscription plan.<br>- The maximum number of requests per batch may be more limited than OpenAI's, but the service can still handle bulk tasks. |
| Rate Limit | - Limits depend on the subscription: paid users may get roughly 20 to 80 requests per minute depending on the plan (see the retry sketch after the table).<br>- Monthly usage limits may also apply, with adjustments available for premium users. | - Rate limits depend on the subscription plan; users may have monthly quotas or per-minute/per-hour limits.<br>- Limits are often stricter for free-tier users, while DeepSeek offers more flexible options for premium subscribers. |
| Price for 1,000 Tokens | - GPT-3.5: approximately $0.0020 per 1,000 tokens (standard API pricing).<br>- GPT-4: around $0.03 per 1,000 tokens for the 8k model (pricing varies by model and volume).<br>- Prices may increase for high-volume usage (see the cost-estimate sketch after the table). | - Cheaper than OpenAI for models of comparable capability: roughly $0.0015 to $0.02 per 1,000 tokens depending on the model and volume.<br>- DeepSeek aims to keep the cost per token low, which is an advantage for low-budget users. |
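Since the limits above count input and output together, it helps to measure a prompt before sending it. Below is a minimal sketch using the `tiktoken` package; the 4,096-token budget is the GPT-3.5 figure quoted in the table, not a value fetched from the API, and the reserved output size is an arbitrary example.

```python
# Minimal sketch: check a prompt against a shared input+output token budget.
import tiktoken

MODEL = "gpt-3.5-turbo"
CONTEXT_LIMIT = 4096        # GPT-3.5 limit quoted in the table (input + output)
MAX_OUTPUT_TOKENS = 500     # room we want to reserve for the model's reply

def fits_in_context(prompt: str) -> bool:
    """Return True if the prompt leaves enough room for the expected output."""
    encoding = tiktoken.encoding_for_model(MODEL)
    prompt_tokens = len(encoding.encode(prompt))
    return prompt_tokens + MAX_OUTPUT_TOKENS <= CONTEXT_LIMIT

if __name__ == "__main__":
    print(fits_in_context("Summarize the differences between OpenAI and DeepSeek."))
```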
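For the batching row, here is a concurrency sketch: it sends several chat requests simultaneously with the official `openai` Python client (v1.x) and `asyncio`, rather than using OpenAI's dedicated file-based Batch endpoint. The DeepSeek base URL and the `deepseek-chat` model name are assumptions based on DeepSeek's OpenAI-compatible API; swap in whichever provider and model you actually use.

```python
# Minimal sketch: fire several chat completions concurrently instead of sequentially.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="YOUR_API_KEY",
    # base_url="https://api.deepseek.com",  # assumed DeepSeek endpoint (OpenAI-compatible)
)

async def complete(prompt: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",  # or "deepseek-chat" (assumed name) when targeting DeepSeek
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def main() -> None:
    prompts = [
        "Translate 'hello' into French.",
        "Give one example of a SQL JOIN.",
        "What is a token in an LLM context?",
    ]
    # gather() runs all requests at the same time, which is what makes this "batching"
    answers = await asyncio.gather(*(complete(p) for p in prompts))
    for prompt, answer in zip(prompts, answers):
        print(prompt, "->", answer)

if __name__ == "__main__":
    asyncio.run(main())
```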
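When the per-minute limits in the rate-limit row are hit, the API returns a 429 error. A common pattern on both providers is exponential backoff; the sketch below catches `RateLimitError` from the `openai` v1.x client, and the retry counts and delays are arbitrary illustration values, not provider recommendations.

```python
# Minimal sketch: retry a request with exponential backoff when rate-limited.
import time
from openai import OpenAI, RateLimitError

client = OpenAI(api_key="YOUR_API_KEY")

def complete_with_retry(prompt: str, max_retries: int = 5) -> str:
    delay = 1.0
    for _ in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            # 429 from the API: wait, then retry with a doubled delay
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Still rate-limited after retries")

if __name__ == "__main__":
    print(complete_with_retry("Explain rate limiting in one sentence."))
```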
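Finally, a small cost-estimate sketch built only on the per-1,000-token prices quoted in the pricing row. These are the table's figures, not live pricing, so check each provider's pricing page before relying on them.

```python
# Minimal sketch: estimate the USD cost of a request from the table's prices.
PRICE_PER_1K = {
    "gpt-3.5": 0.0020,       # ~$0.0020 per 1,000 tokens (table figure)
    "gpt-4-8k": 0.03,        # ~$0.03 per 1,000 tokens (table figure)
    "deepseek-low": 0.0015,  # lower bound of the quoted DeepSeek range
    "deepseek-high": 0.02,   # upper bound of the quoted DeepSeek range
}

def estimate_cost(model: str, total_tokens: int) -> float:
    """Cost in USD for total_tokens (input + output) on the given model."""
    return PRICE_PER_1K[model] * total_tokens / 1000

if __name__ == "__main__":
    # Example: a 1,500-token exchange (prompt + reply)
    for model in PRICE_PER_1K:
        print(f"{model}: ${estimate_cost(model, 1500):.4f}")
```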