Temperature
A parameter that controls the randomness of AI model outputs, with lower values producing more deterministic responses and higher values producing more creative ones.
How It Works
Temperature ranges from 0 to 2 in most APIs. At temperature 0, the model always picks the most probable next token (deterministic, good for factual tasks). At temperature 1, outputs are more varied and creative. At temperature 2, outputs become chaotic. For production apps: use 0-0.3 for data extraction and classification, 0.5-0.7 for balanced generation, 0.8-1.0 for creative writing. Note: GPT-5-mini does not support the temperature parameter.
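The effect described above can be sketched with plain softmax math. This is a minimal illustration of how temperature reshapes next-token probabilities, not any provider's actual sampling code; the logit values are made up:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random). Temperature 0 is a
    special case most APIs treat as pure argmax, since dividing by
    zero is undefined.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic: top token dominates
hot = softmax_with_temperature(logits, 1.5)   # flatter: more chance of varied picks
```

At temperature 0.2 the top token gets almost all the probability mass, while at 1.5 the same logits yield a much flatter distribution — which is exactly why low temperatures feel deterministic and high ones feel creative.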
Common Use Cases
- Controlling output consistency
- Creative content generation
- Factual question answering
- A/B testing output quality
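The production ranges from the section above can be encoded as starting-point defaults. The task names and fallback value here are illustrative assumptions, not part of any SDK:

```python
# Illustrative starting temperatures per task type, based on the
# production ranges above. Names and values are our assumptions.
DEFAULT_TEMPERATURES = {
    "extraction": 0.0,      # 0-0.3: data extraction
    "classification": 0.2,  # 0-0.3: classification
    "balanced": 0.6,        # 0.5-0.7: balanced generation
    "creative": 0.9,        # 0.8-1.0: creative writing
}

def pick_temperature(task: str) -> float:
    # Fall back to a middle-of-the-road setting for unknown tasks.
    return DEFAULT_TEMPERATURES.get(task, 0.7)
```

Treat these as starting points for A/B testing rather than fixed rules — the right value depends on the model and the prompt.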
Related Terms
Large Language Model (LLM)
A neural network trained on massive text datasets that can generate, understand, and reason about human language.
Prompt Engineering
The practice of crafting effective instructions for AI models to produce desired outputs consistently.
Inference
The process of running a trained AI model to generate predictions or outputs from new inputs, as opposed to training the model.
Need help implementing Temperature?
AI 4U Labs builds production AI apps in 2-4 weeks. We use Temperature in real products every day.