Fallacies of LLMs
Inspired by the Fallacies of Distributed Computing, here are some common misconceptions about Large Language Models (LLMs):
1. The model knows facts.
LLMs don’t “know” facts – they generate plausible text based on patterns in their training data. They may produce correct answers, but correctness is never guaranteed.
2. Bigger models always mean better results.
Scale helps, but alignment, fine-tuning, prompting, and context management matter just as much.
3. The AI knows when it’s wrong.
Most models are poorly calibrated: they often assert incorrect answers with high confidence (see the calibration sketch after this list).
4. The training data is unbiased.
Training corpora reflect human biases, errors, and gaps – and so do outputs.
5. The model will stay up-to-date.
Without retrieval or retraining, an LLM is stuck at its knowledge cutoff and is unaware of anything that happened after it (see the retrieval sketch after this list).
6. The model is consistent.
The same query can yield different results depending on wording, surrounding context, or sampling randomness (temperature) – see the sampling sketch after this list.
7. Prompting is easy.
Small changes in phrasing, structure, or ordering can drastically affect outputs. Prompt engineering is non-trivial.
8. The model can explain itself.
Explanations generated by the model are themselves predictions – not ground-truth insights into its internal reasoning.
9. LLMs are cheap to run.
Training is expensive, and per-token inference costs add up quickly at scale (see the back-of-envelope estimate after this list).
10. More context always helps.
Long context windows don’t guarantee better performance. Models may still overlook or distort information buried in the middle of a long prompt, or hallucinate.
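A few minimal Python sketches make some of these points concrete. For fallacy 3, calibration can be measured by comparing a model’s stated confidence to its observed accuracy. The sketch below computes a rough Expected Calibration Error over hypothetical (confidence, correctness) pairs – the data and function name are made up for illustration, not taken from any particular system.

```python
# Expected Calibration Error (ECE): a well-calibrated model that says "90% sure"
# should be right about 90% of the time. All data here is hypothetical.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between stated confidence and observed accuracy."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        accuracy = sum(correct[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / n) * abs(avg_conf - accuracy)
    return ece

# Hypothetical answers: high stated confidence, mediocre accuracy.
confs = [0.95, 0.9, 0.9, 0.85, 0.8, 0.7, 0.6, 0.55]
right = [1,    0,   1,   0,    0,   1,   0,   1]
print(f"ECE = {expected_calibration_error(confs, right):.2f}")
```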
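For fallacy 5, retrieval-augmented prompting works around the knowledge cutoff by looking up current documents and placing them in the prompt. The sketch below uses a toy keyword-overlap retriever and a made-up corpus; real systems use embeddings and a vector store, and the final call to an LLM API is left as a comment.

```python
# Minimal retrieval-augmented prompting sketch (toy retriever, hypothetical corpus).

def score(query: str, doc: str) -> int:
    """Naive keyword-overlap score; real systems use embeddings / vector search."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    top_docs = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top_docs)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

corpus = [  # hypothetical documents fetched from an up-to-date source
    "The 2025 release notes list a new streaming API.",
    "The legacy batch endpoint was deprecated in 2023.",
    "Pricing tiers were last updated in January 2025.",
]
prompt = build_prompt("What changed in the 2025 release?", corpus)
print(prompt)  # this prompt would then be sent to an LLM chat/completions endpoint
```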
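For fallacy 6, the sketch below shows how sampling temperature turns the same next-token logits into different outputs: at temperature 1.0 the draws vary, at 0.1 they are nearly deterministic. The logits are invented for illustration; real vocabularies contain tens of thousands of tokens.

```python
# Temperature sampling over hypothetical next-token logits.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Softmax over logits scaled by 1/temperature, then a random draw."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exps.values())
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

logits = {"Paris": 3.0, "Lyon": 2.2, "Berlin": 1.5}  # hypothetical
print([sample_next_token(logits, temperature=1.0) for _ in range(5)])  # varied
print([sample_next_token(logits, temperature=0.1) for _ in range(5)])  # almost always "Paris"
```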
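For fallacy 9, a back-of-envelope estimate shows how per-token prices compound with traffic. Every number below is a hypothetical placeholder – substitute your provider’s actual pricing and your own traffic figures.

```python
# Back-of-envelope monthly inference cost. All prices and volumes are hypothetical.
PRICE_PER_1K_INPUT_TOKENS = 0.01   # USD, placeholder
PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # USD, placeholder

def monthly_cost(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return per_request * requests_per_day * 30

# e.g. 50,000 requests/day, 1,500 prompt tokens and 500 completion tokens each
print(f"~${monthly_cost(50_000, 1_500, 500):,.0f} per month")  # ~$45,000 with these placeholders
```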