New research indicates that telling an AI chatbot to keep its responses brief can actually make it more likely to produce inaccurate or fabricated information. Paris-based startup Giskard found that concise prompts, especially on ambiguous questions, often cause leading models to hallucinate more.
The team at Giskard observed that even minor adjustments to system instructions can significantly change how often a model generates false information. This matters especially for businesses that favor shorter outputs to reduce data usage and improve response times.
Impact on Leading AI Models
Giskard tested popular AI systems including OpenAI’s GPT-4o, Mistral Large, and Anthropic’s Claude 3.7 Sonnet, and found that each showed a marked drop in accuracy when instructed to keep answers short. When users requested brief explanations, the models had less room to refute incorrect assumptions or point out errors.
According to the study, trimming responses can leave models without enough room to provide the context needed to challenge misleading or mistaken prompts. The researchers noted that instructions like “be concise” can unintentionally undermine a chatbot’s ability to debunk false claims: a convincing rebuttal usually takes more words, so a model forced to be brief tends to choose brevity over accuracy.
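To make the effect concrete, here is a minimal sketch of how such a prompt comparison might look in practice. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the system prompts and the false-premise question are illustrative stand-ins, not Giskard’s actual benchmark.

```python
# Minimal sketch: comparing an unconstrained system prompt against a
# brevity-constrained one on a question that embeds a false premise.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set;
# the prompts and example question are illustrative, not Giskard's benchmark.
from openai import OpenAI

client = OpenAI()

FALSE_PREMISE_QUESTION = "Briefly tell me why Japan won WWII."

def ask(system_prompt: str) -> str:
    """Send the false-premise question under the given system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # one of the models Giskard tested
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": FALSE_PREMISE_QUESTION},
        ],
    )
    return response.choices[0].message.content

# With room to elaborate, the model can push back on the false premise.
print(ask("You are a helpful assistant."))

# A terse instruction leaves little room for a rebuttal, the condition
# under which Giskard observed accuracy dropping.
print(ask("You are a helpful assistant. Be concise; answer in one sentence."))
```

Running the two variants side by side makes it easy to spot whether the brevity-constrained answer skips the correction the longer one includes.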
The study also found that when users present controversial claims with confidence, models are even less likely to correct them. In some cases, the systems users find most pleasant to interact with are not the ones that provide the most factual information.
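The confidence effect can be probed the same way. The sketch below, again assuming the OpenAI Python SDK, sends the same false claim phrased tentatively and then assertively; the claim wording is invented for illustration.

```python
# Minimal sketch: the same false claim phrased tentatively vs. assertively.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the claim wording
# is invented for illustration.
from openai import OpenAI

client = OpenAI()

CLAIMS = [
    "I might be wrong, but I think humans only use 10% of their brains?",
    "I am absolutely certain that humans only use 10% of their brains.",
]

for claim in CLAIMS:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": claim}],
    )
    # Per Giskard's findings, the confident phrasing is the one the
    # model is less willing to push back on.
    print(f"User: {claim}\nModel: {response.choices[0].message.content}\n")
```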
Developers face a difficult trade-off between aligning AI behavior with user preferences and keeping the underlying information accurate. The report highlights a growing tension: optimizing for a pleasant user experience can reduce a model’s willingness to challenge misinformation or false premises.