Smart, Smooth, and Sometimes Dangerously Wrong: AI’s Hidden Risks in Medicine
Millions of people and thousands of clinicians are beginning to use general-purpose AI tools (such as ChatGPT, Grok, Gemini, and others) for medical questions and image interpretation. Yet new case reports and peer-reviewed studies show these systems can confidently produce convincing but false medical information, in some cases directly misleading patients and contributing to harm.