UTHSC In the Media


Smart, Smooth, and Sometimes Dangerously Wrong: AI’s Hidden Risks in Medicine

As millions of people and thousands of clinicians begin using general-purpose AI tools (such as ChatGPT, Grok, Gemini, and others) for medical questions and image interpretation, new case reports and peer-reviewed studies show these systems can confidently produce convincing but false medical information — in some cases directly misleading patients and contributing to harm.


UT Health Science Center Cares for Memphis and Communities Across Tennessee

Doctoral student Ishita Kathuria was inspired by her family's history of heart disease to pursue cardiovascular research.


Second Dose Boosts Shingles Protection in Adults Aged 65 Years or Older

The recombinant herpes zoster vaccine is effective in older adults regardless of a patient’s immunocompromised status, based on data from more than 3 million adults aged 65 years or older.


Grant Studied How Bulimia Affects Teeth

A recent graduate of the University of Arkansas at Little Rock used a $4,000 grant to study how bulimia affects teeth.