Original Article

Performance of Large Language Models on Diagnostic Radiology Board–Style Questions: A Comparative Evaluation of GPT-4o, Perplexity AI, and OpenEvidence

Objective: The objective of this study was to compare the diagnostic accuracy and internal consistency of GPT-4o (Generative Pre-Trained Transformer-4 omni), Perplexity AI (artificial intelligence), and OpenEvidence when applied to text-based, specialty-level radiology board questions. Methods: A total of 161 text-based multiple-choice questions from the American College of Radiology (ACR)…

Original Article

The Artist versus the Machine: Evaluating ChatGPT Efficacy in Antimicrobial Management for Pediatric Traumatic Wounds

Objectives: To evaluate the efficacy of Chat Generative Pre-Trained Transformer (ChatGPT) in generating antimicrobial management recommendations for pediatric patients with contaminated traumatic wounds, particularly in the absence of infectious disease (ID) consultation. Methods: Three pediatric cases involving severely contaminated traumatic injuries were retrospectively presented to ChatGPT-4, including clinical data such…

Original Article

Performance of Chat Generative Pre-Trained Transformer on Personal Review of Learning in Obstetrics and Gynecology

Objectives: Chat Generative Pre-Trained Transformer (ChatGPT) is a popular natural-language processor that is able to analyze and respond to a variety of prompts, providing eloquent answers based on a collection of Internet data. ChatGPT has been considered an avenue for the education of resident physicians in the form of board…

Original Article

Comparison of the Usability and Reliability of Answers to Clinical Questions: AI-Generated ChatGPT versus a Human-Authored Resource

Objectives: Our aim was to compare the usability and reliability of answers to "real-world" clinical questions raised during patient care, as provided by Chat Generative Pre-Trained Transformer (ChatGPT) versus a human-authored Web source (www.Pearls4Peers.com). Methods: Two domains of clinical information quality were…
