Anyone who has been to the emergency room knows the wait times before a bed becomes available for a possible hospital admission for further care. Now, two studies show that using OpenAI’s GPT-4 has the potential to help emergency room providers determine which patients need the most urgent treatment and which patients will ultimately require hospitalization.
In the first study, published in JAMA Network Open, researchers at the University of California, San Francisco, fed 10,000 pairs of patient information from recent emergency room visits into GPT-4 to see if the AI tool could identify which patient had the more serious condition. Each pair included one patient with a serious condition, such as a stroke, and another with a less urgent need, such as a broken wrist. The AI correctly selected the patient with the more serious condition 89 percent of the time. A subset of 500 pairs of patient information was then assessed by both GPT-4 and doctors. The result? GPT-4 was accurate 88 percent of the time, a slight edge over doctors at 86 percent.
The tool could help clinicians allocate their time efficiently and serve as a decision-making aid, study author Dr. Christopher Williams said in a UCSF article.
“Imagine two patients needing to be transported to the hospital but there’s only one ambulance,” the article reads, “or a doctor on duty and there are three people calling him at the same time and he has to decide who to answer first.” However, Williams noted that it’s not yet ready for responsible use in an emergency room without further validation and clinical trials, as well as efforts to eliminate racial and gender bias.
In a second study, published in the Journal of the American Medical Informatics Association, researchers at the Icahn School of Medicine at Mount Sinai found that GPT-4 also has the potential to predict which emergency room patients will be admitted to the hospital.