Glass Health is building an AI for suggesting medical diagnoses


While a medical student at UC San Francisco, Dereck Paul grew worried that medical software was advancing far more slowly than software in industries such as banking and aerospace. He came to believe that patients are best served when doctors have access to cutting-edge software, and he dreamed of founding a company that would put the needs of patients and doctors ahead of those of hospital administrators or insurance companies.

In 2021, Paul teamed up with Graham Ramsey, a friend and engineer at the women’s health startup Modern Fertility, to found Glass Health. Glass Health offers a notebook that doctors can use to study and practice medicine, which Ramsey describes as a “personal knowledge management system” for storing, organizing, and sharing their approaches to diagnosing and treating conditions throughout their careers.

Early adoption by doctors, nurses, and doctors-in-training on social media, particularly X (formerly Twitter), helped Glass Health raise its first funding in 2022: a $1.5 million pre-seed round led by Breyer Capital. Glass Health was then accepted into Y Combinator’s Winter 2023 batch. Early this year, however, Paul and Ramsey pivoted the company toward generative AI.

Glass Health now offers an AI tool, built on a large language model akin to the one behind OpenAI’s ChatGPT, that helps clinicians reach diagnoses and evidence-based treatment options. When a doctor enters a description such as “A 71-year-old male with a history of myocardial infarction experiences subacute progressive dyspnea on exertion” or “A 65-year-old woman with a history of diabetes and hyperlipidemia presents with sudden chest pain and sweating,” Glass Health’s AI returns a probable diagnosis and a clinical plan.

Paul explained, “Clinicians input a patient summary, also referred to as a problem representation, encompassing pertinent patient demographics, medical history, manifestations, and details from laboratory and radiology examinations relevant to the patient’s condition—essentially the information they would typically use to present a case to another healthcare professional.” Subsequently, Glass Health’s system scrutinizes this patient summary and suggests five to ten potential diagnoses that the clinician may wish to consider and further explore.

Glass Health can also draft a case assessment paragraph explaining any potentially relevant diagnostic studies for clinicians to review. These explanations can be shared with the wider Glass Health community, or edited and used in clinical notes and records.
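Glass Health hasn’t published how its system is implemented, but the workflow Paul describes maps onto a familiar pattern: send a structured patient summary to an LLM with instructions to return a ranked differential and a brief case assessment. The sketch below illustrates that general pattern using OpenAI’s Python SDK as a stand-in backend; the model name, prompt wording, and output format are my assumptions, not Glass Health’s.

```python
# Illustrative sketch only: Glass Health has not published its implementation.
# Uses OpenAI's Python SDK as a stand-in LLM backend; the model name, prompt
# wording, and output format are assumptions, not Glass Health's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a clinical decision-support assistant. Given a patient summary "
    "(problem representation), list 5-10 differential diagnoses ranked by "
    "likelihood, then add a brief case assessment paragraph suggesting "
    "relevant diagnostic studies. You assist clinicians; your output is a "
    "draft for their review, not a final diagnosis."
)

def draft_differential(patient_summary: str) -> str:
    """Send a problem representation to the LLM and return its draft output."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_summary},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    summary = (
        "A 71-year-old male with a history of myocardial infarction "
        "experiences subacute progressive dyspnea on exertion."
    )
    print(draft_differential(summary))
```

A production system would of course add layers this sketch omits, such as output validation and mandatory clinician sign-off before anything reaches a chart.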

[Image: Glass Health. Source: Techcrunch.com]

Glass Health’s technology sounds useful in theory. In practice, however, even state-of-the-art LLMs have repeatedly proven poor at offering health advice.

Babylon Health, an AI firm backed by the United Kingdom’s National Health Service, has come under fire repeatedly for claiming that its disease-diagnosing technology can outperform doctors.

In one disastrous experiment, the National Eating Disorders Association (NEDA) and the AI startup Cass launched a chatbot to support people with eating disorders. After an upgrade added generative AI, the chatbot began repeating harmful “diet culture” advice, such as recommending calorie restriction, prompting NEDA to shut the tool down.

Health News recently hired a medical expert to assess the validity of ChatGPT’s health advice across a wide range of topics. The expert found that the chatbot misrepresented recent research, made false claims (for example, that “wine might prevent cancer” and that prostate cancer screening should be based on “personal values”), and copied text from other health news sources.

In a more forgiving piece in Stat News, a research assistant and two Harvard professors found that ChatGPT included the correct diagnosis among its top three options in 39 of 45 vignettes. The vignettes, however, were of the kind typically used to test medical students, and the researchers cautioned that they may not reflect how people, particularly those for whom English is a second language, describe their symptoms in the real world.

Paul stressed several times during our conversation that Glass Health’s AI is meant to suggest possible diagnoses, and that its outputs shouldn’t be treated as definitive or prescriptive. My theory about the unstated reason: if they were, Glass Health would face far more intense legal scrutiny, and possibly FDA regulation.

Paul isn’t the only one hedging. Google, which is testing the medically focused language model Med-PaLM 2, has been careful in its marketing materials to avoid implying that the model can replace the expertise of healthcare professionals in a clinical setting. So has Hippocratic, a startup building an LLM specifically tuned for healthcare applications (though not for diagnosing).

Still, Paul contends that Glass Health’s approach gives it “fine control” over its AI’s outputs and directs them to “reflect state-of-the-art medical knowledge and guidelines.” Part of that strategy involves collecting user data to improve Glass’s underlying LLMs, a move that may not sit well with every patient. Glass intends to put its $6.5 million in funding toward the creation, evaluation, and updating of clinical guidelines for the platform by physicians, as well as toward AI development and general R&D. According to Paul, Glass has a four-year runway.
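Paul hasn’t detailed how that “fine control” is achieved. One common technique for steering an LLM toward vetted references is retrieval-augmented prompting: fetch relevant guideline excerpts and instruct the model to ground its answer in them. The sketch below illustrates that general idea under my own assumptions; the guideline store, retrieval logic, and prompts are hypothetical, and nothing here reflects Glass Health’s actual architecture.

```python
# Hypothetical sketch: Glass Health has not disclosed how it encodes guidelines
# or constrains outputs. Retrieval-augmented prompting is one common way to
# ground an LLM in vetted references; everything below is assumed.
from openai import OpenAI

client = OpenAI()

# Stand-in guideline store. A real system would use a curated, versioned
# corpus with embedding-based search rather than naive keyword matching.
GUIDELINES = {
    "chest pain": "Suspected ACS: ECG within 10 minutes, serial troponins.",
    "dyspnea": "Suspected heart failure: BNP, chest X-ray, echocardiogram.",
}

def retrieve_guidelines(summary: str) -> list[str]:
    """Naive keyword retrieval over the guideline snippets."""
    return [text for key, text in GUIDELINES.items() if key in summary.lower()]

def grounded_differential(summary: str) -> str:
    """Ask the LLM for a differential grounded in the retrieved excerpts."""
    context = "\n".join(retrieve_guidelines(summary)) or "No guideline matched."
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {
                "role": "system",
                "content": (
                    "Base your differential and workup on the guideline "
                    "excerpts provided; flag anything they do not cover."
                ),
            },
            {
                "role": "user",
                "content": f"Guidelines:\n{context}\n\nCase:\n{summary}",
            },
        ],
    )
    return response.choices[0].message.content
```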

