ChatGPT is not your cancer doctor

The war against ChatGPT in the workplace continues

AI 4 Healthcare Newsletter

Welcome to our newsletter, a meticulously curated platform intended to deliver the most recent news, breakthroughs, and published research from the exciting crossroads of AI and Healthcare.

As summer fades away, the AI world keeps bustling with exciting happenings. Dive in for some intriguing insights! 😁 

AI News in Healthcare

Forbes. Inside Google's Plans To Fix Healthcare With Generative AI. A Nashville hospital employed Med-PaLM 2, a Google-developed large language model (similar to GPT, which powers ChatGPT) specifically trained on medical data. It was used to summarize physician notes shared at the end of shifts—an intriguing application of generative AI in healthcare.

Microsoft and Epic expand AI collaboration to accelerate generative AI’s impact in healthcare, addressing the industry’s most pressing needs. Epic, one of the largest electronic health record (EHR) vendors in the world, will use Microsoft's generative AI to help physicians transcribe and summarize their notes. Why this is important: I would love for AI to write my patients' notes 😅 so I can have more time to spend with them.

Healthcare IT News. Where generative AI can make headway in healthcare. Another article highlighting the use of large language models for physicians' note-taking and summarization. I believe this application has even broader potential to enhance productivity across many clinical workflows, not just note-taking and summarization.

AI News in the World

The hottest news this summer: OpenAI introduced ChatGPT Enterprise. What does that mean? Companies will get: 1) unlimited access to GPT-4, 2) higher speed (up to 2x faster), 3) 4x longer context windows for prompts and files, 4) free OpenAI API credits, and 5) the ability for teams to share helpful chats and prompts. Why this is important: ChatGPT Enterprise is SOC 2 compliant and does not use your data to train OpenAI's models. However, be careful: OpenAI is facing criticism over allegations of creative plagiarism, a matter that might concern certain companies.

ChatGPT-maker OpenAI accused of string of data protection breaches in GDPR complaint filed by privacy researcher. The complaint alleges that the U.S.-based AI giant, OpenAI, is violating the European Union's General Data Protection Regulation (GDPR) on several fronts. It asserts that OpenAI breaches EU privacy rules in areas such as lawful basis, transparency, fairness, data access rights, and privacy by design, which correspond to Articles 5(1)(a), 12, 15, 16, and 25(1) of the GDPR. Why this is important: There is increasing public awareness regarding the ethical and regulatory considerations surrounding the use of ChatGPT, particularly in healthcare, where data privacy, fairness, and bias are of utmost importance.

Disney, The New York Times and CNN are among a dozen major media companies blocking access to ChatGPT as they wage a cold war on A.I. Several companies, particularly those in media and content generation, are restricting their employees' access to ChatGPT. Why this is important: This move raises questions about its effectiveness, given the availability of alternative chatbots like Bard and Claude. Additionally, employees can access these bots from personal email accounts. An alternative and potentially more effective approach could involve educating employees on responsible use of this technology. 

Walmart will give 50,000 office workers a generative AI app. Walmart is announcing a program that will give its roughly 50,000 non-store employees access to a generative AI app trained on corporate information. Why this is important: Instead of prohibiting employees from using generative AI tools that can enhance their productivity, companies can offer them sanctioned tools trained on their own data, so employees are empowered and the technology is used safely, with minimal risk.

US curbs AI chip exports from Nvidia and AMD to some Middle East countries. The U.S. has broadened export restrictions on advanced artificial intelligence chips, including those from Nvidia and AMD, extending beyond China to encompass additional regions, including certain Middle Eastern countries. Why this is important: The US is striving to establish and maintain global leadership in AI. In this century, the countries that leverage AI to its fullest potential will emerge as the winners.

Baidu, SenseTime Among First Firms to Win China AI Approval. China is on the verge of approving its first public generative AI services, allowing its tech giants like Baidu, Alibaba, and Tencent to compete with leading U.S.-based models. This move comes shortly after China introduced its initial AI regulations, signaling strong support for the industry. Baidu's Ernie Bot is poised to be the first to receive government approval. While U.S. semiconductor export sanctions may influence the competition, Baidu claims Ernie already outperforms ChatGPT in certain aspects. Why this is important: This development marks the rise of a new AI powerhouse, but its immediate impact on current leaders remains uncertain.

Impactful Publications

New England Journal of Medicine. Considering Biased Data as Informative Artifacts in AI-Assisted Health Care. A very interesting review article that sheds light on a very important topic in AI and healthcare: bias in the data. Here are three takeaways: 1) The review emphasizes the importance of data and bias in healthcare, particularly in the context of AI development. 2) It challenges the notion that AI-related harms are solely a data-bias issue, calling for a broader perspective that considers historical and social contexts. 3) It suggests that treating healthcare data as artifacts can lead to a more comprehensive approach, benefiting public health by addressing population inequities and discovering new ways for AI to detect relevant data patterns.

JAMA Oncology. Use of Artificial Intelligence Chatbots for Cancer Treatment Information. In this study, the authors used ChatGPT 3.5 to answer queries about treatment for specific cancer diagnoses. They created four zero-shot prompt templates for soliciting treatment recommendations and evaluated the chatbot's recommendations against the 2021 NCCN guidelines. To determine concordance with the NCCN guidelines, three of four board-certified oncologists assessed each chatbot output, and the final score was determined by majority rule. In cases of complete disagreement, the oncologist who had not previously seen the output adjudicated. Agreement among all three annotators was achieved for 322 of 520 (61.9%) scores. Disagreements typically occurred when the output lacked clarity, such as not specifying which of multiple treatments to combine. Additionally, 13 of 104 (12.5%) outputs contained hallucinated responses, meaning treatments that were not part of any recommended regimen. Why this is important: When faced with a new cancer diagnosis, the choice between relying on a chatbot's answer and seeking recommendations from the world's best cancer doctors is significant. However, this doesn't prevent cancer patients from searching their symptoms online and, more recently, asking ChatGPT for help in their cancer journey.
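For illustration, here is a minimal sketch of the majority-rule scoring with fourth-reviewer adjudication described above. This is not the authors' actual code, and the label names are hypothetical:

```python
# Minimal sketch (not the authors' code) of majority-rule scoring:
# three oncologists score each chatbot output, and a fourth oncologist
# who has not seen the output adjudicates when all three disagree.
# Label names are hypothetical.
from collections import Counter

def final_score(annotator_scores, adjudicator_score=None):
    """annotator_scores: three labels, e.g. ['concordant', 'concordant', 'partial']."""
    label, count = Counter(annotator_scores).most_common(1)[0]
    if count >= 2:                # at least two of three annotators agree
        return label
    return adjudicator_score      # complete disagreement: defer to the fourth reviewer

print(final_score(["concordant", "concordant", "partial"]))             # -> concordant
print(final_score(["concordant", "partial", "discordant"], "partial"))  # -> partial
```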

JAMA Network Open. Use of GPT-4 to Analyze Medical Records of Patients With Extensive Investigations and Delayed Diagnosis. The authors used the medical histories of 6 patients who were aged 65 years or older and had experienced a delay of more than 1 month in receiving a definitive diagnosis. They input this information into GPT-4 without including the final diagnosis. Responses from GPT-4 and from clinicians were collected and compared. The accuracy of the primary diagnoses was as follows: GPT-4 correctly diagnosed 4 of 6 patients (66.7%), clinicians correctly diagnosed 2 of 6 (33.3%), and Isabel DDx Companion did not yield any correct diagnoses. When differential diagnoses were included, accuracy improved to 5 of 6 (83.3%) for GPT-4, 3 of 6 (50.0%) for clinicians, and 2 of 6 (33.3%) for Isabel DDx Companion. Interestingly, GPT-4 suggested diagnoses that clinicians had not considered before definitive investigations. Why this is important: While I do not believe ChatGPT will replace physicians, since medicine relies on more than Q&A and depends heavily on physicians' experience to personalize treatment, I certainly believe that ChatGPT can help physicians make better and quicker diagnoses.
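As a rough illustration of this kind of workflow, here is a minimal sketch, assuming the 0.x-era OpenAI Python library and a short, hypothetical de-identified case vignette. It is not the authors' protocol:

```python
# Minimal sketch (not the study's protocol): feed a de-identified case
# history to GPT-4 and ask for a primary diagnosis plus a differential.
# Assumes the 0.x-era `openai` library and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

case_history = """78-year-old patient with 6 weeks of fatigue, weight loss,
and intermittent fevers; extensive investigations so far non-diagnostic.
(Hypothetical, de-identified vignette.)"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are assisting a physician. "
         "Suggest the most likely primary diagnosis and a ranked differential."},
        {"role": "user", "content": case_history},
    ],
    temperature=0,  # deterministic output, easier to compare against clinicians
)

print(response["choices"][0]["message"]["content"])
```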

The Lancet Digital Health. Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study. In this study, the authors aimed to develop a machine learning model that provides individual preoperative predictions of 5-year weight loss trajectories after surgery. This multinational retrospective observational study enrolled adult participants (aged ≄18 years) from ten prospective cohorts (including ABOS [NCT01129297], BAREVAL [NCT02310178], the Swedish Obese Subjects study, and a large cohort from the Dutch Obesity Clinic [Nederlandse Obesitas Kliniek]) and two randomized trials (SleevePass [NCT00793143] and SM-BOSS [NCT00356213]) in Europe, the Americas, and Asia, with 5-year follow-up after Roux-en-Y gastric bypass, sleeve gastrectomy, or gastric banding. A total of 10,231 patients from 12 centers in ten countries were included in the analysis, corresponding to 30,602 patient-years. Of the 434 baseline attributes available in the training cohort, seven variables were selected: height, weight, intervention type, age, diabetes status, diabetes duration, and smoking status. At 5 years, across the external testing cohorts, the overall mean absolute deviation (MAD) of BMI was 2.8 kg/m² (95% CI 2.6–3.0), the root mean square error (RMSE) of BMI was 4.7 kg/m² (4.4–5.0), and the mean difference between predicted and observed BMI was –0.3 kg/m² (SD 4.7).
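For readers unfamiliar with these error metrics, here is a minimal sketch of how MAD, RMSE, and the mean difference are computed, using made-up predicted and observed BMI values rather than the study's data:

```python
# Minimal sketch of the error metrics reported above, using made-up
# predicted and observed 5-year BMI values (kg/m^2), not the study's data.
import numpy as np

predicted = np.array([31.5, 28.0, 35.2, 27.4, 30.1])
observed  = np.array([33.0, 27.1, 38.4, 29.0, 29.5])

errors = predicted - observed
mad  = np.mean(np.abs(errors))        # mean absolute deviation of BMI
rmse = np.sqrt(np.mean(errors ** 2))  # root mean square error of BMI
bias = np.mean(errors)                # mean difference (predicted - observed)

print(f"MAD: {mad:.1f} kg/m^2, RMSE: {rmse:.1f} kg/m^2, mean difference: {bias:.1f} kg/m^2")
```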

Interesting Tools

A new tool that detects whether a text was written by AI or by a human. I tried it: interesting, but not very accurate.

Check out our courses from AI 4 Healthcare

ChatGPT for Healthcare. Discover all you need to know about ChatGPT and its applications in healthcare. Join over 2000 students worldwide who have already enrolled in this course.

No-code, low-code Machine Learning for Healthcare. This is the only course that provides hands-on examples of machine learning in healthcare. Start building real models immediately upon completion of the course.

Artificial Intelligence and Machine Learning in Healthcare (The Basics). Learn everything you need to know about the basics of AI application in healthcare, explained in simple terminology, without requiring any coding or mathematical experience.
