AI Harm vs Hype

AI can determine personal information from VR data

AI 4 Healthcare Newsletter

Welcome to our newsletter, a meticulously curated platform intended to deliver the most recent news, breakthroughs, and published research from the exciting crossroads of AI and Healthcare.

I hope you are enjoying your summer. It seems that a lot is going on with AI these days, especially for healthcare. We hope this letter will bring you up to speed on the latest developments.

AI News in Healthcare

Hospital bosses love AI. Doctors and nurses are worried. Hospitals are racing to implement flashy new AI tools, but are doctors and nurses ready to welcome their robot overlords? Not so fast! Pushing unfamiliar tech on overwhelmed staff risks more harm than good. Why this is important: we need to build a culture ready for AI before shoving the technology down people's throats.

Americans are concerned about AI in healthcare. 70% of Americans are concerned about AI in healthcare, including younger generations such as Gen X and Gen Z. While more comfortable using AI for administrative tasks, they remain wary of its use for diagnoses and treatment recommendations. Let me put it another way: if, God forbid, you were diagnosed with cancer, would you prefer AI to interpret your scan, or would you rather have the expertise of the best radiologist?

Cigna partners with Virgin Pulse for AI health platform. The platform lets Cigna patients track their healthcare progress and, based on their logged data, connects them to patient-appropriate programs such as prediabetes management or behavioral health services.

Mental Health App Tests Limits of Using AI for Medical Care. Yana, an app whose name means "you are not alone," aims to provide support with ethical care. The idea of an AI counseling app raises red flags, but Yana has worked to address concerns. If the app senses a user is suicidal, it immediately connects them to a compassionate voice on a crisis hotline. Still, some wonder if an algorithm can truly empathize. As mental health needs soar, does turning to tech for connection narrow or widen the loneliness gap?

AI News in the World

AI can determine personal information through AR and VR users' motion data, studies say. Researchers tasked an AI with identifying people purely from their motion data. After observing someone for just five minutes, the AI could accurately identify them 73% of the time from 10 seconds of data, and 94% of the time from 100 seconds, nearing human ability. Why this is important: this study raises intriguing questions about biometrics and anonymity, and whether AI will steer society toward insight or surveillance.

Pentagon establishes Task Force Lima to study generative AI issues. With the growing public interest in AI, the United States Defense Department has set up this task force for national security purposes. “As we navigate the transformative power of generative AI, our focus remains steadfast on ensuring national security, minimizing risks, and responsibly integrating these technologies,” said Deputy Defense Secretary Kathleen Hicks.

Is OpenAI heading for bankruptcy in 2024? ChatGPT costs $700,000 per day to operate. All of this money comes out of the pockets of Microsoft and other recent investors, which may eventually empty if ChatGPT does not become profitable soon. OpenAI aims to reach $200 million in revenue this year and $1 billion in 2024, but experts are skeptical of these projections.

Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: Some experts believe language models like ChatGPT can never fully grasp human reasoning as they are pattern-finders rather than truth-finders. Others argue we should accept AI as a different form of intelligence and embrace its alternative perspectives rather than forcing it to mimic human thinking.

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype. An interesting take: the dangers of AI tools, including wrongful arrests, expanding surveillance, defamation, deepfake pornography, routine discrimination, hate speech, and misinformation, already exist today, and we should focus on those rather than on the hype about ending humanity.

NYC now requires annual audits of AI used in hiring, a major change as the first law of its kind. The audits aim to uncover bias and lack of transparency in automated recruitment tools that deeply affect people's livelihoods. Why this is important: this NYC law could prompt other jurisdictions to follow suit and ask tough questions about AI to ensure fairness in the future of work.

Americans distrust AI giants. In a recent survey, a staggering 82% preferred federal regulation over corporate self-governance, revealing deep public mistrust of Big Tech's benevolence. Further, willingness to accept AI regulation varies across countries around the world.

Why this is important: With AI progress outracing government, we urgently need lawmakers who understand these technologies' risks and can collaborate with experts to govern AI with insight and agility.

AI skills worth $900,000 or more these days. Salaries rise as employers such as Netflix and Walmart seek candidates with artificial-intelligence skills. Why this is important: I think I chose the wrong career path 😅.

Impactful Publications

Creation and Adoption of Large Language Models in Medicine. A very nicely written special commentary in JAMA regarding LLMs in healthcare. In summary: Large language models like ChatGPT promise to revolutionize medicine, but doctors must steer their development. Training these AI systems requires massive datasets - should medical records be used without patients' consent? Before rapidly deploying unproven tech, clinicians should verify benefits and watch for risks in the real world. With thoughtful collaboration, doctors and AI could together achieve what neither can alone.

Large language models + vision model = AI that can read X-ray images and their reports. Researchers at Google developed ELIXR (Embeddings for Language/Image-aligned X-Rays), which couples a language-aligned image encoder with the fixed language model PaLM 2 for chest X-ray analysis. ELIXR attains state-of-the-art results: a mean AUC of 0.850 across 13 findings in chest X-ray classification; mean AUCs of 0.893 and 0.898 for atelectasis, cardiomegaly, consolidation, pleural effusion, and pulmonary edema with only 1% and 10% of the training data, respectively; and a normalized discounted cumulative gain (NDCG) of 0.76 across 19 queries in semantic search. It outperforms existing methods while requiring significantly less data for similar performance, and shows promise in vision-language tasks, with accuracies of 58.7% and 62.5% for visual question answering and report quality assurance, respectively.
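For the technically curious: ELIXR's code is not public and PaLM 2 is not an open model, but the core idea, a small trainable adapter that turns image-encoder embeddings into "soft prompts" for a frozen language model, can be sketched in a few lines of PyTorch. Everything below (the class name, dimensions, and stand-in inputs) is a hypothetical illustration, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    class ImageToLLMAdapter(nn.Module):
        """Hypothetical ELIXR-style adapter: learnable query tokens attend
        over image embeddings, then get projected into the frozen LLM's
        token-embedding space as soft prompts."""
        def __init__(self, img_dim: int, llm_dim: int, n_queries: int = 32):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(n_queries, img_dim))
            self.attn = nn.MultiheadAttention(img_dim, num_heads=8, batch_first=True)
            self.proj = nn.Linear(img_dim, llm_dim)

        def forward(self, img_embeds: torch.Tensor) -> torch.Tensor:
            # img_embeds: (batch, n_patches, img_dim) from a language-aligned
            # chest X-ray encoder; only the adapter trains, the LLM stays frozen.
            q = self.queries.expand(img_embeds.size(0), -1, -1)
            pooled, _ = self.attn(q, img_embeds, img_embeds)
            return self.proj(pooled)  # (batch, n_queries, llm_dim)

    adapter = ImageToLLMAdapter(img_dim=512, llm_dim=4096)
    soft_prompts = adapter(torch.randn(2, 196, 512))  # 2 images, 196 patches each

The frozen LLM then reads these soft prompts alongside the text of the question or report, which is why so little paired training data is needed.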

How accurate is ChatGPT in citing journal articles? In this study, researchers tested the value of a ChatGPT copilot in creating content for training on learning health systems (LHS). Fact-checking 162 journal-article references produced with the default GPT-3.5 model revealed 159 (98.1% [95% CI, 94.7%-99.6%]) to be fake, while fact-checking 257 references produced with the GPT-4 model identified 53 (20.6% [95% CI, 15.8%-26.1%]) as fake. Why this is important: watch out and double-check ChatGPT 😅.
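If you are wondering where those confidence intervals come from, they are consistent with an exact (Clopper-Pearson) binomial interval. The paper does not say which method it used, so take this as a sketch that reproduces the reported numbers rather than the authors' own code:

    from statsmodels.stats.proportion import proportion_confint

    for model, fake, total in [("GPT-3.5", 159, 162), ("GPT-4", 53, 257)]:
        # Clopper-Pearson ("exact") 95% CI for the fake-citation rate
        lo, hi = proportion_confint(fake, total, alpha=0.05, method="beta")
        print(f"{model}: {fake}/{total} = {fake/total:.1%} fake "
              f"(95% CI {lo:.1%}-{hi:.1%})")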

Can AI predict lung cancer mortality by looking at CT scan images? In this study, researchers conducted a secondary analysis of the National Lung Screening Trial to assess the added predictive value of AI-derived body composition measurements from CT scans for lung cancer incidence, lung cancer death, cardiovascular disease (CVD) death, and all-cause mortality. Among 20,768 participants, the AI-derived measurements significantly improved risk prediction for lung cancer death (males: χ2 = 23.09, P < .001; females: χ2 = 15.04, P = .002), CVD death (males: χ2 = 69.94, P < .001; females: χ2 = 16.60, P < .001), and all-cause mortality (males: χ2 = 248.13, P < .001; females: χ2 = 94.54, P < .001), but not for lung cancer incidence (males: χ2 = 2.53, P = .11; females: χ2 = 1.73, P = .19).
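Those χ2 values test whether adding the AI-derived measurements to a baseline survival model improves its fit. As a hedged illustration (the paper's actual modeling of the NLST cohort is surely more involved), here is what such a likelihood-ratio test looks like with simulated data and the lifelines package; all variable names and numbers below are made up:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from scipy.stats import chi2

    # Simulated cohort: follow-up time, death indicator, baseline risk
    # factors, and one AI-derived body-composition measurement from CT.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "time": rng.exponential(10, n),
        "death": rng.integers(0, 2, n),
        "age": rng.normal(62, 5, n),
        "pack_years": rng.normal(40, 10, n),
        "ai_muscle_area": rng.normal(150, 25, n),  # AI-derived predictor
    })

    base = CoxPHFitter().fit(df.drop(columns="ai_muscle_area"), "time", "death")
    full = CoxPHFitter().fit(df, "time", "death")

    # Likelihood-ratio chi-square for the added AI predictor (1 df here)
    lr = 2 * (full.log_likelihood_ - base.log_likelihood_)
    print(f"chi2 = {lr:.2f}, P = {chi2.sf(lr, df=1):.3g}")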

Is AI better at predicting whether missense variants are damaging? The study introduces a novel approach using the ESM1b protein language model to predict the effects of all possible missense variants in the human genome (~450 million variants) and offers the predictions via a web portal. ESM1b outperforms existing methods in classifying ClinVar/HGMD missense variants and in predicting measurements from deep mutational scanning datasets, even for complex variants. This establishes protein language models as a powerful and versatile strategy for accurate variant effect prediction.
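The scoring idea is simple enough to sketch: compare the model's log-probability of the mutant amino acid with that of the wild type at the same position. Below is a minimal wild-type-marginal version using the open-source fair-esm package; the sequence and variant are illustrative, and the paper's full pipeline (e.g., its handling of long proteins) goes further than this:

    import torch
    import esm  # pip install fair-esm

    # Load ESM-1b and score a hypothetical missense variant by the
    # log-likelihood ratio between mutant and wild-type residues.
    model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
    model.eval()
    batch_converter = alphabet.get_batch_converter()

    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"
    _, _, tokens = batch_converter([("protein", seq)])

    with torch.no_grad():
        logits = model(tokens)["logits"]  # (1, len(seq) + 2, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)

    pos, wt, mut = 6, "I", "R"  # 1-based position; an illustrative variant
    assert seq[pos - 1] == wt
    # Token index is pos, not pos - 1: index 0 holds the BOS token.
    llr = (log_probs[0, pos, alphabet.get_idx(mut)]
           - log_probs[0, pos, alphabet.get_idx(wt)]).item()
    print(f"{wt}{pos}{mut} LLR = {llr:.2f} (more negative = more damaging)")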

Interesting AI tools

DoctorGPT. In the ongoing rush to GPT everything, we now have DoctorGPT. The developers claim it was trained on medical data and can pass the USMLE exam. I have not tried it, so I cannot vouch for it.

MeMemes - Turn yourself into memes: from GigaChad to DiCaprio.

Check out our courses from AI 4 Healthcare

ChatGPT for Healthcare. Discover all you need to know about ChatGPT and its applications in healthcare. Join over 2,000 students worldwide who have already enrolled in this course.

No-code/Low-code Machine Learning for Healthcare. This is the only course that provides hands-on examples of machine learning in healthcare. Start building real models immediately upon completion of the course.

Artificial Intelligence and Machine Learning in Healthcare (The Basics). Learn everything you need to know about the basics of AI application in healthcare, explained in simple terminology, without requiring any coding or mathematical experience.