**HACKED!** It was a wonderful experience working with my teammates Xinhui Li and Samin Mahdipour to win the first MEXA hackathon! Thanks to MEXA for putting on an exciting challenge and supporting us with highly engaged technical consultants and lived-experience experts.

Our goal was to leverage LLMs to offer psychiatrists objective, behavioral measurements of inattention in support of their clinical ADHD diagnoses. To accomplish this, we tried to generate realistic transcripts of ADHD patient interviews (using single-shot learning with an LLM) and then use a second LLM prompt to quantify attentional lapses in those transcripts (e.g., errors of omission, commission, distraction, and perseveration).

A few takeaways from my experience:
1. Our LLM can reliably generate realistic interview content (where patients describe symptoms of inattention).
2. But it struggles to simulate a skeptical psychiatrist who presses, prompts, and challenges patients to better understand the significant consequences of their inattention.
3. And it struggles to simulate (and accurately quantify) realistic, inattentive response behavior in patients.

#mentalhealth #AI #Hackathon #ADHD #psychiatry #googlehealth #deepmind #MEXA #Wellcome
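For readers curious about the shape of the two-prompt pipeline described above, here is a minimal Python sketch. Everything in it is illustrative: `call_llm` is a stand-in for whatever chat-completion API the team actually used, and the prompts, the `dummy_llm` backend, and the lapse-category names are assumptions made for the example, not the hackathon code itself.

```python
# Hypothetical sketch of the two-stage pipeline: (1) one-shot generation of an
# ADHD interview transcript, (2) a second prompt that quantifies attentional lapses.
import json
from typing import Callable

LAPSE_CATEGORIES = ["omission", "commission", "distraction", "perseveration"]

# Single example exchange used as the one-shot context (illustrative only).
EXAMPLE_TRANSCRIPT = (
    "Psychiatrist: How often do you lose track of tasks at work?\n"
    "Patient: Most days. I start an email and end up reorganizing my desktop instead."
)

def generate_transcript(call_llm: Callable[[str], str]) -> str:
    """Stage 1: ask the LLM to produce a realistic interview, given one example exchange."""
    prompt = (
        "You are simulating a psychiatric intake interview for suspected ADHD.\n"
        "Here is one example exchange (one-shot context):\n"
        f"{EXAMPLE_TRANSCRIPT}\n\n"
        "Continue with a full, realistic interview in the same format."
    )
    return call_llm(prompt)

def quantify_lapses(call_llm: Callable[[str], str], transcript: str) -> dict:
    """Stage 2: ask a second prompt to count attentional lapses per category."""
    prompt = (
        "Count the attentional lapses in this interview transcript.\n"
        f"Return JSON with integer counts for the keys {LAPSE_CATEGORIES}.\n\n"
        f"Transcript:\n{transcript}"
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    # Dummy backend so the sketch runs without any API key or specific provider.
    def dummy_llm(prompt: str) -> str:
        if "Return JSON" in prompt:
            return json.dumps({c: 0 for c in LAPSE_CATEGORIES})
        return EXAMPLE_TRANSCRIPT

    transcript = generate_transcript(dummy_llm)
    print(quantify_lapses(dummy_llm, transcript))
```

Swapping `dummy_llm` for a real chat-completion call would exercise the same two-prompt flow; the hard part, as the takeaways note, is getting the simulated patient and interviewer behavior to be realistic enough for the stage-2 counts to mean anything.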
The first MEXA Hackathon was a great success! Congratulations to our Hackathon Winners!!🎉

🥇 MoodMates - Lance Middleton, Xinhui Li, Samin Mahdipour
🥈 JollyGenies - Thibaud Dalavy, Gary Brown, Tolulope Oladele, Viviana Greco
🥉 RO Hackers - Liam Swift, Harry Ross, Seshu Pavan Mutyala, Jack Glenn, Jamie Ferguson, Sandra Garrido

We look forward to seeing everyone at our next Hackathon! Details coming soon....✨
Sounds interesting. Congrats on the winning project!
Congratulations!
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
The use of single-shot learning for LLM transcript generation is promising but requires careful fine-tuning on datasets mimicking diverse patient presentations and interview styles. Simulating a skeptical psychiatrist's probing questions could benefit from incorporating reinforcement learning techniques to enable the LLM to adapt its questioning strategy based on patient responses. How would you incorporate adversarial training into the second LLM to better capture the nuances of inattentive response behavior, such as filler words and tangential remarks?