Healthcare Technologies 2024: Student and Early Career Awards Evening
Hear from some of the winners of the IET Healthcare Technologies Student and Early Career Awards 2024
About
Please note that this is an in-person event at IET London: Savoy Place
** Registration closes on 21st November **
Join us in person to hear from the J.A Lodge and Dennis Hill award winners, and to watch the William James winner's recorded talk, as part of the IET Healthcare Technologies Student and Early Career Awards 2024.
After hearing the winners present their ground-breaking work, we'll be joined by keynote speaker Professor Mandic, speaking on 'Hearables: Real World Applications of AI for eHealth'.
Award winners:
J.A Lodge Award 2024 (Suitable for early-career engineers)
Dr Harry Davies, Imperial College of Science, Technology and Medicine - A Deep Matched Filter: Harnessing Noisy Ear-ECG
William James Award 2024 (Suitable for students)
Fenglin Liu, University of Oxford - A medical multimodal large language model for future pandemics
Dennis Hill Award 2024 (Suitable for students)
Farheen Muhammed, University of Oxford - Microbubble generation using an acousto-fluidic device
As usual, you'll have the opportunity to ask questions of all presenters.
Continuing Professional Development
This event can contribute towards your Continuing Professional Development (CPD) hours as part of the IET's CPD monitoring scheme.
28 Nov 2024
6:00pm - 8:30pm
Reasons to attend
Award winning presentations
Keynote industry expert
CPD
Networking opportunities
Unique opportunity to learn directly from active and experienced professionals in their respective fields
Comprehensive overview of the subjects, covering the latest industry trends, developments, and challenges
Q&A to allow you to explore specific, related issues
Programme
Evening Programme (subject to change):
Arrival from 6:00pm for a 6:30pm start, with light refreshments
Keynote speaker:
Professor Mandic speaking on 'Hearables: Real World Applications of AI for eHealth'
Award winners:
J.A Lodge Award 2024 (Suitable for early-career engineers): Dr Harry Davies, Imperial College of Science, Technology and Medicine - The Deep-Match Framework: R-Peak Detection in Ear-ECG
The Ear-ECG promises continuous monitoring of the electrical activity of the heart (electrocardiography) by measuring the potential difference across the heart with electrodes embedded within earphones. The increased wearability of the Ear-ECG often comes with a degradation in signal quality. To make full use of the Ear-ECG, even in cases where it is particularly noisy, we created an efficient and interpretable deep-learning-based "matched filter" for precise R-peak detection in wearable ECG signals with a poor signal-to-noise ratio. This convolutional neural network is built to behave as a "matched filter", a pattern-matching concept from signal processing that originated in radar systems 80 years ago. We initialise our matched filter network with domain knowledge of the ECG signal and demonstrate that it even learns to enhance some aspects of these templates to accurately detect the peaks of the Ear-ECG. Our model achieves state-of-the-art results with the benefit of being fully interpretable.
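For readers unfamiliar with the underlying idea, here is a minimal Python sketch of classical matched filtering for R-peak detection: the noisy ECG is cross-correlated with a template of the expected R-wave shape, and peaks in the correlation mark candidate beats. This illustrates only the signal processing concept the talk builds on, not the award-winning network itself; the function name, threshold, and refractory period are illustrative assumptions.

```python
# Illustrative sketch of a classical matched filter for R-peak detection
# (not the presented deep-learning model). Requires only NumPy.
import numpy as np

def matched_filter_rpeaks(ecg, template, fs, min_rr_s=0.3):
    """Return sample indices of candidate R-peaks in a noisy ECG.

    ecg      : 1-D ECG signal
    template : 1-D template of the expected R-wave shape
    fs       : sampling rate in Hz
    min_rr_s : minimum allowed R-R interval in seconds (refractory period)
    """
    # Normalise the template (zero mean, unit energy) so the filter output
    # reflects shape similarity rather than raw amplitude.
    t = template - template.mean()
    t = t / np.linalg.norm(t)

    # Matched filtering is cross-correlation with the expected waveform.
    score = np.correlate(ecg - ecg.mean(), t, mode="same")

    # Simple peak picking: local maxima above a threshold, separated by a
    # refractory period so the same beat is not detected twice.
    thresh = 0.5 * score.max()
    min_gap = int(min_rr_s * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(score) - 1):
        if score[i] > thresh and score[i] >= score[i - 1] and score[i] >= score[i + 1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return np.array(peaks)
```

The deep-learning approach described in the talk replaces the fixed template with a learned convolutional kernel, initialised from ECG domain knowledge, which is what makes the network both accurate and interpretable.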
Dennis Hill Award 2024 (Suitable for students): Farheen Muhammed, University of Oxford - Microbubble generation using an acousto-fluidic device
A synopsis will be provided soon.
Due to circumstances beyond our control, we will not have a speaker for the William James Award 2024 (Suitable for students).
The winner was Fenglin Liu, University of Oxford - A medical multimodal large language model for future pandemics
Deep neural networks have been integrated throughout the clinical decision procedure, where they can improve the efficiency of diagnosis and alleviate the heavy workload of physicians. Since most neural networks are supervised, their performance heavily depends on the volume and quality of available labels. However, few such labels exist for rare diseases (e.g., new pandemics). Here we report a medical multimodal large language model (Med-MLLM) for radiograph representation learning, which can learn broad medical knowledge (e.g., image understanding, text semantics, and clinical phenotypes) from unlabelled data. As a result, when encountering a rare disease, our Med-MLLM can be rapidly deployed and easily adapted to it with limited labels. Furthermore, our model supports medical data across the visual modality (e.g., chest X-ray and CT) and the textual modality (e.g., medical reports and free-text clinical notes); it can therefore be used for clinical tasks that involve both visual and textual data. We demonstrate the effectiveness of Med-MLLM by showing how it would have performed during the COVID-19 pandemic "in replay". In the retrospective setting, we test the model on early COVID-19 datasets; in the prospective setting, we test it on the new COVID-19 Omicron variant. The experiments cover 1) three kinds of input data; 2) three kinds of downstream tasks, namely disease reporting, diagnosis, and prognosis; 3) five COVID-19 datasets; and 4) three languages: English, Chinese, and Spanish. All experiments show that our model can provide accurate and robust COVID-19 decision support with little labelled data.
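As a rough illustration of how a multimodal model can learn from unlabelled image-report pairs, the sketch below implements a generic contrastive image-text pretraining objective (the symmetric InfoNCE loss popularised by CLIP). This is one common approach to multimodal representation learning, shown purely for intuition; it is not taken from the Med-MLLM paper, and the encoder embeddings, temperature value, and function name are assumptions.

```python
# Generic sketch of contrastive image-text pretraining (CLIP-style), one
# common way to learn joint representations from unlabelled paired data
# such as chest X-rays and their radiology reports. Not Med-MLLM code.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb : (batch, dim) tensors from an image encoder and
                          a text encoder; row i of each is a true pair.
    """
    # L2-normalise so the dot product is cosine similarity.
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix; the diagonal holds the true pairs.
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Each image must pick out its own report, and vice versa.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```

Pretraining on such an objective needs no disease labels at all; the handful of labels mentioned in the abstract would then be used only for lightweight downstream adaptation.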
Networking until 8:30pm
(Programme subject to change)