Yiqiao Yin | Seeing Deep into the Lungs with Deep Learning

Nov 9, 2022 | Engineering & Computer Science, Medical & Health Sciences

X-rays and other forms of medical imaging let doctors peer into the body, revealing the internal structure of organs and tissues without invasive surgery. Doctors use the results to identify abnormalities such as broken bones, diagnose diseases such as cancer, or even monitor the health of a foetus within the womb. Although this technology is remarkable, the images aren’t useful in isolation. Experts must analyse the resulting data and parse what is healthy or unhealthy from the noise. Yiqiao Yin, Jaiden Schraut, Leon Liu and Jonathan Gong have created new machine learning technologies to support that crucial interpretation, focusing on X-rays and lung health.

A Window into the Body

By 2010, over 5 billion medical imaging studies had been conducted, including X-ray images revealing the shapes of bone fractures, the presence of breast tumours, the location of kidney stones, the health of the lungs, and much more.

Skin, muscle, bone – and just about every part of your body – is opaque to visible light. This means that these tissues absorb light with wavelengths between 380 and 700 nanometres: the visible part of the electromagnetic spectrum. However, if we go down to wavelengths of just 10 nanometres and below, we find X-rays. The shorter the wavelength of light, the more energy each photon carries, and so X-rays carry much more energy than visible light.
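
To get a feel for that relationship, here is a minimal sketch in Python that computes the energy of a single photon from its wavelength using E = hc/λ. The example wavelengths are illustrative and not taken from the researchers’ work.

```python
# Minimal sketch: photon energy E = h * c / wavelength, so shorter
# wavelengths carry more energy. The wavelengths below are illustrative.
PLANCK_H = 6.626e-34   # Planck's constant, joule-seconds
LIGHT_C = 2.998e8      # speed of light, metres per second
JOULES_PER_EV = 1.602e-19

def photon_energy_ev(wavelength_nm: float) -> float:
    """Return the energy of one photon, in electronvolts."""
    wavelength_m = wavelength_nm * 1e-9
    return PLANCK_H * LIGHT_C / wavelength_m / JOULES_PER_EV

print(photon_energy_ev(550))  # green visible light: about 2.3 eV
print(photon_energy_ev(10))   # soft X-ray: about 124 eV, over 50 times more energetic
```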

These X-rays tend to pass straight through soft tissues, barely interacting with them, just as visible light passes through glass. However, harder tissues contain heavier elements such as calcium, which can absorb X-rays. This means X-rays will pass through soft tissue, but bones, teeth and other hard tissues will be opaque, blocking X-rays just as your hand blocks visible light.

Decoding the Images

X-ray imaging makes it easy to spot a broken bone, and even to diagnose what kind of fracture has occurred – is it a partial hairline fracture, or a break all the way through? However, not all uses of X-rays are as simple as spotting cracks in bone.

Yiqiao Yin and his colleagues, Jaiden Schraut, Leon Liu and Jonathan Gong, focus on chest X-rays for lung health, which require careful interpretation. Machine learning can support this interpretation by automating some aspects of the data processing, but this requires a delicately constructed artificial intelligence system, as well as the trust of both the doctor and patient.

Yin and his colleagues have developed new cutting-edge algorithms to process the raw data for doctors, while providing them with more context and additional information than existing solutions.

While X-rays of broken bones tend to be single 2D images, for more complex health issues, multiple images can be stitched together to create a 3D representation. This process gives more detail and allows doctors to navigate the 3D space to draw more precise conclusions. However, it also makes those conclusions more complex to reach.

One important step is segmentation, in which the pixels of the images are divided into regions according to their brightness, texture, colour, or other criteria. Radiologists can do this manually, but the process is incredibly time-consuming and introduces the potential for human error in interpreting the data.
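
As a rough illustration of the simplest version of this idea, the sketch below labels the pixels of a toy image purely by brightness. This is a hedged example, not the team’s method: real systems, including the one described here, learn far richer criteria.

```python
# Minimal sketch of brightness-based segmentation. The toy array and the
# threshold are illustrative assumptions, not data from the study.
import numpy as np

def threshold_segment(image: np.ndarray, threshold: float) -> np.ndarray:
    """Label each pixel 1 (bright, e.g. dense tissue) or 0 (dark, e.g. air)."""
    return (image >= threshold).astype(np.uint8)

# Toy 4x4 "X-ray" with pixel intensities scaled to the range 0-1.
toy_scan = np.array([
    [0.05, 0.10, 0.80, 0.85],
    [0.07, 0.12, 0.78, 0.90],
    [0.06, 0.11, 0.75, 0.88],
    [0.04, 0.09, 0.70, 0.82],
])
mask = threshold_segment(toy_scan, threshold=0.5)
print(mask)  # 1s mark the bright (dense) region, 0s the dark (air-filled) region
```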

This is especially challenging in the lungs, which evolved to be incredibly complex to maximise surface area for gas exchange. This means that, for example, it can be difficult to distinguish the damage done to the lungs by pneumonia versus COVID-19, which require different treatments. Recent research has produced models that can automate the process of segmentation, and Yin’s team has further developed this approach by building a hybrid model that both segments and classifies the data.

Inside the Algorithm

Neural networks have become a popular tool for processing lung scans. These are a powerful class of algorithms inspired by the structure of the brain, consisting of ‘neurons’ and connections between those neurons. You can input data at one end of the network, and it will flow through to an output layer, just as information enters your eye and travels through the neurons in your brain, which process the raw data to generate the image we see in our mind.
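
As a loose illustration of that flow, here is a minimal sketch in Python (using PyTorch). The layer sizes and the three example output classes are assumptions made for illustration, not the team’s actual architecture.

```python
# Minimal sketch: data enters one end of the network, flows through connected
# layers of "neurons", and emerges as one score per possible class.
import torch
import torch.nn as nn

tiny_network = nn.Sequential(
    nn.Flatten(),               # turn a 2D image into a flat list of pixel values
    nn.Linear(64 * 64, 128),    # input layer connected to 128 hidden neurons
    nn.ReLU(),                  # non-linear activation between layers
    nn.Linear(128, 3),          # 3 outputs, e.g. normal / pneumonia / COVID-19
)

fake_xray = torch.rand(1, 1, 64, 64)   # one random stand-in 64x64 "image"
scores = tiny_network(fake_xray)       # data flows in one end...
print(scores.shape)                    # ...and out the other: one score per class
```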

However, each connection must be fine-tuned for the model to work. This is hugely complex, and can’t be done manually. Instead, Yin and his colleagues trained and tested their neural network using the data and results from 21,165 chest X-ray images.
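
That tuning is automated: the network is shown labelled images, its error is measured, and every connection is adjusted slightly to reduce that error. The sketch below illustrates such a training loop with stand-in data; the optimiser, learning rate, and batch size are assumptions, not details from the study.

```python
# Minimal sketch of automatically tuning a network's connections ("weights").
# All data here is random; in practice the team used ~21,000 labelled X-rays.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 3))
loss_fn = nn.CrossEntropyLoss()                        # how wrong is each prediction?
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(32, 1, 64, 64)                     # stand-in batch of chest X-rays
labels = torch.randint(0, 3, (32,))                    # stand-in diagnoses (3 classes)

for step in range(100):                                # in practice: many passes over the dataset
    optimizer.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)
    loss.backward()                                    # work out how each connection affected the error
    optimizer.step()                                   # adjust every connection slightly
```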

A perennial drawback of neural networks, and one that matters greatly in medicine, is their lack of transparency. They provide incredibly accurate results, but it is almost impossible to figure out exactly how they came to a conclusion. Yin and his team tackled this by combining the classification and segmentation processes.

While one part of their model builds a segmented 3D image, another part identifies any pixels or areas that were particularly useful in the neural network’s recommended diagnosis. These can be combined to produce a heatmap on top of the 3D image, automatically highlighting those areas of the lungs that distinguish between different diagnoses.
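
In spirit, that combination can be as simple as masking a per-pixel importance map with the segmented lung region and rescaling it for display. The sketch below uses toy arrays to illustrate the idea; the team’s model generates both maps with neural networks (a segmenter and a class activation map), which this example does not reproduce.

```python
# Minimal sketch: overlay classifier "importance" on the segmented lung region,
# so only the in-lung areas that influenced the diagnosis light up.
import numpy as np

def explainable_overlay(importance: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Zero out importance outside the lungs and rescale to 0-1 for display."""
    heat = importance * lung_mask          # keep only evidence inside the lungs
    if heat.max() > 0:
        heat = heat / heat.max()           # normalise so it can be shown as a heatmap
    return heat

importance = np.random.rand(64, 64)        # stand-in per-pixel classifier attention
lung_mask = np.zeros((64, 64))
lung_mask[16:48, 8:56] = 1.0               # stand-in segmented lung region
overlay = explainable_overlay(importance, lung_mask)
print(overlay.shape, float(overlay.max())) # ready to display on top of the X-ray
```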

This means doctors get a recommended classification (for example, the model might suggest a diagnosis of COVID-19 rather than pneumonia), alongside a 3D heatmap for them to explore. This boosts trust in the model, since doctors can see the underlying data displayed clearly, and patients can be led through an explanation of how the conclusion was reached and where exactly the problems lie.

The team tested their final model using more real-life examples from the COVID-19 Radiography database, and it achieved an accuracy of 95%, meaning 19 out of 20 of its diagnoses matched the reference labels. This is a remarkable achievement, and provides doctors with an excellent tool to interpret and explain the diagnosis.

Improving Medical Diagnostics

The more accurately we can diagnose patients’ diseases, the more effective our treatments are. For instance, distinguishing bacterial pneumonia from COVID-19 is the difference between a simple course of antibiotics and two weeks of worsening symptoms.

Furthermore, although this combination of automated segmentation and classification is relatively novel, it isn’t restricted to chest X-rays: it could be applied to many essential medical exams, such as CT scans to diagnose thyroid cancer, MRI to identify stroke in the brain, ultrasound to measure the health of a foetus, and many others.

The more we improve our non-invasive scanning technologies, the more patients can be spared misdiagnoses, delays, or even exploratory surgery, all of which can cause unnecessary harm. Yin’s technology advances our ability to image, diagnose, and ultimately ensure patients receive the treatment they need.

REFERENCE

https://doi.org/10.33548/SCIENTIA849

MEET THE RESEARCHER


Yiqiao Yin
Senior Data Scientist
LabCorp
Princeton, NJ
USA

Yiqiao Yin studied machine learning and applied statistics at Columbia University, obtaining an MA in 2019 for his work on predicting relapse in breast cancer patients, achieving an accuracy of 92% compared to the industry standard of 60–70%. In 2020, Yin became a data scientist at Bayer Crop Science in New York, designing and prototyping new algorithms and techniques for the Harvest Analytic Division. Since 2020, Yin has also worked part-time as the Head of Curriculum Development at Veritas AI in New York, developing e-learning and interactive teaching materials for a pre-college AI program. This year, he joined LabCorp in New Jersey, leading projects that tackle AI challenges in drug development using tools such as convolutional neural networks, long short-term memory networks, and recurrent neural networks. He’s worked on technologies that assist pathologists and radiologists with prognoses, and continues to develop state-of-the-art tools that help deliver safe and effective healthcare.

CONTACT

E: Eagle0504@Gmail.com

W: https://www.yiqiao-yin.com/

KEY COLLABORATORS

Jaiden Schraut

Leon Liu

Jonathan Gong

FURTHER READING

JX Schraut, L Liu, J Gong, Y Yin, A multi-output network with U-net enhanced class activation map and robust classification performance for medical imaging analysis, Discover Artificial Intelligence, 2023, 3, 1. https://doi.org/10.1007/s44163-022-00045-1

