Cancer-spotting AI and human experts can be fooled by image-tampering attacks


Mammogram images showing actual cancer-positive (upper left) and cancer-negative (lower left) cases. Cancerous tissue is indicated by white spots. The generative adversarial network removed the cancerous region from the positive image to create a false-negative image (upper right) and inserted a cancerous region into the negative image to create a false-positive image (lower right). Credit: Q. Zhou et al., Nat. Commun. 2021

Artificial intelligence (AI) models that evaluate medical images have the potential to speed up and improve the accuracy of cancer diagnosis, but they can also be vulnerable to cyberattacks. In a new study, University of Pittsburgh researchers simulated an attack that falsified mammogram images, fooling both an AI breast cancer diagnosis model and expert breast imaging radiologists.

The research, published today in Nature Communications, draws attention to a potential safety issue for medical AI known as "adversarial attacks," which modify images or other inputs to make a model arrive at the wrong conclusion.

"What we want to show with this study is that this type of attack is possible, and it can lead AI models to make the wrong diagnosis, which is a big patient safety issue," said senior author Dr. Shandong Wu, associate professor of radiology, biomedical informatics and bioengineering at Pitt. "By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust."

AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have been approved by the U.S. Food and Drug Administration (FDA). According to Wu, these tools can rapidly screen mammogram images and flag those most likely to be cancerous, helping radiologists be more efficient and accurate.

However, such technologies are also at risk from cyber threats such as adversarial attacks. Potential motivations for these attacks include insurance fraud by health care providers looking to boost revenue or companies trying to adjust clinical trial results in their favor. Adversarial attacks on medical images range from tiny manipulations that change an AI's decision but are imperceptible to the human eye, to more sophisticated versions that target sensitive content in the image, such as cancerous regions, making them more likely to fool a human as well.

To understand how AI would behave under this more complex type of adversarial attack, Wu and his team used mammogram images to develop a model for detecting breast cancer. First, the researchers trained a deep learning algorithm to distinguish cancerous from benign cases with more than 80% accuracy. Next, they developed a generative adversarial network (GAN), a computer program that generates fake images by inserting or removing cancerous regions from negative or positive images, respectively, and then tested how the model classified these adversarial images.
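
As a rough illustration of the generator-versus-discriminator idea behind a GAN, the sketch below shows one generic training step in PyTorch. It is not the authors' region-insertion network: the flattened image size, the tiny architectures, and the hyperparameters are all placeholder assumptions.

```python
# Minimal, generic GAN training step (illustrative only; not the study's code).
import torch
import torch.nn as nn

IMG_DIM = 64 * 64    # assumed flattened image patch size
NOISE_DIM = 100      # assumed latent dimension

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial round: the discriminator learns to separate real from
    fake images, and the generator learns to produce fakes it accepts."""
    n = real_batch.size(0)
    fake = generator(torch.randn(n, NOISE_DIM))

    # Discriminator update: real images labeled 1, generated images labeled 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real_batch), torch.ones(n, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(n, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In the study's setup, the generator instead takes a real mammogram as input and learns to insert or remove a cancer-looking region, so the output stays realistic enough to mislead both the diagnostic model and, in many cases, human readers.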

Of the 44 positive images that the GAN made to look negative, 42 were classified as negative by the model, and 209 of the 319 negative images made to look positive were classified as positive. Overall, the model was fooled by 69.1% of the fake images.
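
The overall fooling rate follows directly from those counts; the short snippet below simply reproduces the reported arithmetic.

```python
# Reproduce the fooling-rate figure from the published counts.
fooled_pos, total_pos = 42, 44    # positive-to-negative fakes read as negative
fooled_neg, total_neg = 209, 319  # negative-to-positive fakes read as positive

fooling_rate = (fooled_pos + fooled_neg) / (total_pos + total_neg)
print(f"Overall fooling rate: {fooling_rate:.1%}")  # -> 69.1%
```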

In the second part of the experiment, the researchers asked five human radiologists to distinguish whether mammogram images were genuine or fake. The experts identified the images' authenticity with accuracy ranging from 29% to 71%, depending on the individual.

"Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, they also fooled experienced human readers," said Wu, who is also director of the Intelligent Computing for Clinical Imaging Lab and the Pittsburgh Center for AI Innovation in Medical Imaging. "Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis."

According to Wu, the next step is developing ways to make AI models more robust against adversarial attacks.

"One direction that we are exploring is 'adversarial training' for the AI model," he explained. "This involves pre-generating adversarial images and teaching the model that these images are manipulated."
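
A minimal sketch of what such adversarial training can look like in practice is shown below, using a simple gradient-based (FGSM) perturbation as a stand-in for the study's GAN-generated images; `model`, `train_loader`, the epsilon value, and the [0, 1] pixel range are assumptions, not the authors' code.

```python
# Sketch of adversarial training: generate perturbed copies of each batch on
# the fly and train the classifier to label both clean and perturbed images.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Return adversarially perturbed copies of a batch (FGSM-style)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss; assume pixels in [0, 1].
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, train_loader, optimizer, device="cpu"):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = fgsm_perturb(model, images, labels)

        optimizer.zero_grad()
        # Combined objective: stay accurate on clean and perturbed inputs.
        loss = F.cross_entropy(model(images), labels) + \
               F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```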

With AI expected to be deployed across more medical infrastructure, Wu said that cybersecurity education is also important to ensure hospital technology systems and personnel are aware of potential threats and have technical solutions in place to protect patient data and block malware.

"We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care," he added.

Other authors who contributed to this study were Dr. Qianwei Zhou of Pitt and Zhejiang University of Technology in China; Dr. Margarita Zuley, Dr. Bronwyn Nair, Dr. Adrienne Vargo, Dr. Suzanne Ghannam and Dr. Dooman Arefan, all of Pitt and UPMC; Dr. Yuan Guo of Pitt and Guangzhou First People's Hospital in China; and Dr. Lu Yang of Pitt and Chongqing University Cancer Hospital in China.




For more information:
A machine and human reader study on AI diagnosis model safety under attacks of adversarial images, Nature Communications (2021). DOI: 10.1038/s41467-021-27577-x

Citation: Cancer-spotting AI and human experts can be fooled by image-tampering attacks (2021, December 14), retrieved December 14, 2021 from https://medicalxpress.com/news/2021-12-cancer-spotting-ai-human-experts-image-tampering.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
