Researchers discover how artificial intelligence can be trained to detect tumors

Artificial intelligence (AI) can be trained to detect whether a tissue image contains a tumor or not. Until recently, however, how it reaches that judgment remained a mystery. A team from the Ruhr-Universität Bochum Research Center for Protein Diagnostics (PRODI) is working on a new approach that will make an AI's judgment transparent and therefore trustworthy. Researchers led by Professor Axel Mosig describe the approach in the journal Medical Image Analysis.

For the study, bioinformatics scientist Axel Mosig collaborated with Professor Andrea Tannapfel, head of the Institute of Pathology, oncologist Professor Anke Reinacher-Schick of the Ruhr-Universität's St. Josef Hospital, and the biophysicist and founding director of PRODI, Professor Klaus Gerwert. The team developed a neural network, i.e. an AI, that can classify whether or not a tissue sample contains a tumor. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumors, while others were tumor-free. “Neural networks are initially a black box: it is not clear which identifying characteristics a network learns from the training data,” explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. “For medical applications in particular, however, it is important that the AI is capable of providing explanations and is therefore reliable,” adds bioinformatics scientist David Schuhmacher, who collaborated on the study.
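The kind of classifier described above can be illustrated with a toy sketch. The following is not the authors' network or data: it trains a simple pixel-wise logistic model on synthetic "tissue" images in which a tumor is represented, purely for illustration, by a bright 3x3 patch at a random position.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(has_tumor):
    """Synthetic 8x8 'tissue' image; a tumor is a bright 3x3 patch (toy stand-in)."""
    img = rng.normal(0.0, 0.1, (8, 8))
    if has_tumor:
        r, c = rng.integers(0, 6, size=2)   # random patch position
        img[r:r+3, c:c+3] += 1.0
    return img

# Labeled training set: half tumor, half tumor-free.
X = np.stack([make_image(i % 2 == 0) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])
Xf = X.reshape(len(X), -1)                  # flatten pixels into feature vectors

# A minimal logistic-regression "network" trained by gradient descent.
w = np.zeros(Xf.shape[1])
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Xf @ w + b)))  # predicted tumor probability
    grad = p - y
    w -= 0.1 * Xf.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(Xf @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Like the network in the study, this model learns to separate tumor from tumor-free images from labeled examples alone, and, also like that network, its learned weights do not by themselves explain any individual decision.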

AI is based on falsifiable hypotheses

The explainable AI developed by the Bochum team is therefore based on the only kind of meaningful statements known to science: falsifiable hypotheses. If a hypothesis is false, this fact must be demonstrable through an experiment. Artificial intelligence usually follows the principle of inductive reasoning: from concrete observations, i.e. the training data, the AI creates a general model on the basis of which it evaluates all further observations.

The underlying problem was described by the philosopher David Hume 250 years ago and is easily illustrated: no matter how many white swans we observe, we can never conclude from these data that all swans are white and that no black swans exist whatsoever. Science therefore makes use of so-called deductive logic. In this approach, a general hypothesis is the starting point: for example, the assumption that all swans are white is falsified as soon as a black swan is spotted.

The activation map shows where the tumor is detected

“At first glance, inductive AI and the deductive scientific method seem almost incompatible,” says Stephanie Schörner, a physicist who also contributed to the study. But the researchers found a way: their new neural network not only classifies whether a tissue sample contains a tumor or is tumor-free, but also generates an activation map of the microscopic tissue image. The activation map is based on a falsifiable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumor regions in the sample. Site-specific molecular methods can be used to test this hypothesis.
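The activation-map idea can also be sketched with a toy example. Again, this is a simplified stand-in and not the Bochum group's architecture: each pixel's contribution to the tumor score serves as its activation, and because the synthetic tumor's location is known here, the falsifiable hypothesis that high activation coincides with the tumor region can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(has_tumor, r=2, c=2):
    """Synthetic 8x8 'tissue'; a tumor is a bright 3x3 patch at (r, c)."""
    img = rng.normal(0.0, 0.1, (8, 8))
    if has_tumor:
        img[r:r+3, c:c+3] += 1.0
    return img

# Train a pixel-wise logistic classifier on patches at random positions.
images, labels = [], []
for i in range(200):
    r, c = rng.integers(0, 6, size=2)
    images.append(make_image(i % 2 == 0, r, c))
    labels.append(1.0 if i % 2 == 0 else 0.0)
X = np.stack(images).reshape(200, -1)
y = np.array(labels)

w, b = np.zeros(64), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / 200
    b -= 0.1 * (p - y).mean()

# Activation map: each pixel's contribution w_i * x_i to the tumor score.
test_img = make_image(True, r=2, c=2)      # tumor patch at a known location
amap = w.reshape(8, 8) * test_img

# Falsifiable hypothesis: the strongest activation lies inside the tumor patch.
peak = np.unravel_index(np.argmax(amap), amap.shape)
inside = 2 <= peak[0] <= 4 and 2 <= peak[1] <= 4
print(f"activation peak at {peak}, inside tumor region: {inside}")
```

In the study the check works the other way around: the tumor location is not known in advance, so site-specific molecular measurements provide the independent ground truth against which the map can be falsified.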

“Thanks to the interdisciplinary structures at PRODI, we have the best prerequisites for incorporating the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish between certain therapy-relevant tumor subtypes,” concludes Axel Mosig. (ANI)

(This story was not edited by Devdiscourse staff and is automatically generated from a syndicated feed.)

