AI Diagnostics Need Attention

Machine intelligence can now help us predict and diagnose life-threatening conditions such as stroke and pneumonia. One of the most striking applications shows how tiny robots could be injected into the body to locate tumors and cut off their blood supply.

Call that out-of-this-world technology, but there is another remarkable application in which an AI system outperformed more than 40 dermatologists at detecting onychomycosis, a fungal nail infection.

More AI Tools Absorbed into Clinical Practice

Source: singularityhub

In addition to the range of artificial intelligence tools already cleared by the FDA and those still under review, this year’s HIMSS conference showcased new and very promising health-focused technologies.

AT&T and Aira have partnered to develop a pill-bottle reader built into smart glasses that helps patients with low or no vision read their prescriptions on their own. Elsewhere, AI is being designed to help treat depression, not to mention applications that can now detect breast cancer.

Exploring AI’s Potential in Diagnostics

The potential of AI diagnostics is yet to be fully tapped, but researchers agree the technology holds more power than has so far been explained. It has the capacity to revolutionize the delivery and effectiveness of health care. Neural networks, machine learning, and computing power, which together make AI actionable, are advancing rapidly. Today, a system can work through hundreds of thousands of labeled disease images and learn to identify conditions unaided.
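To make that concrete, here is a minimal sketch of how such a system is typically trained: a pretrained image backbone is fine-tuned on a folder of labeled disease images. The directory path, class layout, and training settings are illustrative assumptions, not any particular vendor’s pipeline.

```python
# A minimal sketch, assuming a hypothetical folder of labeled disease images
# laid out as data/train/<condition_name>/*.jpg. Not a production pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # pretrained backbones expect 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# "data/train" is a placeholder path: one sub-folder per condition (the label).
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse an ImageNet-pretrained backbone, replace the head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):                      # a handful of passes over the labeled images
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")
```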

It is also worth mentioning that this kind of machine-driven diagnosis takes very little time. Along with that, some scientists seem convinced that an algorithm is acceptable as long as it can identify a particular condition as effectively as an expert would.

Is That Enough?

The biggest question is whether the reported AI diagnostic success stories (such as a system’s ability to identify conditions from images) are enough to justify adopting these tools in real-life clinical practice.

To some experts, those success stories amount to little more than, say, a newly discovered drug killing pathogens in a test tube. The reason is that, by the long-established standards of medicine and research, a sound scientific procedure must explain exactly how its results were arrived at.

Methods and materials need to be analyzable in detail. There needs to be a progression of independent researchers, studies, and clinical trials to establish viability and long-term safety, but this does not yet seem to be taken seriously in AI diagnostics.

Source: sastrarobotics

Complaints are arising that some developers are unwilling to dig deeper into the research before training their systems. In other words, they should consciously apply the evidence-based techniques that have long been used in more mature fields.

For instance, developers could adopt the approach used in drug development, where everything, including the limits of the project, is explained before a drug is presented for approval.

Code and Algorithm Testing

What we see on websites and in preprints as AI diagnostics shouldn’t just be impressive studies; the work needs to go through testing and peer review to verify key details of how the algorithms and code were developed, what their limitations are, and what risks they pose in unusual situations.

The images and all other raw material used to train the systems need to be verifiable. The experience of the physicians against whom the model was compared, and the exact criteria the machine used to reach its decisions, need to be open to scrutiny.
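One piece of that scrutiny is a reproducible held-out evaluation against the expert reference standard. The sketch below uses hypothetical label and prediction arrays for a single binary condition (1 = disease present, 0 = absent) just to show the kind of metrics reviewers would expect to see reported; it is not real study data.

```python
# A minimal sketch of a held-out evaluation against expert labels.
# The arrays are placeholders, not results from any actual system.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

expert_labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # reference standard
model_preds   = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])   # algorithm output

tn, fp, fn, tp = confusion_matrix(expert_labels, model_preds).ravel()
sensitivity = tp / (tp + fn)    # share of true cases the model catches
specificity = tn / (tn + fp)    # share of healthy cases it correctly clears

print(f"sensitivity: {sensitivity:.2f}, specificity: {specificity:.2f}")
print(classification_report(expert_labels, model_preds,
                            target_names=["absent", "present"]))
```

Publishing this kind of evaluation code alongside the study, with the test set kept separate from training data, is exactly what makes the claims checkable.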

It would also be good for the full details of a project to be presented publicly, so people can understand what happens in the labs. For instance, a report last year described an AI model that outperformed 11 pathologists at assessing breast cancer.

The system took less than a minute to reach its conclusions after analyzing an image. However, once the time limit was removed, the pathologists also performed well and surpassed the AI on complicated cases that proved unclear.

AI Diagnostic Tools Should Improve Progressively

Source: techemergence

As a matter of fact, some issues or loopholes only become visible after a tool is deployed in real-life practice. It shouldn’t come as a surprise when a diagnostic algorithm wrongly associates an image of the kind used to train a different device with a condition it was never meant to detect.

In other words, the algorithm should be designed to reject what it does not understand rather than returning possibly false results that could lead to serious consequences.
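One simple way to implement that "reject what it doesn't understand" behavior is to abstain whenever the model's top predicted probability falls below a threshold. The sketch below is an illustrative assumption, not a prescribed method: the model, threshold value, and the "refer to clinician" fallback are all hypothetical.

```python
# A minimal sketch of confidence-based abstention: instead of always guessing,
# the system defers low-confidence cases to a human. Threshold is illustrative.
import torch
import torch.nn.functional as F

def predict_or_abstain(model, image_batch, class_names, threshold=0.85):
    """Return a label per image, or "refer to clinician" when confidence is low."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image_batch), dim=1)   # per-class probabilities
    confidence, predicted = probs.max(dim=1)
    results = []
    for conf, idx in zip(confidence.tolist(), predicted.tolist()):
        if conf < threshold:
            results.append(("refer to clinician", conf))  # abstain instead of guessing
        else:
            results.append((class_names[idx], conf))
    return results
```

A caveat worth noting: raw softmax scores tend to be overconfident on inputs unlike anything in the training data, so in practice this kind of gate is usually paired with calibration or out-of-distribution detection rather than used on its own.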

Credit to Accenture, which recently revealed that it will be testing AI through its cloud service. Ideally, health regulatory boards should require that these systems undergo continuous checks: users should report deficiencies consistently, and proof of correction should be confirmed by the boards.

Beyond that, slow and careful development of AI-powered diagnostic tools would be the better approach, ensuring that things go right and preventing avoidable deaths.
