
In recent years, AI has been used to determine whether medical images contain cancers that trained physicians have missed or not flagged as problematic (Liu et al., 2018; Svoboda, 2020). A recent study by Leibig, Brehmer, Bunk, Byng, Pinker, and Umutlu (2022) reported that AI was able to screen for cancer with a reported sensitivity of 2.26. Although the statistics demonstrate that AI is a fairly sensitive predictive model, AI is still best used as a triage tool to reduce workload rather than as a cancer decision-making tool. One reason is that AI does not capture the generalized knowledge of radiologists or the emotional impact of a cancer diagnosis. This is why Leibig et al. (2022) suggest combining AI and radiologists when predicting medical issues.
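To make the idea of a triage tool concrete, here is a minimal Python sketch of a decision-referral rule of the kind such studies describe. The `triage` function, its thresholds, and the example scores are hypothetical illustrations of the general approach, not the model or the values reported by Leibig et al. (2022).

```python
# Minimal sketch of a decision-referral triage rule (illustrative only).
# Thresholds and scores are hypothetical, not values from Leibig et al. (2022).

def triage(ai_score, low=0.05, high=0.95):
    """Route a case based on the AI model's suspicion score (0-1).

    Confident cases are handled automatically; everything in between
    is referred to a radiologist for a human read.
    """
    if ai_score <= low:
        return "auto-negative"                 # AI is confident no cancer is present
    if ai_score >= high:
        return "priority-radiologist-review"   # AI is confident a finding is present
    return "refer-to-radiologist"              # uncertain: human reads the exam


# Example: a batch of hypothetical suspicion scores
scores = [0.01, 0.40, 0.97, 0.03, 0.60]
decisions = [triage(s) for s in scores]
print(decisions)

# Workload view: how many exams still require an unaided human read
referred = sum(d == "refer-to-radiologist" for d in decisions)
print(f"Referred to radiologist: {referred}/{len(scores)}")
```

The point of the sketch is that the AI never issues a diagnosis; it only decides how urgently a human should look.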
A collaborative model of intelligence will continue to be the approach for AI technology. Humans will need to collaborate with these systems to ensure that machine learning models are adequately trained and that the tool's answers reflect the confidence of the model. Humans are the intelligent beings who may face litigation if ML/AI is applied uncritically and the technology fails. Humans will still be needed to engage with the socio-cultural issues that AI cannot anticipate, and to identify and fight the biases the tool creates.
The collaborative model of intelligence allows the technology to act as a tool. The tool may provide information, guidance, or discovery, but it is human intelligence that supplies reflection, response, and accountability. Human intelligence cannot be removed from this collaborative model; removing the human reverts AI to the dumb technologies of the past that relied on a garbage-in, garbage-out approach. AI and humans need to work collaboratively to create an intelligence that can inform decision-making.
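As a rough illustration of this division of labor, the sketch below keeps the human decision step in the loop. The `Recommendation` structure, `final_decision`, and the stand-in `reviewer` are hypothetical names used only to make the point that the tool informs and the human decides; they do not describe any specific clinical system.

```python
# Minimal sketch of a human-in-the-loop decision step (illustrative only).
# All names and values here are hypothetical.

from dataclasses import dataclass


@dataclass
class Recommendation:
    case_id: str
    ai_finding: str       # what the tool surfaced (information, guidance, discovery)
    ai_confidence: float  # the model's self-reported confidence


def final_decision(rec: Recommendation, human_review) -> str:
    """The AI supplies a recommendation; the human supplies the decision.

    `human_review` is a callable standing in for the radiologist's judgment.
    There is deliberately no branch that lets the AI finding become the
    final decision on its own: accountability stays with the human.
    """
    return human_review(rec)


# Example usage with a stand-in reviewer who weighs the AI output
def reviewer(rec: Recommendation) -> str:
    if rec.ai_confidence < 0.5:
        return f"{rec.case_id}: set aside AI finding, order follow-up imaging"
    return f"{rec.case_id}: accept AI finding '{rec.ai_finding}' after review"


print(final_decision(Recommendation("case-001", "suspicious lesion", 0.91), reviewer))
```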
After IDEA 2022, I am more convinced that a collaborative model of intelligence is needed. The sessions by Engel & Coleman and by Griffey discussed the ethical issues and technical limitations of AI/ML. These technologies can and will continue to be limited by the human decisions made during data collection and machine learning. Those human decisions may stymie the intelligence of the technology. Furthermore, the lack of transparency in many algorithms requires a human to analyze the technology's output closely. A human check that works in collaboration with the AI/ML technology leads to a more intelligent system that better aids humans looking for anomalies and irregularities. Together, humans and technology can work toward a more intelligent world.

References

Leibig, C., Brehmer, M., Bunk, S., Byng, D., Pinker, K., & Umutlu, L. (2022). Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis. The Lancet Digital Health, 4(7), e507-e519.

Liu, C., Liu, X., Wu, F., Xie, M., Feng, Y., & Hu, C. (2018). Using artificial intelligence (Watson for Oncology) for treatment recommendations amongst Chinese patients with lung cancer: feasibility study. Journal of Medical Internet Research, 20(9), e11087.

Svoboda, E. (2020). Artificial intelligence is improving the detection of lung cancer. Nature, 587(7834), S20.

Post Author: Sharon Whitfield