President's Page

AI in Medicine

Denise Hanisch, MD
President, South Dakota State Medical Association

April 1, 2024

The integration of artificial intelligence (AI) into the realm of medicine has brought about transformative advancements in healthcare. However, alongside the potential benefits, there exist valid concerns regarding the ethical, social, and practical implications of AI in this field.

One primary concern is the ethical dilemmas surrounding AI algorithms in healthcare. The opacity and complexity of AI decision-making processes raise questions about accountability and bias. To mitigate these concerns, transparency in AI systems must be prioritized.

Implementing explainable AI techniques can help healthcare professionals understand how AI arrives at its conclusions, enabling them to interpret and validate its recommendations.

Another pressing issue is data privacy and security. As AI relies heavily on vast amounts of patient data for training and decision-making, safeguarding this information is paramount. Institutions must adhere to stringent data protection regulations, ensuring that patient confidentiality is maintained. Implementing robust encryption methods, access controls, and regular security audits can enhance data security in AI-driven healthcare systems.

Furthermore, the potential impact of AI on the healthcare workforce raises concerns about job displacement and the redistribution of roles. While AI has the capacity to streamline operations and improve diagnostic accuracy, healthcare professionals may fear redundancy. To address this, investing in reskilling and upskilling programs for healthcare workers can equip them with the necessary skills to collaborate effectively with AI technologies, thereby enhancing patient care outcomes.

The concerns surrounding AI in medicine necessitate a proactive and multidisciplinary approach to ensure ethical AI deployment. By emphasizing transparency, data privacy, and workforce development, the healthcare industry can harness the full potential of AI while mitigating associated risks. Collaborative efforts between policymakers, healthcare providers, technologists, and ethicists are essential to navigate the evolving landscape of AI in medicine responsibly and ethically, ultimately benefiting patients and society at large.

The word “transparency” is used frequently when discussing the use of AI in medicine, and it needs to apply to both patients and physicians. So, in full transparency, I used ChatGPT to write the first four paragraphs of this article. It was my first experience using this technology, and it was mildly frightening how easy it was. After downloading the program and typing in four key words, the text was generated in less than two seconds. It was exciting and concerning at the same time, which is how the majority of physicians feel about using AI in their practice. The AMA recently surveyed 1,081 physicians (420 primary care providers and 661 specialists) and found that 70 percent are either more concerned than excited or equally concerned and excited about a future with AI.

The American Medical Association recently published the survey findings and recommendations in a report that can be found on the AMA website. I know very little about technology and am especially naive on the subject of AI, but I read the report, “Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care,” and I strongly encourage every physician to familiarize themselves with the rapidly expanding future of AI.

The AMA has adopted terminology that it feels is important when discussing “augmented intelligence.” Describing the technology as “augmented,” rather than “artificial,” makes it clear that the technology should be used to support, not replace, decisions made by physicians.

I learned several new terms and definitions while reading the report. Most interesting was the explanation of machine learning and deep learning. Machine learning occurs when systems “learn from data without being explicitly programmed.” Machine learning is a subtype of AI, and deep learning is a subtype of machine learning in which a system trains itself based on the data it receives. Machines learn through three different models: supervised learning, unsupervised learning, and reinforcement learning, in which an algorithm “receives a reward when its action and output align with the goals of the programmer.” Depending on the data, the output could be accurate or biased under any of these forms of learning.

It may be hard to imagine how a machine could produce bias, but the report provided a perfect real-world example. An algorithm used health care costs as an indicator of health care needs. The output assumed that White patients had more medical illnesses than Black patients and therefore would benefit more from a care management program. The algorithm did not take into account that Black patients often face barriers to accessing care and were therefore underrepresented in a system that focused on costs rather than needs.

I feel it will be hard to predict how AI will affect us as physicians and as individuals, but ignoring its existence will only be a detriment. We need to be informed, aware, and educated about the potential and flaws of this tool. The amazing thing about humans is that we have experiences that are truly unique to each one of us. My history cannot be replicated. We all have our own individual story that provides us with the human ability to care for other humans.


South Dakota State Medical Association
2600 W 49th St Ste 100
Sioux Falls, SD 57105
Phone: 605.336.1965 | Fax: 605.274.3274
