The Growing Role of Artificial Intelligence in Healthcare
- Nisha Shankar

- Nov 13
Artificial Intelligence (AI) is transforming many fields, and healthcare is no exception. Healthcare professionals are adapting to the technology's growth and are incorporating AI tools into their practice. These tools are changing the way diagnoses and treatment plans are delivered, increasing precision, efficiency, and the personalization of patient care.
Among these tools are AI scribes: a provider speaks into a microphone, and the tool records the patient visit and transcribes the notes directly into the patient's chart.
Introducing new technology comes with positives and negatives. On the positive side, AI can increase efficiency in healthcare and reduce providers' workload; on the negative side, it can introduce errors and unintended biases. While AI is, overall, a beneficial tool in healthcare, it is important to weigh the risks as well.
My Experience Seeing AI in Action
Over the years I have worked and shadowed in healthcare, I have seen an increasing number of providers using AI tools in their practice, and in my experience the outcomes have been beneficial.
I would observe these professionals speak into microphones and watch their words be transcribed into patient charts in real time, allowing them to close charts quicker and move on to the next patient sooner. I have also witnessed providers begin visits by informing patients that they are using AI to record the conversation for note-taking and asking permission to use the tool. Both practices have helped providers burdened with heavy workloads become more efficient, manage their time better, and reduce stress.
The Permanente Medical Group conducted a pilot study of AI scribes and their effect on efficiency in healthcare, which authors at NEJM Catalyst built upon. They found that these “generative AI scribes not only saved physicians an estimated 15,791 hours of documentation time—equal to 1,794 eight-hour workdays—but also improved patient-physician interactions and enhanced doctor satisfaction” (Feldheim).
During visits, providers can hold a conversation with their patients with increased eye contact and active listening, instead of scrambling to write down everything the patient says or relying on memory afterward. Patients, in turn, feel more heard and comfortable during the visit.
These positive aspects of AI in healthcare allow for increased efficiency and decreased workload, improving the overall quality of care that is delivered.
The Downsides: Errors and Technical Glitches
As with any new technology, there are downsides to consider.
One is the potential for errors or glitches in AI tools. I have seen providers grow frustrated with a malfunctioning tool and spend time troubleshooting to get it working again, which takes away from valuable patient care time.
A systematic review of AI-based transcription systems, published in BMC Medical Informatics and Decision Making, found that some studies “noted increased editing burdens, inconsistent cost-effectiveness and persistent errors with specialized terminology or accented speech” (Ng et al.).
AI can interpret results, decreasing providers' workload, but there is always the possibility of misinterpretation. The review concluded that “refinements in domain-specific training, real-time error correction and interoperability with electronic health records are critical for their effective adoption in clinical practice” (Ng et al.). That is why it is vital that providers review AI-generated results and notes before accepting them or delivering them to patients.
As a result, some providers may not feel comfortable relying on this new technology and may prefer to chart or interpret results themselves. It is important to consider every variable that can lead to error when introducing new technology into healthcare, especially when patient information is involved.
The Challenge of Bias in AI
One of the biggest issues that comes along with AI is bias, which can show up in multiple ways.
The data used to train AI can already contain biases, and a model may apply those patterns to every type of patient regardless of whether they fit, reproducing bias in the care that is delivered.
Dr. Ted James, Physician Executive and Endeavor Health Faculty at Harvard Medical School, published a reflection on bias in healthcare AI. He describes an example where “AI used across several U.S. health systems exhibited bias by prioritizing healthier white patients over sicker black patients for additional care management because it was trained on cost data, not care needs” (James). Dr. James also proposes a step-by-step approach to confronting AI bias: diversifying the training data, regularly monitoring and updating AI systems, and introducing interdisciplinary collaboration during development. These steps would help accurately represent the diversity of populations, keep data models current as populations inevitably change, and promote healthcare ethics and cultural humility.
Another form of bias, which a patient once mentioned to me, is that knowing they are being recorded can consciously or unconsciously influence the health concerns and information patients share with their provider.
For example, if a patient is experiencing mental health issues but feels uncomfortable talking about it knowing they are being recorded, they might not bring it up.
The easiest solution would be for the patient to decline the use of AI tools during their visit, but some patients may not fully understand what they are consenting to when the provider asks for permission. This could result in bias from the patient side as well.
Bias has no place in healthcare, yet the introduction of AI can reintroduce it in various ways, sometimes without the user noticing. It is important to account for these variables, especially when trying to promote equitable care.
Finding Balance
The use of AI in healthcare can be positive in its improvement of the quality of care provided by increasing efficiency in workflow and reducing workload for providers. It can improve focus, reduce stress, and allow for more natural, face-to-face conversations with patients.
However, it is just as important to consider the risks. Errors and bias are two significant problems the healthcare system works hard to combat, and their reintroduction through AI can endanger patient information and overall care.
It’s important to be vigilant and review all the notes and results that AI assists with, so we can continue to reduce these negatives while still benefiting from the technology’s strengths.
References:
Feldheim, B. (2025). AI scribes save 15,000 hours—and restore the human side of medicine. American Medical Association. https://www.ama-assn.org/practice-management/digital-health/ai-scribes-save-15000-hours-and-restore-human-side-medicine
Ng, J. J. W., et al. (2025). Evaluating the performance of artificial intelligence-based speech recognition for clinical documentation: A systematic review. BMC Medical Informatics and Decision Making. https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-025-03061-0
James, T. A. (2024). Confronting the Mirror: Reflecting on Our Biases Through AI in Health Care. Harvard Medical School. https://learn.hms.harvard.edu/insights/all-insights/confronting-mirror-reflecting-our-biases-through-ai-health-care


