Policy Implementation Framework: Ensuring Equitable Use of AI and Emerging Technologies in Healthcare
KATY WOODARD, Oklahoma State University Center for Health Sciences (OSU-CHS)
OSU School of Health Care Administration
BINH PHUNG, DO, MHA, MPH, Oklahoma State University Department of Pediatrics
Oklahoma State University Center for Health Sciences (OSU-CHS)
OSU School of Health Care Administration
EXECUTIVE SUMMARY
Artificial Intelligence (AI) and emerging technologies are rapidly transforming healthcare, offering breakthroughs previously unattainable in clinical practice. The potential of AI in healthcare includes improving clinical decision-making through predictive analytics, enhancing patient outcomes via personalized treatment plans, and streamlining administrative processes. These advancements can help address systemic issues such as limited access to specialists, inefficiencies in care delivery, and workforce limitations. For example, AI-driven diagnostics are capable of early disease detection, paving the way for more effective treatments and improved patient outcomes (Topol, 2019).
However, the widespread adoption of AI in healthcare brings significant challenges that must be addressed to ensure equitable benefit across all patient populations. A primary concern is algorithmic bias, where improperly designed AI systems may replicate or amplify existing disparities within healthcare. Research indicates that certain AI algorithms, particularly those predicting health outcomes, underperform for minority populations due to biased training data (Obermeyer et al., 2019). This not only diminishes trust in these technologies but also risks exacerbating health inequities by delivering suboptimal care to those in need.
Additionally, the integration of AI into healthcare raises critical privacy issues. The large datasets required to train AI models demand careful attention to patient consent, confidentiality, and the potential for misuse of sensitive data. As AI becomes increasingly embedded in patient care, concerns about data breaches or abuse could undermine public confidence and impede the widespread adoption of these technologies.
Another pressing challenge is uneven access to AI-driven healthcare solutions, especially for underrepresented populations. Factors such as socioeconomic status, geographic location, and access to digital infrastructure can create disparities in who benefits from AI advancements. For instance, rural or low-income communities may have limited access to broadband internet and/or modern medical technology, limiting their ability to benefit from AI-driven care. Thus, policies must ensure that these innovations do not marginalize vulnerable populations.
By addressing these critical issues, the proposed policy aims to foster innovation while safeguarding the interests of all patients, especially those from marginalized groups. The overarching goal is to cultivate a healthcare system where AI and emerging technologies are not only effective but also equitable, transparent, and accessible to all, ultimately contributing to improved health outcomes and the reduction of health disparities.
Key words: artificial intelligence, AI, healthcare transformation, AI in healthcare, implementation framework, emerging technologies, equitable healthcare
SCOPE OF THE ISSUE
The integration of artificial intelligence (AI) and other emerging technologies into healthcare systems presents opportunities for transformative enhancements in patient care, such as improved diagnostic accuracy, personalized treatment plans, and more efficient administrative processes (Topol, 2019). Despite these advancements, challenges persist, including the perpetuation of biases inherent in training data, which may result in healthcare disparities (Obermeyer et al., 2019). For instance, AI algorithms may underperform for minority populations, potentially exacerbating existing health inequities. Furthermore, concerns surrounding patient data privacy, informed consent, and unequal access to digital health tools underscore the necessity of equitable implementation of AI and emerging technologies (Brown et al., 2022).
POLICY OBJECTIVE
The goal of this policy implementation framework is to responsibly harness AI and emerging technologies to promote equity, minimize biases, and expand access for all patients. Key objectives include:
1. Equitable Data Collection and Algorithm Development.
A foundational element of AI in healthcare is the data used to train algorithms. One of the most significant challenges is ensuring that the data collection process is inclusive and representative of diverse populations. Often, AI algorithms are trained on datasets that are predominantly from homogeneous populations, typically over-representing white and higher-income groups, which can lead to biased health outcomes when applied to broader, more diverse patient populations. The proposed policy will:
• Require comprehensive data collection protocols that ensure diversity across variables such as race, ethnicity, gender, socioeconomic status, and geographical location (a minimal representativeness check is sketched after this list).
• Promote the inclusion of underrepresented populations, including racial and ethnic minorities, women, and rural populations, to ensure that AI models can accurately represent the healthcare needs of all groups (Obermeyer et al., 2019).
• Support data-sharing initiatives between healthcare institutions and public health departments to create large, diverse datasets that are more reflective of the population.
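To make the first of these requirements concrete, the sketch below compares a training cohort's demographic composition against reference population shares and flags under-represented groups. It is a minimal, hypothetical illustration: the column name, reference proportions, and five-percentage-point tolerance are assumptions chosen for the example, not values specified by this policy.

```python
import pandas as pd

# Illustrative reference proportions (hypothetical census-style benchmarks).
REFERENCE_SHARES = {"White": 0.58, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06, "Other": 0.04}
TOLERANCE = 0.05  # flag groups under-represented by more than five percentage points


def audit_representation(cohort: pd.DataFrame, group_col: str = "race_ethnicity") -> pd.DataFrame:
    """Compare the cohort's group shares with reference shares and flag gaps."""
    observed = cohort[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in REFERENCE_SHARES.items():
        actual = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "observed_share": round(actual, 3),
            "under_represented": (expected - actual) > TOLERANCE,
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    # Hypothetical training cohort; in practice this would be drawn from EHR extracts.
    cohort = pd.DataFrame({"race_ethnicity": ["White"] * 80 + ["Black"] * 5 + ["Hispanic"] * 10 + ["Asian"] * 5})
    print(audit_representation(cohort))
```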
2. Transparency in AI-Driven Decisions.
The use of AI in clinical decision-making must be transparent to both healthcare providers and patients. One of the most significant barriers to the acceptance of AI technologies is the "black box" nature of many AI algorithms, where the rationale behind decisions is often unclear. This opacity can erode trust and make it difficult for healthcare professionals and patients to fully understand and act on AI-generated recommendations.
To address this challenge, the policy will:
• Require AI technology developers to implement explainable AI (XAI) models that allow healthcare providers to understand how decisions are made, offering insight into the reasoning behind AI-driven recommendations (a minimal feature-importance sketch follows this list).
• Mandate that healthcare systems provide clear, accessible information to patients about how AI is being used in their care, including how data are collected and analyzed and how decisions are made.
• Ensure that clinical decision support tools based on AI are continuously monitored for accuracy and reliability, with oversight to ensure that AI systems align with clinical guidelines and best practices.
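As one simplified illustration of explainability, the sketch below trains a model on synthetic risk data and uses scikit-learn's permutation importance to report which inputs most influence its predictions. The features, model choice, and data are illustrative assumptions; a production XAI approach would be chosen and validated by the developer and reviewed by clinicians.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, illustrative features; a real system would use validated clinical inputs.
n = 2000
X = np.column_stack([
    rng.normal(55, 15, n),    # age
    rng.normal(120, 20, n),   # systolic blood pressure
    rng.integers(0, 2, n),    # prior hospitalization flag
])
risk = 0.03 * X[:, 0] + 0.02 * X[:, 1] + 1.5 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean_importance in zip(["age", "systolic_bp", "prior_admission"], result.importances_mean):
    print(f"{name}: importance = {mean_importance:.3f}")
```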
3. Equitable Access to AI Technologies.
One of the core aims of this implementation policy is to bridge the digital divide in healthcare and ensure that all populations, regardless of socioeconomic status or geographic location, have equal access to the benefits of AI technologies. While AI has the potential to revolutionize healthcare, it risks exacerbating existing inequalities if its adoption is not equitable. Access to emerging technologies is often limited by factors such as lack of broadband internet, distance from medical facilities with AI capabilities, and limited ability to navigate digital platforms.
The policy will focus on:
• Targeted outreach programs to ensure that underserved populations—especially those in rural, low-income, and marginalized communities—have access to AI-driven healthcare tools. This may include mobile health applications, telemedicine platforms, and AI-enabled diagnostics.
• Collaborating with telecommunication companies to provide affordable and reliable internet access to communities that are digitally excluded, ensuring that AI-powered health tools can be accessed by all patients, regardless of location.
• Introducing financial assistance programs or government incentives for low-income individuals to access healthcare facilities and AI technologies, removing the cost barrier for those most in need of care.
4. Mitigating Algorithm Bias.
Algorithmic bias is a key concern in the application of AI in healthcare. Research has shown that AI systems, if not designed properly, can unintentionally perpetuate biases in healthcare, leading to disparities in care for certain patient groups. Bias in healthcare AI algorithms may result from biased historical data, insufficient representation of certain populations, or flawed design choices. This bias can manifest in various ways, including misdiagnosis, unequal treatment recommendations, or differences in health outcomes based on race, gender, or socioeconomic status.
To mitigate these risks, the policy will:
• Implement bias audits for AI algorithms at multiple stages of development to assess whether they produce fair, accurate, and unbiased outcomes across different demographic groups (a minimal subgroup audit is sketched after this list).
• Require AI developers to adjust their algorithms based on feedback and audits, ensuring that models are corrected to account for any discovered biases.
• Promote the use of diverse and multidisciplinary teams in the design, testing, and evaluation of AI systems to ensure that various perspectives are considered, and that potential biases are identified and addressed early in the process.
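A minimal sketch of what such a bias audit might compute is shown below: true- and false-positive rates per demographic group, so that gaps between groups can be flagged for correction. The column names and the tiny example table are hypothetical; a real audit would use a held-out validation set and a broader set of fairness metrics.

```python
import pandas as pd


def subgroup_error_rates(df: pd.DataFrame, group_col: str, y_true: str, y_pred: str) -> pd.DataFrame:
    """Compute true- and false-positive rates separately for each demographic group."""
    records = []
    for group, g in df.groupby(group_col):
        positives = g[g[y_true] == 1]
        negatives = g[g[y_true] == 0]
        tpr = (positives[y_pred] == 1).mean() if len(positives) else float("nan")
        fpr = (negatives[y_pred] == 1).mean() if len(negatives) else float("nan")
        records.append({"group": group, "n": len(g), "tpr": round(tpr, 3), "fpr": round(fpr, 3)})
    return pd.DataFrame(records)


if __name__ == "__main__":
    # Hypothetical audit table joining model predictions to labels and demographics.
    audit = pd.DataFrame({
        "race_ethnicity":   ["A", "A", "A", "B", "B", "B"],
        "needs_care":       [1, 0, 1, 1, 0, 1],
        "flagged_by_model": [1, 0, 1, 0, 0, 1],
    })
    print(subgroup_error_rates(audit, "race_ethnicity", "needs_care", "flagged_by_model"))
```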
JUSTIFICATION
AI technology holds the potential to revolutionize healthcare, but without careful implementation, it risks perpetuating existing disparities. For example, research by Obermeyer et al. (2019) found that an AI algorithm used to predict patient health risks was biased against Black patients due to flawed datasets. Furthermore, Brown et al. (2022) highlighted how digital health transformations, if not designed with equity in mind, can disproportionately benefit certain populations while leaving others behind. Therefore, an equitable policy framework is vital to ensure that AI’s benefits are distributed fairly and that these technologies do not amplify health inequalities.
Refer to Appendix 1A – Policy Formulation and Appendix 1B – Federal Agencies Involved for more information.
POLICY EVALUATION
Ongoing evaluation is paramount to assessing the effectiveness of this policy implementation framework. As AI and other emerging technologies continue to evolve, continuous monitoring and assessment will be necessary to measure the policy’s impact, identify areas for improvement, and make data-driven adjustments. The evaluation process will involve five components: 1) analysis of health disparities, 2) AI algorithm performance and accuracy, 3) patient outcomes across demographic groups, 4) stakeholder feedback, and 5) implementation science to guide the policy’s ongoing refinement.
1. Research and Analysis of Health Disparities
A key component of evaluating this policy will be examining how AI technologies affect health disparities across different demographic groups. While AI has the potential to improve health outcomes for diverse populations, it is essential to measure whether its deployment actually leads to more equitable care or if it inadvertently exacerbates existing disparities. The evaluation process will involve:
• Tracking Health Disparities: Systematically monitor disparities in access to healthcare and health outcomes, especially among historically underserved populations such as racial and ethnic minorities, low-income individuals, and rural communities. Collect data on metrics like diagnostic accuracy, treatment recommendations, and patient satisfaction.
• Evaluating Health Equity Indicators: Assess whether AI deployment reduces health disparities by examining indicators such as access to AI technologies, treatment adherence, health outcomes, and care quality across different demographic groups.
• Conducting Comparative Studies: Utilize data from electronic health records (EHRs), patient surveys, and public health reports to analyze whether AI interventions yield equitable improvements in health outcomes or fall short of doing so.
Focusing on health disparities provides vital insights into whether AI adoption is bridging gaps in healthcare or exacerbating inequities, enabling timely policy adjustments as needed.
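As an illustration of how such disparity tracking could be operationalized, the sketch below follows a single access indicator (a hypothetical telehealth completion rate) across quarters and computes the gap between two communities. The indicator, group labels, and values are assumptions used only to show the calculation; real monitoring would draw on EHRs, surveys, and public health reports as described above.

```python
import pandas as pd

# Hypothetical quarterly access indicator broken out by community type.
indicators = pd.DataFrame({
    "quarter":         ["2024Q1", "2024Q1", "2024Q2", "2024Q2", "2024Q3", "2024Q3"],
    "community":       ["urban", "rural", "urban", "rural", "urban", "rural"],
    "completion_rate": [0.72, 0.51, 0.74, 0.58, 0.75, 0.66],
})

# Pivot to one row per quarter and compute the absolute gap between groups.
trend = indicators.pivot(index="quarter", columns="community", values="completion_rate")
trend["gap"] = (trend["urban"] - trend["rural"]).round(2)

# A shrinking gap over successive quarters suggests the disparity is narrowing.
print(trend)
```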
2. AI Algorithm Performance and Accuracy
The accuracy and fairness of AI performance in clinical settings require rigorous evaluation. The effectiveness and equity of AI systems, particularly in diagnostics, treatment recommendations, and predictive analytics, must be continually monitored. The evaluation will include:
• Regular Performance Audits: Conduct systematic reviews of AI algorithm accuracy across diverse populations, considering demographic factors such as race, ethnicity, age, gender, and socioeconomic status.
• Bias Detection and Correction: AI models will be regularly tested for biases in their predictions, with particular attention paid to whether certain demographic groups are being underserved or misdiagnosed. The performance of these models will be assessed against established clinical guidelines and compared to outcomes in similar patient populations that do not use AI-based solutions.
• Algorithm Updates and Improvements: If discrepancies in performance are found, AI algorithms will be refined to ensure that they produce equitable outcomes. The effectiveness of these modifications will be closely tracked to assess whether improvements in accuracy and fairness are being achieved.
Through these assessments, this policy can evaluate AI tools in practical applications and adjust algorithms and guidelines to maintain their effectiveness and equity.
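Complementing the subgroup error-rate sketch earlier, the example below audits a model’s discriminative performance (area under the ROC curve) separately for each demographic group and flags any group falling below a minimum threshold. The threshold, column names, and synthetic validation data are illustrative assumptions rather than mandated standards.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

MIN_ACCEPTABLE_AUC = 0.75  # illustrative threshold, not a policy-mandated value


def audit_auc_by_group(df: pd.DataFrame, group_col: str, y_true: str, y_score: str) -> pd.DataFrame:
    """Compute AUC separately for each demographic group and flag weak performance."""
    records = []
    for group, g in df.groupby(group_col):
        auc = roc_auc_score(g[y_true], g[y_score])
        records.append({
            "group": group,
            "n": len(g),
            "auc": round(auc, 3),
            "below_threshold": auc < MIN_ACCEPTABLE_AUC,
        })
    return pd.DataFrame(records)


if __name__ == "__main__":
    # Hypothetical scored validation set with demographic labels attached.
    rng = np.random.default_rng(1)
    demographic = rng.choice(["group_a", "group_b"], size=400)
    outcome = rng.integers(0, 2, size=400)
    score = np.clip(0.3 * outcome + rng.normal(0.5, 0.25, size=400), 0, 1)
    validation = pd.DataFrame({"demographic": demographic, "readmitted": outcome, "risk_score": score})
    print(audit_auc_by_group(validation, "demographic", "readmitted", "risk_score"))
```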
3. Patient Outcomes Across Demographic Groups
The evaluation will critically assess AI technologies’ impact on patient outcomes across varying demographic groups, touching on both clinical and non-clinical aspects:
• Clinical Outcomes: AI tools will be assessed based on their impact on patient health metrics, such as recovery rates, disease prevention, and mortality rates. For example, AI-driven diagnostic tools will be evaluated on whether they lead to faster, more accurate diagnoses and whether they help in reducing disparities in disease detection rates.
• Patient Experience and Satisfaction: Surveys and qualitative research will be conducted to evaluate patient experiences with AI-driven healthcare. This will include measuring patient trust in AI technologies, their level of understanding of how AI is used in their care, and any concerns they may have about data privacy, bias, or unequal treatment. Patient satisfaction will also be examined to ensure that the introduction of AI does not undermine the patient-provider relationship, particularly among vulnerable groups.
• Social Determinants of Health (SDOH): The policy will evaluate how AI-driven healthcare technologies interact with SDOH factors (such as income, education, and access to healthcare) and whether they reduce or exacerbate disparities in health outcomes.
By examining these outcomes, policymakers gain a comprehensive view of AI’s impact on clinical results and the broader social, economic, and cultural determinants of health inequality.
4. Stakeholder Feedback
Ongoing input from stakeholders is essential for evaluating policy success and guiding its evolution. Engagement from healthcare providers, technology developers, and marginalized communities ensures a comprehensive outlook:
• Healthcare Providers: Frontline healthcare workers, including physicians, nurses, and other clinicians, will provide feedback on the usability, effectiveness, and challenges of AI-driven tools. Their feedback helps to identify integration gaps and prioritize patient care improvements.
• Technology Developers: AI technology developers will be involved in tracking how the policy’s implementation influences the design of new technologies. Feedback from developers will help policymakers understand the technological challenges associated with creating equitable, transparent, and accessible AI solutions.
• Marginalized Communities: Solicit input from patients and communities most affected by AI deployment, including racial and ethnic minorities, low-income groups, and rural populations. This can be achieved through focus groups, interviews, and surveys. This will ensure their experiences inform policy refinement to address risks and barriers.
This feedback mechanism ensures that the policy adapts to the needs and challenges faced by healthcare providers and patients, incorporating emerging opportunities in AI healthcare.
5. Policy Implementation Science
Utilizing implementation science enables a systematic, evidence-based assessment of how the AI healthcare policy is adopted and integrated into practice. This approach incorporates the following strategies:
• Adoption and Integration Assessment: Research will be conducted on how healthcare systems, both large and small, are incorporating AI into their operations. This includes tracking the adoption rates of AI technologies, barriers to adoption, and the resources required for successful implementation.
• Overcoming Implementation Barriers: Identify and address integration challenges, such as healthcare provider resistance, training gaps, or logistical hurdles, refining the policy to address these concerns effectively over time.
• Continuous Policy Improvement: Use implementation science to ensure the policy evolves through ongoing feedback, smoothing the path to AI integration in healthcare systems in a way that maximizes potential benefits while managing unintended consequences and barriers.
Incorporating implementation science ensures the adaptability of the policy, facilitating the successful integration of AI technologies within healthcare systems to maximize equitable benefits while mitigating potential drawbacks.
ENGAGEMENT STRATEGIES
Effective stakeholder engagement is essential for the successful implementation of any healthcare policy, particularly one involving innovative technologies like artificial intelligence (AI). The policy’s success depends on the active participation of key stakeholders, including technology companies and developers, healthcare providers, patients, policymakers, and advocacy groups.
Refer to Appendix 1C – Key Stakeholders: Supporters vs. Opposers for a detailed stakeholder analysis.
Below, we expand on several engagement strategies that can facilitate the development and successful implementation of AI in healthcare policy:
1. Positioning: Framing the Policy as a Critical Step Toward Ensuring Equitable AI in Healthcare
One of the most important strategies the administration can employ is positioning the policy as a vital and forward-thinking step to address equity, inclusion, and fairness in healthcare. This involves clearly articulating the policy’s benefits to a broad range of stakeholders and ensuring that its primary focus on equity and access is emphasized.
• Communicating the Policy's Core Value: The administration should frame the policy as an essential tool for closing the healthcare access gap, particularly for historically underserved and marginalized communities. By ensuring that AI technologies are used equitably, the policy can be positioned as a long-term solution to combat health disparities, which will resonate with both healthcare professionals and advocacy organizations focused on social determinants of health (SDOH).
• Highlighting Public Health Benefits: The administration can emphasize that AI, when used effectively, has the potential to improve patient outcomes by providing faster, more accurate diagnoses, improving care coordination, and making healthcare more efficient. Framing the policy in this way ensures that stakeholders, including public health advocates, view the policy as a necessary and urgent response to addressing health inequities.
• Inclusive Messaging: Position the policy as one that not only seeks to harness technological advances but also seeks to empower communities by ensuring that no one is left behind. This framing will appeal to organizations and leaders focused on social justice and public health equity.
2. Power: Leveraging Key Stakeholders to Garner Support
Another crucial strategy involves leveraging the power of key stakeholders, including healthcare providers, patient advocacy groups, and influential policymakers, to gain the support necessary for policy adoption and success. These groups have the credibility and the platforms to influence public opinion and policy decisions.
• Engaging Healthcare Providers: Healthcare professionals, such as physicians, nurses, and hospital administrators, will play an integral role in the successful integration of AI into healthcare systems. These stakeholders should be engaged early in the process to gain their insights on how AI can be best implemented in clinical practice. Additionally, healthcare providers’ trust in AI will be crucial in driving the technology’s adoption. By involving these key stakeholders in the policy discussion, the administration can build support for AI deployment that is clinically relevant and practically feasible.
• Collaborating with Patient Advocacy Groups: Patient advocacy organizations, particularly those representing underserved populations, can be instrumental in advocating for the policy’s adoption. These groups, which often have significant influence in shaping healthcare policy and public opinion, will be important allies in ensuring that the policy is responsive to the needs of marginalized communities. Through collaborations and joint initiatives, patient advocacy groups can help raise awareness about the policy’s potential to reduce disparities in care and promote health equity.
• Influencing Key Decision-Makers: The administration can work to secure the support of influential political leaders and policymakers who have a vested interest in health equity. By collaborating with these individuals, the administration can ensure that the policy is framed as a priority in national healthcare reform efforts, increasing its chances of successful implementation and broader support.
3. Player Strategies: Engaging Both Supporters and Opposers
A well-rounded engagement strategy will include player strategies that actively engage both supporters and opposers of the policy to build a broad coalition of stakeholders. This approach allows for the identification and resolution of concerns early in the process, increasing the likelihood of consensus and reducing potential opposition.
• Engaging Supporters: Early engagement with supporters—such as technology developers, academic researchers, healthcare leaders, and policymakers already advocating for AI in healthcare—can ensure that the policy is aligned with the needs of those who are most likely to drive its implementation. These groups can be used as policy champions who will actively support the policy by mobilizing their networks, sharing evidence-based research, and advocating for the policy’s passage.
• Addressing Opposers: Recognizing and addressing concerns from opposers is equally important. Critics may include healthcare professionals skeptical of AI’s impact on patient-provider relationships, privacy advocates concerned about data security, or groups that fear AI will worsen health disparities. Proactively engaging with these critics through town halls, policy discussions, and consultations allows the administration to listen to their concerns, provide evidence-based responses, and find solutions that address their fears. For instance, privacy concerns can be alleviated by ensuring robust safeguards around patient data security, while concerns about AI bias can be addressed by highlighting ongoing efforts to make AI algorithms more transparent and equitable.
• Building Consensus: By bringing both supporters and opposers to the table early in the process, the administration can engage in open dialogues to identify common ground and build consensus around the policy’s objectives. This collaborative approach fosters mutual understanding and can mitigate the risk of opposition from key stakeholder groups later in the policy implementation phase.
4. Transparency and Communication
In addition to the above strategies, maintaining transparency and continuous communication throughout the policymaking and implementation process will be essential. This will help to build trust, increase stakeholder buy-in, and ensure that all parties remain informed and involved.
• Regular Updates: The administration should provide regular updates on the policy’s development, including opportunities for public comment, meetings with stakeholders, and progress reports. This will help stakeholders feel included in the policymaking process and ensure that their concerns are addressed in a timely manner.
• Public Campaigns: Launching public awareness campaigns to highlight the policy’s benefits—especially its focus on equity—can mobilize support from the general public, especially marginalized groups. These campaigns can use various platforms, such as social media, community events, and partnerships with public health organizations, to reach a wide audience.
• Feedback Loops: Continuous feedback loops (e.g., surveys, focus groups, and open forums) can be established to allow stakeholders to provide ongoing input as the policy is implemented and evaluated.
5. Creating Strategic Partnerships
In addition to engaging key stakeholders, policymakers can also create strategic partnerships among technology companies and developers, healthcare providers, academic institutions, and advocacy organizations. These collaborations can be instrumental in ensuring that the policy is well-informed, well-resourced, and effective.
• Public-Private Partnerships: Collaborations between government agencies and private sector companies, particularly technology developers, can help to accelerate the adoption of AI technologies while ensuring that ethical standards and equity goals are upheld. These partnerships can also facilitate the creation of funding streams to support the development of AI technologies that serve underserved populations.
• Community-Based Organizations: Partnering with local, grassroots organizations that work directly with marginalized communities can help to amplify the policy’s impact. These organizations can act as trusted intermediaries to educate and engage communities that may otherwise be excluded from the conversation about AI in healthcare.
SOCIAL DETERMINANTS OF HEALTH (SDOH)
Social determinants of health (SDOH), including income, education, and access to technology, will influence how AI in healthcare is received by different populations. Marginalized communities, such as those in rural areas or low-income urban neighborhoods, may lack access to the internet or the skills to navigate digital health tools, hindering their ability to benefit from AI innovations (Brown et al., 2022). These factors will need to be addressed in the policy design.
HEALTH INEQUITIES
Health inequities, driven by factors such as race, socioeconomic status, and geographic location, may affect the successful implementation of AI technologies. For instance, without proper safeguards, AI systems could exacerbate these disparities by underperforming for minority groups (Obermeyer et al., 2019). Thus, it is crucial that AI systems are designed with diverse data sets and tested for fairness across demographic groups to avoid reinforcing existing inequalities.
INCLUSIVE POLICYMAKING
To ensure that the policy promotes health equity, it is essential to involve marginalized stakeholders in the policymaking process. This includes engaging with community leaders, patients, and healthcare providers from underrepresented groups to ensure that the policy addresses their needs and challenges. By doing so, the policy can foster trust and increase the likelihood of its success in achieving equitable healthcare outcomes (Brown et al., 2022).
IMPACT ON DIVERSE POPULATIONS
AI in healthcare has the potential to reduce health disparities by improving access to care and personalizing treatment. By ensuring equitable access to these technologies, the policy can contribute to positive health outcomes for diverse populations, particularly those who have been historically underserved. This approach will also help dismantle structural racism and promote health equity by ensuring that all populations benefit from technological advancements in healthcare (WHO, 2021).
CONCLUSION
Artificial intelligence (AI) and emerging technologies hold significant promise for transforming healthcare, with the potential to revolutionize how healthcare systems operate and how patient care is delivered. From enhancing clinical decision-making and optimizing treatment plans to reducing operational costs and improving efficiency, AI’s applications in healthcare are vast and growing. These technologies offer the ability to analyze large datasets, predict patient outcomes, personalize treatment, and improve the overall patient experience. However, as with any technological advancement, the implementation of AI in healthcare carries both profound opportunities and substantial risks. If not carefully regulated, AI could exacerbate existing health disparities and introduce new forms of inequity.
While AI has the potential to improve healthcare outcomes across diverse populations, its implementation must be approached with caution to ensure that it benefits all patient groups equitably. Without appropriate safeguards, AI systems risk perpetuating or even amplifying existing biases, deepening health disparities, and compromising patient privacy. For example, AI algorithms can inadvertently reflect the biases present in the data on which they are trained. If these data are not diverse or representative of all populations, AI systems can perpetuate inequitable outcomes, particularly for marginalized and underserved groups, including racial and ethnic minorities, individuals with disabilities, and low-income communities. This issue is compounded by concerns related to data privacy violations, which can disproportionately affect vulnerable populations.
A policy implementation framework that prioritizes fairness, transparency, and inclusivity is essential for ensuring that AI in healthcare is used in ways that benefit all populations. By focusing on equitable data collection, algorithmic fairness, patient privacy, and access to AI technologies, this policy can ensure that the benefits of AI are shared broadly, contributing to improved health outcomes for diverse populations. Most importantly, such a framework will prevent the unintended consequences of AI adoption, such as the worsening of health disparities or the marginalization of already vulnerable groups. Instead, the policy can foster an environment in which AI technologies are used to address health inequities, enhance patient care, and promote health justice for all.
By ensuring that AI and emerging technologies are deployed with a focus on equity, fairness, and access, this implementation framework can make a transformative impact on the healthcare system, making it more efficient, personalized, and inclusive. It can help ensure that all individuals, regardless of their background or socioeconomic status, have equal access to the advantages of AI-driven healthcare innovations, ultimately contributing to better, more equitable health outcomes across all populations.
APPENDIX
Appendix 1A: Policy Formulation
The formulation of this policy will involve several steps:
1. Legislative Action: A bill will be introduced to regulate AI in healthcare, ensuring that technologies are ethically developed and deployed. This legislation will define standards for fairness, transparency, and data privacy.
2. Rulemaking Process: After the passage of the bill, federal agencies will engage in rulemaking to establish specific regulatory standards for AI in healthcare, focusing on algorithmic transparency, data privacy, and non-discrimination.
Appendix 1B: Federal Agencies Involved
The following agencies within the Department of Health & Human Services (HHS) will play key roles:
• Food and Drug Administration (FDA): Will regulate AI-driven medical devices to ensure their safety and efficacy.
• Centers for Medicare & Medicaid Services (CMS): Will create reimbursement policies for AI-based treatments.
• Centers for Disease Control and Prevention (CDC): Will monitor and report on public health outcomes related to AI integration.
• Agency for Healthcare Research and Quality (AHRQ): Will fund research on the effectiveness and equity of AI technologies in healthcare.
Appendix 1C: Key Stakeholders: Supporters vs. Opposers
The key stakeholders include:
o Tech Companies: Developers of AI technologies who will be directly impacted by regulatory frameworks.
o Healthcare Providers: Hospitals, physicians, and clinics that will use AI tools in clinical decision-making.
o Patients: Particularly those from underserved communities who may either benefit or be harmed by AI technologies.
o Regulatory Agencies: FDA, CMS, AHRQ, and CDC, which will establish guidelines and monitor AI integration.
Supporters:
o Tech Companies: Will likely support regulations that provide clear guidelines for AI deployment, as these will reduce uncertainty and promote widespread adoption.
o Healthcare Providers: Will support policies that ensure AI tools are effective and equitable, especially if they can improve patient outcomes and streamline care.
o Patient Advocacy Groups: Will support the policy, particularly those advocating for marginalized communities who stand to benefit from equitable access to AI-driven healthcare.
Opposers:
o Privacy Advocates: Concerned with the collection and potential misuse of personal health data by AI systems.
o Healthcare Unions: May oppose the policy due to fears that AI could displace workers or reduce the demand for human healthcare professionals.
REFERENCES
Brown, S. A., Hudson, C., Hamid, A., Berman, G., Echefu, G., Lee, K., Lamberg, M., & Olson, J. (2022). The pursuit of health equity in digital transformation, health informatics, and the cardiovascular learning healthcare system. Am Heart J Plus, 17, 100160. https://doi.org/10.1016/j.ahj.2022.100160
Chen, M., Ma, Y., Li, X., & Li, S. (2017). Wearable devices in health care: A review of the literature. Telemedicine and e-Health, 23(3), 171-179. https://doi.org/10.1089/tmj.2016.0179
Kaiser Family Foundation (KFF). (2022). The role of social determinants of health in promoting health equity. Retrieved from https://www.kff.org
Mann, C. (2019). The potential for Artificial Intelligence to support health equity. The Lancet Digital Health, 1(7), e285-e286. https://doi.org/10.1016/S2589-7500(19)30107-4
Obermeyer, Z., Powers, B. W., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
Topol, E. J. (2019). Deep medicine: How Artificial Intelligence can make healthcare human again. Basic Books.
World Health Organization (WHO). (2021). Ethics and governance of Artificial Intelligence for Health: WHO guidance. Retrieved from https://www.who.int