AI governance: An overview for businesses, financial institutions and HIPAA covered entities
On June 22, 2025, the governor of Texas signed into law House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which outlines various disclosure requirements for government entity AI developers and deployers, prohibits certain uses of AI, and establishes civil penalties for violations. Texas is not alone in its efforts to govern the development and use of AI; it joins the ranks of states including California, Colorado, and Utah that have all taken a crack at regulation. Yet the laws surrounding AI remain sporadic and murky, layering generalized and sectoral obligations on entities that are left to navigate the legality of development while pressing forward. For these entities, AI governance has become a top-of-mind concern that many had not previously contemplated.
AI governance refers to the collection of policies, standards, and oversight mechanisms designed to ensure that AI systems are developed and employed in a manner that is ethical, safe, and aligned with societal values. It involves creating frameworks and rules that address issues like fairness, transparency, accountability, data privacy, and the prevention of bias or misuse. The goal of AI governance is to guide the responsible development and deployment of AI technologies, manage risks, and build trust among users and stakeholders by ensuring compliance with legal and ethical standards.
Beyond developing these policies, enforcing AI governance is crucial to ensuring that AI technologies are developed and used responsibly, ethically, and safely. Without effective enforcement, governance policies risk becoming mere guidelines without real impact, allowing potential harm such as biased decision-making, privacy violations, or misuse of AI systems to go unchecked. Enforcement establishes clear accountability, making it possible to identify who is responsible when AI results in unintended consequences or harm. This accountability is essential for building trust among users, stakeholders, and the public, as it demonstrates a commitment to transparency and ethical standards.
Additionally, as regulatory bodies around the world introduce laws governing AI, enforcement ensures organizations comply with these requirements, helping to avoid penalties and reputational damage. Effective enforcement of AI governance further promotes fairness by requiring regular audits and reviews of AI systems to detect and correct biases or errors, making AI outcomes more transparent, justifiable, and accurate. Real-world examples of AI governance failures highlight the risks of weak enforcement, showing how lapses can lead to significant social and ethical problems. Overall, enforcing AI governance transforms policies into actionable practices that protect individuals, support ethical innovation, and ensure confidence in AI technologies.
What effective and ineffective AI governance looks like in practice
Effective AI governance is established through clear, well-defined policies and oversight mechanisms that guide the development and use of AI in a responsible and ethical manner. In organizations with strong AI governance, there are clear procedures for evaluating and monitoring AI systems, regular assessments to check for bias or errors, and open communication about how AI-driven decisions are made. These organizations prioritize accountability by assigning clear roles and responsibilities, ensuring that any issues can be traced and addressed quickly by the appropriate team members. They also foster a culture of ethical awareness, encouraging ongoing employee training and discussion about the social impacts of AI. This proactive approach not only helps prevent harm but also builds public trust and supports compliance with evolving regulations.
On the other hand, poor AI governance is marked by a lack of clear rules, inconsistent oversight, and minimal transparency. In such environments, AI systems may be deployed without thorough testing or consideration of their broader impact. There is often little accountability, making it difficult to determine who is responsible when problems arise. This can lead to unchecked biases, privacy breaches, and decisions that are difficult to explain or justify. Organizations with weak governance may also struggle to keep up with regulatory requirements, resulting in unnecessary exposure to legal and reputational risks. Ultimately, poor AI governance creates distrust in technology and increases the likelihood of negative outcomes for individuals, society, and ultimately the businesses developing and deploying such systems.
Key elements of an effective AI governance framework
An effective AI governance framework relies on several essential components working in tandem to ensure the responsible and trustworthy use of AI technology. These components include the following:
- Policies and guidelines - Well-defined policies and guidelines provide the foundation of effective AI governance by setting clear expectations for the ethical development and deployment of AI systems within an organization. These policies should be readily available to all employees, especially those who work directly with the entity’s use or development of AI systems, and all employees should receive training on them to foster an understanding of proper AI usage from the top down.
- Regulatory compliance - Adhering to regulatory requirements is equally important, as it ensures that AI practices meet legal standards and keep pace with evolving laws. Ensuring compliance with an organization’s legal obligations often requires a thorough understanding of the entity’s business model, where and how the business operates, and how the organization uses AI technology, so that the applicability of relevant laws can be accurately determined and mapped.
- Risk management and accountability - Risk management and accountability are critical for identifying potential issues early and assigning responsibility so that any problems can be addressed quickly and transparently. Associated risks can be assessed through regular audits and risk assessments. A key guidepost for such assessments is the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0).
- Transparency and explainability - Transparency and explainability strengthen AI governance by making AI decisions understandable to stakeholders, which in turn builds trust in the organization’s development and use of the technology and allows for meaningful, more comprehensive oversight.
- Continuous monitoring and improvement - Continuous monitoring and improvement ensure that AI systems, and the policies governing their development and deployment, remain effective and ethical over time, adapting to new challenges and incorporating lessons learned. Regular reviews of an organization’s engagement with AI, whether quarterly or annual, are beneficial from this standpoint.
Together, these elements create a robust framework that not only safeguards against organizational harm but also promotes the responsible advancement of AI technology within the business, allowing the organization to reap the benefits of AI while limiting its drawbacks.
AI governance for HIPAA covered entities
The integration of AI in healthcare facilities is transforming how patient data is managed, diagnoses are made, and care is delivered. AI tools are now used for everything from analyzing medical images and predicting patient risks to automating clinical documentation and supporting virtual health assistants. Whenever these systems handle protected health information (PHI), HIPAA regulations apply, requiring strict safeguards to ensure privacy and security. HIPAA’s rules mandate that AI applications access only the minimum necessary data, use robust encryption, and implement strong access controls to prevent unauthorized disclosures. Covered entities must also ensure that AI models are regularly audited for bias and errors and that patients are informed about how their data is used, maintaining transparency and upholding patient rights.
Among other things, key ways for covered entities to effectively govern the use of AI in a healthcare context include the following:
- Comprehensive policy development - Covered entities should develop clear policies that address the use and oversight of AI tools, including documentation of acceptable uses, data protection strategies, and requirements for de-identification.
- Risk assessments - Regular risk assessments are vital to identify vulnerabilities created by AI, such as the potential re-identification of de-identified data or algorithmic bias. These AI risk assessments should be carried out on a timeline consistent with existing HIPAA risk assessment schedules.
- Stringent business associate agreements - Entities should also require that any third-party AI vendors comply with HIPAA standards, as they would any other vendor, formalizing these expectations through comprehensive business associate agreements (BAAs).
- Ongoing employee training - Continuous staff training on AI risks and compliance, along with detailed audit trails and incident response plans, strengthens a covered entity’s AI governance standing and helps maintain compliance as technology and regulations evolve.
By prioritizing these measures, covered entities can capitalize on AI’s benefits while protecting patient privacy and maintaining regulatory trust in line with their obligations under HIPAA.
AI governance for financial institutions
AI is increasingly deployed in the context of daily operations of financial institutions, powering everything from fraud detection and credit scoring to personalized customer service and algorithmic trading. As financial organizations increase their reliance on AI systems, they must navigate the landscape of regulatory expectations and industry best practices to ensure that their use of AI remains ethical, secure, and reliable. Regulatory organizations like the Federal Reserve, the Office of the Comptroller of the Currency (OCC), and the Consumer Financial Protection Bureau (CFPB) have provided guidance emphasizing the importance of transparency, risk management, and accountability when deploying AI in financial services. These agencies expect institutions to have strong governance structures in place, including thorough documentation of AI models, clear explanations of automated decisions, and advanced controls to prevent discriminatory outcomes or unintended risks.
Among the primary ways in which financial institutions can effectively govern their use of AI are the following:
- Comprehensive internal policies - Setting clear internal policies for the development, testing, and deployment of AI systems, as well as conducting regular audits to identify and address potential biases or vulnerabilities, is critical for financial institutions relying on AI systems.
- Focus on explainability and transparency - Financial institutions should prioritize the explainability of AI systems and their outcomes, ensuring that both regulators and customers can understand how important decisions, such as loan approvals and fraud alerts, are made.
- Contractual conditions and agreements for vendors - Collaboration with third-party AI vendors should be managed carefully, with contractual agreements in place that clearly define data security requirements and compliance obligations, ensuring that third-party vendors stay within the expected guideposts.
- Ongoing employee training - As with HIPAA covered entities, financial institutions should engage in ongoing staff training and continuous improvement, which helps such institutions adapt to evolving technologies and regulatory expectations and helps staff understand which AI practices are and are not permitted within the organization.
Best practices and considerations when implementing effective AI governance
With the importance of effective AI governance undisputed, organizations just beginning to consider their AI usage, and what governance would look like in relation to their business, should address a few considerations to kick-start the process. For organizations in the early stages of developing an AI governance strategy, the list below provides a strong starting point:
- Assess current maturity - Implementing effective AI governance begins with an in-depth assessment of your organization’s current level of maturity in managing AI technologies. This involves evaluating existing policies, identifying strengths and weaknesses, and understanding where gaps may exist in oversight or risk management.
- Define governance objectives - Once this baseline is established, it is important to clearly define the objectives of your AI governance program, aligning these objectives with your organization’s values, regulatory obligations, and strategic goals.
- Develop and document policies - The next step is to develop and formally document policies that outline standards for ethical AI use, data privacy, security, and compliance. These policies should be detailed and action-oriented, providing clear guidance for all relevant scenarios, and should be readily available to employees, especially those with a hands-on role in the development or use of the organization’s AI systems.
- Assign roles and responsibilities - Assigning specific roles and responsibilities is essential to ensure accountability and effective oversight. This means designating individuals or teams who are responsible for policy enforcement, risk assessment, and the continuous management of AI systems. Documented policies can be an effective tool here to clearly lay out the responsibilities of certain project managers and staff members and to outline when certain risk assessments should be conducted and by whom.
- Engage stakeholders - Engaging stakeholders across the organization, including leadership, technical teams, legal, and compliance staff, helps to build a cohesive team, foster transparency, and ensure that diverse perspectives are considered in governance decisions.
- Monitor and update - Finally, it is crucial to establish processes for continuously monitoring AI systems and updating governance practices as technology evolves and new risks emerge. Regular reviews and improvements will help your organization maintain the effectiveness of your AI governance framework and ensure that it remains responsive to changing needs and regulatory requirements.
When in doubt, businesses can consult with trusted external legal counsel for guidance on fulfilling their legal obligations and establishing AI governance. If you have any questions regarding your company’s AI governance strategy, reach out to McDonald Hopkins’ national Data Privacy and Cybersecurity practice group.
McDonald Hopkins' Summer Associate Thierno Diallo assisted in the crafting of this article.