As companies grow increasingly reliant on artificial intelligence (AI) tools in both internal and customer- and client-facing functions, it becomes equally important for them to manage the associated risks effectively. While predictive and generative AI technologies are advancing rapidly, numerous risks remain, and failing to address these risks effectively can leave companies exposed to substantial liability.

With this in mind, as we head into 2026, companies that use (or that are planning to use) AI technologies must ensure that their boards are prepared. Here are some key considerations:

Understanding the Risks Associated with Using AI Tools

In 2023, the National Institute of Standards and Technology (NIST) published an Artificial Intelligence Risk Management Framework (the “Framework”), which is intended to guide company executives and directors in their decision-making regarding AI-related risks. In its Framework, NIST identifies three broad categories of risks associated with companies’ use of AI tools:

  • Harm to People – This includes infringement of civil liberties, physical and psychological harm, economic loss, and discrimination, among other risks.
  • Harm to the Organization – This includes the risk of data security breaches, economic loss, and reputational harm resulting from undue reliance on AI tools.
  • Harm to Ecosystems – This includes harm to financial, business, and environmental ecosystems due to misplaced reliance on artificial intelligence.

All of these are risks that companies and their boards must consider when weighing the potential benefits and drawbacks of adopting predictive or generative AI tools. While many AI developers heavily promote their platforms’ capabilities, it is imperative that companies and their boards make informed decisions based on all pertinent factors. If the risks associated with using an AI tool for a particular business purpose cannot be managed effectively, if the risks of using a particular AI tool outweigh its benefits, or if the risks associated with a particular platform or use cannot be clearly identified, then adoption may be premature.

Understanding the Challenges Associated with Managing AI-Related Risk

Along with understanding the risks associated with using AI tools generally, companies must also ensure that their board members understand the challenges associated with managing AI-related risk. NIST’s Framework outlines some of these challenges as well, including:

  • Challenges related to risk measurement and availability of reliable metrics
  • Risks related to relying on third-party software (including access to the information needed to conduct effective risk assessments)
  • The ability (or lack thereof) to track emergent risks
  • Anticipating and assessing AI-related risks in real-world settings
  • Determining an acceptable level of risk tolerance related to the adoption of AI tools

To be clear, these are just examples. NIST’s Framework identifies various other challenges associated with managing AI-related risk, and even NIST’s list is not exhaustive. To oversee a company’s use of AI effectively, its board members must have a clear and comprehensive understanding of all pertinent risks and the ability to assess those risks in a meaningful and measurable way. This applies not only at the adoption stage, but also on an ongoing basis.

Implementing AI-Specific Risk Management Policies, Procedures and Protocols

With these considerations in mind, to prepare their boards for effective AI oversight in 2026, companies must implement AI-specific risk management policies, procedures and protocols. Crucially, these policies, procedures and protocols must not only be specific to AI (and to the specific AI tools the company intends to adopt), but must also be tailored to the company’s particular operations and intended uses of AI.

Accordingly, some key steps for companies to take before pursuing new AI initiatives in 2026 include:

1. Conducting a Comprehensive AI-Specific Risk Assessment

Prior to adopting new AI technologies, companies must conduct a comprehensive risk assessment that is specific to the technology in question and its intended use within the organization. This assessment begins with identifying all potential risks, including (but not limited to) those listed above. Once a company’s board members have a clear understanding of the risks they need to consider, they can decide which of those risks they are (or are not) willing to accept.

2. Determining How They Can Measure and Manage Their AI-Related Risks

While some AI-related risks may be nonstarters, others may require a more detailed level of assessment. If a company’s board is willing to consider adopting a particular AI tool in principle, then the board must be able to further assess the specific risks that the tool presents. As discussed above, this requires the ability to measure the pertinent risks effectively, which may present challenges of its own.
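To make this concrete, boards need a repeatable way to score the risks they are weighing against their stated tolerance. The following is a purely hypothetical sketch: the `Risk` class, the 1–5 likelihood and impact scales, and the tolerance threshold are illustrative assumptions for this article, not part of NIST’s Framework or any prescribed methodology.

```python
# Hypothetical illustration: a simple risk register that scores each
# identified AI risk by likelihood and impact, then flags any risk
# whose score exceeds the board's stated tolerance threshold.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    category: str      # e.g., "Harm to People" per NIST's three categories
    likelihood: int    # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int        # 1 (negligible) to 5 (severe) -- assumed scale

    @property
    def score(self) -> int:
        # A basic likelihood-times-impact score; real methodologies vary.
        return self.likelihood * self.impact


def flag_risks(register: list[Risk], tolerance: int) -> list[Risk]:
    """Return the risks whose score exceeds the board's tolerance."""
    return [r for r in register if r.score > tolerance]


# Illustrative entries only -- a real register reflects the company's
# own assessment of the specific tool and use under consideration.
register = [
    Risk("Discriminatory output in hiring tool", "Harm to People", 3, 5),
    Risk("Training-data security breach", "Harm to the Organization", 2, 4),
    Risk("Model drift in credit scoring", "Harm to Ecosystems", 3, 3),
]

for risk in flag_risks(register, tolerance=10):
    print(f"Escalate to board: {risk.name} (score {risk.score})")
```

In practice, the scales, categories, and threshold would come out of the company’s own risk assessment; the point of scoring is that it makes the board’s risk-tolerance decision explicit, documented, and auditable over time.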

3. Developing AI-Specific Risk Management Policies, Procedures and Protocols

Assuming the board is prepared to approve the use of a predictive or generative AI platform, then the next step is to develop AI-specific risk management policies, procedures and protocols. Again, these policies, procedures and protocols must be specific not just to AI generally, but to the specific tool and specific use under consideration.

4. Conducting Internal Training and Emphasizing the Importance of AI-Related Risk Management

After developing the necessary policies, procedures and protocols, the next step is to conduct internal training programs that emphasize the importance of AI-related risk management. This includes, but is not limited to, training programs designed specifically for members of the company’s board.

5. Monitoring for AI-Related Risks on an Ongoing Basis

While making informed decisions about AI-related risk management is critical, it is also just the first step in the process. After adopting AI technologies, companies and their boards must be prepared to leverage their newly adopted policies, procedures and protocols to monitor for (and respond to) AI-related risks on an ongoing basis.

Schedule a Call with a Fort Lauderdale Corporate Lawyer at Shaw Lewenz

Do you need to know more about preparing your company’s board for AI oversight in 2026? If so, we invite you to get in touch. To schedule a call with a lawyer at Shaw Lewenz, please call 954-361-3633 or contact us online today.
