The White House’s October 30, 2023 Executive Order (EO) – titled, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”1 – made the Biden Administration’s goal clear: to “establish new standards and protections for artificial intelligence (AI) safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.”
The EO seeks to promote the safe and responsible development and deployment of AI, balancing innovation against the risks posed by potential detrimental effects on, among others, the labor market, fair competition, and national security.
While the EO is necessarily focused on the actions of federal government departments and agencies under the purview of the executive branch, it’s likely this will be a precursor to a much broader regulatory environment that will ultimately have significant implications for the private sector.
Specifically, there are two aspects of the EO anticipated to have a clear impact on the private sector, as follows:
The Department of Commerce is tasked with developing best practices for detecting and authenticating AI-generated content to protect the public from AI-based fraud such as "deep fakes" that fraudulently use voice and image content. Given the EO's broad scope and its reliance on federal funding of contractors and organizations required to meet agency requirements, companies in the energy, defense, healthcare, technology, software, financial services, and entertainment industries, among others, are expected to be affected.
The EO calls on Congress to pass data privacy legislation, specifically asking that such legislation account for AI-generated technology and risk and govern how federal agencies collect, store, and use personal data.
The adoption of AI, and adherence to any subsequent laws and regulations governing its use, oversight, and implementation, will have broad implications for several insurance coverages, such as Directors & Officers (D&O), Employment Practices Liability (EPL), Errors and Omissions (E&O), and Cyber, as companies deploy AI to run their businesses more efficiently, manufacture and deliver their products, and provide services. Private organizations should therefore take note of this EO, as it will likely provide guidance to insurers underwriting AI-related risks, to organizations that rely on AI to conduct business or perform services, and to federal contractors hired to provide sensitive services.
Though more work needs to be done by federal agencies, Congress, the National Institute of Standards and Technology, and the private sector, the EO's effects on the insurance industry, particularly cyber insurance, seem clear. For instance, the EO's fraud reduction aim noted above likely means increased underwriting scrutiny around the use of AI-generated content and whether organizations have sufficiently established best practices in line with the Department of Commerce's guidelines and regulatory requirements. Any perceived failure to do so could negatively impact coverages, such as Cyber Crime. Furthermore, cyber insurers can be expected to use any data privacy laws passed by Congress as a guide in making underwriting decisions about organizations that use AI to collect, aggregate, and/or sell sensitive data.
In addition, for Employment Practices Liability (EPL), E&O, and other management liability coverages, underwriters may start to ask more questions about the implementation and use of AI, which could require completion of revised and/or supplemental applications to help assess a company's exposure and its risk management policies and procedures. Depending on the responses, insurers may seek to narrow coverage terms or adjust pricing.
In light of the order's priorities, a host of potential AI-related risks and claims can foreseeably arise that would implicate management and cyber liability coverage issues.
With the White House’s Executive Order and continually evolving risks posed by AI technology, clients should be aware of this emerging issue and expect increased legal, regulatory, and insurance market scrutiny. Carriers are still determining what, if any, impact AI will have on the insurance market and their policies. Although this EO does not answer their questions, it does begin to provide governing and best practice standards that insurance markets, particularly Cyber carriers, may deem useful. It’s important to review your insurance coverages with an experienced broker to ensure policy terms adequately address these exposures.
The contents of this article are for general informational purposes only and Risk Strategies Company makes no representation or warranty of any kind, express or implied, regarding the accuracy or completeness of any information contained herein. Any recommendations contained herein are intended to provide insight based on currently available information for consideration and should be vetted against applicable legal and business needs before application to a specific client.