The emergence and widespread reach of artificial intelligence (AI) are no longer concerns for AI developers alone. AI's risks and exposures aren't limited to errors and omissions (E&O) and cyber claims in the tech industry. Directors and officers in all sectors may face AI-related litigation, a risk underscored by the recent filing of the first AI-related securities class action lawsuit and by President Biden's AI Executive Order issued in late 2023.
As AI becomes more integrated into critical infrastructure and decision-making processes, private equity firms and their portfolio companies face unprecedented challenges. AI-driven cybercrime continues to evolve, and regulatory scrutiny surrounding AI practices will only increase in the years to come.
Though the insurance market has yet to devise a cohesive approach to address these challenges, following industry best practices and fully understanding the risks can help private equity professionals navigate the AI revolution.
AI has transformed the cybersecurity landscape, amplifying well-known cyber threats such as ransomware and business email compromise. While IT departments strive to enhance their cybersecurity defenses using AI, cybercriminals are deploying malicious programs capable of adapting and evolving in real time to evade traditional defense mechanisms. This evolution has lowered the barrier for less skilled hackers to join criminal campaigns, likely increasing the severity and scope of breaches.
One particularly concerning development is the rise of AI-enabled deepfake technology in phishing attacks. A deepfake is an image, video, or audio recording, created with deep learning or machine learning, in which a person's face, body, or voice has been digitally altered so that they appear to be someone else. Deepfakes can be used to spread misinformation or gain access to valuable information and data.
Measures such as employee awareness training, out-of-band verification of payment and data requests, and strict callback procedures for funds transfers can help protect companies against deepfakes.
In response to growing concern over a lack of AI governance and its potential social implications, the White House issued an Executive Order in October 2023 to establish new standards for AI safety and security.
This order marks the federal government's first attempt to regulate the development and use of AI by establishing the "White House Artificial Intelligence Council." As new AI protocols are implemented, private equity firms and their portfolio companies may face increased exposure to claims alleging discrimination or bias arising from AI-driven employment decisions, as well as mismanagement or inadequate oversight of AI-related risks.
These claims would directly impact directors and officers (D&O) and employment practices liability (EPL) insurance. Companies need to ensure adequate coverage and protection, especially if they use AI.
To date, AI-related lawsuits have mostly alleged copyright and privacy claims. Regulatory compliance issues related to privacy laws have also arisen. Organizations leveraging AI in their operations need to implement policies that ensure compliance with relevant laws and secure the necessary permissions for content usage.
In February 2024, the first AI-related securities class action lawsuit was filed against Innodata, an AI-enabled software platform company. The complaint alleged the defendants misrepresented both the extent to which actual AI was being used in the company's products and services and the monetary investment the company made in research and development.
This case highlights the emerging risk of "AI washing" — where companies may overstate their AI capabilities to attract investors. We expect more class action lawsuits to follow, in addition to AI-related SEC enforcement actions.
As AI continues to capture investor interest, private equity firms must be vigilant in their due diligence and portfolio company oversight to mitigate these risks.
The insurance market is still adjusting to the rise of AI-related risks. As of now, underwriting requirements may vary from insurer to insurer. Until comprehensive claims data is available, you can expect the following:
While there's no universal definition of "artificial intelligence" in insurance policies, existing coverage under cyber/privacy or D&O may apply to AI-related claims. However, it's essential to review policy language carefully for limitations, exclusions, or triggers.
Most cyber policies include a "media liability" insuring agreement, which generally covers alleged infringement of copyright or trademark, invasion of privacy, libel, slander, plagiarism, or negligence by the organization regarding its online content. However, be aware of intellectual property or computer code infringement exclusions that could limit coverage for copyright or trademark infringement.
D&O liability policies generally provide coverage for actual or alleged "wrongful acts" in managing the operations of the company. However, broad intellectual property exclusions may apply, potentially limiting coverage for AI-related claims.
Standalone IP policies can fill gaps in cyber and D&O coverage, offering both defense and enforcement protection for patents, trademarks, copyrights, and trade secrets.
Not all policies are created equal. Policy wording needs to be carefully negotiated by a knowledgeable broker to ensure the broadest coverage is included, taking the latest AI litigation decisions into account.
To navigate the complex AI landscape effectively, private equity firms and portfolio companies should implement several core strategies: establish robust AI governance policies, stay informed about regulatory and litigation developments, and work closely with experienced insurance brokers to review coverage.
As AI continues to reshape the business landscape, private equity firms — and businesses across sectors — must stay ahead of the curve in understanding and mitigating associated risks. By implementing robust governance policies, staying informed about regulatory developments, and working closely with experienced insurance brokers, firms can navigate the AI revolution while protecting their investments and reputation.
Connect with the Risk Strategies Private Equity team at privateequity@risk-strategies.com.