Chances are you’re already aware of AI’s explosive recent developments. News outlets are saturated with articles documenting groundbreaking AI advancements, and the AI industry is experiencing monumental growth.
Technology companies are continually releasing new AI tools, and businesses are increasingly diving into the AI waters. The architecture, engineering, and construction (AEC) industry is no exception. Organizations are using AI language models like ChatGPT to draft communications and proposals. Designers and architects are using generative AI to adaptively remodel floor plans and calculate thermal efficiency. The technology once prophesied to revolutionize how people work is here.
AI is projected to become even more integral to AEC work. If your firm is looking to explore AI's capabilities, consider creating guidelines around its use. While AI itself is neither good nor bad, individuals don't always know how to use it responsibly. In the absence of an official AI policy, employees could inadvertently use AI tools to the firm's detriment.
Here are some questions to explore when creating your AI policy. This list is likely to grow as AI becomes more popular, but starting with the following will be helpful:
How will you verify AI-generated work?
AI language models have demonstrated a propensity for providing false information, so it's sound practice to vet any work produced by AI tools. You may also want to dip your toes before jumping in: start with simple tasks before gradually assigning more complex work.
When will you disclose AI use to clients?
Some AI tools perform very basic services and calculations that are of no real consequence to clients. In those cases, it may not be necessary to disclose your use. If AI figures prominently in your work, however, you will likely want to disclose this to your client.
What are the intellectual property implications?
We have yet to see the full legal ramifications of using AI to rework the intellectual property (IP) of other parties. Lawsuits currently being filed against AI content creators for misusing copyrighted work indicate that regulation is on its way, and training AI models on protected IP may yet be recognized as copyright infringement. It's wise to have your firm's legal team assess any use of AI to create original work for the firm.
What data can be shared with AI tools?
AI programs require input data to produce work, and many also record user interactions to further train and refine their capabilities. Because of this, some organizations prohibit sharing sensitive data with third-party services such as generative AI programs. Feeding sensitive project details, such as government or military documents, into an AI tool could result in costly litigation.
Creating guidelines to ensure employees use AI responsibly is a good start for firms looking to safely adopt this technology. AI regulations and best practices are likely to change, so you will need to revisit and update your company guidelines periodically to help mitigate risk.
Paying close attention to AI developments will be critical as AI use cases and regulations continue to evolve. Meeting with an insurance specialist to discuss your firm’s AI practices can help ensure that your firm has the coverage it needs.
AI tools have already proven to be exceptionally useful in the AEC space, and they’re certainly not in short supply. AI is here to stay, despite the controversy and challenges surrounding it. Firms that use these tools responsibly will see the greatest benefit from their services while facing the least potential risk.
Want to learn more?
Find Darren Black on LinkedIn.
Connect with the Risk Strategies Architects & Engineers team at aepro@risk-strategies.com
About the author
For over 20 years, Darren S. Black has served architectural, engineering, and design-build firms as a professional liability insurance broker and risk management advisor. With previous experience as a litigator and coverage counsel, he provides a unique lens for looking at AI risks.
The contents of this article are for general informational purposes only and Risk Strategies Company makes no representation or warranty of any kind, express or implied, regarding the accuracy or completeness of any information contained herein. Any recommendations contained herein are intended to provide insight based on currently available information for consideration and should be vetted against applicable legal and business needs before application to a specific client.