
How To Simplify Crafting Responsible AI Policies

A GC recently shared an interesting perspective on the generative AI craze and the hesitancy it seems to have instilled in lawyers. She said, “Our role is not so different from before. We still manage people and their expectations, including our legal teams, company vendors, employees, and business partners. Only now, we have more advanced tools at our disposal.”

For example, cloud platforms allow GCs to centralize legal vendor management. With greater control come the benefits of discounts, economies of scale, and improved service levels. Or consider how the skillful management of lawyers has always required meaningful conversations about career aspirations. Only now, the availability of modern AI systems significantly shapes career expectations.

AI can enhance business efficiencies, lower the cost of services, and improve work life in many ways. However, AI can also amplify biases, raise transparency concerns, and pose risks to privacy. Yet many legal teams still need to take charge of managing AI risks and maximizing its benefits.

  • Only 13% of 97 public companies surveyed have an AI use framework, AI policy or policies, or an AI code of conduct, according to the Society for Corporate Governance’s 2023 Board Practices Report.
  • Just 37% of the 399 company leaders surveyed by employment law firm Littler Mendelson provide policies and guidance on proper AI usage to employees.
  • Only 24% of companies provide policies or guidance on AI usage at work, and just 17% of employees say they’ve received training on how to use AI in their day-to-day work, according to the Asana “State of AI at Work Report,” a survey of over 4,500 United States and United Kingdom knowledge workers.

Don’t let uncertainty stop you from crafting an AI policy, especially when you can rely on much of what you already know to simplify the process. The following four tips can help you draft a flexible policy that ensures AI’s ethical and responsible use. 

Align Your AI Policy With Your Existing Policies

First, don’t reinvent the wheel. AI policies are not the first or only technology-related guidance legal teams create. Data protection and privacy, information security, and social media policies are just a few of the policies and programs lawyers help create to ensure the responsible, ethical, and legal use of technology. 

Integrate AI policies with existing policies and governance frameworks. This alignment provides a comprehensive and cohesive approach to AI that promotes consistency and prevents conflicts.

Learn Data Literacy Skills

Data is the lifeblood of AI. You’ll no more understand AI without data literacy skills than you would understand courtroom or contract procedures without a law school education. And both require constant learning. 

To supervise AI systems, you must first understand data collection, management, and governance. Learn about the types of data AI uses, how AI uses data, and related standards such as privacy regulations and the rights of individuals to control their data. Regulatory frameworks that govern data can help inform your AI policy. 

Communicate Clearly With Visuals

Present AI policies with clear and concise language that stakeholders across the organization can easily understand, regardless of their technical or legal expertise. Avoid legal and technical jargon and complex terminology. 

Use visuals to highlight key points and reinforce your message. Incorporate images, graphs, and charts. Use arrows, fonts, colors, and emojis to demonstrate relationships and connections and enhance clarity. Visuals help readers engage and retain information. 

Regularly Update Company AI Policies

It’s easier to glue a boat to water than to accurately anticipate every impact of AI’s use. That’s why AI policies are living documents. Create AI policies now that fulfill current needs, such as guiding the use of generative AI tools, using AI-driven software to centrally manage vendors, and helping lawyers learn new skills.  

When changes come, update your AI policy to keep pace with emerging ethical considerations, technological advancements, and legal developments related to data privacy, bias mitigation, intellectual property rights, liability, and other areas. 

The future is more dynamic than ever, and your AI policy should be, too. 

Are you crafting an AI policy for your organization? What would you add to this list?


Olga V. Mack is the VP at LexisNexis and CEO of Parley Pro, a next-generation contract management company that has pioneered online negotiation technology. Olga embraces legal innovation and has dedicated her career to improving and shaping the future of law. She is convinced that the legal profession will emerge even stronger, more resilient, and more inclusive than before by embracing technology. Olga is also an award-winning general counsel, operations professional, startup advisor, public speaker, adjunct professor, and entrepreneur. She founded the Women Serve on Boards movement that advocates for women to participate on corporate boards of Fortune 500 companies. She authored Get on Board: Earning Your Ticket to a Corporate Board Seat, Fundamentals of Smart Contract Security, and Blockchain Value: Transforming Business Models, Society, and Communities. She is working on Visual IQ for Lawyers, her next book (ABA 2023). You can follow Olga on Twitter @olgavmack.
