
Course Details

This CLE webinar will examine the growing use of generative and other forms of AI in various business settings. The panel will address the need for organizations to develop and implement adequate policies and procedures to identify and manage the legal risks of this evolving technology.

Description

Artificial intelligence is changing all aspects of life, and attorneys must understand AI to help their clients navigate sweeping, fast-paced changes in technology. The latest OpenAI models and other generative AI tools, such as Claude, Google Gemini, and Microsoft Copilot, are now in everyday use. These tools can generate text, images, and video in seemingly miraculous fashion, and most organizations are testing and integrating them into their operations. Other forms of AI make decisions, predict outcomes, optimize business and technical processes, identify people and objects, and operate robots and other machines in the real world. However, a myriad of risks and novel issues associated with the use of AI pose challenges to the organizations that adopt it.

While generative AI is opening new doors for organizations with creativity-boosting and time-saving tools, the technology has limitations that could cause significant problems if not managed appropriately. These limitations typically center on the accuracy of its output and the data used to train its systems, and simple errors can put organizations at risk. But the risks go far beyond the reliability of generative AI: other forms of AI are vulnerable to bias and cyberattacks and raise social and ethical challenges.

Given the limitations of AI and the pace of its adoption, organizations need to act now and proactively implement policies that ensure sound decisions about its use. Having rigorous policies in place allows businesses to embrace AI technology in a deliberate and rational way while mitigating legal risks.

Some key areas an AI policy should cover are reliability; compliance; data confidentiality, privacy, and security; bias and discrimination; transparency and explainability; intellectual property infringement; unintended consequences; employee training; management, accountability, and responsibility; economic and social disruption; ethical considerations; incident reporting and handling; and continuous monitoring and improvement.

Listen as our panel of authoritative experts provides advice on creating policies governing the use of AI and generative AI that make sense for an organization. The panel will offer tips for drafting one or more generative AI policies by setting expectations, examining potential risks, and balancing legal and ethical considerations in an effective risk management program.

Outline

  1. Defining ChatGPT and generative AI
  2. Ways organizations are using ChatGPT and generative AI
  3. Benefits and limitations of ChatGPT and AI
  4. Risks associated with ChatGPT and AI
  5. Items to address in creating a ChatGPT and AI corporate policy
    • Where should the policy be documented?
    • Vendor due diligence
    • Compliance with applicable law
    • Ensuring reliability, accuracy, and effectiveness
    • Data confidentiality, privacy, and security
    • Bias and discrimination
    • Transparency, explainability, and disclosures
    • Intellectual property considerations
    • Employee training
    • Management, accountability, and responsibility
    • Incident response and reporting
    • Ethical considerations
    • Continuous monitoring and improvement

Benefits

The panel will address these and other key issues:

  • What are different forms of AI, including generative AI?
  • How are businesses currently using generative and other forms of AI?
  • What are the benefits, limitations, and risks of AI?
  • What are the key issues that need to be addressed in setting policies for the use of generative AI in an organization?