2 February 2024
By Mike Swift
With the US Federal Trade Commission ordering companies to delete algorithms and data because of privacy violations, companies planning to launch AI products need to carefully weigh regulatory risks against the benefits of those products, experts agreed at a key legal gathering in New York this week.
Generative artificial intelligence, and its potential to transform the legal industry along with the scores of other markets where it is being introduced, was the central theme of the LegalWeek conference.* But even as companies race to introduce AI products that could save money by automating tasks now done by people, they need to weigh the risks of FTC enforcement and of new comprehensive privacy laws passed by US states, given the data those AI tools require, legal experts said.
In another order today underscoring that risk, the FTC required Blackbaud to delete personal data it doesn’t need to retain, settling allegations that the company had “shoddy” security and misled its users about the seriousness of a 2020 data breach.
New state privacy laws can create a business as well as a regulatory risk for companies investing in new AI tools, one expert said.
“Organizations need to be careful about how they’re using personally identifiable information and giving consumers the opportunity to opt out of those things” due to those new state laws, said Garylene Javier of the firm Crowell & Moring. “That could really have such a huge business impact, if all of a sudden you’re having multiple consumers coming in saying, ‘Please, I’m opting out of the use of any kind of automated decision-making.’ ”
Led by California, more than a dozen US states have now passed comprehensive privacy laws, with others likely to follow in the absence of a national US law. The California Privacy Protection Agency is developing new rules for automated decision-making.
Given FTC enforcement actions such as the agency’s 2022 order requiring a Weight Watchers subsidiary to delete personal data and destroy the algorithms built on it, Javier said companies could face millions of dollars in lost investment if the FTC were to order a company that fed personally identifiable information into its AI training data to destroy “anything that is the fruit of the poisonous tree.”
“I do think there’s a danger in moving too fast here” on AI, agreed Ignatius Grande, a director at the Berkeley Research Group who specializes in electronic discovery and data privacy issues. He cautioned companies against wanting “to just jump in” on AI without carefully assessing privacy and security issues because that is “likely going to cause great ethical issues or other issues.”
Across the annual gathering of the legal industry, the letters “AI” were central to almost every discussion, whether it was about the pain or the gain of what most agree will be a transformative technology.
A group of current and former federal judges, in a discussion at the conference, expressed skepticism that AI would soon become central to court proceedings.
“I think it’s probably going to be a slow start because of the conservative nature of attorneys,” said US Magistrate Judge Kimberly Priest Johnson, who sits in the Eastern District of Texas.
US Magistrate Judge Sarah Cave, who sits in the Southern District of New York, said generative AI has invented cases that didn’t exist, and from now on, “everybody is on notice, every lawyer is on notice, that if you use generative AI proceed very carefully and check whatever it is you are citing to.”
One retired judge said she believes, however, that using generative AI in a court case isn’t qualitatively different from any other work product submitted to a court. “It’s a matter of whatever you submit to a court, you stand by,” said former US District Judge Shira Scheindlin. “It’s really that simple.”
In today’s order against Blackbaud, which provides software services to nonprofits, the FTC alleges the company deceived users by failing to implement the promised “appropriate physical, electronic and procedural safeguards to protect your personal information,” a failure that left people’s Social Security and bank account numbers accessible to a hacker in 2020. The company also failed to delete data it no longer needed, the complaint said.
“This action illustrates how indefinite retention of consumer data, which can lure hackers and magnify the harms stemming from a breach, is independently a prohibited unfair practice under the FTC Act,” Commissioners Lina Khan, Alvaro Bedoya and Rebecca Kelly Slaughter wrote in a statement.
With reporting by Madeline Hughes in Washington, DC.
* LegalWeek, ALM/Law.com, New York City, Jan. 29-Feb. 1, 2024.