Online Advertising

Considerations for Adtech Stemming From Oracle’s $115‑Million Settlement


Oracle America, Inc. (Oracle) agreed to settle a class action lawsuit alleging it engaged in “deliberate and purposeful surveillance of the general population via their digital and online existence . . . and created a network that tracks in real time and records indefinitely the personal information of hundreds of millions of people” without their knowledge or consent. As part of the July 2024 Settlement Agreement and Release (Agreement), Oracle has agreed not to capture certain website data and to implement auditing to ensure compliance with privacy duties. This article discusses the litigation and settlement terms, with commentary from Leslie A. Shanklin, a partner at Proskauer, on the implications of this lawsuit for the adtech industry.

See our four-part series on tracking technologies: “Privacy Regulation, Enforcement and Risk” (Jan. 17, 2024), “A Deep Dive on What They Are and How They Work” (Jan. 31, 2024), “A 360‑Degree Governance Plan” (Feb. 21, 2024), and “Compliance Challenges and Solutions” (Apr. 17, 2024).

Plaintiffs Take a Kitchen-Sink Approach

In August 2022, Michael Katz-Lacabe and Dr. Jennifer Golbeck (together, Plaintiffs), individually and as representatives of a purported class, filed a class action complaint against Oracle in the U.S. District Court for the Northern District of California. They alleged multiple privacy-based claims under California law and the federal Wiretap Act, and sought both damages and declaratory and injunctive relief.

After Oracle’s partly successful motion to dismiss, Plaintiffs filed a first amended complaint, which included privacy claims under Florida law. After a second motion to dismiss, Plaintiffs filed a second amended complaint (SAC), on which the settlement is based. Following Oracle’s third motion to dismiss, the surviving claims under the SAC were:

  • invasion of privacy under the California constitution, on behalf of a proposed California class;
  • intrusion upon seclusion under California common law, on behalf of the proposed California class;
  • violation of the California Invasion of Privacy Act (CIPA), on behalf of a proposed sub‑class;
  • violation of the Florida Security of Communications Act, on behalf of a proposed Florida class;
  • unjust enrichment under California common law, on behalf of the proposed California class;
  • unjust enrichment under Florida law, on behalf of the proposed Florida class; and
  • declaratory judgment that Oracle wrongfully accessed, collected, stored, disclosed, sold and otherwise improperly used Plaintiffs’ private data and injunctive relief, on behalf of all proposed classes.

“The gravamen of this controversy lies in Oracle’s collection, tracking, and analysis of Plaintiffs’ and Class members’ personal information and behavior, and building dossiers based on that information and providing that information to third parties. Plaintiffs and Class members never consented to, or were even aware of, Oracle’s conduct described herein,” alleges the SAC. “Oracle’s misconduct has put Plaintiffs’ and Class members’ privacy and autonomy at risk, and violated their dignitary rights, privacy, and economic well-being.”

“The complaint took a ‘kitchen-sink’ approach” to the alleged causes of action, Shanklin told the Cybersecurity Law Report. “These are all common causes of action in privacy class actions brought in California, particularly CIPA, which has been a favorite of the plaintiffs’ bar for the past few years.”

Notably absent were claims under the California Consumer Privacy Act (CCPA), which lacks a private right of action except for data breach-related claims, noted Shanklin. Even if a private right of action were available under the CCPA for privacy claims, it is uncertain whether Plaintiffs would have attempted to assert one, as the SAC and their public statements about the lawsuit “emphasize Oracle’s alleged failure to gather user opt‑in consent for the data tracking at issue,” she explained. Unlike the GDPR’s opt‑in model, the CCPA and other comprehensive state privacy laws are generally opt-out laws, except for children’s data and other types of sensitive data. “Nevertheless, plaintiffs are increasingly using older state laws, such as CIPA, to challenge online data collection where opt‑in consent has not been obtained,” she added.

See our two-part series on website-tracking lawsuits: “A Guide to New Video Privacy Decisions Starring PBS and People.com” (Mar. 29, 2023), and “Takeaways From New Dismissals of Wiretap Claims” (Apr. 5, 2023).

Oracle’s Tracking Technologies

Oracle used tracking mechanisms and other technologies to associate browsing histories with other data and compile profiles about individual internet users, according to the SAC. It collected:

  • personal information, including “concrete identifiers” such as names, addresses, email addresses and telephone numbers; and
  • behavioral data, including websites visited, digital and offline purchases, and payment methods.

To compile such information, Oracle used various internet technologies, including cookies, JavaScript code, tracking pixels, device identification, cross-device tracking and “AddThis” browser plugins, which Plaintiffs describe as “a highly privacy-invasive data collection mechanism.” It also acquired data from third-party data brokers. Oracle then allegedly “process[ed], analyz[ed] and monetiz[ed]” the compiled data through various products, including its “BlueKai” data management platform, which included the Oracle Data Marketplace (a commercial data exchange) and the Oracle ID Graph.
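The tracking pixel mechanism described above can be illustrated with a short sketch. A pixel is simply a tiny image request; the tracking value lies in the cookie ID and URL metadata that accompany it. The identifiers and URLs below are hypothetical, and this is only a simplified illustration of the general technique, not Oracle's implementation:

```python
from urllib.parse import urlparse, parse_qs

def log_pixel_hit(cookie_id: str, page_url: str, referrer: str) -> dict:
    """Build the kind of record a tracking-pixel endpoint could log.

    The 1x1 image itself carries no data; the request around it does:
    the cookie ties hits to one browser, and the page and referrer URLs
    reveal browsing behavior.
    """
    page = urlparse(page_url)
    return {
        "visitor": cookie_id,          # ties many hits to one browser
        "site": page.netloc,           # which site was visited
        "path": page.path,             # which page on that site
        # Referrer query strings can embed user-entered data (e.g., search terms).
        "referrer_query": parse_qs(urlparse(referrer).query),
    }

# Hypothetical example of a single pixel hit:
record = log_pixel_hit(
    "bk-7f3a",  # hypothetical BlueKai-style cookie ID
    "https://shop.example/checkout",
    "https://search.example/results?q=running+shoes",
)
```

Aggregated across many sites embedding the same pixel, records like this are what allow a browsing history to be assembled for a single cookie ID.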

Data Marketplace

According to the SAC, Oracle’s Data Marketplace is a market for personal data collected by:

  • Oracle itself, using its BlueKai tracking pixels;
  • other companies, from their own users, which those companies sold directly to Oracle clients; and
  • third-party data brokers, which sold the data to Oracle clients.

ID Graph

Oracle ID Graph is an “identity resolution” service that permitted the compiling of an internet user’s various disaggregated identifiers into a single customer profile. Without the knowledge or consent of the user of a website employing Oracle’s tracking technologies, Oracle could collect and store the user’s “behavioral activity and personal information, including, but not limited to, home location, age, income, education, family status, hobbies, weight and what the user bought at a brick-and-mortar business yesterday afternoon,” the SAC alleges.

“Data tracking across user devices is ubiquitous, as is the effort companies make to get a full picture of a consumer’s online behavior by stitching together online interactions through an individual’s various phone, tablet and laptop devices,” according to Shanklin. “Compiling ‘profiles’ on individuals and their devices based on an individual’s online activity has been a cornerstone of targeted advertising practices for decades. Those user profiles are what allow advertisers to deliver advertising in an increasingly hyper-personalized way.” What differentiates Oracle’s practices “is the sheer scope and scale of data to which Oracle had access,” she added.
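The “identity resolution” the SAC describes, stitching disaggregated identifiers into a single profile, is at its core a graph-connectivity problem: every time two identifiers are observed together, they are linked, and linked clusters form one profile. A minimal union-find sketch of that general idea (all identifiers hypothetical; not Oracle's implementation):

```python
class IdentityGraph:
    """Minimal union-find sketch of identity resolution: identifiers
    observed together (cookie, email, device ID) are merged into one
    cluster representing a single person."""

    def __init__(self) -> None:
        self.parent: dict[str, str] = {}

    def _find(self, x: str) -> str:
        """Return the root identifier of x's cluster."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a: str, b: str) -> None:
        """Record that identifiers a and b were observed together."""
        self.parent[self._find(a)] = self._find(b)

    def same_person(self, a: str, b: str) -> bool:
        return self._find(a) == self._find(b)

g = IdentityGraph()
g.link("cookie:abc123", "email:jane@example.com")    # e.g., web form submission
g.link("email:jane@example.com", "device:phone-77")  # e.g., app login
```

After the two links above, the cookie and the phone resolve to the same cluster even though they were never observed together directly, which is what lets behavioral data from one device enrich a profile keyed to another.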

Key Settlement Agreement Terms and Oracle’s Significant Additional Steps

On May 8, 2024, following additional motion practice and a round of mediation, the parties executed a binding term sheet outlining a settlement that includes both monetary and nonmonetary relief. On July 8, 2024, they executed the Agreement. On July 18, Plaintiffs filed an unopposed motion (Motion) for an order granting preliminary approval of the settlement, and:

  • appointing Plaintiffs as class representatives;
  • appointing their counsel as class counsel;
  • approving the proposed means of notifying the class;
  • appointing a settlement administrator; and
  • setting a fairness hearing in connection with final approval of the Agreement.

A California federal judge preliminarily approved the settlement on August 8, 2024.

Settlement Class

The Agreement defines the proposed “Settlement Class” as:

[A]ll natural persons residing in the United States whose personal information, or data derived from their personal information, was acquired, captured, or otherwise collected by Oracle Advertising technologies or made available for use or sale by or through ID Graph, Data Marketplace, or any other Oracle Advertising product or service from August 19, 2018, to the date of final judgment in the Action.

The Settlement Class excludes Oracle and its officers, directors, employees and affiliates. Plaintiffs’ counsel estimates that the Settlement Class could include approximately 220 million individuals. If 15,000 or more members of the Settlement Class validly exclude themselves from the class, Oracle may rescind the Agreement.

The Agreement recites that Oracle denies all of Plaintiffs’ allegations in the action and denies that it did anything unlawful or improper. The Agreement provides that it is not an admission of guilt or wrongdoing by Oracle.

$115‑Million Settlement Fund

Oracle has agreed to pay $115 million to create a settlement fund, which, after deducting settlement costs, will be paid pro rata to the Settlement Class members. The principal settlement costs to be deducted from the fund include:

  • class counsel’s fees and expenses, estimated to be up to 25 percent of the total fund;
  • “service awards” of up to $10,000 for each Plaintiff;
  • the settlement administrator’s fees; and
  • other settlement expenses.

Any amounts remaining after distribution to Settlement Class members who file valid claims will be distributed to one or more eligible non-profit organizations. The parties have initially identified the Privacy Rights Clearinghouse as the potential recipient of any undisbursed settlement funds.

Nonmonetary Relief

Oracle has agreed that, for as long as it continues to offer the products and services described in the SAC:

  • it will not “capture (a) user-generated information within referrer URLs (i.e., the URL of the previously visited page) associated with a website user; or (b) except for Oracle’s own websites, any text entered by a user in an online web form”; and
  • it will “implement an audit program to reasonably review customer compliance with contractual consumer privacy obligations.”

Companies commonly employ the practices highlighted in the first bullet point above, noted Shanklin. “[A]s with other practices challenged in this lawsuit and other CIPA cases, the practices are not expressly prohibited under U.S. state comprehensive privacy laws but they are being challenged through other avenues,” she said.
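The first settlement commitment concerns “user-generated information within referrer URLs”: the previously visited page’s address can embed what a user typed, such as search terms in a query string. One simple safeguard, sketched below under the assumption that the collecting party controls its own logging (this is an illustration, not the settlement’s mandated mechanism), is to strip the query string and fragment from referrers before storing them:

```python
from urllib.parse import urlsplit, urlunsplit

def scrub_referrer(referrer: str) -> str:
    """Drop the query string and fragment from a referrer URL, so that
    user-generated data embedded in the previously visited page's
    address (search terms, form echoes) is never captured."""
    scheme, netloc, path, _query, _fragment = urlsplit(referrer)
    return urlunsplit((scheme, netloc, path, "", ""))
```

For example, a referrer like `https://search.example/results?q=medical+condition` would be stored only as `https://search.example/results`, preserving the source site while discarding the search term.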

See “Benchmarking the Impact of State Privacy Laws on Digital Advertising” (Oct. 11, 2023).

Oracle Goes Beyond Agreement and Exits Adtech Business

“Oracle ceased operation of its ‘AddThis’ tracking mechanism only after Plaintiffs’ initial pleadings alleged that Oracle’s collection of data through AddThis violated Plaintiffs’ privacy rights,” according to the Motion.

In June 2024, after executing the settlement term sheet, Oracle “announced that it would be exiting the adtech business altogether,” according to the Motion. In particular, as of September 30, 2024, it will no longer offer its adtech products, including:

  • Cloud Data Management Platform, which includes the BlueKai “Core Tag” tracking mechanism and associated cookies and pixels, Datalogix and Data Marketplace;
  • Digital Audiences, including OnRamp (which uses ID Graph); and
  • Cross-Device tracking.

Additionally, Oracle will end relationships with data providers and delete customers’ data after fulfilling any outstanding obligations under customer contracts, according to Oracle’s Advertising End-of-Life Frequently Asked Questions.

These changes, which are not formally part of the settlement, are perhaps more significant than the nonmonetary relief outlined in the Agreement, suggested Shanklin. Although Oracle cited declining revenues for its decision, the Motion asserts that the exit was driven by Plaintiffs’ lawsuit. “[T]his lawsuit highlights the growing impact that privacy risk and compliance is having on both established and emerging business models,” she said.

See “Recommended Data Strategies As Google Swears Off Web Tracking” (Mar. 24, 2021).

Practical Considerations for Adtech

Account for Growing Risk of Online Data Collection

“The Oracle settlement does not create new law, and there currently is no definitive law in the U.S. that declares [collection of user-generated information from external URLs and online forms] unlawful per se,” said Shanklin. However, she cautioned, the case highlights “the growing risks of online data collection, particularly types of data collection that users would not readily understand or expect to be occurring.”

“Companies have been carefully crafting their business practices to comply with evolving privacy laws in the U.S., but risks are increasingly coming from the plaintiffs’ bar, which is taking a creative and expansive interpretation of older laws on the books,” continued Shanklin. Additionally, the FTC is using its Section 5 authority to claim that activities that might be strictly compliant with U.S. comprehensive privacy laws are, nevertheless, unfair or deceptive. “All companies should be reassessing their data practices in light of these evolving risks,” she advised.

See “Court Hands FTC Grounds to Curb Data Broker Sales” (Mar. 20, 2024).

Reassess Tracking Tech Use

“There’s no question that use of tracking technologies is an increasingly risky proposition,” Shanklin stressed. Europe has been considered to have the highest risks and the highest compliance bar for tracking, but U.S. risks are growing. “The CCPA and other U.S. state comprehensive privacy laws that followed it generally established an opt‑out approach for tracking tech that gave companies operating in the U.S. some degree of comfort in maintaining existing business models reliant on tracking tech,” she noted.

In the past few years, however, a wave of lawsuits has leveraged state and federal wiretapping laws to challenge use of tracking tech without consent. This is “changing that risk calculus considerably, particularly as defendants are not finding it easy to get out of those cases at an early stage,” Shanklin observed. “This risk is heightened by other new U.S. laws such as Washington’s My Health My Data Act, [which] is significantly heightening compliance challenges and risks for a wide array of companies using tracking tech on their websites.”

“The increasing risk we’re seeing around tracking tech is indicative of the larger movement toward stricter privacy regulation across the board,” added Shanklin. That movement will “continue challenging existing digital business practices and forcing much greater focus on privacy from the outset as new data-driven technologies are built and implemented.”

The scale of Oracle’s tracking was unusual, but “even companies not operating at that scale should be aware that there is increasing scrutiny on all of these practices,” cautioned Shanklin. In light of that scrutiny, companies should:

  • conduct fresh assessments of tracking tech used on their digital services to ensure the risk and compliance challenges are warranted by the business benefit; and
  • ensure the disclosures they make to consumers about data collection and usage are adequate and clear.

“We are seeing some companies that are more risk-averse weighing a shift to an opt‑in approach for many of these activities,” observed Shanklin. Those companies are no longer “assuming compliance with the opt‑out requirements of most U.S. state privacy laws will shield them from legal exposure.”

See “Addressing the Operational Complexities of Complying With the Washington My Health My Data Act” (Apr. 3, 2024); and “How to Approach CCPA’s Under‑16 Opt‑In Consent” (Feb. 12, 2020).

Fine-Tune Privacy Notices

“Privacy notices are an important component of consumer protection in the privacy space. The challenge, of course, is the practical difficulty of where and how to present a privacy notice to a consumer if a third party has no direct interaction with that consumer,” noted Shanklin. In Europe, she added, the issue has been an area of focus “with respect to ePrivacy consent, and one of the reasons consent management tools are becoming increasingly verbose and unwieldy for European users.”

Additional challenges include “privacy notice exhaustion” and “the tension between regulators wanting notices to be detailed and comprehensive while also being easily understood and digestible by the average consumer, even as data-driven technologies become increasingly complex,” continued Shanklin. Consequently, the adtech industry will have to “continue to collaborate to craft better, more practical and more creative solutions for consumer transparency.”

See “Google’s Wiretap Cases Highlight Evolving Privacy Transparency Standards” (Jan. 24, 2024).

Leverage and Manage First-Party Data

There are increasing legal pressures around third-party tracking and data brokers. Additionally, there is commercial value in “having a unique proprietary data set that is built from direct customer interactions,” noted Shanklin. Accordingly, companies with direct consumer relationships have been retooling their data strategies to leverage the value of their first-party data more effectively.

“Transparency is always going to be a critical component of not just compliance but of maintaining a trusted relationship with customers,” continued Shanklin. When companies use evolving technologies to enhance data value, such as identity resolution solutions and AI-powered data visualization tools, they should be mindful of the disclosures they make to consumers when collecting data and consider whether those disclosures contemplate the planned means of data processing.

Companies should also consider “the increasingly strict requirements under U.S. law around purpose limitation and data minimization,” added Shanklin. “[B]usiness models built around future data monetization assumptions may not align with evolving legal requirements to keep use of data closely connected to the purpose for which it was originally collected.”

SEC Enforcement

SolarWinds Decision: Practical Takeaways for Cyber Communications


Businesses once preferred to keep quiet about their cybersecurity capabilities. Now they have lots to say. “Companies want to talk about their cybersecurity to show that they are a security-focused company, and that they take into account the security needs of their customers,” Cooley partner Michael Egan told the Cybersecurity Law Report.

Since the SEC’s cybersecurity disclosure rules took effect in 2023, even organizations reluctant to discuss their security efforts have needed to talk about their programs because they are either a regulated company or a company that works with one. It has become essential to check the accuracy of those statements, Jenner & Block partner Jennifer Lee said. Pressed by the SEC’s rules, “companies have really taken stock of what their public statements reveal about their security, and how those statements match with their controls and practices,” she told the Cybersecurity Law Report.

Cyber-related communication has been at the core of the SEC’s much-watched case against SolarWinds and its CISO for securities fraud. A New York federal district court (Court) issued a 107‑page opinion on July 18, 2024, that discussed over a dozen of SolarWinds’ statements made before, during and after a massive 2020 cyberattack, which shot through the software supply chain and affected thousands of public and private organizations.

The Court narrowed the case, dismissing several SEC claims. It allowed to stand securities fraud claims concerning the company’s website statement about security (Security Statement), which SolarWinds first posted in 2017.

This article, the second in a two-part series about the groundbreaking case, presents a series of lessons from it on how companies can approach internal and public cybersecurity-related statements, with observations from Egan, Lee, and lawyers at Freshfields, Orrick and Sullivan & Cromwell, plus insight from the CISO of SafeBase. Part one provided perspective on the SEC’s wins and losses, and examined four implications that worry CISOs.

See “A Framework for Materiality Determinations Under SEC’s Cyber Incident Disclosure Rules” (Jul. 10, 2024).

The Types of Communication at Issue in the Case

The Court held that the Security Statement, although directed to customers, was material to investors.

Individual liability for CISO Timothy Brown remains at the center of the case because of his role in approving the Security Statement, disseminating it, and talking it up in blog posts and podcasts. The SEC alleges in detail that Brown knew about the Security Statement’s inaccuracies when he promoted it.

The Court dismissed many of the SEC’s claims about other instances of SolarWinds’ external communications about security, including the company’s cyber risk statements in its SEC filings – on its Form S‑1 registration, annual, quarterly and 8‑K disclosures. The Court also found satisfactory SolarWinds’ plan for internally escalating security events, as well as Brown’s statements in press releases, blog posts and podcasts discussing SolarWinds’ cybersecurity separate from his comments about the Security Statement.

See “Key Implications and Practical Cyber Program Lessons From SEC’s R.R. Donnelley Settlement” (Jul. 10, 2024).

Lessons for External Communications

The SolarWinds case offers multiple takeaways for companies to consider when making decisions about public-facing communications on cybersecurity, including those to regulators and customers.

See “What Regulated Companies Need to Know About the SEC’s Final Amendments to Regulation S‑P” (Jul. 24, 2024).

Give the CISOs Media Training and Talking Points

Companies should prepare their technology executives for making public statements about cybersecurity, Sullivan & Cromwell partner Nicole Friedlander told the Cybersecurity Law Report. “If the CFO or the CEO is speaking publicly on a subject important to the business, the company typically gives a lot of thought to what those executives plan to say,” she noted. SolarWinds demonstrates that, “beyond website statements, it is important for companies to have a process in place to consider what the senior information technology executives will be saying about security,” including how to respond to audience questions, she added.

“I’ve seen a lot of interest from CISOs in getting more guidance on their public statements,” observed Friedlander, to help protect the company and themselves.

“Media training for these executives will be helpful. They are the face of security for companies that, especially in the technology industry, have to talk about security with their customers,” Friedlander added.

See “Challenges, Risks and Future of the CISO Role” (Jul. 31, 2024).

Conduct an Inventory of the Company’s Public Statements

“There’s a significant call to action here for companies across industries to undertake an inventory of their public-facing security statements,” Egan stressed. Beyond websites, such statements may appear “in ESG (environment, social and governance) reports, in publicly available contracts, data processing agreements or information security addendums,” he elaborated.

An inventory process should involve checking consistency, identifying the details being shared and assessing the likelihood that a reasonable investor would rely upon the statements.

See “Cyber Crisis Communications – ‘No Comment’ Is Not an Option” (Sep. 7, 2022).

Check for Accuracy and Match Descriptions With the Controls

Public companies should evaluate whether their policies match their practices and statements, Lee advised. If an incident occurs, “the SEC basically will scrutinize everything under the hood,” she cautioned.

Teams responsible for sales or product development may issue statements without knowing the latest changes in the company’s cyber architecture. The cyber team, for example, could have tweaked its approach because of new requirements, threats, or integrations with the cloud or AI.

“Some companies are mature, where the CISO puts eyeballs on everything” issued about security, noted Orrick partner Aravind Swaminathan, but elsewhere “she’s not looking at every little piece of marketing collateral that goes out.”

See “Cloud Security Priorities: Stopping the Proliferation of Super Users and Zombie IDs” (Jul. 31, 2024).

In Public, Say More About Process, Less About Controls

As companies regularly enhance or switch their cybersecurity measures, “it is better to speak more about processes that are in place rather than specific controls,” Freshfields partner Tim Howard recommended. A description of the cyber program might match the company’s controls in one region, but cyber software like access/authentication controls or endpoint detection may differ across company units or locations, or after mergers, he pointed out. “It is easy for someone to take pot shots when they spot differences here or there,” he added.

Alternatively, by highlighting processes, the company is “saying ‘we audit what we do. We follow well-accepted frameworks for evaluating risk,’” Howard said.

If the company would like to disclose its use of any specific controls, then “each control that will be discussed should be audited to make sure the company can stand behind its statements,” Howard added.

The SEC attacked SolarWinds’ claim that it “followed” the National Institute of Standards and Technology (NIST) Cyber Framework because an internal scoring showed that the company had notable gaps in areas prescribed by the framework, Lee noted. “Making statements in any place that investors might see about the framework, whether you follow it and in what form, is an enforcement risk,” she said.

See our two-part series on NIST’s new IoT standard: “Boosting Security As States Launch Laws” (Mar. 4, 2020), and “Inspiring a Wave of New Device Security Guidance” (Mar. 11, 2020).

Update Risk Statements in SEC Filings

The Court dismissed the SEC’s claims regarding SolarWinds’ statements of risk factors, finding that they sufficiently alerted “the investing public of the types and nature of the cybersecurity risks SolarWinds faced and the grave consequences these could present for the company’s financial health and future.”

One fact the SEC focused on was that “the SolarWinds risk factors didn’t change from its S‑1 in 2018, the initial IPO registration statement,” and remained the same for years, Friedlander noted. In finding SolarWinds’ series of risk disclosures acceptable, the Court reasoned that a company’s statements need not change if they are appropriately worded and complete for investors, she pointed out.

Nonetheless, Friedlander said, companies should be mindful that, in an investigation, the SEC may continue to scrutinize situations where risk factors did not change over time. It is important for companies in that context to be able to demonstrate that they followed an appropriate process to ensure the risk factors were materially accurate and complete, she stressed.  

To be safe, companies should consider updating the wording of their filings, at least slightly, to show continuous attention to changing cyber risks.

See “Navigating the SEC’s Newly Adopted Cybersecurity Disclosure and Controls Regime” (Sep. 6, 2023).

Decide How Openly to Share Cybersecurity Information on the Web

Because of customers’ or partners’ inquiries, “companies will almost always have to have some sort of security statement” on the web, Egan observed. In commercial agreements, a major focus of negotiation often is whether the third party provides appropriate security for all of the company’s data that it accesses, including personal and business confidential information, he reported.

Companies have been creating trust centers that contain information well beyond SolarWinds’ Security Statement. These sites host a wide array of compliance records, data processing assessments, audits, certifications and cyber insurance summaries.

Security teams like these web repositories (e.g., LinkedIn’s) because of the volume of security questionnaires flying back and forth among parties, said Lisa Hall, CISO of SafeBase, a vendor that runs a platform for trust centers. Those teams are “the ones dealing with the questionnaires and having to email other security teams asking for their SOC 2 reports,” she said.

Centralizing documents allows buyers, customers or partners to access the relevant material themselves. With this type of resource, “my team can build security things instead of responding to the questions about the security things,” Hall observed. With a trust center making companies’ exchange of information more efficient, “there definitely is a move towards more transparency, even saying what your controls are,” she said.

With scrutiny rising, Lee noted, companies must be mindful of how they use trust centers and should consider whether “to limit access to representations about their cybersecurity practices to the groups that need to know, typically their customers.” Companies with such sites sometimes make public documents available for bulk download, while other documents are listed but available only upon request.

Lessons for Managing Internal Communications

The SEC claimed SolarWinds had inadequate escalation protocols. A key reason companies should consider refining their internal policies for discussing security is to mitigate enforcement risk, but better communication often means better protection, too.

Persuade Top Leaders to Engage With Cybersecurity, Not Just Acknowledge It

“SolarWinds makes clear that CISOs and companies must be aligned, and cybersecurity isn’t something that falls squarely on the CISO,” Lee said.

In the current environment, boards and executives are “all paying attention,” Swaminathan reported. “But is the business listening with an open mind about those challenges that the security team is bringing to them, and thinking about how they can meaningfully address them?”

Ultimately, Egan advised, “there needs to be the appropriate tone from the top to provide CISOs with the requisite authority and gravitas they need to implement their solutions” and engage business colleagues.

See “Twelve Steps for Engaging the Board of Directors and Implementing a Long-Term Cybersecurity Plan” (Sep. 16, 2020).

Push Cybersecurity Out of Its Silo

With product teams and marketing teams increasingly discussing cybersecurity with outsiders, they or “other stakeholders should be involved when it comes to how cybersecurity is reviewed, addressed, publicized and advertised for the company,” Egan opined.

Some companies use a cyber governance committee to coordinate such decision making, Swaminathan pointed out. Those committees often address issues more comprehensively and maturely, but they can, of course, be slower and less nimble, he summarized.

See “Cybersecurity and Privacy Teams Join to Create Data Governance Councils” (May 4, 2022).

Add Structure to Cyber Teams’ Internal Communications

Inherently, frontline communications about cyber, particularly in instant-message apps, can be imprecise – and, thus, they could be misconstrued. “People working quickly can say things that they don’t quite mean,” Friedlander observed, especially in a crisis or a complex situation.

Companies should impress upon employees “that their communications may be misinterpreted or viewed in a bad light in hindsight,” Friedlander recommended.

Reining in communications is not simple. “These teams are used to moving very fast to fix problems,” Egan noted. A fight for resources and bosses’ attention drives unhelpful comments. It is a challenge, he said, to ensure people do not overstate security risks in presentations or written communications – such as saying something “is going to be a huge issue when it’s really going to be more of a standard one” – while, at the same time, avoiding stifling communications.

To address communication issues, companies should use a formal channel for security concerns and urge the security team to flag any issues there. Business leadership should convey to security employees “that we’re in it together,” Howard recommended. Educate them that “priority one is protecting the system, knocking the threat actor out, and protecting our stakeholders and company,” he advised. In exchange for disciplined communications within the security team, leadership should emphasize that each of them still will have an outlet for freely voicing complaints and concerns. “There is going to be a process through counsel for everyone to provide their perspective on how the team can improve,” he explained.

See “Off-Channel Communications Are Not the Only Source of Electronic Recordkeeping Violations” (May 1, 2024).

Reinforce the Process for Escalating Incidents

SolarWinds had a 0‑to‑3 scale for assessing risks, and executives were to be alerted when any risk scored higher than a 2 so they could weigh whether public disclosures were needed. The SEC claimed that the company’s system misclassified two incidents that deserved greater attention.
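An escalation rule of the kind described above can be sketched in a few lines. This is only an illustration of the concept, not SolarWinds’ actual system; the function name and threshold constant are hypothetical.

```python
# Hypothetical sketch of a severity-based escalation rule like the
# 0-to-3 scale described above; names and the threshold are illustrative.
ESCALATION_THRESHOLD = 2  # scores higher than this alert executives


def needs_executive_review(risk_score: int) -> bool:
    """Return True when an incident's score exceeds the escalation threshold."""
    if not 0 <= risk_score <= 3:
        raise ValueError("score must be on the 0-to-3 scale")
    return risk_score > ESCALATION_THRESHOLD


# A score of 3 would trigger executive review of possible public disclosure;
# a score of 2 would not.
```

The SEC’s theory in such cases is typically not that the scale itself was wrong, but that incidents were scored below the line that would have triggered review.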

SolarWinds won in this case, but “I expect to continue to see disclosure-controls cases brought more frequently than charges that a company failed to disclose a material fact,” Friedlander predicted. “In looking at events with hindsight, it is generally easier for the agency to criticize some piece of a process, the timing of when a fact was escalated or the extent of the facts that were escalated” than it would be for it to uncover a significant undisclosed fact, she said.

Perform Security Assessments Under Privilege

Security assessments anchor what a company can say externally about its cybersecurity. Sometimes, assessors conduct their checks “to understand how the company is complying with its contractual requirements, anticipated contractual needs or anticipated regulatory requirements,” Egan noted.

If one of those legal purposes applies to the assessment, the CISO and lawyers should conduct certain elements of it under privilege, Egan advised.

Artificial Intelligence

AI Offers Clear Value for AML, but the Path Forward Is Murky


AI might be an inevitable enhancement to Anti-Money Laundering (AML) efforts, but when it will be fully integrated into existing AML systems and governed in a responsible way is unclear, according to compliance and law enforcement experts.

The use cases are clear, from reviewing due diligence documentation to writing suspicious activity reports (SARs), but banks and regulators are so far taking a slow path to adoption and adaptation.

“AI can be a part of the solution. But [not] if the financial institution is unwilling to invest in the technology,” noted moderator Jerome Walker, a partner at Jerome Walker. This article distills the insights of Walker and experts from the FBI, Elliptic and Encompass Corporation, delivered at the Artificial Intelligence Institute hosted by the New York City Bar Association.

See our two-part series on the practicalities of AI governance: “AI Governance Gets Real: Tips From a Chat Platform on Building a Program” (Feb. 1, 2023), and “AI Governance Gets Real: Core Compliance Strategies” (Feb. 8, 2023).

Benefits and Drawbacks of AI Use in AML

In the AML space, generative AI (Gen AI) can be especially helpful with quickly analyzing data to identify trends and indicate suspicious activity faster than any human could. “Fraud detection is extraordinarily promising when you think about artificial intelligence,” Walker said.

Challenges for Smaller Businesses

Gen AI is not a panacea. The machine learning algorithms still need to be “properly trained,” Walker explained. Additionally, Gen AI is not universally applicable. Absent a representative data sample, small banks are going to remain at a disadvantage without the insight that big banks can gain from more extensive data sets. At the same time, bigger data sets also mean a bigger job and, with the expense of creating a proprietary system, most companies will bring in third-party AI platforms that may not be as customizable as needed.

For a company that lacks technology, processes, procedures and internal controls, it is not clear that “an overlay of AI is going to get you where you want to be,” Walker said, noting that the Anti-Money Laundering Act of 2020 openly encourages the use of technology in the best interest of national security.

See “Innovation and Accountability: Asking Better Questions in Implementing Generative AI” (Aug. 2, 2023).

Need for Human Touch

AI has been around in different forms for many years, e.g., IBM’s Deep Blue, which defeated chess champion Garry Kasparov in 1997. The “learning” that older versions of AI like Deep Blue relied on was essentially predicting probabilities based on patterns, something computers can do exceptionally well, explained Henry Balani, global head of industry and regulatory affairs at Encompass Corporation. “AI is not intelligent. It is a well-trained recognizer,” he said.

Google Translate is an example of “traditional” AI’s seamless integration into daily use. It has been “optimized for specific tasks,” Balani said. In AML, this might be recognizing suspicious transactions based on rules previously established.
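The rule-based screening Balani describes can be illustrated with a minimal sketch. The rules, thresholds and jurisdiction codes below are hypothetical placeholders, not any institution’s actual monitoring logic.

```python
# Illustrative rule-based transaction screen of the kind "traditional"
# AML monitoring applies; fields, thresholds and codes are hypothetical.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    country: str  # counterparty jurisdiction code


HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes
REPORTING_THRESHOLD = 10_000        # illustrative large-amount cutoff


def flag(tx: Transaction) -> list[str]:
    """Return the names of the previously established rules a transaction trips."""
    hits = []
    if tx.amount >= REPORTING_THRESHOLD:
        hits.append("large-amount")
    if tx.country in HIGH_RISK_COUNTRIES:
        hits.append("high-risk-jurisdiction")
    return hits
```

The point of the contrast in the text is that such systems only recognize patterns they were explicitly given, whereas Gen AI generates new content and can therefore behave unpredictably.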

Gen AI is considered the “new” AI and is most familiar in the form of ChatGPT and OpenAI. It emulates the human brain and utilizes neural networks to generate new content based on previous content. While it can be incredibly useful to create new images or text, it can “come up with unpredictable results,” Balani said. ChatGPT only knows what color the sky is because the data shows that “blue” is the most likely color to be in proximity to the word sky. “It does not know what is right and wrong and therein lies part of the challenge” for unfettered use, he warned.

Because of this, a human touch is still required, particularly for fraud detection. “It is important to recognize that tech itself is not the be all, end all,” Balani stressed.

For example, banks and other companies have been quick to highlight their liberal use of AI, but, if the system was trained with bad data and learned to ignore certain alerts, it is useless, observed Elizabeth “Lili” Kudirka, Special Agent for the FBI New York. “As a financial institution, you need to be cognizant of what you are paying money for and then . . . the decisions that you are making,” she said, adding that “the machine cannot do everything. There still needs to be a human decision at the end of that model.”

CEOs are publicly proclaiming their intention to invest in the technology side, but they are likely underestimating that they will also have to invest in proper staffing and training, Walker noted. The solution is not for a company to have “one person who is wearing eight hats,” he quipped.

Certain industries have additional specialized requirements that general-use applications may miss, such as cryptocurrency, which relies on seamless cross-border operations, said Liat Shetret, director of global policy and regulation at Elliptic. “There is a real urgent need to orchestrate that entire tech stack to really deliver.”

See “AI Governance Strategies for Privacy Pros” (Apr. 17, 2024).

A Cost-Effective Solution?

One of the oft-cited benefits of AI is that it can save money. However, cost-effectiveness is a difficult concept to quantify in AML, Balani observed, because the real benefit is in avoiding a regulator coming knocking. A company properly arming itself with the right technology “is the cost of doing business, sometimes,” he suggested.

Luckily, everyone can access some form of AI technology, in many cases for free. “We are upping the game on both sides,” Balani said. There are “bad guys that are using this technology for sophisticated hacks and fraud and money laundering.” For this reason, firms need to employ the same tools, regardless of the cost.

“We have almost jumped to the end state because the solutions are there,” Shetret said, but, unfortunately, governance, ethics and other “prenuptial” considerations have been skipped. “We have not gone through the transition phase of testing and quality assurance and validation that I think are really required.”

See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023).

Questions Around Satisfying Regulators

In certain industries, regulatory fines are built into the business model and compliance is only a post-enforcement consideration. As Kudirka described it, some companies have an attitude of “pay the fine and move on; we will fix it afterwards,” which effectively rolls out a red carpet for regulators and investigators.

If, instead, there was an open door for companies to talk to law enforcement and regulators to actively seek out solutions, then the business cost “is simply the cost of that compliance program,” Kudirka observed, and “there is not the cost of the forfeiture and the fine and the litigation.”

Kudirka noted that what matters to regulators is whether there is a risk-based approach in place, policies are formalized, staff is not just trained but actively following stated procedures, and there are exceptions in place for certain customers. Whether a firm uses AI to accomplish those goals is less important to regulators.

AI is forcing a reckoning on whether any of the risk-based approaches to due diligence and know your customer (KYC) are enough to prevent money laundering and fraud entirely, Shetret asserted. Identity verification still does not fully and accurately answer who the beneficial owners are, what the source of funds is and who is ultimately in control. “Until we solve for identity, we are in a little bit of trouble and there is urgency around that,” she emphasized.

For example, facial recognition software is one form of AI that is commonly used by firms. However, it is facing scrutiny because of evidence of bias and excessive false positives stemming from the underrepresentation of non-white, non-male faces in training data sets.

See “How to Address the Colorado AI Act’s ‘Complex Compliance Regime’” (Jun. 5, 2024).

Example Use Cases

While there are concerns and conundrums related to using AI in AML, there are some strong applications.

See “How Hedge Funds Are Approaching AI Use” (Jul. 31, 2024).

Transaction Monitoring

In transaction monitoring, for example, SARs could be a “pretty decent use case” for Gen AI employment, Balani said, because they are language-based.

Client Onboarding, KYC and Identification

Similarly, in complex client onboardings where multiple formation and financial documents are collected for customer identification purposes, Gen AI can take advantage of optical character recognition to extract legal names and ownership percentages and pre‑fill KYC requirements.
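The pre-fill step described above can be sketched as follows. This is a toy illustration under stated assumptions: a real pipeline would use an OCR engine and likely an LLM for extraction, whereas here a simple regex stands in for that step, and the field labels are hypothetical.

```python
# Hypothetical sketch: pre-filling KYC fields from OCR'd formation
# documents. A regex stands in for the real extraction step, and the
# "Legal Name" / "Ownership" labels are assumed document conventions.
import re


def prefill_kyc(ocr_text: str) -> dict:
    """Map extracted document fields onto KYC form fields."""
    fields = {}
    name = re.search(r"Legal Name:\s*(.+)", ocr_text)
    pct = re.search(r"Ownership:\s*([\d.]+)%", ocr_text)
    if name:
        fields["legal_name"] = name.group(1).strip()
    if pct:
        fields["ownership_pct"] = float(pct.group(1))
    return fields


doc = "Legal Name: Acme Holdings LLC\nOwnership: 25.5%"
# prefill_kyc(doc) -> {"legal_name": "Acme Holdings LLC", "ownership_pct": 25.5}
```

Fields the extraction cannot find are simply left blank for human review, which aligns with the panel’s point that a person still must make the final decision.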

AI could be a powerful tool for increasing inclusivity in banking. Iris scans could be used in lieu of traditional forms of identification, which can be a barrier to entry.

Quality Assurance

Balani envisions that future Gen AI tools could be employed as quality assurance for the first AI pass-through. “It is one AI challenging another AI,” he said, leading to “much higher accuracy,” which could be “game-changing in the banking industry.”

Assessing Credit Risk

AI algorithms can also consider more nuanced indicators of credit risk, which could help increase access to debt financing. “We have ways to use AI that help us innovate on concepts of KYC so that we use better metrics for identification, verification, and authorization beyond driver’s license and IDs,” Shetret said.

SARs

Full adoption of using AI to generate SARs is some distance away. Kudirka said she is not familiar with any financial institution using Gen AI for this purpose now, though it would be welcome as these reports are largely treated as a check-the-box task and are heavily flawed. “Banks will often write a SAR just to get a SAR out there, and they are some of the hardest things you will read, and they are often nonsensical,” she observed, likening a good report to a “diamond in a sea of sometimes garbage.” Making it more difficult is that her agency, like others, manually reviews each one – no AI is used on the regulators’ side to review SARs.

The prospect of having even more reports, AI-generated or not, is daunting. “Having to sift through that much more volume is nerve-wracking and slightly terrifying,” Kudirka admitted, adding that the timeline for her agency to use AI to analyze SARs is likely beyond that of banks.

Balani said many banks write “defensive SARs” as a protective measure. He fully expects that Gen AI can “improve the quality of the SARs” rather than increase the quantity. With Gen AI, SARs could be “beautifully written” and contain precisely the necessary information. “I am really, really excited about that versus these garbage rubbish SARs that are going out there purely for defensive purposes,” he related. These automatically generated SARs could even contain a “watermark” so that law enforcement and regulators use the same application to scan and filter them.

Still, if AI was a “silver bullet” for winning money laundering convictions and settlements, Walker proffered, then the largest AML regulator would be using it to prepare all of its reports. “As a practical matter, I am unaware of FinCEN’s ability to do that,” he said.

Until there is proper governance on a global scale and a full audit of information that banks, cryptocurrency exchanges and law enforcement each collect for a SAR versus what is realistically needed to effectively combat modern financial crimes, these discussions are all theoretical, Shetret said. “There needs to be a conversation around this data-flush economy that we are in. How do we distill the pieces and bits of information that all reach the right institutions in a timely manner?” she questioned, all while criminals are “exploiting every angle . . . to move money faster than we can blink.”

See “Applying AI in Information Security” (May 15, 2024).

People Moves

Data, Privacy and Cybersecurity Partner Joins Goodwin in New York


Kaitlin Betancourt has joined Goodwin’s data, privacy and cybersecurity practice as a partner in the New York office. She arrives from Prudential Financial.

Betancourt advises on cybersecurity and privacy law and compliance, enabling corporate clients, including private funds and asset managers, to overcome related complex legal and regulatory issues and risks. She provides comprehensive, tailored and practical guidance on navigating the rapidly evolving cybersecurity threat landscape. Her expertise spans cyber incident response and preparedness, cybersecurity governance and regulatory compliance, corporate transactions and internal investigations.

Betancourt previously served as chief legal officer of cybersecurity, data privacy and AI at Prudential Financial, where she managed highly complex, high-profile, global incident response matters, and provided guidance on the development of governance and compliance efforts surrounding cybersecurity laws and regulations.

For insights from Goodwin, see “How CPOs Communicate Privacy’s Value to the Board” (May 31, 2023); and our two-part series on SEC cyber rules: “How to Prepare for the New 8‑K Incident Mandate” (Aug. 10, 2022), and “How to Prepare for the New 10‑K Disclosure Mandates” (Aug. 17, 2022).