Boards of Directors

Five Steps for Effective Board Oversight on Cybersecurity Breach Response


New cybersecurity regulations, along with an uptick in post-breach regulatory enforcement actions and civil litigation, continue to push corporate boards toward more active oversight of their organizations’ cybersecurity risks and programs. With this increasing pressure, boards are often left questioning how and to what extent they should be involved in responding to significant cybersecurity incidents. This article addresses the evolving regulatory and litigation landscape impacting the board’s cyber-risk governance and the role of boards in overseeing breach response and related disclosures, and offers five steps for effective board oversight of cybersecurity incident response.

See “Twelve Steps for Engaging the Board of Directors and Implementing a Long-Term Cybersecurity Plan” (Sep. 16, 2020).

Boards in the Evolving Regulatory and Litigation Landscape

Several recent regulations at the state and federal level now require either routine board-level reporting or supplemental disclosures regarding board-level involvement in cybersecurity risk oversight and incident response.

State Regulation

Several state regulators, including state insurance regulators and the New York Department of Financial Services (NYDFS), as well as the approximately 23 states that have adopted the National Association of Insurance Commissioners’ model data security law, have enacted regulations explicitly addressing the role of boards in cybersecurity programs and incident response. For example, the second amendment to the NYDFS Cybersecurity Regulation, which took effect in December 2023 with certain requirements phased in over time, requires the CISO of a covered entity to timely report “significant” cybersecurity events to the “senior governing body” – typically the board – overseeing the company’s cybersecurity program. Under the amendment, the covered entity’s CISO must also report in writing, at least annually, to the senior governing body on the entity’s cybersecurity program. The report must include (among other things) an assessment of the overall effectiveness of the cybersecurity program, material cybersecurity risks to the covered entity, material cybersecurity events and plans for remediating material inadequacies. Similarly, starting in October 2025, the New York Department of Health will require CISOs at all general hospitals to report similar metrics in writing, at least annually, to the hospital’s governing body.

See this two-part series “Amendment to NYDFS Cyber Regulation Brings New Mandates”: Governance Provisions (Dec. 13, 2023), and First Compliance Steps (Jan. 3, 2024).

Federal Regulation

Of particular note among federal regulations, in 2023, the SEC published its rules on Cybersecurity Risk Management, Strategy, Governance and Incident Disclosure. Under the rules, public companies must disclose in their annual reports details regarding their cyber-risk governance, including the processes by which the board or a relevant committee thereof is informed of those risks and any board-level committee responsible for overseeing such risks.

The SEC’s new rules parallel the agency’s increasing enforcement activity against public companies and their executives in the wake of significant cybersecurity incidents, including actions in which the SEC has alleged that the company lacked sufficient disclosure controls and/or internal “accounting controls” related to its cybersecurity program. Recent actions include charges against public companies that allegedly failed to sufficiently disclose the impact of cybersecurity attacks on their third-party vendors. While no directors have been individually named in cyber-related SEC enforcement actions to date, the Commission’s focus on internal and disclosure controls related to cybersecurity matters could implicate director liability in the future.

See this two-part series on the SEC charging four companies for misleading cyber incident disclosures: “New Expectations?” (Nov. 20, 2024), and “Lessons on Contents and Procedures” (Dec. 4, 2024).

Litigation

Directors have, however, frequently been named as defendants in post-incident shareholder derivative litigation alleging that the board breached its fiduciary duties by failing to sufficiently oversee the company’s cybersecurity risks. In one recent action, SolarWinds shareholders alleged that the company’s directors failed to implement a system of corporate controls for overseeing the company’s cybersecurity risks and purportedly overlooked “red flags” of cyber threats against the company. The Delaware Court of Chancery granted the defendants’ motion to dismiss the case, based in significant part on the plaintiffs’ failure to plead that the board allowed the company to violate “positive law” and the court’s finding that, “absent statutory or regulatory obligations, how much effort to expend to prevent criminal activities by third parties against the corporate interest requires an evaluation of business risk, the quintessential board function,” which is entitled to deference. While the SolarWinds case was dismissed, as cybersecurity and privacy rules and regulations continue to proliferate (i.e., as “positive law” is enacted), it remains to be seen whether future shareholder derivative plaintiffs will have more success.

See this two-part series on the SolarWinds Decision: “Court Narrows Case, but SEC’s Surviving Claims Alarm CISOs” (Aug. 7, 2024), and “Practical Takeaways for Cyber Communications” (Aug. 14, 2024).

The Board’s Post-Breach Oversight Role

As a general matter, at the time of a significant incident, the board’s role should be one of oversight: to oversee the company’s material risks from the incident, the company’s response to the incident and the likely impact on the company.

Becoming Actively Informed

Boards exercise their oversight role in significant part by becoming informed on the following:

  • the company’s action/response plan (as typically included in its incident response plan);
  • the nature, scope and potential impact of the incident;
  • the status of the investigation;
  • the company’s containment strategies and remediation plan; and
  • any applicable insurance coverage.

The board’s oversight role requires it to remain actively engaged in understanding a cybersecurity event as material facts unfold – particularly by asking probing questions to assess the company’s response.

Being Careful Not to Overstep

As expectations for board involvement in cybersecurity matters continue to heighten, striking the right balance in a cyber breach response is becoming more challenging. Directors may be tempted to shift from an oversight role to a more active “boots on the ground” role, getting involved in the day-to-day management of the response and potentially blurring the line between the board and management.

Directors with significant cyber experience and those who have lived through a cyber incident at another organization may (naturally) attempt to step into the shoes of management in responding to the breach and, for instance, seek to:

  • participate in daily forensic investigation calls to receive information in real time;
  • direct the company on its decision to take or not take systems offline given the potential business impact;
  • question the level of expertise of the company’s third-party response partner(s) or retain a second, independent investigator to act on behalf of the board;
  • mandate that the company interact with customers and/or employees in a certain manner; and/or
  • attempt to negotiate specific ransom payments with criminal extortionists.

While directors should be applauded for their increased interest and involvement in cyber-risk oversight, stepping too far into the realm of incident response management can lead to inefficiencies during the most critical moments of the investigation, unnecessary tension with management or the cybersecurity team, attorney-client privilege concerns and – potentially – increased liability exposure for the directors.

Steps for Effective Breach Response Oversight

There are several tangible steps that management and the board can proactively take to establish an effective oversight process in the event of a significant incident. Taking these steps should help ensure that the board is kept appropriately informed while management takes primary responsibility for responding to the incident.

1) Gain an Appropriate Baseline Knowledge of Cyber Risks and the Evolving Cyber Threat Landscape

Although there is no explicit regulatory requirement for directors to maintain a specific level of cyber expertise, obtaining a baseline knowledge of cybersecurity and relevant cyber risks has become increasingly important for a number of reasons. At least a basic level of cyber literacy – cybersecurity being, in itself, a specialized and highly technical area – is needed to adequately assess and oversee the cybersecurity program and a cybersecurity incident. In addition to external sources of knowledge and training, an effective way for directors to obtain this baseline knowledge is through regular engagement with the CISO, which can help build trust between directors and management – a factor that can go a long way when responding to a significant cyber incident. In a promising statistic from the annual National Association of Corporate Directors (NACD) survey of public company directors (reflected on pages 23‑24 of the updated 2023 Director’s Handbook on Cyber-Risk Oversight by the NACD and the Internet Security Alliance), 83 percent of respondents indicated that their board’s understanding of cyber risk has significantly improved over the past two years.

It is also important for boards to remain aware and knowledgeable of the rapidly evolving cyber threat landscape, particularly because cyber criminals are quick to exploit new and emerging technologies. Current knowledge of threats can help inform questions to evaluate the current posture of a company’s cybersecurity program in light of those new threats.

The U.K. National Cyber Security Centre’s Cyber Security Toolkit for Boards rightly recommends that directors receive regular briefings covering current threats that could affect all organizations as well as those that are specific to the company’s business, the nature of the threats, how they affect business objectives, and how the company is addressing/guarding against them. Management frequently supplies such briefings, though boards are increasingly interested in retaining supplemental outside cybersecurity advisors (i.e., outside cyber counsel and/or technical cybersecurity experts) to help ensure that they are equipped with the requisite cyber knowledge to meet their fiduciary duties.

Often, management and/or outside cybersecurity advisors provide pre-read materials in advance of the threat briefing, which can help directors digest and prepare questions to discuss. Other popular board-level resources include providing directors with whitepapers from cyber/threat intelligence experts and summaries, prepared by the company (typically members of the information security team), on the recent threats the company has faced.

See “How to Handle Rising Expectations for Board Cyber Education and Involvement” (Mar. 14, 2018).

2) Establish an Escalation Process and Path to the Board

Cybersecurity incidents may – and increasingly do – result in substantial legal, operational and/or financial consequences to companies. Companies should therefore take steps to ensure that significant cybersecurity incidents are promptly escalated to the board or relevant board-level committees – including incidents occurring at the company’s third-party vendors that may significantly impact the company’s operations or its own security protections. For all other (non-significant) cybersecurity incidents, companies should establish a regular reporting cadence for management to brief the board as appropriate. For many organizations, it may be appropriate to report at least quarterly to the applicable committee and at least annually to the full board.

While management should be afforded some discretion in determining whether to escalate an incident to the board, there should generally be no surprises regarding the escalation path when a significant cybersecurity incident occurs. Companies should review their written incident response plans, processes or protocols to understand the triggers for escalating an incident to the board.

Considerations for whether to immediately report an incident to the board may include:

  • whether it significantly impacts the business operations;
  • whether the incident appears to involve the unauthorized access or acquisition of a significant volume of data (particularly if there are indications the data may include personal information, protected health information or other sensitive information);
  • the likely scope of impact to employees or customers;
  • the nature of the cyberattack (e.g., ransomware);
  • the alleged threat actor (e.g., a nation-state actor); and/or
  • the potential need to disclose the cybersecurity incident externally, including to regulators.

Companies should also consider whether the full board should be immediately notified, or whether notification to a committee or committee chair is appropriate.
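To make these triggers concrete, the following is a purely hypothetical sketch – not drawn from any particular incident response plan, statute or regulation – of how a security team might encode its board-escalation criteria in internal tooling. The field names, thresholds and escalation paths are illustrative assumptions; an actual plan would define its own.

```typescript
// Hypothetical sketch of board-escalation triggers; all field names and
// thresholds are illustrative and would come from the company's own
// incident response plan, not from any statute or regulation.

interface IncidentAssessment {
  disruptsBusinessOperations: boolean; // significant impact on operations
  sensitiveRecordsAtRisk: number;      // est. volume of personal/health data
  affectedPeople: number;              // employees and/or customers impacted
  attackType: "ransomware" | "other";
  suspectedNationState: boolean;
  externalDisclosureLikely: boolean;   // e.g., regulator notification expected
}

type EscalationPath = "full-board" | "committee-or-chair" | "routine-reporting";

function escalationPath(a: IncidentAssessment): EscalationPath {
  const significant =
    a.disruptsBusinessOperations ||
    a.sensitiveRecordsAtRisk > 10_000 || // illustrative threshold
    a.affectedPeople > 1_000 ||          // illustrative threshold
    a.attackType === "ransomware" ||
    a.suspectedNationState ||
    a.externalDisclosureLikely;

  if (!significant) return "routine-reporting"; // e.g., quarterly committee update

  // Whether the full board or a committee/chair hears it first is a judgment
  // call the plan should settle before an incident occurs.
  return a.disruptsBusinessOperations || a.suspectedNationState
    ? "full-board"
    : "committee-or-chair";
}
```

However the criteria are expressed, the point is that they are agreed in advance, so there are no surprises about the escalation path in the middle of an incident.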

See “How CISOs Can Use Digital Asset Metrics to Tell a Coherent Cyber Story to the Board” (Jun. 3, 2020).

3) Provide Appropriate and Timely Updates Regarding the Investigation

Following escalation to the board or relevant board-level committee, companies should provide appropriate and timely updates on material facts as the investigation into the nature and scope of the incident unfolds. If a third party is engaged by the company to assist with the breach response – for example, a forensic investigator or law firm – management may want to consider having the third party report directly to the board on the key facts or risks associated with the incident.

Typically, helpful information to provide directors during incident status updates includes, but is not limited to:

  • a summary of the incident, including the material threat actor activity;
  • a preliminary root cause analysis (if known);
  • a timeline of threat actor activity;
  • key response efforts, such as the company’s initial steps taken upon detection of the incident;
  • vendors engaged to help respond to the incident;
  • the status of containment and restoration (if applicable) efforts;
  • the estimated or likely financial or operational impact on the business; and
  • potential regulatory and/or litigation exposure.

In addition, directors likely will want to understand the communications strategy with employees, customers, the media, law enforcement and, potentially, the threat actor (such as in a ransomware/extortion incident).

See “Cyber Crisis Communications – ‘No Comment’ Is Not an Option” (Sep. 7, 2022).

4) Document Updates to the Board

When an incident occurs, it is prudent from a regulatory and litigation perspective to document when and how the board is informed and updated on a cyber event, to demonstrate that the board members are actively engaged and meeting their fiduciary duties of oversight. Board and committee meeting minutes should reflect sufficient detail to show that the board discussed the incident and the company’s response, though granular details of the company’s investigation are not required. If the investigation is being conducted through in-house or outside counsel under privilege, the meeting minutes and any accompanying materials should so reflect. Finally, it is a best practice to share written updates through a board portal, if available; sending communications to an outside director’s email that the individual’s employer can access may risk waiving privilege.

5) Conduct Board-Level Training and/or Tabletop Exercises

Tabletop exercises continue to be a useful way to build muscle memory for responding to cyberattacks and, increasingly, are a regulatory requirement (or expectation). Traditionally, most companies conduct: (a) technical tabletop exercises focused on the technical aspects of a response, involving the company’s information security and IT teams, and/or (b) executive tabletop exercises focused on the cross-functional response to a significant cybersecurity incident, involving front-line responders or executives from various departments beyond information security and IT, including legal, communications, marketing, finance, HR and operations. At a minimum, it can be helpful to provide the board with a readout following an executive-level tabletop exercise, highlighting the key strengths and areas of improvement for management to focus on moving forward.

More recently, however, boards are increasingly becoming directly involved in tabletop exercises or similar “live” simulated training. Board-level exercises frequently include a third-party expert, such as outside breach counsel or a cyber consultant, to help facilitate and guide the board through a simulated cyber incident to allow the directors to explore their oversight role at pivotal points in the incident lifecycle.

The most recent update to the NACD’s Director’s Handbook on Cyber-Risk Oversight notes that it “is also advisable for directors to participate with management in one or more cyberbreach simulations, or ‘tabletop exercises.’” By including the board, or certain members thereof, in tabletop exercises, board members not only practice their oversight role but also gain a better understanding of potential cyberattacks, the multitude of issues that arise while responding to a cyberattack and the company’s incident response protocols, including how and when incidents are escalated to the board. Effective board-level exercises will focus on training the directors, rather than testing them, and should leave the board confident in its oversight role and assured that management is prepared in the event of a significant cybersecurity incident.

See this two-part series on a mock cyber incident tabletop exercise: “Everything at Once” (Jun. 19, 2024), and “Day Two and Beyond” (Jun. 26, 2024).

 

Kim Peretti is co-chair of Alston & Bird’s privacy, cyber and data strategy team, as well as the national security and digital crimes team. She is the former director of PwC’s cyber forensic services group and a former senior litigator for the DOJ’s Computer Crime and Intellectual Property Section. With over 24 years of experience as an information security professional and lawyer, she manages technical cyber investigations, assists clients in responding to large cybersecurity and privacy incidents, and advises boards and senior executives on cybersecurity and cyber risk matters. Peretti also serves clients in matters of privacy, AI, national security investigations, and responding to data security-related regulatory inquiries and enforcement actions.

Cara Peterman is a partner in Alston & Bird’s securities litigation group. She focuses on shareholder derivative suits, shareholder class actions, M&A litigation and other complex commercial litigation. She also regularly represents clients in investigations brought by the SEC and other federal and state regulators. Peterman counsels public companies and their directors and officers on public disclosure and corporate governance matters, with a concentration on cybersecurity; data privacy; AI; and environmental, social and governance-related issues.

Lance Taubin is a senior associate at Alston & Bird, focusing on cybersecurity and data privacy. He advises clients on various cybersecurity and data privacy issues, including breach preparedness, response and compliance, guiding them through complex incidents and regulatory challenges, as well as managing cyber risk, technology transactions and M&A diligence. Taubin draws from his in-house cybersecurity and privacy experience to provide clients with unique solutions to cybersecurity risk management, privacy compliance, technology transactions and corporate law matters.

Consumer Privacy

Children’s Privacy Grows Up: Examining New Laws That Now Protect Older Teens


Children’s online privacy is not just for tweens and half-pints anymore. During 24 months of near-constant legislative movement to address youth privacy and safety, lawmakers have focused on protecting children and adolescents up to age 18.

Since 2022, a dozen U.S. states have expanded protections for minors’ online activities, including boosting the age range from 13 years old to 16, 17 or 18. “Basically all the laws have broader coverage for teens,” Stacy Feuer, Entertainment Software Rating Board (ESRB) senior vice president, told the Cybersecurity Law Report.

Each state’s legislators have crafted their provisions idiosyncratically. Some of the obligations for companies come in comprehensive privacy laws, others in standalone statutes. Most laws include privacy and security obligations, but many also address safety considerations, covering everything from social media feeds to using minors’ data to advertise piercings and lotteries.

Collectively, the latest children’s privacy laws increase liability for companies eager to gain teens’ brand loyalty and inspire them to share personal information. Reinforcing the stricter protections, several federal and state enforcement cases in 2024 focused on teen data.

This first article in a two-part series, with commentary from Feuer and experts at BakerHostetler, Loeb & Loeb and SuperAwesome, discusses the key pacesetter laws that emerged in 2024 to regulate minors’ online activities and examines the most significant trends shaping this increasingly difficult compliance area. Part two will provide insights on how companies can approach the emerging new legal framework on teen and children’s data, including considerations for navigating the tension between privacy and safety obligations.

See our two-part series on the FTC’s NGL Labs settlement: “Key Violations and Settlement Terms” (Sep. 18, 2024), and “Compliance Lessons” (Sep. 25, 2024).

From Two to Twenty Laws

In 2022, the Children’s Online Privacy Protection Act (COPPA), applicable to kids under 13 years of age, and the CCPA, applicable to youth under 16 years old, effectively comprised the body of children’s privacy law in the U.S.

Starting in 2022, four general types of laws have imposed heightened obligations for companies to protect teens’ personal data, BakerHostetler attorney Carolina Alonso told the Cybersecurity Law Report: (1) state comprehensive privacy laws; (2) online safety laws specific to minors, which include age-appropriate design codes (AADC); (3) laws governing minors’ access to and use of social media; and (4) laws addressing addictive feeds.

Almost every state law differs in its wording of the applicability standard and age range. Some states (California, Maryland, Delaware, Utah and Tennessee) doubled or tripled up, passing a comprehensive privacy law and a minor-specific one.

“It’s fascinating how many different permutations there are. The issues around children and teens are so complex and have branched out to cover so many different things like safety, mental health, digital wellbeing, that the patchwork seems to be not well stitched together, at least not yet,” Feuer said.

See our two-part series analyzing 2023’s new state privacy laws: “The First Six Plus Compliance Measures” (Jun. 28, 2023), and “Oregon and Delaware Join the Strictest Tier” (Jul. 12, 2023).

Leading 2024 and 2025 Children’s Privacy Developments

Amid the flurry of activity, a few youth privacy developments stand out at the start of 2025.

See “Takeaways and Looming Questions After Ninth Circuit Cuts DPIA From California’s Age-Appropriate Design Code” (Sep. 11, 2024).

Maryland’s Age-Appropriate Design Code

The Maryland Age-Appropriate Design Code Act (Kids Code) went into effect on October 1, 2024, covering all minors. For companies, it is currently the most demanding of the many state laws addressing kids’ and teens’ privacy. The Kids Code applies to online products and features that are “reasonably likely to be accessed by children.” It requires privacy settings for children to be set to a “high level of privacy,” bars processing of minors’ precise geolocation data and requires businesses to complete a data protection impact assessment (DPIA) to determine whether their product design ensures the best interests of children.

California enacted its AADC first, but a federal court enjoined many of its provisions, finding a likely First Amendment violation, because the code requires evaluation of possible harms from content.

Maryland legislators wrote the state’s Kids Code to avoid addressing content, and industry organizations have not challenged it in court yet. The Kids Code is now the foremost topic of discussion in children’s data governance circles, Alonso noted. “Everyone is using it as a reference point. The focus is on Maryland because it’s new and shiny,” and has a detailed test for its DPIA, she said.

Maryland’s Kids Code is something of a relief for companies because “it is not as onerous as the California law. The part that was really killing these big companies was how they had to comply with the unclear DPIA under California,” Loeb & Loeb partner Nerissa Coyle McGinn pointed out.

Under California’s law, companies would have needed to judge whether their product was in the best interests of children without much guidance. In contrast, the Kids Code offers companies factors to inform the best interests determination: whether children could experience harmful contacts, conduct, data collection, processing practices or experiments during use; exploitative contracts (e.g., to buy items in-app); addictive features or algorithms; and any other factor indicating that product design is inconsistent with the best interests of children.

See “Deciphering California and U.K. Children’s Codes and Compliance Obligations” (Jun. 14, 2023).

Connecticut and Colorado Amendments

Connecticut amended its privacy law with SB 3, which became effective October 1, 2024. SB 3 imposes a duty of care on companies to avoid heightened risk of harm to minors, including financial, physical or reputational injuries, violations of their private affairs and discriminatory effects of data use. Colorado also amended its privacy law to impose a duty of care using similar provisions, which take effect October 1, 2025.

Maryland’s Kids Code is overshadowing the children’s privacy provisions of Colorado and Connecticut as 2025 opens. “A lot of people forget that these amendments exist, they don’t realize that these are part of the comprehensive state privacy laws that first passed,” Alonso said.

Yet the two states may rebound into discussions during 2025. “Connecticut’s AG has been talking to some companies already” about children’s issues, Alonso noted. In early 2025, Colorado is set to finalize the implementing regulations for its full privacy law, offering companies clarity they can rely on.

Like the Kids Code, Connecticut’s SB 3 provisions are narrower than California’s law and the amendment has not been challenged as of press time, Feuer noted. Maryland’s law likely will face a challenge before long “in an as-applied situation where real facts and a real company are on the other side,” she predicted. Possibly, Connecticut’s and Colorado’s will as well.

See “Preparing for U.S. State Law Privacy Compliance in 2025” (Dec. 11, 2024).

New Restrictions on Targeted Advertising to Minors

Maryland’s comprehensive privacy law, separate from its Kids Code, is effective in October 2025 and outright prohibits the sale or processing of personal data of consumers under the age of 18 years for purposes of targeted advertising.

As of January 15, 2025, both New Hampshire and New Jersey restrict the processing of teens’ personal data, absent consent, for targeted advertising, sales and profiling that produces significant effects. New Hampshire’s children’s privacy law applies to youth under 16, and New Jersey’s applies to those under 17.

The need to include older, nearly grown teens as part of a tailored compliance program, separate from efforts geared toward grown-ups’ privacy, will formidably impact a wide swath of brands, sites and apps, cautioned Loeb & Loeb partner Jessica Lee. “Most of the sites that have to deal with the new laws aren’t set up for it because they didn’t consider themselves previously in scope,” she noted. Companies have expressed concern about the implications for their site’s architecture, she added, wondering whether they will “have to throw up an age verification on the front page” of their sites, or whether compliance requirements will be “completely disruptive” to how they currently engage with users.

Two New York Laws With Rulemaking, and a New Browser Signal

In August 2024, New York started rulemaking for the two children’s data laws it passed in autumn 2023. The Child Data Protection Act addresses minors’ privacy, and the Stop Addictive Feeds Exploitation for Kids Act addresses the safety of algorithms. “People are trying to wrap their head around the New York legislation. It feels very new and different from the other laws that we’ve seen,” Alonso said. “People are setting it aside because, unlike most of the other state laws, there will be regulations,” she noted. “We are hoping that New York will provide more and better guidance on how to comply,” and fill a gap left by the many other state laws without rulemaking, she shared.

New York’s Child Data Protection Act introduces a provision for a minor’s consent signal sent by browsers or other platforms, Feuer pointed out. “Operators must treat users as minors if a user’s device signals that the user is or shall be treated as a minor. If a minor’s device signals that the person declines to provide informed consent, an operator shall not request such consent, though the operator may make available a mechanism through which the covered user can provide consent,” she explained. No clarity exists about the technology or protocols for device signaling, she continued. Will Global Privacy Control apply? “There are lots and lots of questions. What do you do about shared household devices?” she questioned.
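For a sense of how such device signaling works today, the sketch below is a minimal, assumption-laden illustration – not a statement of how the New York law will operate – of a site checking for a Global Privacy Control-style opt-out signal, which supporting browsers send as the Sec-GPC request header (and expose to scripts as navigator.globalPrivacyControl). Whether the minor-consent signal will use a comparable mechanism remains one of the open questions noted above.

```typescript
// Minimal sketch: detecting a GPC-style browser signal server-side.
// GPC itself expresses an opt-out of sale/sharing; whether New York's
// minor-consent signal will resemble it is an open question.
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Browsers that support Global Privacy Control send "Sec-GPC: 1".
  const gpcEnabled = req.header("Sec-GPC") === "1";
  res.locals.treatAsOptedOut = gpcEnabled; // downstream handlers honor the signal
  next();
});

app.get("/", (_req, res) => {
  res.send(
    res.locals.treatAsOptedOut
      ? "Opt-out signal detected; restricted processing applies."
      : "No opt-out signal detected."
  );
});

app.listen(3000);
```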

See “Examining Distinctive Aspects of Minnesota’s Demanding New Privacy Law” (Jun. 19, 2024).

Texas AG’s Enforcement

The Texas Securing Children Online through Parental Empowerment Act (SCOPE Act) became effective in September 2024 – and quickly drew enforcement from the Texas AG’s recently augmented privacy unit.

In early October 2024, Texas filed a three-count complaint against TikTok for violating the SCOPE Act, alleging it lacks adequate parent verification tools, and that it unlawfully shares, discloses and sells minors’ personal data, even when minors’ accounts are set to “private.” The complaint lists 40 categories of data the company collects about users.

In December 2024, Texas launched new investigations against 15 companies under both the SCOPE Act and the Texas Data Privacy and Security Act. At least a couple of other companies face not-yet-public inquiries, attorneys told the Cybersecurity Law Report.

The December investigations seek information about an array of elements that may become increasingly familiar in states’ privacy enforcement:

  • the number of Texas minors removed from the target’s products/services for providing inaccurate age information;
  • how the target complies with the SCOPE Act, including age registration methods, parental verification methods, purchase restrictions on minors, advertising practices, parental tools and requests, and algorithmic disclosures;
  • how it obtains consent to process minor personal data;
  • the number of Texas minors whose personal data has been sold or shared; and
  • third parties to which the target sold or shared Texas minors’ personal data, the types of personal data involved and the constraints on the third party.

The Texas enforcement has led more companies to ask about compliance generally for children, Alonso noted. The elements laid out in the state AG’s enforcement flag what companies should pay attention to, she pointed out.

See “Meta and Epic Cases Show FTC Toughening Its Children’s Privacy Enforcement” (May 17, 2023).

New Strict and Demanding High-Water Marks Among States

With no consensus among the states, or any national laws beyond COPPA, the pacesetter jurisdictions gain importance. Key new high-water marks for children include:

  • established privacy protections for youth until they turn 18 in the E.U. and four U.S. states (Colorado, Maryland, Delaware and Connecticut);
  • the ban on targeted advertising to teens in Maryland (as of October 2025) and across the E.U.;
  • the requirement by several U.S. states that default settings for teens must be set to a “high level” of privacy;
  • the requirement under Connecticut and Maryland laws that consent is obtained for processing of any minor data beyond what is needed to provide a product or service that the minor requests;
  • the restriction under Maryland law that prohibits the collection of teens’ personal data without consent unless it is strictly necessary to provide online services;
  • a potentially expanded number of companies regulated under Maryland law, which applies when a site or app is “likely to be accessed” by teens; and
  • the potential application of Maryland’s stricter standard where evidence shows that teens access a “substantially similar” product.

Some companies have geared their compliance to age 16, the CCPA’s consent cutoff and a threshold used by many other states. Now companies face age limits of 16, 17 and 18. But distinguishing 16 from 18 is a technically onerous task and no longer practical for most companies, Amy Lawrence, CPO of advertising compliance company SuperAwesome, told the Cybersecurity Law Report. With more jurisdictions choosing the highest of these numbers, it has become safest to apply all youth privacy compliance efforts to those 18 and under, she observed.
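A rough sketch of the tradeoff Lawrence describes, using only the state cutoffs mentioned in this article (the mapping is illustrative, not a complete compliance matrix): maintaining per-jurisdiction age logic requires reliably knowing a user’s state, whereas a single conservative threshold avoids that dependency.

```typescript
// Illustrative only: the per-state cutoffs reflect examples discussed in this
// article and are not a complete or authoritative compliance mapping.
const minorCutoffByState: Record<string, number> = {
  NH: 16, // New Hampshire: under 16
  NJ: 17, // New Jersey: under 17
  MD: 18, // Maryland: under 18
};

// Per-jurisdiction approach: depends on reliably determining the user's state.
function isCoveredMinorByState(age: number, state: string): boolean {
  const cutoff = minorCutoffByState[state] ?? 18; // default conservatively
  return age < cutoff;
}

// Conservative approach: treat everyone under 18 as a covered minor,
// regardless of jurisdiction, avoiding state-by-state logic entirely.
function isCoveredMinor(age: number): boolean {
  return age < 18;
}
```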

See “Examining Maryland’s Game-Changing Data Minimization Requirements” (Apr. 24, 2024).

Five Significant Legislative and Enforcement Trends

Several trends have emerged as new laws have been introduced and with state and federal enforcement efforts.

1) Protection From Discrimination, and Physical and Financial Harms

State laws are expanding to explicitly guard against discrimination as well as physical and financial harms caused by use of minors’ data. Efforts with respect to children’s legislation parallel the general movement for state privacy laws to increase guardrails around sensitive information, vulnerable populations and profiling of individuals for weighty decisions.

2) Protections Around Mental Health

State laws are broadening beyond privacy and security to include protections around mental and emotional health. “In recent years, particularly post- [the Covid] pandemic, there have been increasing concerns about kids’ mental health and their well-being on digital services overall,” Feuer noted. Multiple lawmakers behind state children’s privacy and safety laws have cited a whistleblower who, in 2021, testified in congressional hearings that Meta suppressed internal conclusions that social media services hurt teenagers’ health.

3) Addiction Prevention

More laws, including New York’s, are attempting to regulate digital product design to stop addictive or improper features. The FTC has brought cases, including one against Epic Games in 2023, alleging that platforms’ default settings made it easier for predators to contact teens.

4) Parental Consent Requirements

A handful of states now require that parental consent be obtained before a teen can use social media or before the minor’s data is collected.

5) Bipartisan Enforcement

“We already see enforcement coming out of red states as well as blue states. We are going to see a lot of action,” said Feuer, who runs an FTC-authorized safe harbor program, ESRB Privacy Certified, for gaming and entertainment companies. “There are going to be various tensions” with enforcers pushing both on privacy and online safety, and empowering teens while seeking to validate parental rights, she added.

Artificial Intelligence

Navigating Ever-Increasing State AI Laws and Regulations


AI continues to make inroads into many, if not most, aspects of everyday business and life. Recognizing individuals’ concerns about the potential for AI systems to yield erroneous or discriminatory outcomes or decisions, U.S. states have rushed to adopt applicable laws and regulations, relevant especially to systems deemed to involve certain high-risk functions like financial or employment-related decision-making.

This article, distilling insights offered during a Husch Blackwell presentation, surveys the huge volume of AI-related legislation introduced and adopted, and regulations and guidance issued, in 2024. It also offers an outlook on AI legal and regulatory efforts expected in 2025, including proposed California regulations pertaining to automated decision-making technology and a Texas AI law focused on high-risk systems.

See our two-part series on how to manage AI procurement: “Leadership and Preparation” (Sep. 18, 2024), and “Five Steps” (Oct. 2, 2024); as well as “AI Governance Strategies for Privacy Pros” (Apr. 17, 2024).

At Least 447 Proposed AI Laws in 2024

In 2024, there were about 447 AI-related bills introduced across 45 different states, as well as Washington, D.C., Puerto Rico and the U.S. Virgin Islands – and four of the five states without AI bills did not have a legislative session in 2024, Husch Blackwell partner David Stauss noted. In California, 13 out of 41 bills introduced last year were signed into law – the most of any state. The other states with the most signed AI bills include:

  • Maryland (8 out of 23 bills signed);
  • Utah (6 out of 7);
  • Florida (4 out of 15); and
  • Illinois (3 out of 28).

New York had the most AI bills introduced – 80 in all – but only one of them passed, Stauss continued. The 447 bills that Stauss and his colleagues studied cover a wide range of uses and topics. Half of those bills involved:

  • use of AI by government agencies and law enforcement (17%);
  • use by private sector businesses and organizations (15%);
  • responsible use, including bills to address potential discriminatory decisions by algorithms (10%); and
  • use in education (8%).

Notably, 1% of the proposed bills concerned use of AI in judicial proceedings, observed Stauss. A handful (0.5%) addressed whether AI could be granted “personhood” and associated rights. The statistics presented were based on the primary focus of the bills in question. Many touched on more than one area of potential use of AI.

See our three-part series on new AI rules: “NYC First to Mandate Audit” (Jun. 15, 2022), “States Require Notice and Records, Feds Urge Monitoring and Vetting” (Jun. 22, 2022), and “Five Compliance Takeaways” (Jul. 13, 2022).

Notable AI Laws Passed in 2024

Colorado AI Act

The Colorado AI Act (CAIA) is the first of its kind, according to Husch Blackwell associate Shelby Dolen. Unlike many consumer privacy laws, this act does not have any consumer- or revenue-based applicability thresholds. It applies to all “developers” and “deployers” of high-risk systems deployed to Colorado consumers. Unlike the state’s consumer privacy law, CAIA does not exempt employees from the definition of consumer. An AI task force is currently working on reviewing and revising the law, which is set to take effect February 1, 2026.

CAIA defines a “high-risk” system as one that, when deployed, either makes or is a substantial factor in making a “consequential decision,” which includes a decision that “affects the provision or denial of any cost or terms of education, employment, financial or lending services, government services, health care, housing, insurance or legal services,” explained Dolen. CAIA excludes anti-fraud, cybersecurity and anti-virus applications of AI from the definition of “high-risk.”

There is no private right of action under CAIA, Dolen added. The Colorado AG is empowered to enforce the law and may impose a penalty of up to $20,000 per violation.

See “How to Address the Colorado AI Act’s ‘Complex Compliance Regime’” (Jun. 5, 2024).

California

Four significant AI-related bills were passed in California in 2024, one of which was vetoed, according to Husch Blackwell associate Owen Davis.

AB 2885

This bill amended California laws to incorporate a definition of AI, said Davis. It took effect on January 1, 2025. Many of the affected laws apply to state agencies, but some affect private entities. The definition closely aligns with the definition used in many other AI bills.

AB 2013

This law, which takes effect on January 1, 2026, requires developers of generative AI systems or services to disclose on their websites a high-level summary of the datasets that the systems use. Although the bill targets developers, it also encompasses users of generative AI systems who substantially modify the system they are using. It remains to be seen which users will be required to make the requisite disclosures.

See “Deciphering California’s Pioneering Mandate for an AI Nutrition Label” (Oct. 16, 2024).

SB 942 – California AI Transparency Act

This act takes effect January 1, 2026. It imposes disclosure requirements on companies that create code or otherwise produce generative AI systems and that have more than one million monthly visitors using such AI systems. It also requires covered companies to develop tools for identifying whether content has been created by generative AI, including by using latent signifiers within the created content that enable that content to be identified as AI-generated.

SB 1047 – Safe and Secure Innovation for Frontier AI Systems Act

Although this bill passed, it was vetoed by Governor Gavin Newsom, noted Davis. It would have applied only to the largest foundation/frontier models – those that cost $100 million to develop or $10 million to fine-tune. Newsom believed the bill focused too much on size and not enough on specific use cases for the covered AI models.

See “Outgoing CPPA Board Member Discusses Rulemaking and Looming Privacy Issues” (Sep. 25, 2024).

Utah

Utah’s SB 149 took effect on May 1, 2024, continued Davis. It provides that current consumer protection laws continue to apply when a firm uses generative AI to accomplish tasks or interact with consumers. It requires a company using generative AI to interact with consumers to disclose that fact when asked. Additionally, it requires certain licensed firms, including accountants and architects, to disclose proactively their use of generative AI. Finally, it creates an Office of Artificial Intelligence Policy, which is focused on future AI regulation.

See “Examining Utah’s Pioneering State AI Law” (Apr. 3, 2024).

Illinois

Illinois’ HB 3773 takes effect on January 1, 2026, said Davis. It amends the state’s existing employment anti-discrimination law to provide that AI cannot be used to discriminate against employees in protected classes in any aspect of the employment relationship. It also makes it a civil rights violation for an employer to fail to disclose its use of generative AI. The law will be enforced by the Illinois Department of Human Rights and the Illinois Human Rights Commission, which are expected to issue regulations on how employers must provide the requisite disclosure.

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

Federal AI Efforts in Flux

In light of the new administration in 2025, federal AI efforts are likely to change in some respects, Husch Blackwell partner Erik Dullea noted. In November 2024, the Department of Homeland Security (DHS) published a Roles and Responsibilities Framework for AI in Critical Infrastructure. The framework is premised on President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, which incoming President Trump has promised to rescind. The DHS framework, which is entirely voluntary, focuses on increasing and facilitating collaboration across the critical infrastructure supply chain. In contrast, two other initiatives related to the Executive Order are the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework and the U.S. AI Safety Institute, both of which appear to have some bipartisan support.

See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023); and our two-part series “What the AI Executive Order Means for Companies”: Seven Key Takeaways (Nov. 8, 2023), and Examining Red‑Teaming Requirements (Nov. 15, 2023).

New York State Financial Services Guidance

In October 2024, the New York State Department of Financial Services (NYDFS) issued an industry letter with guidance on AI risks, noted Dullea. The NYDFS expects covered firms to factor AI risks into their annual reports on compliance with the state’s cybersecurity requirements. It also calls on them to consider how AI could be used to mitigate cybersecurity risks. Finally, in addition to considering the risks AI may pose to their networks or interactions with consumers, covered firms also must consider how AI may be used to target them and the broader risks AI poses to the industry.

See our two-part series “Amendment to NYDFS Cyber Regulation Brings New Mandates”: Governance Provisions (Dec. 13, 2023), and First Compliance Steps (Jan. 3, 2024).

Anticipated Legislative and Regulatory Action in 2025

For the past couple of years, a multistate group composed of AI experts and lawmakers has been working on developing “interoperable” AI bills, according to Stauss. Its efforts produced CAIA, as well as Connecticut’s SB 2; the latter did not pass in 2024 but has already been reintroduced for the 2025 session.

In 2025, the working group has almost 250 lawmakers from 47 states on its mailing list, including nearly 50 who attend most meetings. The group’s goal continues to be ensuring interoperability of AI laws across states. As many as a dozen states may work on bills similar to CAIA and Connecticut’s SB 2, Stauss added.

See “Applying AI in Information Security” (May 15, 2024).

Texas AI Bill

Texas State Representative Giovanni Capriglione, who wants Texas to be a global center of AI, introduced a bill that takes an approach similar to Connecticut’s SB 2, noted Dullea. The Texas Responsible AI Governance Act would:

  • make developers and deployers accountable;
  • empower consumers to understand when they are interacting with AI systems and give them rights of redress; and
  • empower the Texas AG to enforce the law.

Like CAIA, the bill focuses on high-risk AI systems, which include those that make or contribute to certain consequential decisions, continued Dullea. Enterprises that qualify as small businesses under U.S. Small Business Administration guidelines would be entirely exempt from the law.

The definitions of “developer” and “deployer” align with those in other AI legislation, Dullea observed. Both would be subject to certain reporting, disclosure and/or risk management requirements. In addition, the Texas AI law would provide the following:

  • Both developers and deployers must use reasonable care to protect consumers from foreseeable algorithmic discrimination.
  • If a deployer detects a problem with the operation of a high-risk system, it must advise the developer.
  • A deployer must:
    • train employees to oversee and monitor high-risk systems; and
    • advise consumers they are interacting with an AI system before or at the time of interaction.
  • A deployer that modifies a system into a high-risk system or modifies the functionality of a high-risk system will be considered a developer. “If you modify it, you own it,” Dullea emphasized.

The Texas AI bill provides for a regulatory “sandbox” where AI systems could be tested for a limited time without being subject to regulatory enforcement, continued Dullea. The bill also calls for the establishment of an AI council within the governor’s office to provide advisory opinions to state agencies and the AG on the safe and ethical use of AI. It remains to be seen whether an advisory opinion from the council could be the basis for an enforcement action.

The Texas AG would be empowered to investigate and notify covered entities of violations. If an entity does not cure an alleged violation within a 30-day cure period, the AG could issue a fine. A consumer would not have the right to sue for monetary redress. However, if a consumer could show an AI system engaged in a prohibited activity, the consumer could seek declaratory and injunctive relief, as well as attorneys’ fees and costs. A consumer would also have the right to appeal an AI-generated decision.

See “FTC and State Enforcers Reveal What’s Next and What to Do About It” (Oct. 2, 2024).

Carryover Legislation

Several states had bills pending in 2024 that may be carried over into the 2025 legislative sessions, according to Dolen.

New Jersey Employment-Related Bills

In 2024, there were 26 bills introduced in New Jersey that implicated AI in some way, including seven concerning the use of AI in the employment context:

  • AB 4030, AB 3854 and SB 1588 concern the use of automated employment decision tools and require companies to minimize any discrimination that may result from use of such tools;
  • SB 3015 and AB 3911 regulate use of AI-enabled video interviews during the hiring process; and
  • SB 2964 and AB 3855 establish standards for conducting independent bias auditing for automated employment decision tools.

Other States

There are also other states where AI-related legislation introduced in 2024 may be carried over into 2025, particularly with respect to elections, Stauss said. The additional legislation includes:

  • Virginia: Bill 747 covering AI developers, SB 164 prohibiting undisclosed dissemination of deepfakes and HB 697/SB 571 making it a misdemeanor to use synthetic media for fraudulent activity;
  • California: several AI bill placeholders on the legislative agenda;
  • Missouri: two AI bills, including one focused on use of AI in property assessments;
  • Nevada: a bill on synthetic media content;
  • Texas: multiple bills, including bills covering use of AI in schools and creation of sexually explicit material;
  • Montana: bills addressing AI and elections, as well as general AI use; and
  • Arkansas: a bill on deepfakes in elections.

California Rulemaking

CCPA Rules for Automated Decision-Making

The California Privacy Protection Agency Board has issued proposed regulations under the CCPA, one element of which covers AI-driven automated decision-making, noted Davis. The comment period for the proposed regulations ends in January 2025.

The proposed regulations define “automated decision-making technology” (ADMT) as “any technology that processes personal information and uses computation to execute a decision, replace human decision making, or substantially facilitate human decision making,” Davis explained. A key element of the definition is the processing of personal information. The regulations would apply when ADMT is used for:

  • making significant decisions concerning consumers in the areas of:
    • financial/lending services;
    • housing;
    • insurance;
    • education enrollment/opportunity;
    • criminal justice;
    • employment opportunity or compensation; and
    • healthcare;
  • extensive profiling, including:
    • profiling for work or education;
    • public profiling; or
    • consumer profiling for behavioral advertising; and
  • training uses of ADMT.

Certain uses of ADMT for extensive profiling could trigger a duty to conduct a risk assessment, continued Davis. The regulations apply only to information that is subject to the CCPA. An entity using covered ADMT must:

  • provide pre-use notice;
  • give a right to opt out; and
  • give a right to access information about the ADMT.

See “Deciphering the New CPPA Proposed Regulations for Data Brokers” (Dec. 11, 2024).

Anti-Discrimination Regulations

The California Civil Rights Department proposed regulations under California’s Fair Employment and Housing Act to cover “automated decision systems,” said Davis. “Automated decision system” is defined differently from ADMT and does not focus on personal information. The regulations would prohibit using an automated decision system to discriminate on the basis of a protected class. If adopted, current rules under the anti-discrimination statute, including those related to recordkeeping, would also apply to automated decision systems.

Colorado AI Task Force

Colorado has created an AI impact task force composed of lawmakers, business leaders, consumer advocates and academics, noted Dolen. The task force is focusing on AI in general and CAIA in particular. It is charged with submitting recommendations to the Joint Technology Committee in the governor’s office by February 1, 2025.

To date, the task force has identified certain favorable aspects of the existing AI regime, including:

  • the requirement to complete impact assessments and document discrimination risks;
  • the requirement to provide notice to consumers who are the subject of AI-driven decisions; and
  • limiting enforcement to the Colorado AG’s office.

The task force also has identified concerns including, for example:

  • difficulties faced by small companies in complying;
  • high compliance costs;
  • CAIA’s limitation to high-risk applications; and
  • CAIA’s potential to discourage innovation and favor industry leaders.

See “Colorado Controllers: The Final (Rules’) Frontier” (May 31, 2023).

People Moves

ZwillGen Launches AI Division in Washington, D.C.


ZwillGen has welcomed Brenda Leong and Jey Kumarasamy to its newly created AI division in Washington, D.C. Leong, the division’s director, and Kumarasamy, its legal director, arrive from Luminos.Law LLP. The pair joins as part of a group acquisition including lawyers, data scientists, proprietary processes and technology to form the division, which focuses on red team testing, technical assessments, audits and certifications for clients’ AI models and systems.

The Luminos.Law group joins a ZwillGen team that regularly counsels developers and users of machine learning, large language models and other technologies powered by AI, advising clients on the development and use of AI-powered technologies, responding to regulatory inquiries and crafting policies for internal uses of AI.

Leong, the former managing partner at Luminos.Law, has expertise in digital identity and the responsible use of biometrics, with a focus on facial recognition, facial analysis and emerging issues around voice-operated systems. In her new role, she leads and oversees the operations of the AI division. Earlier in her career, she served in the U.S. Air Force before becoming the senior counsel and director of AI and ethics at the Future of Privacy Forum, where she oversaw resource development for the implementation and analysis of AI and machine learning technologies.

Kumarasamy brings his technical background, including software engineering experience in both the private and public sectors, to his legal director role in the firm’s AI division. Before joining ZwillGen, Kumarasamy was a senior associate at Luminos.Law, where he advised clients on AI-related matters such as AI governance, red teaming and model audits. Prior to that, he worked as a corporate and commercial lawyer with a focus on technology transactions.

For commentary from Leong, see “Deciphering California’s Pioneering Mandate for an AI Nutrition Label” (Oct. 16, 2024). For commentary from Kumarasamy, see our two-part series on New York City’s law requiring AI bias audits: “What Five Companies Published – and How Others Avoid It” (Sep. 13, 2023), and “A Best Practice Guide, From Choosing an Auditor to Avoiding Enforcement” (Sep. 20, 2023).

For insights from ZwillGen, see our two-part series on the FTC’s NGL Labs settlement: “Key Violations and Settlement Terms” (Sep. 18, 2024), and “Compliance Lessons” (Sep. 25, 2024); as well as our two-part series on scraping battles: “Meta Loses Legal Effort to Halt Harvesting of Personal Profiles” (Feb. 21, 2024), and “Global Privacy Regulators Urge Safeguards to Stop Data Scraping” (Mar. 6, 2024).