June brought a mixed bag of policy updates, regulatory showdowns, and AI rulebook drama. Here’s a roundup of the major headlines.

01

Colorado gets serious about biometrics

Starting July 1, the Colorado Privacy Act requires businesses to adopt a formal written policy on biometric data, such as facial recognition data and fingerprint scans. The policy must cover how long the data is kept, what happens if there’s a breach, and when it gets deleted. Most businesses will also need to make that policy publicly available. Read more

02

UK passes Data (Use and Access) Act

The UK’s Data (Use and Access) Act received Royal Assent in June 2025. The law introduces new data-sharing rules aimed at improving public service delivery, including healthcare and infrastructure. The UK government estimates the reforms could generate up to £10 billion in economic benefits over the next ten years. Read more

03

New US state privacy and social media laws take effect

In June, several US state laws came into effect. Tennessee’s Information Protection Act and Minnesota’s Consumer Data Privacy Act expanded consumer rights over personal data. Meanwhile, a federal judge temporarily blocked Georgia’s new law requiring parental consent for minors to create social media accounts, citing free speech concerns. Read more

04

EU agrees on streamlined cross-border GDPR enforcement

On June 16, 2025, the European Parliament and the Council of the EU reached a provisional agreement on a new regulation aimed at improving cooperation between national data protection authorities in cross-border GDPR cases. The regulation sets out harmonised procedures, including standardised complaint criteria, investigation deadlines, and clearer rights for both complainants and organisations. Read more

05

Sweden wants to pause the EU’s AI Act

Sweden’s prime minister says parts of the EU’s AI Act are too vague and need a time-out. He’s not alone: other countries, including Poland and the Czech Republic, are also open to slowing things down. With no clear implementation standards in place yet, some leaders fear rushing the rollout could backfire. Read more

06

California drops a blueprint for responsible AI

A group of top-tier AI experts, commissioned by Governor Newsom, released the California Report on Frontier AI Policy. It outlines how to keep AI ethical, transparent, and, frankly, not terrifying. The report calls for safety checks, whistleblower protections, and stronger governance before letting powerful AI systems loose. Read more

07

California bill could rein in CIPA lawsuits

The California Senate unanimously passed SB 690, which would amend the state’s wiretap law, the California Invasion of Privacy Act (CIPA), to prevent it from being used to sue businesses over tools like cookies, chatbots, and session replays. If enacted, the bill would carve out a “commercial business purpose” exception, aiming to stop opportunistic lawsuits while keeping core privacy protections intact. Read more

08

Nebraska and Vermont pass kids’ online privacy laws

Following California and Maryland, Nebraska and Vermont have passed their own age-appropriate design codes. On May 30, Nebraska’s Governor signed the state’s Age-Appropriate Online Design Code Act, and on June 12, Vermont’s Governor signed off on Vermont’s version. Vermont’s rules are more sweeping, while Nebraska’s take a narrower, business-friendly route. Read more

09

23andMe’s bankruptcy triggers biometric data alarm bells

DNA-testing company 23andMe filed for bankruptcy, and suddenly, millions of users are wondering what happens to their genetic data. A buyout bid by biotech firm Regeneron fell through, leaving a nonprofit led by 23andMe’s co-founder as the likely buyer. Meanwhile, lawmakers and privacy watchdogs are calling for strict limits on how the company’s trove of genetic data can be transferred or sold. Read more

10

California fights back against the AI regulation ban

California AG Rob Bonta and other privacy regulators have urged Congress to ditch a proposal that would block states from passing AI laws for 10 years. The letter argues this federal overreach would gut existing state protections and tie regulators’ hands just as AI risks are mounting. Read more