The California Privacy Rights Act (CPRA), which went into effect on January 1, 2021, is a piece of legislation that amends the California Consumer Privacy Act (CCPA). The CCPA was groundbreaking and forced businesses to provide consumers with options to access and delete their personal information. The CPRA grants Californians even more privacy rights, including the right to control how companies collect, use, and share their personal information. The CPRA also requires companies to provide clear and easy-to-understand privacy notices, and to obtain consumers’ affirmative consent before collecting and using sensitive personal information. Digital privacy got even more focus in 2022, when the first enforcement action specifically called out the absence of Global Privacy Control, a new tech standard for signaling an opt-out of third-party data sharing.
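For context on that enforcement hook: Global Privacy Control is transmitted as an HTTP request header (the GPC proposal defines a `Sec-GPC: 1` header, plus a browser-side `navigator.globalPrivacyControl` property), so a site can honor it with a simple server-side check. A minimal sketch, assuming a framework that exposes request headers as a dict; the `should_opt_out` helper name is my own, not from any library:

```python
# Illustrative sketch of honoring the Global Privacy Control (GPC) signal
# server-side. The GPC proposal defines a `Sec-GPC` request header whose
# only valid value is "1"; anything else means no opt-out signal.

def should_opt_out(headers: dict[str, str]) -> bool:
    """Return True when the request carries a valid GPC opt-out signal."""
    # HTTP header names are case-insensitive, so normalize keys first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


if __name__ == "__main__":
    print(should_opt_out({"Sec-GPC": "1"}))  # opt-out signaled
    print(should_opt_out({}))                # no signal present
```

A site that receives a truthy result here would treat the user as opted out of third-party data sharing, the same outcome as clicking a "Do Not Sell or Share" link.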
This 2021 law goes into full effect today, January 1, 2023, ending a number of exemptions and moratoriums. Financial services are no longer exempt under the GLBA carve-out, and employers must honor CPRA requirements for their employees, not just their consumers.
Because California is home to much of Big Tech, CPRA is likely to have a far-reaching impact, setting a global standard for data privacy protection and inspiring other countries to create legislation that provides similar rights and protections.
Many countries and US states are enacting similar privacy laws. In the US that’s creating a patchwork of state regulations that are increasingly difficult to comply with.
With every new privacy law, consumers get closer to having full control of their data. We’re still a long way off, but eventually regulations will make it such that the default for businesses is to not have access to your data or be able to share it with third parties. Your digital identity will be something you’ll be able to trace across the web.
Artificial Intelligence fundamentally changes the scale of the privacy problem
AI is going to hasten the pace of regulation. The wonderment of ChatGPT this past month has mostly revolved around the job-killing or job-changing effects of the technology. While ChatGPT may not have access to your personal information (or at least not reveal that it has it), other AI will.
Imagine AI crawling the vastness of the Internet to find pieces of your digital identity that seem unrelated to a human reviewer, but that AI can instantly link: the last 4 digits of your social found on one site, an email address you used 12 years ago, a personal blog you wrote in the 2000s, your face found in a video clip.
Abuse of personal data online is only going to get worse. Bad actors, or simply negligent data custodians, will prove repeatedly that, given our endless supply of personal information to the Internet, the web of data really is a web that can be traversed. Deepfake photos, video, and audio are yet another wrench in all of this.
Regulators will, and should, continue to make it more difficult for companies to rely on data capture and data sharing. We’ll see security regulations be enhanced, but security alone isn’t a solution to this problem. Privacy requirements will empower individuals to barter their data more thoughtfully.
It’s hard to predict where consumer empowerment will really take us. Until the dangers are real and present, people aren’t going to change data sharing behaviors even if the mechanisms are clearer and easier. It’d be a full-time job to understand every website’s and app’s controls and fine-tune them to your personal privacy preferences. Regulation is often a reaction anyway, so we’ll likely experience pain before change.
A couple things are certain:
- Legislators will keep bolstering privacy laws
- Companies will need to invest even more in privacy engineering
I should add that AI isn’t inherently bad. It’ll just be misused, as humanity has demonstrated with every other technology.