Clearview AI settlement limits company’s sale of facial recognition tools

This week, facial recognition software company Clearview AI settled a lawsuit with the American Civil Liberties Union. The group sued Clearview in 2020 for allegedly violating Illinois' biometric information privacy law. Although the case involves state law, the settlement has national implications, including limits on access to the company's faceprint database.

Clearview AI says the database now contains some 20 billion facial images. I spoke with Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, who said the lawsuit centered on the use of biometric identifiers, including faceprints. The following is an edited transcript of our conversation.

Calli Schroeder: Essentially, if you're using a biometric identifier to track someone's identity in Illinois, there are consent procedures and permissions you have to follow to do so. And Clearview had not received consent, or even, to our knowledge, sought consent from any Illinois resident for the collection and use of their faceprints. So that's what the ACLU brought the suit over. They were actually suing on behalf of multiple plaintiffs: survivors of domestic violence and sexual assault who may not want their faceprints searchable online where abusers could find them, undocumented immigrants, current and former sex workers, and other communities who are at heightened risk from this kind of identification.

Kimberly Adams: So it's been about two years since this complaint was filed, and the settlement is now final. What are some of the main elements of this settlement?

Schroeder: One of the most significant is that Clearview is permanently banned from granting paid or free access to its facial recognition database to private entities, which includes both private companies and individuals. And that part of the settlement applies nationwide, not just in Illinois. They also have to maintain an opt-out request form on their website so that Illinois residents can ask not to appear in search results or not to be included in the database at all. And they are also barred from granting access to the database to any Illinois state or local government entity, which includes law enforcement, for a period of five years.

Adams: Clearview AI hasn't just faced intense scrutiny here in the U.S. How does this settlement compare to some of the other sanctions the company has faced in the United States and abroad?

Schroeder: Abroad, it's an interesting comparison. In Italy, they faced an actual monetary fine, I believe it was 20 million euros [$21 million]. In the United Kingdom it was a £17 million [$21 million] fine. In France I don't believe there was a fine, but Italy, the U.K. and France all ordered Clearview to remove photos from its database, which is a time-consuming undertaking and also greatly reduces the number of faceprints the company can claim to have. And it will be interesting to see whether U.S. entities like the Federal Trade Commission are interested in trying to wield that level of authority when it comes to these sorts of practices.

Adams: What privacy issues or concerns regarding Clearview AI are not addressed in the settlement?

Schroeder: The larger question of whether and how we allow facial recognition to be widely used in society isn't touched here at all. And that's partly because it's much more of a philosophical, existential question that may not be appropriate for a particular lawsuit. But there's this ongoing debate about whether to allow a technology to proliferate when it's based on something you can't change. You can't change your face the way you would a password. If an account is compromised, your face is still your face. So if you are going to function in public, function in the world, it is information about you that is always visible and that you cannot change. So the ongoing discussion about what the appropriate use of that is, whether there should be a total ban on facial recognition or whether it should only be allowed in certain circumstances, with warrants, is a discussion I think we need to have, because this technology isn't going to go away unless it's subject to bans and restrictions strict enough that it no longer seems worth pursuing.

Adams: What are the strengths and limits of state-level privacy laws? And how do we see them playing out?

Schroeder: The advantage of state privacy laws is that it's often possible to pass a stricter law than what you could get at the federal level. We've been calling for a federal privacy law for a very long time, and there just isn't a lot of movement on it. So individual state laws are a great way to address privacy protections more quickly and protect people in those states. And we're hoping that by getting good privacy laws in multiple states, you eventually reach a critical mass where companies say, "Well, we have to comply with these standards in this state, this state and this state. Why don't we just make that our baseline standard?" The problem with the state-level approach, the kind of patchwork approach to privacy that we have, is that until you reach that critical mass, it really is a patchwork. People in some states functionally have more rights over their privacy and information than people in other states.

Related Links: More from Kimberly Adams

The ACLU and Clearview AI are both calling this settlement a win. Or, as Clearview put it in an emailed statement to Marketplace, "a huge win." A company representative went on to say that Clearview AI will not make any changes to its current business model, that it will continue to expand its business offerings in compliance with applicable law and that it will pay a small amount of money to cover fees and costs, far less than it would cost to continue litigating.

The ACLU calls the settlement a big win and urges more states to implement strong privacy laws.

Meanwhile, other tech companies are clearly paying attention. Axios reports that Facebook has disabled augmented reality filters and avatars for users in Texas and Illinois, the kind that add cat ears or virtual sunglasses to your face while you chat. Parent company Meta says it doesn't think this type of AR counts as facial recognition technology, but better safe than sorry, it seems.
