Police Use Clearview AI for Nearly 1M Searches: Privacy Breach?


Clearview AI, a facial recognition firm, has revealed to the BBC that it has run nearly a million searches for US police. Its CEO Hoan Ton-That also revealed that the company has scraped 30 billion images from platforms such as Facebook, without users’ permission. This has led to Clearview being fined millions of dollars in Europe and Australia for privacy breaches. Critics argue that the police’s use of Clearview puts everyone into a “perpetual police line-up”.

Clearview AI sells a facial recognition system to law enforcement customers. A customer uploads a photo of a face, and the system searches a database of billions of scraped images for matches, returning links to where matching images appear online. The Electronic Frontier Foundation has criticized the system as invasive, arguing that it violates privacy and could lead to false arrests.
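At its core, the matching step described above is a nearest-neighbour search over face embeddings: each photo is reduced to a numeric vector, and a query face is compared against every stored vector. The sketch below is purely illustrative; the URLs and vectors are invented, and a real system would use neural-network embeddings and an approximate index over billions of entries rather than a brute-force scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, database, threshold=0.9):
    """Return source URLs whose stored embedding is close to the query, best match first."""
    scored = [
        (cosine_similarity(query_embedding, emb), url)
        for url, emb in database.items()
    ]
    return [url for score, url in sorted(scored, reverse=True) if score >= threshold]

# Toy database: URL -> precomputed face embedding (values invented for illustration).
database = {
    "https://example.com/profile/a": [0.9, 0.1, 0.2],
    "https://example.com/photo/b":   [0.1, 0.9, 0.3],
    "https://example.com/event/c":   [0.88, 0.12, 0.25],
}

print(search([0.9, 0.1, 0.2], database))
# → ['https://example.com/profile/a', 'https://example.com/event/c']
```

The threshold is the operationally important knob: set it too low and unrelated faces surface as "matches", which is exactly the failure mode critics raise with low-quality CCTV images.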

Clearview AI is banned from selling its services to most US companies after the American Civil Liberties Union (ACLU) took the company to court in Illinois for breaching privacy law, but US police forces are exempt from the ban. Police use of facial recognition is often sold to the public as reserved for serious or violent crime, and the software is banned outright in several US cities, including Portland, San Francisco and Seattle. Hundreds of police forces across the US nevertheless use Clearview, though few routinely disclose it.

Miami Police use Clearview AI to help solve crimes ranging from shoplifting to murder. Assistant Chief of Police Armando Aguilar said the system is used around 450 times a year and has helped solve several murders. There are no laws governing police use of facial recognition, so Aguilar says his department treats a match like a tip: officers do not make an arrest based on the algorithm alone, but confirm the identification through traditional methods such as a photographic line-up.

Police use of facial recognition technology has been linked to wrongful arrests. Clearview claims a near-100% accuracy rate, but those figures are typically based on mugshots, which are far higher quality than the images police feed into the system in practice. Several cases of mistaken identity involving police facial recognition have been documented, and the lack of data and transparency around police use means the true figure is likely far higher. Clearview’s CEO, Hoan Ton-That, accepts that police have made wrongful arrests using facial recognition technology, but attributes those to “poor policing”.

Civil rights campaigners are skeptical of such accuracy claims and want police forces to say openly when the software is used, and to have its accuracy tested in court. Kaitlin Jackson, a criminal defense lawyer based in New York, calls the idea that the software is incredibly accurate “wishful thinking”: its accuracy depends on the quality of the image fed into it, and there is no way to verify matches made from grainy CCTV stills. Campaigners therefore want the algorithm scrutinized by independent experts.

Clearview’s CEO, Hoan Ton-That, has recently given defense lawyers access to the system in specific cases, arguing that prosecutors and defenders should have the same access to the technology. Last year, Andrew Conlyn of Fort Myers, Florida, had charges against him dropped after Clearview was used to find a crucial witness. Ton-That is nonetheless unwilling to testify in court to the system’s accuracy, since investigators use other methods to verify its matches.

Mr Conlyn was charged with vehicular homicide after a fatal car accident, despite claiming he had been the passenger. His lawyers used Clearview to identify a passerby who had pulled him from the car and left the scene without making a statement. The system found a match in just 3-5 seconds, giving Mr Conlyn’s defense the witness it needed.

Clearview built its face prints from photos people posted online, without their consent. This has raised serious civil liberties and civil rights concerns, and many are calling for the company to be banned. In Mr Conlyn’s case, the witness Clearview identified, Vince Ramirez, gave a statement that he had taken Mr Conlyn out of the passenger’s seat, and the charges were dropped. Cases like this, where the technology demonstrably worked, lead some to believe the benefits outweigh the risks. The debate over Clearview’s use of facial recognition continues, and whether the company will be banned or allowed to continue operating remains to be seen.
