In a move raising eyebrows among privacy advocates and civil liberties groups, police departments near the U.S.–Mexico border are quietly investing in experimental artificial intelligence systems designed to impersonate humans online. These AI-powered “virtual agents,” developed by a New York–based company called Massive Blue, are being deployed under the banner of public safety—but their true targets range from traffickers and criminals to political activists and college protesters.
Internal documents and contracts obtained by 404 Media through public records requests reveal that these lifelike AI personas are part of a controversial program called Overwatch, which Massive Blue markets as a cutting-edge surveillance tool. The pitch? Deploy AI characters that mimic real people online, interact with suspects via text or social media, and extract incriminating intelligence.
So far, the technology hasn’t led to any known arrests. But it has already cost taxpayers hundreds of thousands of dollars—and ignited new debates about how far police should go in their use of AI to monitor the public.
“Virtual Agents” With Real-World Implications
Overwatch, according to marketing materials reviewed by 404 Media, is promoted as an “AI-powered force multiplier” for law enforcement. Think of it as a digital sting operation: a fleet of AI-generated personas—complete with names, backstories, interests, and dialects—engaging potential suspects in conversations across social media and encrypted messaging platforms like Signal and Telegram.
Some personas are modeled as children to lure suspected predators. Others are designed as “honeypots” to attract traffickers. One disturbing example is a fictional 14-year-old boy programmed to pose as a victim of child trafficking. Another character is described as a 25-year-old Yemeni-American woman who uses multiple communication platforms and speaks in a specific Arabic dialect. There’s even a lonely, divorced woman persona, described as a radicalized protester interested in baking and body positivity.
The program also includes AI personas targeting college protesters and political activists—groups whose inclusion raises significant civil liberties concerns. In the context of the Trump administration’s recent revocation of visas from students involved in anti-war protests, critics fear these tools could become a digital dragnet for dissent.
Real Money for Imaginary People
Despite the lack of concrete results, police departments are buying in. Pinal County, Arizona—located between Tucson and Phoenix—signed a $360,000 contract with Massive Blue, funded by an anti-human trafficking grant from the state’s Department of Public Safety. According to the county’s purchasing report, the contract includes around-the-clock surveillance of internet platforms and the operation of up to 50 AI personas deployed across three investigative categories.
Yuma County, also in Arizona, tried the technology on a smaller scale in 2023 with a $10,000 contract. But after a trial run, the county declined to renew. A sheriff’s office spokesperson said bluntly, “It did not meet our needs.”
Still, Massive Blue is pressing forward, selling its product to agencies eager for new tools to tackle border crimes—even if the evidence of effectiveness is still paper-thin.
When AI Policing Crosses the Line
The use of artificial personas by law enforcement is not new, but the automation and scale that AI enables brings fresh ethical and legal dilemmas. Dave Maass of the Electronic Frontier Foundation, a nonprofit focused on digital rights, voiced concerns over the program’s ambiguous goals.
“What problem are they actually trying to solve?” Maass asked. “A fake youth talking to a pedophile is one thing. But an AI targeting college protesters or sex workers? That crosses into policing speech and lifestyle choices. I’m not concerned about escorts or activists. What this seems effective at is violating people’s First Amendment rights.”
Indeed, while traditional undercover policing is governed by strict protocols, these AI personas operate in murky territory. There’s little transparency, no warrants required, and no clear oversight over how data is collected, stored, or used. Worse still, targets might never realize they were talking to a bot in the first place.
Political Surveillance Under the Guise of Safety?
The inclusion of politically active individuals in Overwatch’s scope has amplified fears that AI is being used to suppress dissent. Civil liberties groups warn that, particularly under politically motivated administrations, these tools could serve as a digital surveillance state—monitoring, profiling, and even entrapping citizens based on their beliefs.
This isn’t a far-off concern. In recent months, student protesters across the U.S. have faced increasing scrutiny for voicing opposition to the war in Gaza. Visa revocations, detentions, and surveillance have escalated, with border and law enforcement agencies citing vague national security concerns.
When AI is programmed to “act” as a protester and probe for incriminating language or intent, the line between intelligence gathering and entrapment becomes dangerously blurred.
The Future of AI-Driven Policing
As more law enforcement agencies experiment with tools like Overwatch, watchdog groups are demanding greater accountability. Who approves these AI personas? What data are they collecting? How long is it stored? And what protections exist for those falsely flagged by a chatbot?
Thus far, answers have been hard to come by. Massive Blue has remained tight-lipped about the specifics of its technology and methodology. And while some departments are backing away, others are doubling down—convinced that AI is the future of crime-fighting, even if it comes with uncomfortable questions.
For now, AI personas continue to roam the web, disguised as real people, in search of what law enforcement might consider “suspicious.” Whether they’re a shield against trafficking or a tool of political suppression depends largely on who’s wielding them—and who’s watching the watchers.
As technology races ahead, it’s up to the public—and lawmakers—to decide just how far is too far when it comes to turning bots into badges.