


Does Flock Enable Mass Surveillance?
Claim
ALPRs track people and show police everywhere they go.
Fact
Flock ALPRs do not and cannot track vehicles, much less individual people. ALPRs take a point-in-time image of the rear of vehicles on public roadways. They are incapable of tracking the whole of anyone’s movements, a determination that courts across the country have consistently made. Courts have also uniformly held that there is no reasonable expectation of privacy on public roadways. The vast majority of images captured by Flock ALPRs are never used in any investigation and are permanently deleted at regular intervals without ever having been accessed or seen by anyone.
Why do some people claim Flock license plate readers are mass surveillance?
The term “mass surveillance” carries a powerful emotional and legal meaning. It suggests indiscriminate, suspicionless monitoring of large populations without meaningful limits or oversight. When applied to any public safety technology, it deserves careful examination, and that examination should be grounded in definitions, data, and facts.
So why do some critics apply that label to license plate readers?
There are two primary reasons.
First, civil liberties organizations often take a broad view of what constitutes surveillance. From that perspective, any system that collects and stores location data, even in public spaces where there is no reasonable expectation of privacy, can be framed as surveillance infrastructure. When that data is shared across jurisdictions, critics may describe it as “nationwide” or “interconnected,” which can reinforce the perception of scale.
Second, advocacy groups tend to evaluate technologies based on potential risk, not just documented misuse. If a tool could hypothetically be used to monitor protest activity, track sensitive travel patterns, or aggregate large amounts of location data, critics may argue that the architecture itself creates unacceptable danger, regardless of how frequently or infrequently such misuse occurs in practice.
However, labeling something “mass surveillance” requires more than theoretical capability. It requires evidence of widespread, indiscriminate monitoring of lawful activity. In its 2025 review, EFF reported analyzing approximately 12 million searches across 3,900 law enforcement agencies and identified five categories of concern:
- Surveillance of Protesters
- Biased Policing and Discriminatory Searches
- Weaponizing Surveillance Against Reproductive Rights
- The entire Flock business model
- Distress detection
Those issues deserve discussion. But the scale and proportionality of the findings matter.
The distinction between isolated misuse and systemic design matters here. The data cited, roughly 0.48% of agencies, describes specific incidents that warrant investigation and response, not a pattern that reflects how the system operates across thousands of jurisdictions.
Whether that characterization is accurate depends on several factual questions:
- Are searches tied to specific cases or lawful investigations?
- Are there retention limits and access controls?
- Is usage logged and auditable?
- Are agencies required to comply with policy and constitutional standards?
- Is the data collected from public vantage points where there is no reasonable expectation of privacy?
If the system operates with case-based access, logging, retention controls, and contractual guardrails, it functions more like a searchable investigative database than a system of indiscriminate monitoring.
There is also an important legal distinction. Courts have long held that license plates observed in public are not private information. ALPR systems automate the recording of information that is already visible to anyone on the road. The constitutional debate typically centers on duration, aggregation, and access controls, not on the mere existence of the data.
That doesn’t mean concerns are illegitimate. Reasonable people can disagree about how much data retention is appropriate, how interagency sharing should work, or what oversight mechanisms are sufficient. Those are policy debates worth having.
But the term “mass surveillance” implies something broader: widespread, suspicionless tracking of the public as a whole. The data cited in EFF’s own report does not clearly demonstrate that kind of systemic practice across thousands of agencies. It highlights specific areas of concern that merit oversight and policy refinement.
Ultimately, the disagreement appears to be less about whether safeguards matter and more about whether interconnected public safety technology can ever be sufficiently constrained to balance investigative utility with civil liberties protections.
That is a substantive debate. It should be grounded in definitions, proportionality, and documented evidence, not just in hyperbole and labels.
The Surveillance of Protesters Claim
It’s important to begin with shared principles. The First Amendment guarantees the right to peaceful assembly and protest. That protection is fundamental. At the same time, when protest activity crosses into unlawful conduct, such as vandalism, trespass, theft, or violence, law enforcement has a legal obligation to investigate and prevent those crimes.
The key question, then, is not whether protest activity is protected. It is whether the technology in question is being used to broadly monitor lawful First Amendment activity, or whether it is being used in specific investigations tied to alleged unlawful conduct.
According to EFF’s own reporting, 19 agencies out of approximately 3,900 Flock customers conducted searches that explicitly referenced protest activity. That represents roughly 0.48% of agencies, less than half of one percent.
That context matters. Describing this as “mass surveillance” implies widespread, systemic monitoring of protesters across thousands of jurisdictions. The reported data does not appear to support that characterization. Instead, it suggests that a relatively small number of agencies conducted searches connected to protest-related incidents.
It is also notable that some of the searches referenced organizations associated with documented unlawful activity, including break-ins, theft of private data, and property crimes. If agencies were investigating specific criminal allegations connected to those incidents, that would fall within traditional law enforcement functions. The presence of protest-related language in a search does not, by itself, establish that peaceful protest was the target.
None of this means there should be no scrutiny or oversight. To the contrary, at Flock, we believe that any use of investigative technology in contexts touching First Amendment activity deserves careful oversight and clear policy safeguards. Transparency and guardrails are essential to maintaining public trust. That is why transparency and accountability are engineered directly into our tools.
But proportionality matters. A finding that fewer than half of one percent of agencies conducted protest-related searches, without evidence that peaceful protesters were systematically monitored, does not, on its face, demonstrate “mass surveillance.” It suggests targeted investigative activity in a limited number of jurisdictions.
If the concern is that the technology could be misused in the future to chill lawful protest, that is a policy discussion worth having. However, based on the reported figures, the evidence presented appears to reflect isolated instances requiring agency-level review, not a widespread, coordinated pattern of suppressing First Amendment rights.
The distinction between targeted investigations into alleged criminal conduct and broad surveillance of lawful protest is significant. The data cited does not clearly establish the latter.

The Biased Policing Claim
Let’s start with common ground: ethnic slurs have no place in professional policing or in society more broadly. When officers use derogatory language, that’s wrong, regardless of intent.
It’s also worth acknowledging that many Americans are unaware that certain terms, including “g*psy,” are ethnic slurs against Romani people. That lack of awareness doesn’t excuse usage, but it does complicate the interpretation of the EFF’s claims of intent and pattern.
According to EFF’s own reporting, out of approximately 3,900 law enforcement agencies that used the system, around 2% had at least one instance of an officer typing that term into a search field. Across roughly 12 million total searches analyzed, EFF identified “hundreds” of examples. Even if we assume the number was 1,000 examples, that would represent less than 0.01% of all searches.
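As a quick sanity check on the proportions cited in this piece (using only the approximate figures reported in EFF’s review: 19 protest-referencing agencies, roughly 3,900 agencies, roughly 12 million searches, and a generous 1,000-query upper bound for the slur examples), the arithmetic works out as follows:

```python
# Proportionality check using the approximate figures discussed above.
total_agencies = 3_900          # law enforcement agencies in EFF's analysis
total_searches = 12_000_000     # total searches analyzed

protest_agencies = 19           # agencies with protest-referencing searches
flagged_queries = 1_000         # generous upper bound on slur-containing queries

agency_share = protest_agencies / total_agencies * 100
query_share = flagged_queries / total_searches * 100

print(f"Protest-referencing agencies: {agency_share:.3f}% of agencies")  # ~0.487%
print(f"Flagged queries: {query_share:.4f}% of all searches")            # ~0.0083%
```

Both figures are consistent with the “roughly 0.48% of agencies” and “less than 0.01% of all searches” characterizations used in the text.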
Scale matters when evaluating systemic risk. Hundreds of inappropriate searches within 12 million total uses is not a number to dismiss, and those cases should be investigated and addressed. But a small fraction of misuse does not make discrimination a feature of the system; it makes it a problem to fix.
This distinction is important. If individual officers used inappropriate language in search queries, that reflects a training, professionalism, and supervision issue within those agencies. It does not necessarily indicate that the technology itself encourages or amplifies discriminatory behavior.
To argue that the tool “makes it easier” to rely on stereotypes would require evidence that the system:
- Structurally incentivizes biased inputs,
- Produces biased outputs independent of user behavior, or
- Disproportionately targets specific communities through its design.
The examples cited do not appear to establish those claims. They show individual misuse, which is unacceptable but not a widespread technological defect.
Labeling this as a “deeply troubling pattern” suggests systemic abuse. Based on the reported numbers, what we’re seeing appears to be a limited and specific problem that agencies should address through policy, training, and oversight — not evidence of pervasive bias built into the system.
We should absolutely hold officers accountable for inappropriate conduct. But we should also be careful not to conflate isolated misconduct with proof of systemic technological harm without stronger evidence.
The Weaponization of Surveillance Against Reproductive Rights Claim
Allegations that surveillance technology is being used to target individuals for reproductive healthcare decisions are extremely serious. If true, that would raise profound constitutional, civil liberties, and ethical concerns. Claims of that magnitude deserve careful scrutiny and full factual context.
The EFF highlights a Johnson County search that included the note: “had an abortion, search for female,” and suggests this reflects technology being used in connection with abortion enforcement. The language used in that search is understandably concerning.
However, subsequent reporting provides additional context that complicates that narrative. According to deeper investigative journalism and direct statements from the Sheriff’s Office, the search was conducted as part of a missing person investigation involving a woman believed to be in a potentially dangerous domestic violence situation. The Sheriff stated that she was not being investigated as a suspect in an abortion-related offense, but was instead the subject of a welfare and safety search.
If that account is accurate, the presence of the word “abortion” in the search note does not necessarily mean law enforcement was investigating reproductive healthcare as a crime. It may reflect contextual information relevant to locating or identifying the missing individual.
This distinction is critical.
Using a term like “abortion” in a search note is not, by itself, proof that the technology was weaponized to enforce abortion laws. To establish that claim, there would need to be evidence that:
- The individual was being investigated for obtaining an abortion,
- The technology was used to build a criminal case related to reproductive care, or
- Agencies were systematically monitoring or tracking individuals seeking abortions.
Based on the publicly available reporting, those elements do not appear to be established in this instance. And it is telling that in endeavoring to establish the threat of systemic misuse of ALPR technology to interfere with the exercise of reproductive rights, the EFF relies entirely on this one case, the facts of which are ambiguous at best.
None of this diminishes the broader concern that surveillance tools could be misused in jurisdictions with restrictive abortion laws. That is a legitimate policy discussion, and safeguards should be clear. But conclusions about weaponization should be grounded in verified facts rather than inference from a single search note, especially when subsequent reporting suggests a different investigative purpose.
If the facts show this was a domestic violence-related missing person case, then characterizing it as evidence of reproductive-rights enforcement risks overstating what occurred. That search note is understandably alarming on its face. But subsequent reporting and direct statements from the Sheriff's Office add context that materially changes the interpretation.
The appropriate response in situations like this is transparency, investigation, and clear policy guardrails, not assumptions about intent without complete context.
The Claim That the Flock Business Model Is the Problem
There’s a legitimate debate to be had about how modern public safety technology should be governed. Tools that collect and store data, especially at scale, deserve scrutiny. That’s healthy in a democracy.
But the claim that Flock’s architecture is inherently abusive, and that no safeguards can meaningfully reduce risk, overstates both the intent and the design of the system.
First, it’s important to separate the two questions:
- Can technology be misused? Yes. Any investigative tool, from a radio to a database to a fingerprint system, can be misused without proper policy, training, and oversight.
- Is misuse inevitable because of the system’s architecture? That’s a much stronger claim, and it requires stronger evidence.
The EFF argues that because Flock connects cameras into individually owned but shareable networks, the risks are inherent and unavoidable. But connectivity alone does not equal mass surveillance. Interoperability in public safety systems is common, whether in criminal databases, Amber Alerts, or interagency task forces. The purpose of the connection is to allow lawful, case-based information sharing when crimes cross jurisdictional lines. Criminals, after all, do not confine themselves to a single jurisdiction.
The key safeguards in any public safety technology lie in how access is structured and constrained:
- Role-based access controls
- Query logging and auditing
- Retention limits
- Geofencing
- Case-number requirements
- Agency-level policy enforcement
- State and local compliance built into the product
Those are not cosmetic changes. They directly affect how data can be accessed, how long it exists, and how agencies are held accountable. Architecture includes guardrails, not just connectivity.
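The guardrail categories above can be made concrete with a minimal sketch. The class and field names below are hypothetical illustrations of case-based access, query logging, and retention limits in general, not Flock’s actual implementation:

```python
import datetime

class AuditedPlateSearch:
    """Illustrative sketch: every query requires a case number and is logged."""

    def __init__(self, retention_days=30):
        self.retention_days = retention_days
        self.audit_log = []  # append-only record of every query ever run

    def search(self, officer_id, case_number, plate, records):
        # Case-number requirement: no query proceeds without a stated basis.
        if not case_number:
            raise PermissionError("A case number is required for every search")
        # Query logging: the access is recorded before any result is returned,
        # so supervisors and auditors can review who searched what, and why.
        self.audit_log.append({
            "officer": officer_id,
            "case": case_number,
            "plate": plate,
            "at": datetime.datetime.utcnow().isoformat(),
        })
        # Retention limit: detections older than the cutoff are never returned.
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(
            days=self.retention_days)
        return [r for r in records
                if r["plate"] == plate and r["seen_at"] > cutoff]
```

In a design like this, a search cannot occur without leaving an audit record, and expired detections are unreachable by construction, which is the structural point: guardrails are part of the architecture, not an afterthought.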
The EFF’s argument assumes that because the system enables lawful interagency collaboration, its “business model depends” on pervasive surveillance. That framing overlooks the actual incentive structure. Public safety agencies adopt tools that help solve crimes efficiently and with objective evidence. If the product facilitated abuse or unlawful monitoring, it would face legal, contractual, and reputational consequences that directly undermine its viability. However, 30+ courts have consistently affirmed that ALPR devices perform lawful actions, do not capture the whole of one’s movements, and “do not come close” to mass surveillance.
It’s also important to distinguish between capability and intent. A tool that can be used across jurisdictions to track a stolen vehicle or locate a kidnapping suspect is not the same as a tool designed to indiscriminately monitor lawful activity. The architecture enables targeted investigation, not blanket observation.
Reasonable people can disagree about how much interconnected data infrastructure is appropriate in a democracy. What the evidence doesn’t support is the conclusion that safeguards are meaningless: 30+ courts have reviewed these systems and reached the opposite conclusion.
Flock’s response to criticism (implementing retention limits, geofencing, auditing, and transparency measures) demonstrates that the architecture is adaptable. Systems that are inherently abusive don’t become safer with structural guardrails. Systems designed for lawful investigative use can.
None of this means oversight should stop. On the contrary, continued transparency, independent audits, and clear usage policies are essential. But the evidence cited does not establish that “abuses stem from the architecture itself.” It shows that, like any investigative tool, responsible governance matters.
The real policy question isn’t whether interconnected systems are categorically unsafe. It’s whether they can be designed and governed in a way that maximizes public safety benefits while minimizing civil liberties risks.
That is a discussion worth having, and one that should be grounded in evidence, proportionality, and measurable safeguards, not in the assumption that improvement is impossible.
Protect What Matters Most.
Discover how communities across the country are using Flock to reduce crime and build safer neighborhoods.







