He was arrested at gunpoint in a parking lot, jailed twice, lost his car, his girlfriend and his freedom — and later learned it was all a mistake

Nijeer Parks had just pulled into a parking lot when officers surrounded his car with guns drawn, shouting commands and insisting he was the man who had attacked a police officer and crashed a stolen vehicle into a cruiser in Woodbridge, New Jersey. Parks had never been to Woodbridge. He did not drive the type of car witnesses described. He would later produce receipts and other evidence showing he was miles away at the time. None of it mattered: a facial recognition system had flagged his photo, a detective had looked at the result and declared, “That’s him,” and the machinery of prosecution lurched forward without anyone stopping to ask whether the machine had been wrong.

Parks spent 10 days in jail, lost his job, lost his car, and watched his relationship fall apart before the charges were eventually dismissed. His case, now the subject of a federal civil rights lawsuit, has become one of the starkest illustrations of what can go wrong when police treat an algorithm’s suggestion as proof and skip the basic investigative steps that might have cleared an innocent man in hours.

A police officer handcuffing a man next to a car in a parking lot.
Photo by Kindel Media on Pexels

The arrest: a parking lot, a gun, and a computer match

The sequence that upended Parks’ life began with a 2019 incident at a Hampton Inn in Woodbridge, where a man allegedly shoplifted, then struck an officer and fled in a car that collided with a police vehicle. Investigators pulled a still image from hotel surveillance footage and ran it through a facial recognition system that compared the image against a database of photos. The system returned a list of possible matches ranked by similarity. Parks’ photo appeared on that list, according to CNN’s reporting on the case.

A Woodbridge detective reviewed the candidates and selected Parks, despite visible differences in facial structure, build, and other features between Parks and the man in the surveillance image. The detective then used that selection to obtain an arrest warrant. When officers found Parks in a parking lot and took him into custody at gunpoint, he told them they had the wrong person. He offered to prove his whereabouts. According to the ACLU of New Jersey, police did not meaningfully investigate his alibi before or after the arrest.

The charges were severe: aggravated assault on a police officer, unlawful possession of a weapon, leaving the scene of an accident, resisting arrest, and related counts that carried the possibility of years in prison.

Ten days in jail and a year under threat

Parks spent 10 days locked up before he could secure release. During that stretch, he lost income, and his car was effectively seized. His girlfriend ended their relationship under the weight of the accusations. Even after he got out, the charges hung over him for close to a year, forcing repeated court appearances while prosecutors continued to rely on the original identification.

As the ACLU of New Jersey later noted, Woodbridge police had straightforward ways to rule Parks out. They could have checked whether he owned or had access to the vehicle involved in the crime. They could have investigated his alibi, which placed him nowhere near the hotel. Instead, the department treated the algorithmic output as settled fact, and the case moved forward on that basis until it finally collapsed.

Parks and his mother have described the ordeal publicly as a nightmare that stained his reputation and stalled his life, all because a software system and a single detective decided that two different Black men looked alike.

A federal lawsuit challenges the practice

After prosecutors dropped the charges, Parks filed a federal civil rights lawsuit, Parks v. McCormac, in the U.S. District Court for the District of New Jersey. The complaint argues that his Fourth and Fourteenth Amendment rights were violated when officers used an unreliable facial recognition match as the foundation for an arrest warrant, then failed to investigate evidence that contradicted it.

The suit targets not only the individual officers involved but the broader practice of treating algorithmic suggestions as probable cause. According to court filings, the detective who identified Parks never documented the known limitations of the technology, never noted the possibility of error, and never informed the judge who signed the warrant that the identification rested on a machine-generated list rather than traditional evidence.

The ACLU and the ACLU of New Jersey have supported the case, filing an amicus brief that warns facial recognition is especially dangerous when layered onto existing racial disparities in policing. As of early 2026, the litigation remains active.

Why the technology fails and who it fails most

Facial recognition systems do not declare that two faces belong to the same person. They produce probability-ranked lists of candidates, and a human operator is supposed to treat those lists as investigative leads, not conclusions. In Parks’ case, that distinction vanished the moment the detective selected his photo and described it as a definitive match.
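The one-to-many search described above can be sketched in a few lines. This is a simplified illustration, not any vendor's actual system: the names, the tiny four-dimensional embeddings, and the `rank_candidates` function are all hypothetical, and real systems compare learned neural-network embeddings with hundreds of dimensions. The point it demonstrates is structural: the output is a similarity-ranked list of leads, never a yes-or-no identification.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings, from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_candidates(probe, gallery, top_k=3):
    """One-to-many search: score every enrolled face against the probe
    image and return the top-k candidates sorted by similarity.
    The scores are investigative leads, not conclusions; the
    highest-ranked candidate can still be a complete stranger."""
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hypothetical embeddings for illustration only.
gallery = {
    "person_a": [0.9, 0.1, 0.3, 0.2],
    "person_b": [0.2, 0.8, 0.5, 0.1],
    "person_c": [0.85, 0.15, 0.35, 0.25],
}
probe = [0.88, 0.12, 0.32, 0.22]  # embedding of the surveillance still

for name, score in rank_candidates(probe, gallery):
    print(f"{name}: {score:.3f}")
```

Note that the system returns *every* requested candidate ranked by score, even when no one in the gallery is actually the person in the probe image. That is the failure mode in Parks' case: a ranked list was read as a positive identification.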

The risk of that kind of error is not evenly distributed. A landmark 2019 study by the National Institute of Standards and Technology (NIST) evaluated 189 facial recognition algorithms from 99 developers and found that the majority produced higher false positive rates for Black and Asian faces than for white faces. For one-to-many searches, the type used in Parks' case, the disparity was especially pronounced. Earlier research by Joy Buolamwini and Timnit Gebru at the MIT Media Lab, published as the "Gender Shades" study, documented similar patterns and traced them to biased training data and algorithm design choices.

These findings mean that Black men like Parks are statistically more likely to appear on false-positive candidate lists. When a police department treats those lists as near-certain identifications, the built-in bias of the software translates directly into wrongful stops, arrests, and prosecutions.

Parks is not alone

At least three other Black Americans have come forward publicly after being wrongfully arrested on the basis of facial recognition misidentifications. Robert Williams was arrested in Detroit in 2020 and held for 30 hours after a system matched his driver’s license photo to surveillance footage from a shoplifting case. Michael Oliver, also in Detroit, was charged with a felony based on a facial recognition hit that was later shown to be wrong. And in 2023, Porcha Woodruff, eight months pregnant at the time, was arrested and held for hours on a carjacking charge generated by a facial recognition match that Detroit police later acknowledged was a mistake.

Every publicly documented wrongful arrest tied to facial recognition in the United States has involved a Black defendant. Civil liberties organizations, including the ACLU, argue that this pattern is not coincidental but a direct consequence of the racial bias embedded in the technology and the lack of safeguards governing its use.

Where lawmakers and courts stand

The legal and regulatory landscape around police use of facial recognition remains uneven. Several cities, including San Francisco, Boston, and Minneapolis, have banned or restricted government use of the technology. In 2021, King County, Washington, which includes Seattle, enacted one of the broadest local bans. But no federal law regulates police use of facial recognition, and most state legislatures have not acted.

In New Jersey, where Parks was arrested, there is no statewide ban. Advocacy groups have pushed for legislation requiring police departments to disclose when facial recognition is used in an investigation and to obtain independent corroboration before making an arrest based on an algorithmic match. As of March 2026, no such law has passed.

Courts are only beginning to grapple with the constitutional questions. Parks v. McCormac is among the first federal cases to directly challenge whether a facial recognition match, standing alone, can constitute probable cause for an arrest. The outcome could set an important precedent for how the technology is used in criminal investigations nationwide.

What Parks’ case makes clear

Nijeer Parks was not arrested because a witness picked him out of a lineup or because physical evidence tied him to a crime. He was arrested because a piece of software produced a list, a detective made a snap judgment, and no one in the chain of command paused to verify the result. The tools that were supposed to aid the investigation replaced it.

His case is a warning about what happens when police departments adopt powerful surveillance technology without equally powerful accountability measures. Until courts or legislatures draw firm lines around how facial recognition can be used, the people most likely to pay the price for its failures are the people the research shows it fails most often: Black Americans who, like Parks, may find themselves surrounded in a parking lot, guns drawn, with no idea why.
