Predicting Police Misconduct
The patterns were there before the headlines. The work was turning those signals into something people could act on early.
When a police officer has a serious incident with the public, the ripple effects are enormous. A wrongful arrest. An excessive force complaint. A fatal shooting. The person on the other end often gets hurt or killed. The surrounding community loses trust. The city pays out millions in settlements. The officer's career ends, or worse. Almost every case involves a cascade: a series of smaller red flags that went unnoticed, building pressure until it finally boiled over in a way nobody could undo. The whole pattern is usually visible in the data, if you know what to look for and you're willing to act on it early.
I joined a company that had built an early warning system to find those patterns before they turned into incidents. The idea was simple to say and hard to do. Every police department in America generates an enormous amount of structured data every day: use-of-force reports, citizen complaints, arrests, traffic stops, backup requests, dispatch records. Most of it sits in an archive after the fact. What if you could run all of it through a model that flagged when a specific officer's behavior looked statistically similar to other officers who later had serious incidents? The department could intervene quietly, and early. More training. A conversation with a supervisor. Time off. A reassignment. Before the situation escalated into the kind of thing that ends up on the news and in court.
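To make the shape of that concrete, here is a minimal sketch in Python of the kind of model that paragraph describes: score each officer on a trailing window of activity counts and surface the highest-risk slice for a supervisor's review. Everything in it is an illustrative assumption rather than the company's actual system; the feature set, the simulated data, the twelve-month window, and the logistic-regression choice are all stand-ins.

```python
# Illustrative sketch only -- not the company's model. Feature names, the
# trailing 12-month window, and the simulated labels are assumptions made
# so the example runs end to end.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per officer per trailing 12-month window.
# Columns: use-of-force reports, citizen complaints, arrests, backup requests.
n = 5_000
X = np.column_stack([
    rng.poisson(2, n),    # use_of_force_count
    rng.poisson(1, n),    # complaint_count
    rng.poisson(30, n),   # arrest_count
    rng.poisson(10, n),   # backup_request_count
])

# Label: did the officer have a serious incident in the *following* period?
# Simulated here so the script is self-contained.
logit = 0.6 * X[:, 0] + 1.0 * X[:, 1] + 0.02 * X[:, 2] - 4.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank officers by predicted risk. A department would review the top slice
# (extra training, a supervisor conversation), not take automatic action.
scores = model.predict_proba(X_test)[:, 1]
print("holdout AUC:", round(roc_auc_score(y_test, scores), 3))
flagged = np.argsort(scores)[::-1][:25]  # 25 highest-risk rows for review
```

The toy version skips the hard parts, which live upstream: deciding what counts as a red flag, normalizing counts across assignments and shift types, and checking that the flags actually predict anything rather than just penalizing busy beats.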
The company's early work had proven the approach. I came in to run the product, figure out what actually worked versus what just looked impressive on paper, and help extend the system across more departments. The work meant sitting with chiefs and city risk managers and understanding the gap between what they said out loud (nothing) and what they actually needed (a way to act on warning signs without starting a political war inside their own department). The product had to thread a very fine needle. It had to identify real risk without stigmatizing officers who had bad days for reasons that didn't predict anything. It had to give supervisors cover to intervene without the intervention turning into a witch hunt. And whatever it recommended had to hold up under scrutiny from internal affairs, civil rights lawyers, and the press.
By the time I left, the system was in use across a large national network of agencies and sat on a deep dataset of officer behavior. The company went on to sell the business line as part of an exit. A lot of the adverse incidents that would have happened didn't, which is a strange outcome to point to: you're counting things that didn't occur. But that was the point of the whole thing.