Predictive policing refers to using computer algorithms to predict where crime is likely to happen. An algorithm does this by analyzing past crime statistics. For example, it might review the last year's worth of arrest reports, tracking where those arrests were made, who was involved and when they took place. Police departments can then dispatch officers to those areas, sometimes preventing crime by sending them there in advance.
Because the computer is telling officers where to look for crime, it may sound like it would be impossible for the computer to be biased. It shouldn’t target someone based on their race or ethnic background, for instance. But it seems that the opposite is actually true.
Amplifying existing biases
The problem is that the algorithm gets its data from the police officers who make these arrests. It can then amplify any biases in that data, creating a feedback loop that keeps the algorithm from working as intended.
For example, say that the police officers who are inputting the data are biased against a certain ethnic group. The officers naturally go to these neighborhoods and arrest people from this group more often. As a result, the computer model begins to target them, leading to even more arrests. The computer itself isn't biased; it isn't assuming that people from this group are more likely to break the law. But the police officers are, and their bias is reflected in the algorithm's predictions.
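To see how this feedback loop works, consider a minimal, purely illustrative simulation. All of the numbers below are hypothetical: two neighborhoods have the exact same underlying crime rate, but one starts with more recorded arrests because of biased initial enforcement, and patrols are then allocated in proportion to past arrests.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.10   # identical in both neighborhoods (hypothetical)
TOTAL_PATROLS = 100      # patrols available per round (hypothetical)
ROUNDS = 20

# Neighborhood B starts with more recorded arrests purely because of
# biased initial enforcement, not because of more actual crime.
arrests = {"A": 10, "B": 30}
new_totals = {"A": 0, "B": 0}

for _ in range(ROUNDS):
    total = sum(arrests.values())
    # Core assumption of the feedback loop: patrols are assigned in
    # proportion to past recorded arrests.
    allocation = {hood: round(TOTAL_PATROLS * count / total)
                  for hood, count in arrests.items()}
    for hood, patrols in allocation.items():
        # More patrols mean more crimes are observed and recorded,
        # even though the underlying crime rate is the same everywhere.
        new = sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols))
        arrests[hood] += new
        new_totals[hood] += new

# Neighborhood B ends up with far more new arrests than A, despite
# identical behavior in both places.
print(new_totals)
```

In this sketch, the algorithm never "sees" anyone's ethnicity. It only sees arrest counts, yet it keeps sending most patrols back to the over-policed neighborhood, which generates more arrests there, which justifies still more patrols.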
Your criminal defense options
In a perfect world, everyone should be treated equally and fairly by the police. Unfortunately, that doesn’t always happen, even when computers are used to assist police officers. Those who have been arrested and feel that their rights have been violated may need to know about all of their defense options.