Protecting Victim Privacy While Increasing Law Enforcement Transparency: Finding the Balance with Police Data Initiatives

One of the hallmark efforts of the outgoing Obama administration has been the Police Data Initiative, launched to improve the relationship between law enforcement agencies and the communities they serve. The Police Data Initiative encourages local law enforcement agencies to publicly share information about 911 calls, stops, arrests, and other police activities so that community members can look both at individual cases, as in some high-profile events covered by the media, and at trends that might reveal disproportionate response over time.

It has been more than two decades since the Violence Against Women Act was first passed, and we have seen significant improvements in the criminal justice system’s response to domestic violence, sexual assault, and stalking. This success is due in great part to the efforts of victim advocates and law enforcement officials working together to improve systems. But as we celebrate these successes, we know this work is far from finished; much remains to be done to improve police response, particularly within marginalized communities.

As we work with law enforcement to improve responses to victims and communities, we must ensure that the privacy and safety of victims who interact with law enforcement are a fundamental cornerstone of those efforts. Police data released to the public has the potential to reveal victims’ identities and consequently put them at risk of further harm, harassment, or damage to their reputations. These concerns can also significantly affect a survivor’s decision about whether to contact law enforcement for help in an emergency at all.

For more than a year, Safety Net has explored the issue of how to maintain victim privacy and safety while simultaneously supporting the overall intention behind the Police Data Initiative. These efforts have been made possible by the support of the Office on Violence Against Women (U.S. Department of Justice) and Harvard University’s Berkman Center for Internet & Society, and in partnership with the White House, the Police Foundation, the International Association of Chiefs of Police, the Sunlight Foundation, the National Institute of Standards and Technology, the Vera Institute of Justice, and others.

Today, we are pleased to announce the release of a guide that outlines the results of these efforts, “How Law Enforcement Agencies Releasing Open Data Can Protect Victim Privacy & Safety,” authored collaboratively with the Police Foundation. This guide describes the need for victim privacy to be a central consideration in efforts to share data with the public, and provides specific recommendations to assist local law enforcement agencies in ensuring victim privacy while increasing transparency.

In the coming weeks, we will be releasing a similar guide written for advocates, as well as an issue summary that describes how the Police Data Initiative intersects with our work to ensure the safety and privacy of survivors.


YouTube’s New Tools Attempt to Address Online Harassment

Online harassment and abuse can take many forms. Threatening and hateful comments turn up across online communities, from newspapers to blogs to social media. Anyone posting online can be the target of these comments, which cross the line from honest disagreement to vengeful and violent attacks. This behavior is more than someone saying something you don’t like or saying something “mean” – it often includes ongoing harassment that can be nasty, personal, or threatening in nature. For survivors of abuse, threatening comments can be traumatizing and frightening, and can lead some people to stop participating in online spaces.

YouTube recently created new tools to combat online abuse occurring within comments. These tools let users who post on the site choose words or phrases to “blacklist,” and offer a beta (or test) version of a filter that will flag potentially inappropriate comments. With both tools, the comments are held for the user’s approval before going public. Users can also select other people to help moderate the comments.

Here’s a summary of the tools, pulled from YouTube:

  • Choose Moderators: This was launched earlier in the year and allows users to give select people they trust the ability to remove public comments.

  • Blacklist Words and Phrases: Users can have comments with select words or phrases held back from being posted until they are approved.

  • Hold Potentially Inappropriate Comments for Review: Currently available in beta, this feature offers an automated system that flags and holds, according to YouTube’s algorithm, potentially inappropriate comments for approval before they are published. The algorithm may, of course, pull content that the user thinks is fine, but its detection will improve based on the user’s choices.

Survivors who post online know that abusive comments can come in by the hundreds or even thousands. While many sites have offered a way to report or block comments, these steps have only been available after a comment is already public, and each comment may have to be reported one by one. This new approach helps catch abusive comments before they go live, and takes the pressure off having to watch the comment feed 24 hours a day.

These tools also offer survivors a means to be proactive in protecting their information and safety. Since much online harassment includes tactics such as doxing (where someone’s personal information is posted online with the goal of causing them harm), a YouTube user can add their personal information to the list of words and phrases that are not allowed to be posted. This can include part or all of phone numbers, addresses, email addresses, or usernames of other accounts. Proactively being able to block someone from posting your personal information in this space will be a great tool.
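For readers curious about the mechanics, the blacklist feature described above boils down to a simple idea: before a comment goes public, check it against the owner’s list of forbidden words and phrases, and hold it for approval if anything matches. The sketch below is a minimal illustration of that idea only; the function name and example terms are our own assumptions, not YouTube’s actual code or algorithm.

```python
def should_hold(comment: str, blacklist: list[str]) -> bool:
    """Return True if the comment contains any blacklisted word or phrase
    (case-insensitive) and should be held for the channel owner's approval."""
    lowered = comment.lower()
    return any(term.lower() in lowered for term in blacklist)

# A survivor might blacklist fragments of their own personal information,
# such as part of a phone number or a street name (examples are made up).
blacklist = ["555-0123", "elm street", "@oldusername"]

print(should_hold("call her at 555-0123", blacklist))  # held for review
print(should_hold("Great video, thanks!", blacklist))  # posted normally
```

Because matching is done on fragments, even a partial phone number or address in the list is enough to catch a comment that includes the full detail.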

Everyone has the right to express themselves safely online, and survivors should be able to fully participate in online spaces. Connecting with family and friends online helps protect against the isolation that many survivors experience. These new tools can help to protect survivors’ voices online.

New Resource: Tech Safety App

We’re thrilled to announce the release of our Tech Safety App! The Tech Safety App is an educational mobile app that helps users identify how abusers can harass them by misusing technology and learn what steps they can take to enhance their technology safety and privacy.

This app draws on the NNEDV Safety Net project’s more than 15 years of work at the intersection of technology abuse and violence against women, during which the project has provided expert advice, trainings, and consultation on this issue to thousands of survivors of abuse, victim service providers, and technology companies. This app is another way to get information into the hands of survivors.

The Tech Safety App walks users through understanding how a particular technology could be misused, what they can do about it, and how to increase their safety and privacy. The app also includes a wide range of resources, including those on this site, the WomensLaw.org legal hotline, and other hotlines.

The Tech Safety App will be launched at a reception on Monday, July 25, 2016 from 5:00 pm – 7:30 pm at the Hilton Financial District during NNEDV Safety Net’s 4th Annual Technology Summit. Nearly 250 victim advocates, attorneys, law enforcement professionals, and victim service providers from across the United States and around the world will attend the Summit to learn how technology is misused to harass and how providers can address these crimes.

Download the app today, and let us know what you think!