Facebook’s Proactive Approach to Addressing Nonconsensual Distribution of Intimate Images

It’s well-known that technology has made sharing sexually intimate content easier. While many people share intimate images without any problems, there’s a growing issue with the nonconsensual distribution of intimate images (NCII[1]), often referred to as “revenge porn.” Perpetrators often share, or threaten to share, intimate images in an effort to control, intimidate, coerce, shame, or humiliate others. A survivor threatened or already victimized by someone who has shared their intimate images not only deserves the opportunity to hold the perpetrator accountable, but also should have better options for removing content or keeping it from being posted in the first place.

Recently, Facebook announced a new pilot project aimed at stopping NCII before it can be uploaded to its platforms. The process gives people who wish to participate the option to submit intimate images or videos they’re concerned someone will share without their permission to a small, select group of specially trained professionals within Facebook. Once submitted, each image is given what’s called a “hash value.” “Hashing” means that the image is turned into a digital code that serves as a unique identifier, similar to a fingerprint. Once the image has been hashed, Facebook deletes it, and all that’s left is the code. That code is then used to recognize when someone attempts to upload the same image and to prevent it from being posted on Facebook, Messenger, and Instagram.
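For readers who want a more concrete picture of hashing, here is a minimal sketch in Python. It uses SHA-256, an ordinary cryptographic hash, purely as a stand-in: Facebook has not published the details of its matching technology (real systems typically use perceptual hashing so that resized or re-encoded copies still match), and the data and function names below are made up for illustration.

```python
# A minimal sketch of the hash-and-block idea, using SHA-256 as a stand-in.
# This is illustrative only and is not Facebook's actual matching technology.
import hashlib

def hash_image(image_bytes: bytes) -> str:
    """Turn image data into a fixed-length fingerprint (the "hash")."""
    return hashlib.sha256(image_bytes).hexdigest()

# Only hashes are stored; once a hash is saved, the image itself can be
# deleted, and the hash alone cannot be turned back into the picture.
blocked_hashes = set()

def register_image(image_bytes: bytes) -> None:
    """Store only the hash of a reported image, never the image."""
    blocked_hashes.add(hash_image(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Block any upload whose hash matches a previously reported image."""
    return hash_image(image_bytes) not in blocked_hashes

# Example: after an image is registered, an identical copy is blocked.
register_image(b"example image data")
assert allow_upload(b"example image data") is False
assert allow_upload(b"a completely different image") is True
```

The key point is that only the short code is kept: it can confirm whether a new upload matches a previously reported image, but it cannot be reversed to reconstruct the original picture.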

Facebook’s new pilot project may not be something everyone feels comfortable using, but for some it may bring much peace of mind. For those who believe it may help in their situation, we’ve outlined detailed information about how the process works:

  1. Victims work with a trusted partner. Individuals who believe they’re at risk of NCII and wish to have their images hashed should first contact one of Facebook’s trusted partners: the Cyber Civil Rights Initiative, YWCA Canada, UK Revenge Porn Hotline, and the eSafety Commissioner in Australia. These partners will help them through the process and identify other assistance that may be useful to them.
  2. Partner organizations help ensure appropriate use. The partner organization will carefully discuss the individual’s situation with them before helping them start the hashing process. This helps ensure that individuals are seeking to protect their own image and not trying to misuse the feature against another person. It’s important to note that the feature is meant for adults and not for images of people under 18. If the images are of someone under 18, they will be reported to the National Center for Missing and Exploited Children. Partner organizations will help to explain the reporting process so that individuals can make appropriate decisions for their own case.
  3. The image will be reviewed by trained staff at Facebook. If the images meet Facebook’s definition of NCII, a one-time link is sent to the individual’s email. The link will take the individual to a portal where they can directly upload the images. All submissions are then added to a secure review queue where they will be reviewed by a small team specifically trained in reviewing content related to NCII abuse.
  4. NCII will be hashed and deleted. All images that are reviewed and found to meet Facebook’s definition of NCII will be translated into a set of numerical values to create a code called a “hash.” The actual image will then be deleted. If an image is reviewed and Facebook determines it does not match their definition of NCII, the individual will receive an email letting them know (so it’s critical to use an email account that no one else can access). If the content submitted does not meet Facebook’s definition of NCII, the concerned individual may still have other options. For example, they may be able to report an image for a violation of Facebook’s Community Standards.
  5. Hashed images will be blocked. If someone tries to upload a copy of the original image that was hashed, Facebook will block the upload and provide a pop-up message notifying the person that their attempted upload violates Facebook’s policies.

This proactive approach has been requested by many victims, and may be appropriate on a case-by-case basis. People who believe they’re at risk of exposure and are considering this process as an option should carefully discuss their situation with one of Facebook’s partner organizations. This will help them make sure they’re fully informed about the process so that they can feel empowered to decide if this is something that’s appropriate for their unique circumstances.  

For more information about how survivors can increase their privacy and safety on Facebook, check out our Facebook Privacy & Safety Guide for Survivors of Abuse.

[1] NCII refers to private, sexual content that a perpetrator shares publicly or sends to other individuals without the consent of the victim. How we discuss an issue is essential to resolving it. The term “revenge porn” is misleading because it suggests that the perpetrator shared the intimate images in reaction to the victim’s behavior, implying that the victim is somehow to blame.

What’s the Deal with Snap Map?

Snapchat recently released a new feature called Snap Map. It was immediately met with a flurry of negative feedback and concerns about user privacy and safety. As with any technology, device, platform, or service, new features can have an unexpected impact on user safety and privacy. The following is our assessment of the potential privacy issues and possibilities for misuse within Snap Map.

The Snap Map feature allows users to share their location with other friends on Snapchat and to share Snaps on a map. The ability for others to see your location can definitely sound a little creepy, particularly if you’re concerned about your privacy. While there are a few things to consider and be aware of to protect your privacy, there are also a few features that make us a little less worried about Snap Map.

1.     The user controls the feature, and therefore controls their privacy.
Snap Map is an opt-in feature, not an opt-out one, meaning it is off by default until a user chooses to turn it on. Keeping the feature off by default is an important safety choice, but it’s worth noting that a person with access to the account could still turn on location sharing without the account owner’s knowledge. Because of this, it’s important that users know how to find the location-sharing setting so they can check whether someone has turned it on without their permission.

2.     Users also control the audience, even if the feature is on.
If you choose to use Snap Map, you can keep it in Ghost Mode. Ghost Mode means that your location isn’t shared with anyone at all, but you are still able to see yourself on the map. You can also choose between sharing your location with all of your friends or with just a few select friends. Ghost Mode is the default setting once you’ve opted into Snap Map, so you don’t share your location with anyone unless you choose to, even if you open the feature just to check it out. If you decide to stop sharing your location, even with a few selected friends, your last location is removed from the map.

3.     Submitted Snaps don’t show username, but images can still be identifying.
You can submit a Snap to “Our Story” to be shared on the Snap Map, although not all submitted Snaps are accepted. Snaps that are accepted do not show the username of the person who submitted them, but they will appear on the Snap Map at or near the location where they were taken. Certain information in a Snap can still make it identifying (signs or landmarks can reveal an exact location, and clothing or tattoos can identify a person even if their face isn’t shown). Users should also be aware that Snaps submitted to “Our Story” may appear on the Snap Map regardless of their chosen location setting. This is important to consider, especially if other people are in your Snaps and you don’t have their permission to share.

4.     Notifications for the win!
We are always fans of user notifications when a feature could pose a safety or privacy risk. Snapchat will send reminders if location sharing has been left on for a period of time, making sure that users know their location is being shared. These notifications also greatly decrease the chance that someone could turn on another person’s Snap Map without their knowledge.

5.     When you’re sharing, you’re always sharing.
It’s really important to understand that once you opt in and choose an audience to share your location with, that audience will continually be able to see your updated location every time you open the app, whether or not you are engaging with them or sending anyone a Snap. This might be the biggest concern: if people don’t clearly understand it, they may end up sharing their location without realizing it.

Overall, Snap Map definitely makes it easier for people to share, and to receive, information about another person’s location. As with similar features on other platforms, users should be cautious and make informed, thoughtful decisions about how to protect their privacy, including if, when, and how they use the feature. It’s also really important to consider the privacy of others. You might not know what could be a safety or privacy risk for each of your friends, so you should never share images, videos, or location information about others without their consent. The good news is that this feature does have some built-in privacy options and gives users control over what is shared. Learn more about managing your location settings in Snap Map and check out Snapchat’s Approach to Privacy.

YouTube’s New Tools Attempt to Address Online Harassment

Online harassment and abuse can take many forms. Threatening and hateful comments turn up across online communities, from newspapers to blogs to social media. Anyone posting online can be the target of these comments, which cross the line from honest disagreement to vengeful and violent attacks. This behavior is more than someone saying something you don’t like or something “mean”; it often includes ongoing harassment that can be nasty, personal, or threatening in nature. For survivors of abuse, threatening comments can be traumatizing and frightening, and can lead some people to stop participating in online spaces.

YouTube recently created new tools to combat online abuse occurring within comments. These tools let users who post on the site choose words or phrases to “blacklist,” and they offer a beta (or test) version of a filter that flags potentially inappropriate comments. With both tools, the flagged comments are held for the user’s approval before going public. Users can also select other people to help moderate the comments.

Here’s a summary of the tools, pulled from YouTube:

  • Choose Moderators: This was launched earlier in the year and allows users to give select people they trust the ability to remove public comments.

  • Blacklist Words and Phrases: Users can have comments with select words or phrases held back from being posted until they are approved.

  • Hold Potentially Inappropriate Comments for Review: Currently available in beta, this feature offers an automated system that will flag and hold, according to YouTube’s algorithm, any potentially inappropriate comments for approval before they are published. The algorithm may, of course, pull content that the user thinks is fine, but it will improve in its detection based on the users’ choices.

Survivors who post online know that abusive comments can come in by the hundreds or even thousands. While many sites have offered a way to report or block comments, these steps have only been available after a comment is already public, and each comment may have to be reported one by one. This new approach helps to catch abusive comments before they go live, and takes the pressure off of having to watch the comment feed 24 hours a day.

These tools also offer survivors a way to be proactive in protecting their information and safety. Since much online harassment involves tactics such as doxing (where someone’s personal information is posted online with the goal of causing them harm), a YouTube user can add their personal information to the list of words and phrases that are not allowed to be posted. This can include part or all of phone numbers, addresses, email addresses, or usernames of other accounts. Being able to proactively block others from posting your personal information in this space is a powerful tool.
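To make the blacklist idea more concrete, here is a minimal sketch in Python of holding comments that contain blacklisted words or phrases. It is not YouTube’s implementation, and the blacklist entries and function name are made up for the example.

```python
# A minimal sketch of a word-and-phrase blacklist; not YouTube's implementation.
# The entries below are made-up examples of personal information a user might block.
blacklist = [
    "555-0142",            # part of a phone number
    "123 example street",  # a street address
    "myotherusername",     # a username from another account
]

def should_hold_for_review(comment: str) -> bool:
    """Hold a comment for approval if it contains any blacklisted word or phrase."""
    text = comment.lower()
    return any(entry.lower() in text for entry in blacklist)

print(should_hold_for_review("Call her at 555-0142 right now"))   # True: held for review
print(should_hold_for_review("Great video, thanks for sharing!")) # False: posted normally
```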

Everyone has the right to express themselves safely online, and survivors should be able to fully participate in online spaces. Connecting with family and friends online helps protect against the isolation that many survivors experience. These new tools can help to protect survivors’ voices online.