Meta will auto-blur nudity in Instagram DMs in latest teen safety step | TechCrunch

Meta said on Thursday that it is testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called “Nudity Protection in DMs,” which automatically blurs images detected as containing nudity.

The tech giant said it will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate images. Meta hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.

The company said it is also implementing changes that will make it more difficult for potential scammers and criminals to find and interact with teens. Meta said it is developing new technology to identify accounts that are “potentially” involved in sextortion scams, and will apply limits on how these suspect accounts can interact with other users.

In another step announced on Thursday, Meta said it has increased the data it is sharing with the cross-platform online child safety program, Lantern, to include more “sextortion-specific signals.”

The social networking giant has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images. However, that doesn’t stop these problems from occurring and causing misery for scores of teens and young people — sometimes with extremely tragic results.

We’ve rounded up the latest crop of changes in more detail below.

Nudity Protection in DMs aims to protect teen users of Instagram from cyberflashing by putting nude images behind a safety screen. Users will be able to choose whether or not to view such images.

“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.

The nudity safety screen will be turned on by default for users under 18 globally. Older users will see a notification encouraging them to turn the feature on.

“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” the company added.

Anyone trying to forward a nude image will see the same warning encouraging them to reconsider.

The feature is powered by on-device machine learning, so Meta said it will work within end-to-end encrypted chats because the image analysis is carried out on the user’s own device.
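To illustrate why on-device inference is compatible with end-to-end encryption, here is a minimal, purely illustrative sketch (Meta has not published its implementation; all names and the classifier stub below are hypothetical): the decrypted image is classified locally, and neither the image nor the classification result ever needs to leave the device.

```python
# Illustrative sketch only -- not Meta's actual code. The point it makes:
# because the ML classifier runs on the recipient's own device, the blur
# decision works inside end-to-end encrypted chats without the server
# ever seeing the image.

from dataclasses import dataclass


@dataclass
class InboundImage:
    pixels: bytes  # already decrypted locally by the messaging client


def local_nudity_score(image: InboundImage) -> float:
    """Placeholder for an on-device ML classifier (e.g. a small CNN).

    A real implementation would run model inference here and return
    the probability that the image contains nudity.
    """
    return 0.0  # stub value for illustration


def render_dm_image(image: InboundImage, threshold: float = 0.5) -> str:
    # The decision happens entirely client-side: no pixels and no
    # classification result are sent to any server.
    if local_nudity_score(image) >= threshold:
        return "blurred"  # shown behind a safety screen; user may tap to view
    return "visible"
```

The design choice worth noting is that the privacy guarantee comes from *where* the model runs, not from what it detects: any on-device classifier slots into this pattern without weakening the encryption.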

The nudity filter has been in development for nearly two years.

In another safeguarding measure, Instagram users who send or receive nudes will be directed to safety tips (with information about the potential risks involved), which, according to Meta, have been developed with guidance from experts.

“These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they’re not who they say they are,” the company wrote in a statement. “They also link to a range of resources, including Meta’s Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.”

The company is also testing showing pop-up messages to people who may have interacted with an account that has been removed for sextortion. These pop-ups will also direct users to relevant resources.

“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues — such as nudity, threats to share private images or sexual exploitation or solicitation — we’ll direct them to local child safety helplines where available,” the company said.

While Meta says it removes sextortionists’ accounts when it becomes aware of them, it first needs to spot bad actors to shut them down. So, the company is trying to go further by “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.”

“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company said. “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”

It’s not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist (we’ve asked for more details). Presumably, the company may analyze patterns of communication to try to detect bad actors.

Accounts that get flagged by Meta as potential sextortionists will face restrictions on messaging or interacting with other users.

“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company wrote.

Users who are already chatting with potential scam or sextortion accounts will not have their chats shut down, but will be shown Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say ‘no’ to anything that makes them feel uncomfortable,” according to the company.

Teen users are already protected from receiving DMs from adults they are not connected with on Instagram (and also from other teens, in some cases). But Meta is taking this a step further: The company said it is testing a feature that hides the “Message” button on teenagers’ profiles for potential sextortion accounts — even if they’re connected.

“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” it added.

It’s worth noting the company is under increasing scrutiny in Europe over child safety risks on Instagram, and enforcers have questioned its approach since the bloc’s Digital Services Act (DSA) came into force last summer.

Meta has announced measures to combat sextortion before — most recently in February, when it expanded access to Take It Down. The third-party tool lets people generate a hash of an intimate image locally on their own device and share it with the National Center for Missing and Exploited Children, helping to create a repository of non-consensual image hashes that companies can use to search for and remove revenge porn.
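The hash-and-share idea can be sketched in a few lines. This is a deliberate simplification: real deployments use robust perceptual hashes (such as Meta's open-source PDQ) so that crops and re-encodes still match, not the plain SHA-256 shown here, and the function names are invented for illustration. What the sketch does capture is the key privacy property: only the irreversible hash leaves the device, never the image.

```python
# Simplified sketch of the "hash locally, share only the hash" pattern
# behind Take It Down. Plain SHA-256 stands in for a perceptual hash.

import hashlib


def local_image_hash(image_bytes: bytes) -> str:
    # Computed on the user's own device. A cryptographic hash is
    # irreversible, so it reveals nothing about the image content.
    return hashlib.sha256(image_bytes).hexdigest()


def submit_to_registry(registry: set, image_bytes: bytes) -> None:
    # Only the hash is shared with the central repository;
    # the image itself never leaves the device.
    registry.add(local_image_hash(image_bytes))


def platform_should_remove(registry: set, uploaded_bytes: bytes) -> bool:
    # Participating platforms hash uploaded content and check it
    # against the registry to find matches for removal.
    return local_image_hash(uploaded_bytes) in registry
```

A limitation the sketch also makes visible: an exact hash only matches byte-identical files, which is precisely why production systems prefer perceptual hashing.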

The company’s previous approaches to tackle that problem had been criticized, as they required young people to upload their nudes. In the absence of hard laws regulating how social networks need to protect children, Meta was left to self-regulate for years — with patchy results.

However, some requirements have landed on platforms in recent years — such as the U.K.’s Children Code (which came into force in 2021) and the more recent DSA in the EU — and tech giants like Meta are finally having to pay more attention to protecting minors.

For example, in July 2021, Meta started defaulting young people’s Instagram accounts to private just ahead of the U.K. compliance deadline. Even tighter privacy settings for teens on Instagram and Facebook followed in November 2022.

This January, the company announced it would set stricter messaging settings for teens on Facebook and Instagram by default, shortly before the DSA’s full compliance deadline took effect in February.

This slow, iterative rollout of protections for young users raises questions about why it took the company so long to apply stronger safeguards. It suggests Meta opted for a cynical minimum of safeguarding to manage the impact on usage and prioritize engagement over safety. That is exactly what Meta whistleblower Frances Haugen repeatedly denounced her former employer for.

Asked why the company is not also rolling out these new protections to Facebook, a spokeswoman for Meta told TechCrunch, “We want to respond to where we see the biggest need and relevance — which, when it comes to unwanted nudity and educating teens on the risks of sharing sensitive images — we think is on Instagram DMs, so that’s where we’re focusing first.”

Yahoo is acquiring Instagram co-founders’ AI-powered news startup Artifact | TechCrunch

Yahoo is acquiring Artifact, the AI-powered news app from Instagram’s co-founders Kevin Systrom and Mike Krieger, the company announced on Tuesday. The financial terms of the deal were not disclosed. Artifact will no longer operate as a stand-alone app, and its AI-powered personalization technology will be integrated across Yahoo, including the Yahoo News app in the coming months. Yahoo is TechCrunch’s parent company.

Systrom and Krieger will work with Yahoo in an “advisory capacity” during this transition.

The announcement comes a few months after Artifact said it would be winding down operations because the market opportunity wasn’t big enough to warrant continued investment. Although Artifact started as a simple news app, the end result seemed more like a Twitter replacement, and that space is already crowded with challengers, including Meta’s Threads.

Artifact’s technology surfaces content users want to see and becomes more attuned to their interests over time. As a result, users receive a personalized feed of news stories that they want to read. The app also included several AI tools to summarize news, rewrite clickbait headlines and surface the best content. Yahoo says bringing these capabilities into its portfolio “accelerates the opportunity to connect users with even richer content experiences and tailored personalization.”

“Artifact has become a beloved product and we’re thrilled to be able to continue to grow that technology and further our mission of becoming the trusted guide to digital information and the best curator connecting people to the content that matters most to them,” said Kat Downs Mulder, SVP and general manager of Yahoo News, in a press release.

Systrom said in the press release that Artifact’s technology has the opportunity to benefit millions of people, and that “Yahoo brings the scale to help the product achieve what we envisioned while upholding the belief that connecting people to the trusted sources of news and information is as critical as ever.”

Instagram co-founders’ AI-powered news app Artifact may not be shutting down after all | TechCrunch

Artifact, the well-received AI-powered news app from Instagram’s co-founders Kevin Systrom and Mike Krieger, may not be shutting down as planned. The company announced in January the award-winning app would be winding down operations as the market opportunity wasn’t “big enough to warrant continued investment.” However, despite an end-of-life date of February 2024, the app has continued to function in the many weeks since.

As it turns out, that’s not by mistake.

Systrom tells us that he and Krieger are continuing to keep Artifact alive for the time being and have not yet given up on a plan to maintain the app in the future — news that will likely give fans of the news discovery app a bit of hope.

“It takes a lot less to run it than we had imagined,” Systrom confirmed to TechCrunch, adding that it’s just himself and Krieger running Artifact right now. “It will still likely go away, but we’re exploring all possible routes for it going forward.” (Perhaps an exit deal is at hand?)

Artifact made a splash at launch, not only because it was the first major effort at a new social app from Instagram’s co-founders, but also because of its clever use of AI. The personalized news reading app leveraged AI to help users discover the news they were most interested in from a variety of pre-vetted sources, and offered up features to summarize news in various styles (like “Gen Z” or “Explain Like I’m Five”). It could also rewrite clickbait headlines for better clarity, among other things.

“artifact’s ‘gen z summary’ feature is so deeply out of pocket. i’ll miss it when the app goes down,” one user (@samhenrigold) posted on March 16, 2024.

Following Artifact’s announcement of its impending closure, interest in using AI to summarize the news has heated up.

Browser startup Arc implemented an AI-powered “pinch to summarize” feature ahead of its $50 million fundraise. Other startups have also turned to AI to improve the news reading experience, like RSS reader Feeeed, AI-powered news reader Bulletin and Particle, an AI news reader built by former Twitter engineers, including the senior director of Product Management at Twitter, Sara Beykpour, and former senior engineer at both Twitter and Tesla, Marcel Molina. The latter recently raised $4.4 million in seed funding, indicating investor interest in this space is growing, too.

Artifact, meanwhile, had been self-funded by the founders to the tune of “single-digit millions,” and it seems they have the funds to continue to run the app — at least in the near term.

Unfortunately for Artifact’s early adopters, the app has been stripped of its social features, like commenting and posting, but it continues to offer news reading and AI summarization features in the version that remains live today.
