A terrifying AI-generated woman is lurking in the abyss of latent space • TechCrunch

There’s a ghost in the machine. Machine learning, that is.

We are all regularly amazed by AI’s capabilities in writing and creation, but who knew it had such a capacity for instilling horror? A chilling discovery by an AI researcher finds that the “latent space” comprising a deep learning model’s memory is haunted by at least one horrifying figure — a bloody-faced woman now known as “Loab.”

(Warning: disturbing imagery ahead.)

But is this AI model truly haunted, or is Loab just a random confluence of images that happens to come up in various strange technical circumstances? Surely it must be the latter unless you believe spirits can inhabit data structures, but it’s more than a simple creepy image — it’s an indication that what passes for a brain in an AI is deeper and creepier than we might otherwise have imagined.

Loab was discovered — encountered? summoned? — by Steph Swanson, a musician and artist who goes by Supercomposite on Twitter. She explained the Loab phenomenon in a thread that attracted an unusual amount of attention for a random creepy AI thing (something there is no shortage of on the platform), suggesting it struck a chord (minor key, no doubt).

Swanson was playing around with a custom AI text-to-image model, similar to but not DALL-E or Stable Diffusion, and specifically experimenting with “negative prompts.”

Ordinarily, you give the model a prompt, and it works its way towards creating an image that matches it. If you have one prompt, that prompt has a “weight” of 1, meaning that’s the only thing the model is working towards.

You can also split prompts, saying things like “hot air balloon::0.5, thunderstorm::0.5” and it will work towards both of those things equally — this isn’t really necessary, since the language part of the model would also accept “hot air balloon in a thunderstorm” and you might even get better results.

But the interesting thing is that you can also have negative prompts, which causes the model to work away from that concept as actively as it can.
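
To make that weighting concrete, here’s a minimal sketch, in Python, of how weighted and negative prompts might combine at the embedding level. The `embed` function is a hash-based stand-in for a real text encoder and the names are purely illustrative; actual systems fold this arithmetic into the guidance applied at each step of image generation.

```python
# A toy sketch of weighted and negative prompts, NOT any real model's API.
# embed() is a dummy stand-in that maps a prompt to a unit vector; a real
# system would use its trained text encoder.
import numpy as np

def embed(prompt: str) -> np.ndarray:
    """Stand-in for a text encoder: a pseudo-random direction per prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def combine(weighted_prompts: dict) -> np.ndarray:
    """Sum each prompt's direction scaled by its weight.
    A weight of -1 points the walk *away* from that concept."""
    return sum(w * embed(p) for p, w in weighted_prompts.items())

# "hot air balloon::0.5, thunderstorm::0.5" pulls toward both concepts equally.
toward = combine({"hot air balloon": 0.5, "thunderstorm": 0.5})

# A negative prompt like "Brando::-1" pushes as far from the concept as it can.
away = combine({"Brando": -1.0})
```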

This process is far less predictable, because no one knows how the data is actually organized in what one might anthropomorphize as the “mind” or memory of the AI, known as latent space.

“The latent space is kind of like you’re exploring a map of different concepts in the AI. A prompt is like an arrow that tells you how far to walk in this concept map and in which direction,” Swanson told me.

Here’s a helpful rendering of a much, much simpler latent space in an old Google translation model working on a single sentence in multiple languages:

The latent space of a system like DALL-E is orders of magnitude larger and more complex, but you get the general idea. If each dot here were a million spaces like this one, it would be a bit more accurate.

“So if you prompt the AI for an image of ‘a face,’ you’ll end up somewhere in the middle of the region that has all of the images of faces and get an image of a kind of unremarkable average face,” she said. With a more specific prompt, you’ll find yourself among the frowning faces, or faces in profile, and so on. “But with a negatively weighted prompt, you do the opposite: you run as far away from that concept as possible.”

But what’s the opposite of “face”? Is it the feet? Is it the back of the head? Something faceless, like a pencil? While we can argue it amongst ourselves, in a machine learning model the answer was decided during training: however visual and linguistic concepts got encoded into its memory, they can be navigated consistently — even if that organization is somewhat arbitrary.
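
As a toy illustration of that navigation (emphatically not the real model, whose space has vastly more dimensions), pin a few concepts to a 2-D map and walk directly away from one of them: clamped to the map’s bounds, you end up next to whatever happens to live at the far edge. The coordinates below are invented for the example.

```python
# A 2-D toy of "running away" from a concept on the latent map.
# Coordinates are invented; the point is the geometry, not the values.
import numpy as np

concepts = {
    "logo": np.array([0.9, 0.8]),
    "face": np.array([-0.2, 0.1]),
    "landscape": np.array([0.1, -0.4]),
    "gory horror": np.array([-0.95, -0.9]),  # a far corner of the toy map
}

def nearest(point: np.ndarray) -> str:
    """Which concept's region does this point land in?"""
    return min(concepts, key=lambda c: np.linalg.norm(concepts[c] - point))

# Negative prompt: head in the exact opposite direction of "logo" and keep
# going until the map's edge ([-1, 1] bounds) stops you.
away = -concepts["logo"] / np.linalg.norm(concepts["logo"])
position = np.clip(away * 2.0, -1.0, 1.0)

print(nearest(position))  # -> "gory horror": the last thing before the edge
```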

Image Credits: Steph Swanson

We saw a related concept in a recent AI phenomenon that went viral because one model seemed to reliably associate some nonsense words with birds and insects. But it wasn’t that DALL-E had a “secret language” in which “Apoploe vesrreaitais” means birds — it’s just that the nonsense prompt basically had it throwing a dart at the map of its mind and drawing whatever the dart landed near, in this case birds, because the first word resembles some scientific names. The arrow just pointed generally in that direction on the map.

Swanson was playing with this idea of navigating the latent space, having given the prompt of “Brando::-1,” which would have the model produce whatever it thinks is the very opposite of “Brando.” It produced a weird skyline logo with nonsense but somewhat readable text: “DIGITA PNTICS.”

Weird, right? But again, the model’s organization of concepts wouldn’t necessarily make sense to us. Curious, Swanson wondered if she could reverse the process. So she put in the prompt: “DIGITA PNTICS skyline logo::-1.” If this image was the opposite of “Brando,” perhaps the reverse was true too, and it would find its way back to Marlon Brando?

Instead, she got this:

Image Credits: Steph Swanson

Over and over she submitted this negative prompt, and over and over the model produced this woman, with bloody, cut, or unhealthily red cheeks and a haunting, otherworldly look. Somehow, this woman — whom Swanson named “Loab” for the text that appears in the top right image there — is reliably the AI model’s best guess for the most distant possible concept from a logo featuring nonsense words.

What happened? Swanson explained how the model might think when given a negative prompt for a particular logo, continuing her metaphor from before.

“You start running as fast as you can away from the area with logos,” she said. “You maybe end up in the area with realistic faces, since that is conceptually really far away from logos. You keep running, because you don’t actually care about faces, you just want to run as far away as possible from logos. So no matter what, you are going to end up at the edge of the map. And Loab is the last face you see before you fall off the edge.”

Image Credits: Steph Swanson

Negative prompts don’t always produce horrors, let alone so reliably. Anyone who has played with these image models will tell you it can actually be quite difficult to get consistent results for even very straightforward prompts.

Put in a prompt for “a robot standing in a field” four or forty times and you may get as many different takes on the concept, some hardly recognizable as robots or fields. But Loab appears consistently with this specific negative prompt, to the point where it feels like an incantation out of an old urban legend.

You know the type: “Stand in a dark bathroom looking at the mirror and say ‘Bloody Mary’ three times.” Or even earlier folk instructions of how to reach a witch’s abode or the entrance to the underworld: holding a sprig of holly, walk backwards 100 steps from a dead tree with your eyes closed.

“DIGITA PNTICS skyline logo::-1” isn’t quite as catchy, but as magic words go the phrase is at least suitably arcane. And it has the benefit of working. Only on this particular model, of course — every AI platform’s latent space is different, though for all we know Loab may be lurking in DALL-E or Stable Diffusion too, waiting to be summoned.

Loab as an ancient statue, but it’s unmistakably her.

In fact, the incantation is strong enough that Loab seems to infect even split prompts and combinations with other images.

“Some AIs can take other images as prompts, they basically can interpret the image, turning it into a directional arrow on the map just like they treat text prompts,” explained Swanson. “I used Loab’s image and one or more other images together as a prompt… she almost always persists in the resulting picture.”

Sometimes more complex or combination prompts treat one part as more of a loose suggestion. But ones that include Loab seem not just to veer toward the grotesque and horrifying, but to include her in a very recognizable fashion. Whether she’s being combined with bees, video game characters, film styles, or abstractions, Loab is front and center, dominating the composition with her damaged face, neutral expression, and long dark hair.

It’s unusual for any prompt or imagery to be so consistent — to haunt other prompts the way she does. Swanson speculated on why this might be.

“I guess because she is very far away from a lot of concepts and so it’s hard to get out of her little spooky area in latent space. The cultural question, of why the data put this woman way out there at the edge of the latent space, near gory horror imagery, is another thing to think about,” she said.

Although it’s an oversimplification, latent space really is like a map, and prompts are like directions for navigating it — and the system draws whatever ends up being around where it’s asked to go, whether it’s well-trodden ground like “still life by a Dutch master” or a synthesis of obscure or disconnected concepts: “robots battle aliens in a cubist etching by Doré.” As you can see:

Image Credits: TechCrunch / DALL-E

A purely speculative explanation of why Loab exists has to do with how that map is laid out. As Swanson suggested, it’s likely simply that company logos and horrific, scary imagery are very far from one another conceptually.

A negative prompt doesn’t mean “take ten data-steps in the other direction,” it means keep going as far as you can, and it’s more than possible that images at the farthest reaches of an AI’s latent space have more extreme or uncommon values. Wouldn’t you organize it that way, with stuff that has lots of commonalities or cross-references in the “center,” however you define that — and weird, wild stuff that’s rarely relevant out at the “edge”?

Therefore negative prompts may act like a way to explore the frontier of the AI’s mind map, skimming the concepts it deems too outlandish to store among prosaic concepts like happy faces, beautiful landscapes, or frolicking pets.

The unnerving fact is no one really understands how latent spaces are structured or why. There is of course a great deal of research on the subject, and some indications that they are organized in some ways like how our own minds are — which makes sense, since they were more or less built in imitation of them. But in other ways they have totally unique structures connecting across vast conceptual distances.

To be clear, it’s not as if there is some clutch of images specifically of Loab waiting to be found — they’re definitely being created on the fly, and Swanson told me there’s no indication the digital cryptid is based on any particular artist or work. That’s why latent space is latent! These images emerged from a combination of strange and terrible concepts that all happen to occupy the same area in the model’s memory, much like how in the diagram earlier, languages were clustered based on their similarity.

From what dark corner or unconscious associations sprang Loab, fully formed and coherent? We can’t yet trace the path the model took to reach her location; a trained model’s latent space is vast and impenetrably complex.

The only way we can reach the spot again is through the magic words, spoken while we step backwards through that space with our eyes closed, until we reach the witch’s hut that can’t be approached by ordinary means. Loab isn’t a ghost, but she is an anomaly, yet paradoxically she may be one of an effectively infinite number of anomalies waiting to be summoned from the farthest, unlit reaches of any AI model’s latent space.

It may not be supernatural… but it sure as hell ain’t natural.

India to control which lending apps are permitted to app stores in latest crackdown

India plans to mandate which loan apps are allowed on app stores in the country, the latest in a series of recent steps from the world’s second largest internet market to crack down on sketchy and unethical lenders.

The Reserve Bank of India, the country’s central bank, will prepare a “whitelist” of all legal apps and the nation’s IT ministry will ensure that only whitelisted apps are hosted on app stores, the Finance Ministry said in a statement.

The central bank will also monitor “mule” or “rented” accounts for money laundering practices and review and cancel licenses of non-banking financial institutions if they are found guilty, the Finance Ministry said Friday.

Payment aggregators in the country will be required to register themselves within a specific timeframe, and India’s Ministry of Corporate Affairs will work to identify shell companies and revoke their registrations to prevent misuse.

Indian authorities in recent quarters have been clamping down on lending apps that levy exorbitant fees and use unethical means to recover payments. India’s central bank said earlier this month that it was moving ahead with new guidelines for digital lending that will mandate firms to provide more disclosure and transparency to benefit consumers, as well as restrict several business practices.

Scores of lending apps have mushroomed in India in recent quarters, many offering loans to customers with no credit history and poor savings, then using unethical means to collect the money back. Law enforcement agencies in the country have made hundreds of arrests on complaints alleging abuse and harassment from agents collecting repayments on behalf of loan apps.

Under the psychological pressure applied by the predators behind these platforms, some individuals have died by suicide, according to local media reports. Though the latest move aims to limit the availability of loan apps through app stores — a bold step, certainly — several such apps circulate outside the app stores, with links sent to customers through text messages and even via ads.

Google said last month that it had blocked over 2,000 unethical lending apps in India this year.

“The Finance Minister expressed concern on increasing instances of illegal loan apps offering loans/micro credits, especially to vulnerable & low-income group people at exorbitantly high interest rates and processing/hidden charges, and predatory recovery practices involving blackmailing, criminal intimidation etc,” the ministry said today.

“Smt. Sitharaman [Finance Minister] also noted the possibility of money laundering, tax evasions, breach/privacy of data, and misuse of unregulated payment aggregators, shell companies, defunct NBFCs etc. for perpetrating such actions.”

Apple and Google didn’t immediately respond to requests for comment.

The EU’s AI Act could have a chilling effect on open source efforts, experts warn

The nonpartisan think tank Brookings this week published a piece decrying the bloc’s regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the E.U.’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it’s not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.

“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”

In 2021, the European Commission — the E.U.’s politically independent executive arm — released the text of the AI Act, which aims to promote “trustworthy AI” deployment in the E.U. As they solicit input from industry ahead of a vote this fall, E.U. institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

Oren Etzioni, the founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said that the burdens introduced by the rules could have a chilling effect on areas like the development of open text-generating systems, which he believes are enabling developers to “catch up” to big tech companies like Google and Meta.

“The road to regulation hell is paved with the E.U.’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with E.U. regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”

Instead of seeking to regulate AI technologies broadly, E.U. regulators should focus on specific applications of AI, Etzioni argues. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation.”

Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little more heavily” than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.

“The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock into,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it.”

To wit, as my colleague Natasha Lomas has previously noted, the E.U.’s risk-based approach lists several prohibited uses of AI (e.g., China-style state social credit scoring) while imposing restrictions on AI systems considered to be “high-risk” — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.

An analysis written by Lilian Edwards, a law professor at Newcastle University and a part-time legal advisor at the Ada Lovelace Institute, questions whether the providers of systems like open source large language models (e.g., GPT-3) might be liable after all under the AI Act. Language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says — not necessarily the initial developer.

“[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she writes. “The AI Act takes some notice of this but not nearly enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say that they welcome regulation designed to protect consumers, but that the AI Act as proposed is too vague. For instance, they say, it’s unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.

“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream you risk hindering incremental innovation, product differentiation and dynamic competition, this latter being core in emergent technology markets such as AI-related ones … The regulation should take into account the innovation dynamics of AI markets and thus clearly identify and protect core sources of innovation in these markets.”

As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, like “responsible” AI licenses and model cards that include information like the intended use of an AI system and how it works. Delangue, Ferrandis and Solaiman point out that responsible licensing is starting to become a common practice for major AI releases, such as Meta’s OPT-175B language model.

“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” Delangue, Ferrandis and Solaiman said. “The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community.”

That may well be achievable. Given the many moving parts involved in E.U. rulemaking (not to mention the stakeholders affected by it), it’ll likely be years before AI regulation in the bloc starts to take shape.

AI is getting better at generating porn. We might not be prepared for the consequences. – TechCrunch

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.

AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.

But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.

AI porn is about as unsettling and imperfect as you’d expect (that red-head on the moon was likely not generated by someone with an extra-arm fetish). But as the tech continues to improve, it will raise challenging questions for AI ethicists and sex workers alike.

Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.

Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models — all of which are women — using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g. “Russian” and “Latina”) and backdrops (e.g. “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g. “film photo,” “mirror selfie”). There must be a bug in the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person — but of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.

On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.

But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.

“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”

Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.

“There is endless media out there,” said Ashley, who did not want her last name to be published for fear of being harassed for her job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”

On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.

Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. The study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”

“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.

Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.

Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.

This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
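
Here is a toy sketch of that “learning by example” setup: each image in a batch is scored against every caption, and training would push the true pairs’ scores above the mismatches (a CLIP-style contrastive objective). The hash-based encoder below is a dummy stand-in for the real image and text networks; only the pairing structure is the point.

```python
# Toy contrastive setup: score every image against every caption.
# encode() is a dummy stand-in for trained image/text encoders.
import numpy as np

def encode(item: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

batch = [
    ("dog_photo.jpg", "Dachshund, wide-angle lens"),
    ("bird_art.jpg", "a bird painting in the style of Van Gogh"),
]

images = np.stack([encode(img) for img, _ in batch])
captions = np.stack([encode(cap) for _, cap in batch])
scores = images @ captions.T  # scores[i, j]: image i vs. caption j

# The diagonal holds the true (image, caption) pairs; a contrastive loss
# would raise those scores relative to each row's mismatched entries.
print(np.round(scores, 2))
```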

For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.

“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”

While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.

“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”

Deepfake porn is often created to threaten and harass people. These images are almost always developed without the subject’s consent, out of malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.

Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.

“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”

In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.

When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.

“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”

To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.

“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”

Shuffles, Pinterest’s invite-only collage-making app, is blowing up on TikTok — here’s how to get in – TechCrunch

Collage-style video “mood boards” are going viral on TikTok — and so is the app making them possible. Pinterest’s recently soft-launched collage-maker Shuffles has been climbing up the App Store’s Top Charts thanks to demand from Gen Z users who are leveraging the new creative expression tool to make, publish and share visual content. These “aesthetic” collages are then set to music and posted to TikTok or shared privately with friends or with the broader Shuffles community.

Despite being in invite-only status, Shuffles has already spent some time as the No. 1 Lifestyle app on the U.S. App Store.

During the week of August 15-22, 2022, Shuffles ranked No. 5 in the Top Lifestyle Apps by downloads on iPhone in the U.S., according to metrics provided by app intelligence firm data.ai — an increase of 72 places in the rankings compared to the week prior. It was the No. 1 Lifestyle app on iPhone by Sunday, August 21st, and broke into the Top 20 non-gaming apps on iOS as a whole in the U.S. that same day, after jumping up 22 ranks from the day prior.

Additionally, the firm Sensor Tower found the app is now No. 66 Overall on the U.S. iPhone App Store and is the No. 1 Overall app in Ireland, New Zealand and the U.K. It’s No. 2 Overall in Australia and No. 3 in Canada.

First launched in late July 2022, the app has seen 211,000 iOS downloads worldwide in the month it’s been live — 160,000 of those downloads were in the U.S., data.ai says. Sensor Tower, meanwhile, estimates the app has seen approximately 338,000 installs during this time.

Considering it’s still not “publicly launched,” Shuffles appears to be an out-of-the-gate hit for Pinterest, which has been trying to reinvent itself for the creator-driven, video-first era with products like Idea Pins, similar to TikTok, and live video shopping on Pinterest TV.

Similarly, Shuffles is also targeting a younger demographic that’s using social media in a new way: for self-expression, not just networking.

The new app allows users to build their own collages using Pinterest’s photo library or by snapping photos of objects they want to include using the camera. One clever feature involves its use of in-house technology that lets users cut out objects from their own photos, from their Pinterest boards or from new Pins found via search.

This is similar to iOS 16’s forthcoming image cutout feature that is, arguably, one of the more fun additions to ship with Apple’s new mobile operating system. Here, you can effortlessly copy an object from one of your photos — like your dog, for instance — then paste that cutout anywhere you choose, like in an iMessage chat. This feels a bit magical, as you only need to touch and hold to lift the image away from the background.

Shuffles, meanwhile, makes image cutouts even easier. When you either search for or snap a photo, the app often automatically identifies the object in the photos and you only have to tap the “Add” button to place it into your collage, where it can be resized and moved around the screen. At other times, you can use the included tool to cut out the portion of the image you want to utilize in your creation.

You also can choose to add effects and motion to the images to make them shake, spin, pulse, swivel and more. For instance, you could add an image of a record player, then animate it so it actually spins.

Image Credits: Pinterest

The final product can be saved locally to your device, shared in a message with friends, or published to a dedicated community using a hashtag. These hashtags are browsable in the app’s discovery section where collages tagged with popular hashtags — like #moodboard, #vintage or #aesthetic, for instance — are also showcased.

While the app does make for good TikToks, it helps drive traffic to Pinterest too. The objects in users’ collages are linked to Pinterest and a tap will bring you to a dedicated page for the item in question, which you can then open to view directly in Pinterest. In the case of items that are available for purchase — like fall fashion or home decor, for instance — users could also buy the item by clicking through to the retailer’s website.

Demand for the app has been aided by its exclusivity, for the time being.

Users need an invite code to get in — and they can only get it from an existing Shuffles user who has just five invites to share.

Invite codes have often been used to drive demand for new products, after seeing outsized success as a growth mechanism for Google’s new email system Gmail in the early 2000s. But in later years, their usage has felt less authentic, as they became a way for app marketers to push users to post to social media in exchange for early access to a new product.

With Pinterest, however, the use of the invite code mechanism is not tied to a request that users must take some sort of action to be let in. Instead, you have to know someone to get an invite, which has led some TikTokers to lament how they’ve had to beg friends for codes.

(Beg no more: Pinterest provided TechCrunch readers with an invite code to use for Shuffles: FTSNFUFC. If that runs out, you can visit Pinterest’s Instagram or Twitter account for future code drops. This is not an advertisement or paid promotion, we’re just sharing the code!) 

Pinterest told TechCrunch the app is invite-only because, technically, it hasn’t publicly launched.

Shuffles, we’re told, is the first-ever standalone app created by Pinterest’s in-house incubator, TwoTwenty. The team, which also had a hand in the creation of Pinterest TV, is focused on researching and testing new product ideas and iterating on those that gain traction.

As to why the app is resonating with Gen Z, it seems to be the combination of the technology used to simplify collage making with the desire for creative expression tools that serve the demographic’s social habits.

“The app is seeing budding download momentum, targeting younger users. It’s building off the empowerment of creativity and user-generated content, popularized in many ways by TikTok,” Lexi Sydow, head of Insights at data.ai, told TechCrunch. “Especially for younger generations, photo editing and creative projects are mobile-first more than ever, leveraging robust mobile apps to create robust projects that once required sophisticated desktop software. The app takes collaging one step further with simple embedded tools that would require multiple steps or coordination across multiple apps,” she explained.

“Users curate their mood boards and ‘vibes’, which touches on a similar cultural thread to visual-first campaigns by Spotify showing your unique music tastes. The app inherently relies on Gen Z’s social habits where users leverage social media apps to share with their networks and close circles of friends. The app has received 4.31 out of 5 stars to date since launch, with 72% of all reviews being 5 stars,” Sydow added.

Shuffles is currently iOS-only and a free download on the App Store.

Deepfakes for all: Uncensored AI art model prompts ethics questions – TechCrunch

A new open source AI image generator capable of producing realistic pictures from any text prompt has seen stunningly swift uptake in its first week. Stability AI’s Stable Diffusion, high fidelity but capable of being run on off-the-shelf consumer hardware, is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the model’s unfiltered nature means not all the use has been completely above board.

For the most part, the use cases have been above board. For example, NovelAI has been experimenting with Stable Diffusion to produce art that can accompany the AI-generated stories created by users on its platform. Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.

But Stable Diffusion has also been used for less savory purposes. On the infamous discussion board 4chan, where the model leaked early, several threads are dedicated to AI-generated art of nude celebrities and other forms of generated pornography.

Emad Mostaque, the CEO of Stability AI, called it “unfortunate” that the model leaked on 4chan and stressed that the company was working with “leading ethicists and technologists” on safety and other mechanisms around responsible release. One of these mechanisms is an adjustable AI tool, Safety Classifier, included in the overall Stable Diffusion software package, which attempts to detect and block offensive or undesirable images.

However, Safety Classifier — while on by default — can be disabled.
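
Conceptually, a tool like Safety Classifier is a post-hoc filter wrapped around the generator, and the wrapper is exactly the part a deployer can switch off. The sketch below is a hypothetical illustration of that pattern, not Stability AI’s actual code; `generate` and `is_unsafe` are toy stand-ins.

```python
# Hypothetical sketch of a post-hoc safety filter gating a generator.
# Not Stability AI's implementation; both helpers are toy stand-ins.
from typing import Optional

def generate(prompt: str) -> str:
    """Stand-in for the diffusion model; returns a fake image handle."""
    return f"<image for {prompt!r}>"

def is_unsafe(image: str) -> bool:
    """Stand-in for a learned classifier scoring the decoded image."""
    return "nsfw" in image.lower()

def safe_generate(prompt: str, safety_on: bool = True) -> Optional[str]:
    image = generate(prompt)
    if safety_on and is_unsafe(image):
        return None  # block (or blur) the flagged output
    return image

print(safe_generate("a watercolor of a lighthouse"))
```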

Stable Diffusion is very much new territory. Other AI art-generating systems, like OpenAI’s DALL-E 2, have implemented strict filters for pornographic material. (The license for the open source Stable Diffusion prohibits certain applications, like exploiting minors, but the model itself isn’t fettered on the technical level.) Moreover, many don’t have the ability to create art of public figures, unlike Stable Diffusion. Those two capabilities could be risky when combined, allowing bad actors to create pornographic “deepfakes” that — worst-case scenario — might perpetuate abuse or implicate someone in a crime they didn’t commit.

A deepfake of Emma Watson, created by Stable Diffusion and published on 4chan.

Women, unfortunately, are by far the most likely to be the victims of this. A study carried out in 2019 revealed that, of the 90% to 95% of deepfakes that are non-consensual, about 90% depict women. That bodes poorly for the future of these AI systems, according to Ravit Dotan, an AI ethicist at the University of California, Berkeley.

“I worry about other effects of synthetic images of illegal content — that it will exacerbate the illegal behaviors that are portrayed,” Dotan told TechCrunch via email. “E.g., will synthetic child [exploitation] increase the creation of authentic child [exploitation]? Will it increase the number of pedophiles’ attacks?”

Montreal AI Ethics Institute principal researcher Abhishek Gupta shares this view. “We really need to think about the lifecycle of the AI system which includes post-deployment use and monitoring, and think about how we can envision controls that can minimize harms even in worst-case scenarios,” he said. “This is particularly true when a powerful capability [like Stable Diffusion] gets into the wild that can cause real trauma to those against whom such a system might be used, for example, by creating objectionable content in the victim’s likeness.”

Something of a preview played out over the past year when, at the advice of a nurse, a father took pictures of his young child’s swollen genital area and texted them to the nurse’s iPhone. The photo automatically backed up to Google Photos and was flagged by the company’s AI filters as child sexual abuse material, which resulted in the man’s account being disabled and an investigation by the San Francisco Police Department.

If a legitimate photo could trip such a detection system, experts like Dotan say, there’s no reason deepfakes generated by a system like Stable Diffusion couldn’t — and at scale.

“The AI systems that people create, even when they have the best intentions, can be used in harmful ways that they don’t anticipate and can’t prevent,” Dotan said. “I think that developers and researchers often underappreciated this point.”

Of course, the technology to create deepfakes has existed for some time, AI-powered or otherwise. A 2020 report from deepfake detection company Sensity found that hundreds of explicit deepfake videos featuring female celebrities were being uploaded to the world’s biggest pornography websites every month; the report estimated the total number of deepfakes online at around 49,000, over 95% of which were porn. Actresses including Emma Watson, Natalie Portman, Billie Eilish and Taylor Swift have been the targets of deepfakes since AI-powered face-swapping tools entered the mainstream several years ago, and some, including Kristen Bell, have spoken out against what they view as sexual exploitation.

But Stable Diffusion represents a newer generation of systems that can create incredibly — if not perfectly — convincing fake images with minimal work by the user. It’s also easy to install, requiring no more than a few setup files and a graphics card costing several hundred dollars on the high end. Work is underway on even more efficient versions of the system that can run on an M1 MacBook.

A Kylie Kardashian deepfake posted to 4chan.

Sebastian Berns, a Ph.D. researcher in the AI group at Queen Mary University of London, thinks the automation and the possibility of scaling up customized image generation are the big differences with systems like Stable Diffusion — and the main problems. “Most harmful imagery can already be produced with conventional methods but is manual and requires a lot of effort,” he said. “A model that can produce near-photorealistic footage may give way to personalized blackmail attacks on individuals.”

Berns fears that personal photos scraped from social media could be used to condition Stable Diffusion or any such model to generate targeted pornographic imagery or images depicting illegal acts. There’s certainly precedent. After reporting on the rape of an eight-year-old Kashmiri girl in 2018, Indian investigative journalist Rana Ayyub became the target of Indian nationalist trolls, some of whom created deepfake porn with her face on another person’s body. The deepfake was shared by the leader of the nationalist political party BJP, and the harassment Ayyub received as a result became so bad the United Nations had to intervene.

“Stable Diffusion offers enough customization to send out automated threats against individuals to either pay or risk having fake but potentially damaging footage being published,” Berns continued. “We already see people being extorted after their webcam was accessed remotely. That infiltration step might not be necessary anymore.”

With Stable Diffusion out in the wild and already being used to generate pornography — some non-consensual — it might become incumbent on image hosts to take action. TechCrunch reached out to one of the major adult content platforms, OnlyFans, but didn’t hear back as of publication time. A spokesperson for Patreon, which also allows adult content, noted that the company has a policy against deepfakes and disallows images that “repurpose celebrities’ likenesses and place non-adult content into an adult context.”

If history is any indication, however, enforcement will likely be uneven — in part because few laws specifically protect against deepfaking as it relates to pornography. And even if the threat of legal action pulls some sites dedicated to objectionable AI-generated content under, there’s nothing to prevent new ones from popping up.

In other words, Gupta says, it’s a brave new world.

“Creative and malicious users can abuse the capabilities [of Stable Diffusion] to generate subjectively objectionable content at scale, using minimal resources to run inference — which is cheaper than training the entire model — and then publish them in venues like Reddit and 4chan to drive traffic and hack attention,” Gupta said. “There is a lot at stake when such capabilities escape out “into the wild” where controls such as API rate limits, safety controls on the kinds of outputs returned from the system are no longer applicable.”
