Photo editing app VSCO launches marketplace to connect photographers with brands | TechCrunch

Working with brands is one of the primary ways artists and content creators can earn substantial income today. However, it’s not easy for creators to connect with brands, and companies looking for new or specialized talent also have a difficult time finding them. Photo editing app VSCO is trying to solve this problem for its primary audience, professional photographers, with a new marketplace called VSCO Hub that aims to connect them with brands.

VSCO’s platform is similar to other social platforms like Instagram, YouTube and TikTok, which already offer creator marketplaces to help businesses discover content creators and strike partnerships with them.

VSCO’s CEO, Eric Wittman, who joined the company last September, equates the new portal to LinkedIn. “The VSCO Hub is almost like LinkedIn. Brands act as recruiters, and they can easily find someone for their projects and connect with them,” he told TechCrunch over a call.

To get access to VSCO Hub, photographers will have to sign up for the $59.99 Pro plan, which has more than 160,000 subscribers. For businesses, the platform offers filters like location, category, price, and availability that they can use to narrow down their search for photographers.

The platform has a cool marquee feature, too: It allows creative directors to upload a reference image and search for photographers who might have similar work in their portfolios.

Reference image upload on VSCO Hub.

Image Credits: VSCO

VSCO doesn’t take a cut from the project payments and just acts as a connection layer.

Wittman blames social network algorithms for the lack of discoverability in the photography community, saying marketers spend many hours on Google or Instagram trying to find photographers whose work matches their creative vision.

“Because of how social networks have changed themselves and their algorithms, it is really hard for photographers to get discovered by potential clients. We saw the need of our photography community and decided to build VSCO Hub,” he said.

VSCO is also looking to bolster its search with AI. Wittman said the company is internally testing a way for brands to enter text queries and find images through semantic search, and this feature will make it to the platform soon. VSCO also plans to add more filters to help companies narrow down their searches.
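VSCO hasn’t described how that search will work under the hood, but text-to-image semantic search generally means embedding both the query and the images into a shared vector space and ranking by similarity. Here is a minimal sketch of that idea with hand-made stand-in vectors; in a real system the embeddings would come from a learned model (e.g. a CLIP-style encoder), and none of this is VSCO’s code:

```python
import math

# Toy three-dimensional embeddings standing in for real image vectors.
IMAGE_INDEX = {
    "moody_forest.jpg": (0.9, 0.1, 0.0),
    "neon_street.jpg": (0.1, 0.9, 0.2),
    "studio_portrait.jpg": (0.0, 0.2, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_vec, index, top_k=2):
    """Rank indexed images by similarity to an embedded text query."""
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# A query embedding that leans toward the "forest" direction.
results = semantic_search((0.8, 0.2, 0.1), IMAGE_INDEX)
```

The same ranking step works whether the query vector comes from text or from an uploaded reference image, which is how one index can serve both search features.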

Wittman and VSCO’s stance on the use of AI is similar to a lot of other platforms: The technology will be used to help artists, not replace them. The CEO said his company is looking to release more AI-powered tools to assist photographers in their workflows.

Notably, tools like Sequoia-backed Visual Electric and Facetune maker Lightricks’ LTX Studio help artists with ideation and focus on workflows. Wittman believes that ideation, not production, is where creatives will lean on AI most.

Sperm whale ‘alphabet’ discovered, thanks to machine learning | TechCrunch

Researchers at MIT CSAIL and Project CETI believe that they have unlocked a kind of sperm whale “alphabet” with the aid of machine learning technologies. Results from the study, which were published under the title “Contextual and Combinatorial Structure in Sperm Whale Vocalizations,” point to key breakthroughs in our understanding of cetacean communication.

The study deals with codas — a series of clicks that serve different linguistic functions. “What we have discovered is that there is previously undescribed variation in the coda structure,” CSAIL director Daniela Rus told TechCrunch. “We have discovered that coda types are not arbitrary, but rather they form a newly discovered combinatory coding system.”

While whale vocalization has been a key subject of research for decades, the teams behind this new work suggest they’ve uncovered a previously unknown level of nuance among the chatty sea mammals. The paper points out that prior research had identified 150 different sperm whale codas.

“A subset of these have been shown to encode information about caller and clan identity,” it explains. “However, almost everything else about the sperm whale communication system, including basic questions about its structure and information-carrying capacity, remains unknown.”

The teams drew on work from Roger Payne, the pioneering marine biologist who passed away last June. Payne’s most influential work involved the songs of humpback whales. “He has really inspired us to want to use our most advanced technologies to want to have a deeper understanding of the whales,” says Rus.

The teams deployed machine learning solutions to analyze a dataset of 8,719 sperm whale codas collected by researcher Shane Gero off the coast of the small eastern Caribbean island of Dominica.

“We would get the inputs, and then we adjust our machine learning, to visualize better and to understand more,” explains Rus. “And then we would analyze the output with a biologist.”

The team’s method marked a change from older analyses, which studied individual codas in isolation. A richer picture forms when the sounds are studied in context, as exchanges between whales. Contextual details are classified using music terminology: tempo, rhythm, ornamentation and rubato. From there, the team isolated what it refers to as a sperm whale phonetic alphabet.
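The paper’s actual pipeline isn’t reproduced here, but the tempo and rhythm features it describes are, at heart, functions of click timing. As a loose illustration, tempo can be read as a coda’s total duration and rhythm as its inter-click intervals normalized by that duration, so that the same click pattern performed faster or slower yields the same rhythm signature:

```python
def coda_features(click_times):
    """Derive toy tempo- and rhythm-like features from click timestamps.

    Illustrative only, not the researchers' code: tempo is the coda's
    total duration in seconds; rhythm is the sequence of inter-click
    intervals divided by that duration, making it scale-free.
    """
    intervals = [b - a for a, b in zip(click_times, click_times[1:])]
    tempo = click_times[-1] - click_times[0]
    rhythm = tuple(round(i / tempo, 3) for i in intervals)
    return tempo, rhythm

# Two codas with the same click pattern, one performed twice as fast:
# they differ in tempo but share a rhythm signature.
slow = coda_features([0.0, 0.2, 0.4, 0.8])
fast = coda_features([0.0, 0.1, 0.2, 0.4])
```

Grouping codas by a scale-free signature like this is what lets seemingly different vocalizations collapse into a small combinatorial inventory.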

“This phonetic alphabet makes it possible to systematically explain the observed variability in the coda structure,” says Rus. “We believe that it’s possible that this is the first instance outside of human language where a communication provides an example of the linguistic concept of duality of patterning. That refers to a set of individually meaningless elements that can be combined to form larger meaningful units, sort of like combining syllables into words.”

Those “words” take on different meanings depending on context. The paper adds:

Our results demonstrate that sperm whale vocalizations form a complex combinatorial communication system: the seemingly arbitrary inventory of coda types can be explained by combinations of rhythm, tempo, rubato, and ornamentation features. Sizable combinatorial vocalization systems are exceedingly rare in nature; however, their use by sperm whales shows that they are not uniquely human, and can arise from dramatically different physiological, ecological, and social pressures.

While the breakthrough is exciting for all involved, there’s still a lot of work to be done, first with sperm whales and then potentially broadening out to other species like humpbacks.

“We decided to go to sperm whales because we had an extensive dataset, and we have the possibility of collecting many more datasets,” says Rus. “Also, because the clicks form a kind of discrete communication system, it is much easier to analyze than a continuous communication system. But even Roger Payne’s work showed that the songs of humpback whales are not random. There are segments that get repeated and there is interesting structure there. We just haven’t gotten to do an in-depth study.”

Kevin Eisenfrats is developing the ‘male IUD’ | TechCrunch

Interest in male birth control has increased in the past few years, especially since the U.S. overturned Roe v. Wade, which protected a woman’s right to have an abortion. Since then, states have tried to make abortion nearly impossible, prompting a closer look at contraceptives that give both men and women more control over family planning. That conversation has led to the topic of male birth control, something doctors haven’t quite mastered. Until now, perhaps.

Kevin Eisenfrats is the founder of Contraline, a company that has developed a male contraceptive in the form of a non-hormonal, sperm-blocking gel that’s injected into the scrotum. Eisenfrats discussed building this company, medical testing for it, and the medical innovation he had to create to make it all possible on TechCrunch’s Found podcast.

“Believe it or not, people have actually been working on male contraceptives since the female contraceptive pill came out in 1960,” Eisenfrats told Found. “So it’s not like this is a forgotten area of research. It’s just that the science is really, really difficult.”

Eisenfrats was inspired to launch his company after watching the MTV show “16 and Pregnant.” Years later, Contraline’s latest product, ADAM, is entering clinical trials in Australia, a country he says has so far been most receptive to the idea of male contraception. He plans to head to the U.S. soon and is gearing up for the long FDA approval process. So far, Eisenfrats hasn’t had the hardest time fundraising. He says support has held up even given the U.S. political climate, and that the debates have only increased interest in his work.

“We attract a certain type of investor that is really here for the long run,” he continued.

He also spoke about the importance of hiring the right team when it comes to a product like this and broke down some of the challenges that come with being the founder of a medical startup. For him especially, there have been regulatory hurdles, fundraising, and testing medical hypotheses before landing on the right one.

All of the challenges have made him and his team stronger, he said, and he hinted about one day wanting to expand into Europe and other markets. He also spoke about possibly wanting to find ways to use his technology to develop non-hormonal female contraceptives, tackling other sorts of reproductive health issues that remain unsolved.

“We want to go after these big unsolved reproductive health problems,” he said. “We’re willing to take that risk that others are not willing to take.”

OpenAI says it’s building a tool to let content creators ‘opt out’ of AI training | TechCrunch

OpenAI says that it’s developing a tool to let creators better control how their content’s used in training generative AI.

The tool, called Media Manager, will allow creators and content owners to identify their works to OpenAI and specify how they want those works to be included or excluded from AI research and training.

The goal is to have the tool in place by 2025, OpenAI says, as the company works with “creators, content owners and regulators” toward a standard — perhaps through the industry steering committee it recently joined.

“This will require cutting-edge machine learning research to build a first-ever tool of its kind to help us identify copyrighted text, images, audio and video across multiple sources and reflect creator preferences,” OpenAI wrote in a blog post. “Over time, we plan to introduce additional choices and features.”

It’d seem Media Manager, whatever form it ultimately takes, is OpenAI’s response to growing criticism of its approach to developing AI, which relies heavily on scraping publicly available data from the web. Most recently, eight prominent U.S. newspapers including the Chicago Tribune sued OpenAI for IP infringement relating to the company’s use of generative AI, accusing OpenAI of pilfering articles for training generative AI models that it then commercialized without compensating — or crediting — the source publications.

Generative AI models including OpenAI’s — the sorts of models that can analyze and generate text, images, videos and more — are trained on an enormous number of examples usually sourced from public sites and data sets. OpenAI and other generative AI vendors argue that fair use, the legal doctrine that allows for the use of copyrighted works to make a secondary creation as long as it’s transformative, shields their practice of scraping public data and using it for model training. But not everyone agrees.

OpenAI, in fact, recently argued that it would be impossible to create useful AI models absent copyrighted material.

But in an effort to placate critics and defend itself against future lawsuits, OpenAI has taken steps to meet content creators in the middle.

OpenAI last year allowed artists to “opt out” of and remove their work from the data sets that the company uses to train its image-generating models. The company also lets website owners indicate via the robots.txt standard, which gives instructions about websites to web-crawling bots, whether content on their site can be scraped to train AI models. And OpenAI continues to ink licensing deals with large content owners, including news organizations, stock media libraries and Q&A sites like Stack Overflow.
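For reference, the robots.txt mechanism is just a plain-text file at a site’s root. OpenAI documents GPTBot as the user agent for its training crawler, so a site owner who wants to keep their content out of training data can disallow it while leaving other crawlers unaffected:

```text
# robots.txt at https://example.com/robots.txt
# Block OpenAI's training crawler from the whole site...
User-agent: GPTBot
Disallow: /

# ...while leaving other well-behaved crawlers unrestricted.
User-agent: *
Allow: /
```

Compliance is voluntary on the crawler’s part, which is one reason critics argue robots.txt alone isn’t a sufficient opt-out.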

Some content creators say OpenAI hasn’t gone far enough, however.

Artists have described OpenAI’s opt-out workflow for images, which requires submitting an individual copy of each image to be removed along with a description, as onerous. OpenAI reportedly pays relatively little to license content. And, as OpenAI itself acknowledges in the blog post Tuesday, the company’s current solutions don’t address scenarios in which creators’ works are quoted, remixed or reposted on platforms they don’t control.

Beyond OpenAI, a number of third parties are attempting to build universal provenance and opt-out tools for generative AI.

Startup Spawning AI, whose partners include Stability AI and Hugging Face, offers an app that identifies and tracks bots’ IP addresses to block scraping attempts, as well as a database where artists can register their works to disallow training by vendors who choose to respect the requests. Steg.AI and Imatag help creators establish ownership of their images by applying watermarks imperceptible to the human eye. And Nightshade, a project from the University of Chicago, “poisons” image data to render it useless or disruptive to AI model training.

Copilot Chat in GitHub’s mobile app is now generally available | TechCrunch

GitHub on Tuesday announced that Copilot Chat, its AI chat interface for asking coding-related questions and code generation, is now generally available in its mobile app. The Microsoft-owned developer platform first announced this feature last November.

At first glance, a mobile app may not be the most obvious place to use GitHub’s Copilot Chat. That’s not, after all, where developers do their work. But GitHub is betting that there are quite a few use cases for Copilot Chat on mobile that make this a worthwhile effort.

As Mario Rodriguez, GitHub’s recently promoted SVP of Product, told me, the mobile app is already very popular for performing tasks like starring repos and some of the social features GitHub has to offer. Many developers are also using the app, which launched in late 2019, to quickly review small pull requests while on the go. Some developers are also already using Copilot Chat, which launched on mobile in beta a few months ago, to ask additional questions about those pull requests.

Image Credits: GitHub / Getty Images

General coding questions are also a popular use case. “We see that a fair amount: You’re on the go, maybe with friends, and someone asks you a question. You’re like, ‘well, actually, I don’t remember the details of that, so let me look that up really quick and ask Copilot […],” Rodriguez explained.

Some developers, too, are using the mobile chat feature to ask questions about specific repos while on the go.

“Mobile is usually optimized to get a task done,” Rodriguez said when I asked him how the team thinks about designing for mobile. “If you think about the way that we have done our mobile interfaces, it’s optimized to get a task done because if you are on the go, the time for you to do something to completion is very short at times. You might be just having coffee and you might only have five minutes before the kids wake up and come down the stairs. So you want to get something done very, very quickly.”

Image Credits: GitHub

To enable this, the Copilot icon is now front-and-center in the mobile app. “When you open the mobile app, there’s a little Copilot icon right there, and it’s very quick to start a conversation with Copilot and get the answer you need,” Rodriguez said. “I think the innovation that we’re bringing to the mobile device at the very beginning is going to be mainly that: How can we get you to that answer very quickly.”

But he also said that the company’s future vision is quite a bit broader and more akin to what GitHub is doing with its recently announced Workspace, which the company describes as a “Copilot-native developer environment” that allows developers to plan, build and test code in natural language.

“I think how we want to evolve Copilot is not only about, OK, let’s help you get some tasks done, but really take it to the next level and really get you to create a program in your natural language in a very quick way,” Rodriguez said. He said that this could enable many people who aren’t trained as developers to build tools that help them get their jobs done faster.

Google’s Pixel Tablet is now available without the thing that makes it interesting | TechCrunch

When we reviewed the Pixel Tablet roughly this time last year, we noted in no uncertain terms, “It’s all about the dock.” It was, after all, the one thing that truly distinguished the device from countless other uninspired Android tablets. Now I’m pleased to report that you, the consumer, can purchase the device without the bit that really made it good and interesting.

More buying options are always good, of course, but the charging speaker dock is far and away the most interesting part of the system. It puts the device to use as a big smart display when you’re not using it in tablet mode.

Think of it as a bigger, more versatile version of the Nest Hub Max, a device we haven’t heard about in a couple of years.

(Sidenote: what the hell is going on with the Nest Hub line, anyway? Or Google Assistant, for that matter? Will Gemini replace Assistant outright soon? Will these questions and more be answered next week at Google I/O?)

The announcement rolled out alongside the arrival of Google’s Pixel 8a budget handset. It also coincidentally dropped a couple of hours after Apple introduced new versions of the iPad Air and iPad Pro.

The Pixel Tablet is also now available in Italy and Spain for the first time.
