AWS, Amazon’s cloud computing business, wants to become the go-to place for companies to host and fine-tune their custom generative AI models.
Today, AWS announced the launch of Custom Model Import (in preview), a new feature in Bedrock, AWS’ enterprise-focused suite of generative AI services. The feature lets organizations import and access their in-house generative AI models as fully managed APIs.
Companies’ proprietary models, once imported, benefit from the same infrastructure as other generative AI models in Bedrock’s library (e.g., Meta’s Llama 3 or Anthropic’s Claude 3). They’ll also get tools to expand their knowledge, fine-tune them and implement safeguards to mitigate their biases.
“There have been AWS customers that have been fine-tuning or building their own models outside of Bedrock using other tools,” Vasi Philomin, VP of generative AI at AWS, told TechCrunch in an interview. “This Custom Model Import capability allows them to bring their own proprietary models to Bedrock and see them right next to all of the other models that are already on Bedrock — and use them with all of the workflows that are also already on Bedrock, as well.”
According to a recent poll by Cnvrg, Intel’s AI-focused subsidiary, the majority of enterprises are approaching generative AI by building their own models and refining them for their applications. Those enterprises see infrastructure, including cloud compute infrastructure, as their greatest barrier to deployment.
With Custom Model Import, AWS aims to meet that need while keeping pace with cloud rivals. (Amazon CEO Andy Jassy foreshadowed as much in his recent annual letter to shareholders.)
For some time, Vertex AI, Google’s analog to Bedrock, has allowed customers to upload generative AI models, tailor them and serve them through APIs. Databricks, too, has long provided toolsets to host and tweak custom models, including its own recently released DBRX.
Asked what sets Custom Model Import apart, Philomin asserted that it — and by extension Bedrock — offers a wider breadth and depth of model customization options than the competition, adding that “tens of thousands” of customers today are using Bedrock.
“Number one, Bedrock provides several ways for customers to deal with serving models,” Philomin said. “Number two, we have a whole bunch of workflows around these models — and now customers’ [models] can stand right next to all of the other models that we have already available. A key thing that most people like about this is the ability to be able to experiment across multiple different models using the same workflows, and then actually take them to production from the same place.”
So what are the alluded-to model customization options?
Philomin points to Guardrails, which lets Bedrock users configure thresholds to filter — or at least attempt to filter — models’ outputs for things like hate speech, violence and private personal or corporate information. (Generative AI models are notorious for going off the rails in problematic ways, including leaking sensitive info; AWS’ models have been no exception.) He also highlighted Model Evaluation, a Bedrock tool customers can use to test how well a model — or several — performs across a given set of criteria.
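To make the thresholding idea concrete, here’s a toy sketch of category-based output filtering. The category names, marker lists and scoring function below are entirely hypothetical stand-ins; the real Guardrails service classifies outputs with its own models and is configured through the Bedrock console or API, not code like this.

```python
# Toy illustration of threshold-based output filtering, in the spirit of
# Bedrock Guardrails. Everything here (categories, markers, thresholds)
# is a made-up stand-in for the service's learned classifiers.

THRESHOLDS = {"hate": 0.5, "violence": 0.5, "pii": 0.3}

def score(text: str) -> dict:
    """Stand-in classifier: flags outputs containing obvious markers.
    A real guardrail uses learned classifiers, not keyword checks."""
    markers = {
        "hate": ["<slur>"],
        "violence": ["attack the"],
        "pii": ["ssn:", "credit card"],
    }
    lowered = text.lower()
    return {
        cat: (1.0 if any(m in lowered for m in words) else 0.0)
        for cat, words in markers.items()
    }

def apply_guardrail(model_output: str) -> str:
    """Block the output if any category score meets its threshold."""
    scores = score(model_output)
    violations = [c for c, s in scores.items() if s >= THRESHOLDS[c]]
    if violations:
        return f"[Blocked: {', '.join(sorted(violations))}]"
    return model_output

print(apply_guardrail("The capital of France is Paris."))
print(apply_guardrail("Sure, here is the record. SSN: 123-45-6789"))
```

The point is the shape of the mechanism — per-category scores compared against user-configured thresholds — not the scoring itself, which in the real service is done by trained classifiers.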
Both Guardrails and Model Evaluation are now generally available following a several-months-long preview.
I feel compelled to note here that Custom Model Import only supports three model architectures at the moment: Hugging Face’s Flan-T5, Meta’s Llama and Mistral’s models. Also, Vertex AI and other Bedrock-rivaling services, including Microsoft’s AI development tools on Azure, offer more or less comparable safety and evaluation features (see Azure AI Content Safety, model evaluation in Vertex, etc.).
What is unique to Bedrock, though, is AWS’ Titan family of generative AI models. And, coinciding with the release of Custom Model Import, there have been several noteworthy developments on that front.
Titan Image Generator, AWS’ text-to-image model, is now generally available after launching in preview last November. As before, Titan Image Generator can create new images from a text description or customize existing images — for example, swapping out an image’s background while retaining the subjects in the image.
Compared to the preview version, Titan Image Generator in GA can generate images with more “creativity,” said Philomin without going into detail. (Your guess as to what that means is as good as mine.)
I asked Philomin if he had any more details to share about how Titan Image Generator was trained.
At the model’s debut last November, AWS was vague about which data, exactly, it used in training Titan Image Generator. Few vendors readily reveal such information; they see training data as a competitive advantage and thus keep it and info relating to it close to the chest.
Training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Plaintiffs in several cases making their way through the courts challenge vendors’ fair use defenses, arguing that text-to-image tools replicate artists’ styles without the artists’ explicit permission and allow users to generate new works resembling artists’ originals for which the artists receive no payment.
Philomin would only tell me that AWS uses a combination of first-party and licensed data.
“We have a combination of proprietary data sources, but also we license a lot of data,” he said. “We actually pay copyright owners licensing fees in order to be able to use their data, and we do have contracts with several of them.”
It’s more detail than we got in November. But I have a feeling that Philomin’s answer won’t satisfy everyone, particularly the content creators and AI ethicists arguing for greater transparency around generative AI model training.
In lieu of transparency, AWS says it’ll continue to offer an indemnification policy that covers customers in the event a Titan model like Titan Image Generator regurgitates (i.e., spits out a mirror copy of) a potentially copyrighted training example. (Several rivals, including Microsoft and Google, offer similar policies covering their image generation models.)
To address another pressing ethical threat — deepfakes — AWS says that images created with Titan Image Generator will, as during the preview, come with a “tamper-resistant” invisible watermark. Philomin says that the watermark has been made more resistant in the GA release to compression and other image edits and manipulations.
Segueing into less controversial territory, I asked Philomin whether AWS, like Google, OpenAI and others, is exploring video generation given the excitement around (and investment in) the tech. Philomin didn’t say that AWS wasn’t … but he wouldn’t hint at any more than that.
“Obviously, we’re constantly looking to see what new capabilities customers want to have, and video generation definitely comes up in conversations with customers,” Philomin said. “I’d ask you to stay tuned.”
In one last piece of Titan-related news, AWS released the second generation of its Titan Embeddings model, Titan Text Embeddings V2. This model converts text to numerical representations, called embeddings, to power search and personalization applications. The first-generation Embeddings model did that, too, but AWS claims that Titan Text Embeddings V2 is overall more efficient, cost-effective and accurate.
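For readers unfamiliar with embeddings, here’s a minimal, model-agnostic sketch of how they power search: documents and queries are mapped to vectors, and results are ranked by cosine similarity. The vectors below are made up for illustration; in practice each would come from a model call (with Bedrock, a model such as Titan Text Embeddings V2), and real embeddings have hundreds or thousands of dimensions.

```python
# How embeddings power search, in miniature: rank documents by the
# cosine similarity between their vectors and the query's vector.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical, hand-written embeddings standing in for model output.
docs = {
    "return policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
}
query = [0.85, 0.15, 0.05]  # imagined embedding of "how do I return an item?"

best = max(docs, key=lambda d: cosine_similarity(query, docs[d]))
print(best)  # → return policy
```

Philomin’s storage claim below makes sense in this frame: smaller vectors mean less to store and compare per document, so the trade-off is between embedding size and how much ranking accuracy survives the compression.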
“What the Embeddings V2 model does is reduce the overall storage [necessary to use the model] by up to four times while retaining 97% of the accuracy,” Philomin claimed, “outperforming other models that are comparable.”
We’ll see if real-world testing bears that out.