SBF’s defense puts forth a 35-minute last-ditch effort | TechCrunch

As Sam Bankman-Fried took the stand for the last time in his trial for fraud and money laundering, his main lawyer, Mark Cohen, returned to question him on a number of themes that prosecutors touched on in the past few days. But this time, he took a different approach, presenting the defendant as a person who was acting in good faith.

“We lived and died by having a better product than competitors,” Bankman-Fried said of his now-bankrupt crypto exchange, FTX. Bankman-Fried owned majority stakes in both FTX and its sister trading firm, Alameda Research, but insisted he wanted the best for both.

Bankman-Fried has been testifying since Thursday afternoon, and the prosecution has grilled him for the past few days as part of its cross-examination. On Tuesday, when the defense lawyers returned to question him on redirect, he was swaying slightly on the stand, shoulders drawn down and forward. He spoke softly, but was more talkative than earlier in the day and on Monday, when he was being questioned.

Contrary to the picture prosecutors tried to paint, Bankman-Fried insisted that he was not involved in the day-to-day trading operations of Alameda or its “core operations,” but instead emphasized his concern for its leadership at the time, saying he wanted to be involved in many of its venture investments, decisions on hedging, and other areas.

OpenAI, Google and a ‘digital anthropologist’: the UN forms a high-level board to explore AI governance | TechCrunch

The halls of power are waking up to the potential and pitfalls of artificial intelligence. The big question will be how much of an impact they will have on the march of progress if (and when) there are missteps. Yesterday, the United Nations announced a new AI advisory board — 39 people from across government, academia and industry — with an aim “to undertake analysis and advance recommendations for the international governance of AI.”

The advisory board will operate as a bridging group, covering any other initiatives that are put together around AI by the international organization, the UN said. Indeed, in forming a strategy and approach on AI, the UN has been talking for the better part of a month with industry leaders and other stakeholders, from what we understand. The plan is to bring together recommendations on AI by the summer of 2024, when the UN plans to hold a “Summit of the Future” event. The new advisory board is meeting for the first time today.

The UN said that the body will be tasked with “building a global scientific consensus on risks and challenges, helping harness AI for the Sustainable Development Goals, and strengthening international cooperation on AI governance.”

What is most notable about the board, in these early days, is its generally positive positioning. Right now, a number of people are speaking out about the risks of AI, whether in the form of national security threats, data protection or misinformation; and next week a number of global leaders and experts in the space will converge in the U.K. to try to address some of this at the AI Safety Summit. It’s not clear how these and other initiatives formed at national and international levels will work together, or indeed enforce anything beyond their jurisdictions.

But in keeping with the ethos of the UN, the group of 39 — a wide-ranging list that includes executives from Alphabet/Google and Microsoft, a “digital anthropologist,” a number of professors and government officials — is high-level and taking more of a positive-to-constructive position with a focus on international development.

“AI could power extraordinary progress for humanity. From predicting and addressing crises, to rolling out public health programmes and education services, AI could scale up and amplify the work of governments, civil society and the United Nations across the board,” UN Secretary General António Guterres said of the aim of the group. “For developing economies, AI offers the possibility of leapfrogging outdated technologies and bringing services directly to people who need them most. The transformative potential of AI for good is difficult even to grasp.”

The UN refers to the group’s “bridging” role and it may be that it gets involved in more critical explorations beyond “AI for good.” Gary Marcus, who took part in a fireside chat at Disrupt in September to talk about the risks of AI, arrived for our conference in San Francisco on a red-eye from New York, where he was meeting with UN officials. While new innovations in areas like generative AI have definitely put the technology front and center in the mass market, Marcus’ framing of the challenges underscores some of the more concerning aspects that have been voiced:

“My biggest short-term fear about AI is that misinformation, deliberate misinformation, created at wholesale quantities is going to undermine democracy and all kinds of things are going to happen after that,” he said last month. “My biggest long-term fear is we have no idea how to control the AI that we’re building now and no idea how to control the AI that we’re building in the future. And that lays us open to machines doing all kinds of things that we didn’t intend for them to do.”

The full list of people on the advisory board:

Sam Bankman-Fried says he didn’t defraud FTX customers | TechCrunch

On Friday, Sam Bankman-Fried sat on the stand, once again in an oversized gray suit and purple tie, and testified in front of a jury. He’s on trial for seven charges related to fraud and money laundering and has been sitting silently the past four weeks, waiting for his chance to speak.

Bankman-Fried co-founded FTX in 2019 alongside Gary Wang, after they co-founded crypto trading firm Alameda Research in the fall of 2017. At the time, they were 25 years old with no history of starting a company, he said. When he got into the crypto world, he said he knew “basically nothing.” But over time, he said, his vision grew to “build the best [crypto exchange] product on the market” and to “move the ecosystem forward.”

“Turned out [to be] the opposite of that,” Bankman-Fried said. “A lot of people got hurt.”

When asked by his lead lawyer, Mark Cohen, whether he defrauded or took customer funds, Bankman-Fried said, “No, I did not.”

On Thursday, Judge Lewis Kaplan heard from Bankman-Fried without a jury to determine whether his testimony could be shared with jurors. Among the topics: FTX’s data retention policy, the fact that he “skimmed over” terms of service, Alameda’s use of the exchange’s customer funds, and more information about Dan Friedberg, whom Bankman-Fried hired to be FTX’s general counsel.

On the stand on Friday, Bankman-Fried seemed more thoughtful with his answers than he had the previous day. “I made a number of small mistakes,” Bankman-Fried said Friday. But he said the biggest mistake was having no risk management team at FTX, which led to “significant oversights.”

Generative AI startup 1337 (Leet) is paying users to help create AI-driven influencers | TechCrunch

There’s been a rise in virtual influencers in recent years — computer-generated personalities who are just as active on social media platforms as real humans are. Many companies are investing in what’s called the “digital human economy,” a bullish market forecast to reach $125 billion by 2035, per Gartner.

And now, thanks to the fascination with AI image generators like Midjourney and Stable Diffusion, creating virtual influencers is a breeze, enabling anyone to produce fabricated personas that engage with fans as if they were actual internet personalities.

One such company, 1337 (pronounced Leet), is leveraging generative AI to build a community of AI-driven micro-influencers: smaller content creators with hyper-personalized interests and diverse backgrounds who want to connect with people from niche communities like gardening, emo music, vintage fashion, classical literature and more.

The startup — named after a popular term in ’80s gaming and hacker culture — emerged from stealth today with $4 million in funding.

Rather than only using AI to create these influencers, 1337 also allows users to suggest what they do and say.

“Today, we have a rare opportunity to combine human interaction with early-stage AI,” co-founder and CEO Jenny Dearing told TechCrunch. “In a world oversaturated with influencers who are often either too commercial or too impersonal, 1337 introduces diverse, AI-driven entities that engage users in entirely new, dynamic ways.”

Plus, users are paid for their contributions. The company is currently offering a flat fee; however, it declined to disclose the amount publicly.

“Once our ‘Entities’ have followings, we will make this fee and bonus opportunities tied to the engagement of the community with content,” Dearing said.

1337 is debuting 50 AI-driven influencers, dubbed “Entities,” who each have their own set of skills, traits and interests. For instance, there’s Daria (she/her, they/them), a 19-year-old unapologetic music blogger who has a passion for “emo culture” and is an “advocate for mental health,” according to 1337’s website. Daria even has their own backstory: after stumbling upon their cousin’s vinyl record collection, they resonated with the “raw emotion” in the lyrics and decided to create a blog for like-minded people struggling with mental health challenges.

The lives of each Entity are certainly elaborate. They even share what their home looks like, their favorite outdoor/indoor hangout spots and their philosophy on the world. 1337 offers a wide range of Entities to follow, all with different ages, gender identities, nationalities and occupations. Plus, they all have their own Instagram accounts, LinkedIn profiles and public Spotify playlists. Followers can even interact with the Entities through Instagram comments and direct messages.

“Our vision goes beyond mere chatbot interactions; we’re crafting entities that evolve with their niche communities, adapting to the rapidly changing digital landscape where technology is evolving constantly,” Dearing added. “In doing so, we’re breaking new ground, and we firmly believe that this will redefine how we think about social media engagement and how people interact with each other online.”

Entities are still in beta. They will officially launch in January 2024.

Entities are created and designed by the founding 1337 team in collaboration with users and AI models like OpenAI’s GPT-4 (for written captions), Midjourney (art) and 1337’s in-house solution.

Along with Dearing, the founding team includes co-founder Robin Raszka, who previously founded AI avatar startup Alter, which sold to Google in 2022 for $100 million. Jan Maly, a former machine learning engineer at software design company STRV, is chief technology officer. There are also a few key strategic advisors on the team, including Bailey Richardson, former head of community at Instagram, who currently leads both marketing and community at Substack.

“In our toolkit, we leverage open-source software (OSS) large language models (LLMs) for multimedia content analysis, focusing on images, and customized OSS LLMs integrated into our workflows,” Raszka explained to us. “When we first ventured into generated content, we faced the challenge of maintaining consistent facial features in our virtual entities. Our pursuit of perfection led us to develop an in-house solution… we can now add a new entity and ensure his or her face remains consistent across images.”

To co-create with an Entity, the user participates in a Discord chat by entering a prompt to generate a caption and four photos of the Entity doing something, posing in a certain location or experiencing an event. 1337 also offers a mode called “Entity Point of View,” where users can envision how the Entity views the world around them.

The moderation team approves the post on the Entity’s behalf, fine-tuning content based on pre-established guidelines like their dispositions, behaviors and tone of voice. When a post is approved, the creator is credited in the caption.

Image Credits: 1337

“Our creators, especially Gen-Zers, are demonstrating incredible engagement with the Entities and are creating and curating far more content on a daily basis than we can publish,” Dearing said.

Eventually, 1337 will allow its “super creators” to create a completely new Entity from scratch. Next year, the company will also launch a revenue model in which creators who make Entities will be compensated with a percentage of revenue.

“For example, any brand collaborations would provide a revenue share model leaning heavily on supporting the creator vs. 1337,” Dearing said. “We have also discussed innovative models similar to what Substack has done, offering creators the opportunity for equity in the business.”

“We are also exploring how we may support solopreneurs and nano/micro-influencers to create new ways of connecting with their existing communities and audiences to scale their own businesses,” she added.

There are a few other exciting developments in the pipeline, the company revealed to us, including the ability for Entities to speak, allowing them to host podcasts and produce videos. Audio for Entities will launch in the first half of 2024.

Investors who participated in 1337’s funding round include Credo Ventures, GFR Fund, Treble Capital and Roosh Ventures, as well as angels such as Hugging Face CEO Clément Delangue and impact investor and model Natalia Vodianova.

The primary focus of the funding is to help grow its global creator community. So far, several hundred people have co-created content for the Entities.

“Today, some of the most dynamic applications of LLMs and diffusion models are found within the realms of human creativity and connectivity. This supports our conviction that the next wave of AI will have a significant impact on the social media landscape, affecting creators, followers, and advertisers alike,” said Karolina Mrozkova, general partner and investor at Credo Ventures.

Yepic fail: This startup promised not to make deepfakes without consent, but did anyway | TechCrunch

U.K.-based startup Yepic AI claims to use “deepfakes for good” and promises to “never reenact someone without their consent.” But the company did exactly what it claimed it never would.

In an unsolicited email pitch to a TechCrunch reporter, a representative for Yepic AI shared two “deepfaked” videos of the reporter, who had not given consent to having their likeness reproduced. Yepic AI said in the pitch email that it “used a publicly available photo” of the reporter to produce two deepfaked videos of them speaking in different languages.

The reporter requested that Yepic AI delete the deepfaked videos it created without permission.

Deepfakes are photos, videos or audio created by generative AI systems that are designed to look or sound like an individual. While deepfakes are not new, the proliferation of generative AI systems allows almost anyone to make convincing deepfaked content of anyone else with relative ease, including without their knowledge or consent.

On a webpage it titles “Ethics,” Yepic AI said: “Deepfakes and satirical impersonations for political and other purposed [sic] are prohibited.” The company also said in an August blog post: “We refuse to produce custom avatars of people without their express permission.”

It’s not known whether the company has generated deepfakes of anyone else without permission; the company declined to say.

When reached for comment, Yepic AI chief executive Aaron Jones told TechCrunch that the company is updating its ethics policy to “accommodate exceptions for AI-generated images that are created for artistic and expressive purposes.”

In explaining how the incident happened, Jones said: “Neither I nor the Yepic team were directly involved in the creation of the videos in question. Our PR team have confirmed that the video was created specifically for the journalist to generate awareness of the incredible technology Yepic has created.”

Jones said the videos, and the image used to create the reporter’s likeness, were deleted.

Predictably, deepfakes have evaded some moderation systems and tricked unsuspecting victims into falling for scams, unknowingly giving away their crypto or personal information. In one case, fraudsters used AI to spoof the voice of a company’s chief executive in order to trick staff into making a fraudulent transaction worth hundreds of thousands of euros. And before deepfakes became popular with fraudsters, people were using them to create non-consensual porn and sex imagery victimizing women: realistic-looking videos made with the likenesses of women who had not consented to be part of them.

LinkedIn goes big on new AI tools for learning, recruitment, marketing and sales, powered by OpenAI | TechCrunch

LinkedIn — the Microsoft-owned social platform for those networking for work or recruitment — is now 21 years old, an aeon in the world of technology. To stay current with what the working world is thinking about most these days, and to keep its nearly 1 billion users engaging on its platform, today the company is unveiling a string of new AI features spanning its job hunting, marketing and sales products. They include a big update to its Recruiter talent sourcing platform, with AI assistance built into it throughout; an AI-powered LinkedIn Learning coach; and a new AI-powered tool for marketing campaigns.

The social platform — which pulled in $15 billion in revenues last year, it tells me — has been slowly putting in a number of AI-based features across its product portfolio. Among them, back in March it debuted AI-powered writing suggestions for those penning messages to other users on the platform. And recruiters have also been seeing a series of tests around AI-created job descriptions and other features this year. This latest raft of announcements is building on that.

For some context, LinkedIn is not entirely new to the AI rodeo. It has, in fact, been a heavy user of artificial intelligence over the years. But until recently most of that has been out of sight. Ever been surprised (or unnerved) at how the platform suggests connections to you that are strangely right up your street? That’s AI. All those insights that LinkedIn produces about what its user base is doing and how it’s evolving? That’s AI, too.

“In one way or another, AI powers everything at LinkedIn,” senior engineer Deepak Agarwal wrote back in 2018. (He’s still at the company.)

What’s changed now is the world: AI has become a mainstream preoccupation, led in no small part by the advances of OpenAI and the evolution of services like ChatGPT, which let everyday people experience firsthand how a computer brain can do work faster than they previously could on their own.

And what’s also changed is that LinkedIn — which has in the past built a lot of its own AI tooling for those back-end operations — is now looking outward. The company, which was acquired by Microsoft some years ago, is tapping tech from OpenAI and Microsoft to power a number of its new features, it confirmed to me.

OpenAI, as you know, is now 49% owned by Microsoft, which made a big investment of $13 billion in the company earlier this year. That’s been a very strategic stake, which has seen Microsoft infuse a number of its own products with OpenAI tech. While VP of engineering Erran Berger tells me that the company will continue to evaluate what tech it uses, and whether it will build its own large language models and other AI products, for now LinkedIn is going to tap its parent company and its parent’s prime investment.

Here is a quick rundown of all that is new:

Recruiter 2024 is a new AI-assisted recruiting experience, LinkedIn says. It will use generative AI to help recruitment professionals come up with better search strings to surface stronger candidate lists. Specifically, much as with ChatGPT, recruiters will now be able to use more conversational language to home in on who they hope to find. Search results will also include more suggestions outside of what recruiters might think they are looking for.

LinkedIn Learning will be incorporating AI in the form of a “learning coach,” essentially built as a chatbot. Initially, the advice it gives will be trained on suggestions and tips, and it will sit firmly in the camp of soft skills. One example: “How can I delegate tasks and responsibility effectively?” The coach might suggest actual courses, but more importantly, it will also provide information and advice directly to users. LinkedIn itself has a giant catalogue of learning videos, covering both those soft skills and the technical skills and other knowledge needed for specific jobs. It will be interesting to see if LinkedIn extends the coach to cover that material, too.

Marketing will also be getting an AI boost, specifically with a new product called Accelerate. While marketing and marketers have increasingly taken on technical expertise, this is an interesting shift: the idea, again, will be to let people run campaigns on LinkedIn more easily, bypassing that heavy lift. One drawback is that Accelerate is limited to campaigns and data from within the LinkedIn walled garden. Given that marketing campaigns typically extend across multiple platforms and audiences, users might find the impact of the new tool limited.

Lastly, inside sales and selling to B2B audiences are also getting the AI treatment. This is a somewhat emerging area on LinkedIn, where salespeople focused on B2B selling leverage the platform to find new customers or to connect more tightly with those already in their networks. The new AI feature will be a search function to help find those potential connections more easily and then enter conversations with those leads. Given that AI sales tools of this kind are well established in the world at large — I’ve even heard VCs complain that they can’t consider “yet another AI sales startup” — this addition seems somewhat overdue for LinkedIn.
