US, UK police identify and charge Russian leader of LockBit ransomware gang | TechCrunch

The identity of the leader of one of the most infamous ransomware groups in history has finally been revealed.

On Tuesday, a coalition of law enforcement led by the U.K.’s National Crime Agency announced that Russian national Dmitry Yuryevich Khoroshev, 31, is the person behind the nickname LockBitSupp, the administrator and developer of the LockBit ransomware. The U.S. Department of Justice also announced the indictment of Khoroshev, accusing him of computer crimes, fraud and extortion.

“Today we are going a step further, charging the individual who we allege developed and administered this malicious cyber scheme, which has targeted over 2,000 victims and stolen more than $100 million in ransomware payments,” Attorney General Merrick B. Garland was quoted as saying in the announcement.

According to the DOJ, Khoroshev is from Voronezh, a city in Russia around 300 miles south of Moscow.

“Dmitry Khoroshev conceived, developed, and administered Lockbit, the most prolific ransomware variant and group in the world, enabling himself and his affiliates to wreak havoc and cause billions of dollars in damage to thousands of victims around the globe,” said U.S. Attorney Philip R. Sellinger for the District of New Jersey, where Khoroshev was indicted.

The law enforcement coalition announced the identity of LockBitSupp in press releases, as well as on LockBit’s original dark web site, which the authorities seized earlier this year. On the site, the U.S. Department of State announced a reward of $10 million for information that could help the authorities arrest and convict Khoroshev.

The U.S. government also announced sanctions against Khoroshev, which effectively bars anyone from transacting with him, such as victims paying a ransom. Sanctioning the people behind ransomware makes it more difficult for them to profit from cyberattacks. Violating sanctions, including paying a sanctioned hacker, can result in heavy fines and prosecution.

LockBit has been active since 2020, and, according to the U.S. cybersecurity agency CISA, the group’s ransomware variant was “the most deployed” in 2022.

Europol, which participated in the law enforcement operation, said in a statement that authorities now have over 2,500 decryption keys that can help victims unlock data previously encrypted by the gang.

The NCA published an infographic on the seized LockBit site, which included statistics on LockBit’s activities. According to the data, the group targeted more than 100 hospitals, health care companies and facilities, including a children’s hospital. In that case, LockBit said the attack was a mistake and it would block the “partner” responsible for the attack and provide the decryptor keys to unlock the files. However, according to the NCA, “that was a lie,” since the partner remained active and the decryptor keys “didn’t work properly.”

The NCA, for its part, invited Khoroshev to get in touch if he disputes their findings. “You’re welcome to do this in person?” the NCA said.

On Sunday, the law enforcement coalition restored LockBit’s seized dark web site to publish a list of posts that were intended to tease the latest revelations. In February, authorities announced that they had taken control of LockBit’s site and replaced the hackers’ posts with their own, which included a press release and other information related to what the coalition called “Operation Cronos.”

Shortly after, LockBit appeared to make a return with a new site and a new list of alleged victims, which was being updated as of Monday, according to a security researcher who tracks the group.

For weeks, LockBit’s leader, known as LockBitSupp, had been vocal and public in an attempt to dismiss the law enforcement operation, and to show that LockBit is still active and targeting victims. In March, LockBitSupp gave an interview to news outlet The Record in which they claimed that Operation Cronos and law enforcement’s actions don’t “affect business in any way.”

“I take this as additional advertising and an opportunity to show everyone the strength of my character. I cannot be intimidated. What doesn’t kill you makes you stronger,” LockBitSupp told The Record.

Here’s everything Apple just announced at its Let Loose event, including new iPad Pro with M4 chip, iPad Air, Apple Pencil and more | TechCrunch

Today is Apple iPad Event day, and we’re ready to bring you all the iPad goodness you can stand, including whether the rumors about what’s coming, like a new iPad Pro, iPad Air, Apple Pencil and a keyboard case, are true. Don’t have time to watch? That’s OK, we’ve summed up the most important parts of the event below.

iPad Air, Apple iPad Event 2024

Image credit: Apple

The iPad Air is getting a facelift today, and one of the most important additions is that it now comes in two sizes: an 11-inch and a 13-inch display. The cost is $599 for the 11-inch and $799 for the 13-inch. You can preorder today, and it will be available “next week.” Read more

And, as a special bonus, Apple finally places the front-facing camera on the landscape edge of the iPad. Read more

iPad Pro, Apple iPad Event 2024

Image Credits: Apple

The iPad Pro is being touted as the thinnest iPad ever. Features include a display built from two OLED panels, which Apple calls Tandem OLED, plus a nano-texture glass option for less glare. It also features the next generation of Apple silicon, the M4 chip, a jump from the M2, as well as new gestures for the Apple Pencil.

In the U.S., the 11-inch iPad Pro starts at $999 for the Wi-Fi model, and $1,199 for the Wi-Fi + Cellular model. The 13-inch iPad Pro starts at $1,299 for the Wi-Fi model, and $1,499 for the Wi-Fi + Cellular model.  Read more

Apple M4 chip

Image credit: Apple

The M4 chip is the fourth generation of Apple’s custom SoCs. It features a new display engine, as well as significantly updated CPU and GPU cores. The base M4 chip comes with 10 CPU cores and 10 GPU cores.

Apple claims that the new CPU is 50% faster than the M2 chip, which powered the last generation of iPad Pros, while the GPU offers a 4x increase in rendering performance, all while maintaining the same performance per watt as the M3. Apple stressed that the new GPU architecture features dynamic caching, hardware-accelerated mesh shading and ray tracing, a first for the iPad. Read more

“We’ve always envisioned iPad as a magical sheet of glass,” said John Ternus, SVP, Hardware Engineering during Apple’s iPad event in Cupertino on Tuesday. “And with the new iPad Pro, we wanted to give customers an even more remarkable visual experience.”

The company did that by bringing OLED to iPad for the very first time, suggesting that the technology delivers the light and color accuracy that iPad Pro owners want, but that a single panel lacks the brightness. The company solved that by creating the Tandem OLED screen, which can support an incredible 1,000 nits of full-screen brightness for both SDR and HDR content, and 1,600 nits of peak HDR brightness. The company says no other device delivers this level of display quality. Read more

Apple Pencil Pro, Apple iPad Event 2024

Image credit: Apple

Shocking as it may seem, it’s been nearly a decade since the first Apple Pencil was announced, way back in 2015. The stylus hasn’t seen much in the way of updates since then. The most significant arrived in 2018, bringing magnetic charging to the line. Last year, meanwhile, saw the arrival of a less expensive model with fewer features and USB-C charging.

The Apple Pencil Pro comes in at $129. Many of its new features come from a new squeeze gesture: you can trigger animations, move and rotate objects, and even apply lens blurring. Read more

Apple Magic Keyboard, Apple iPad Event 2024

Image credit: Apple

Apple announced a new and improved Magic Keyboard, its keyboard accessory for iPad. This is the first major revision since 2020.

The Magic Keyboard has been “completely redesigned” to be much thinner and lighter, Apple says, and now includes a function row for quick access to controls like screen brightness. Beyond that, the new Magic Keyboard features aluminum palm rests and a larger trackpad. Plus it’s more responsive, Apple says, with haptic feedback.

In the U.S. the new 11-inch Magic Keyboard is available for $299 and the new 13-inch Magic Keyboard is available for $349. It comes with layouts for over 30 languages. Read more

Apple Final Cut Camera

Image credit: Apple

The latest version of Final Cut Pro introduces a new feature to speed up your shoot: Live Multicam. It’s a bold move from Apple, transforming your iPad into a multicam production studio, enabling creatives to connect and preview up to four cameras at once, all in one place. From the command post, directors can remotely direct each video angle and dial in exposure, white balance, focus, and more, all within the Final Cut Camera app.

The new companion app lets users connect multiple iPhones or iPads (presumably using the same protocols as the Continuity Camera feature launched a few years ago). Final Cut Pro automatically transfers and syncs each Live Multicam angle so you can seamlessly move from production to editing. Read more

Read more about Apple's 2024 iPad Event

Exclusive: Wayve co-founder Alex Kendall on the autonomous future for cars and robots | TechCrunch

U.K.-based autonomous vehicle startup Wayve started life as a software platform loaded into a tiny electric “car,” a Renault Twizy festooned with cameras. The company’s co-founders, PhD graduates Alex Kendall and Amar Shah, tuned the deep-learning algorithms powering the car’s autonomous systems until they’d got it to drive around the medieval city of Cambridge unaided.

No fancy lidar sensors or radar were needed. They suddenly realized they were on to something.

Fast-forward to today and Wayve, now an AI model company, has raised a $1.05 billion Series C funding round led by SoftBank, NVIDIA and Microsoft. That makes this the U.K.’s largest AI fundraise to date, and among the top 20 AI fundraises globally. Even Meta’s head of AI, Yann LeCun, invested in the company when it was young.

Wayve now plans to sell its autonomous driving model to a variety of auto OEMs as well as to makers of new autonomous robots.

Alex Kendall, CEO, Wayve

Alex Kendall, CEO, Wayve. Image Credits: Wayve

In an exclusive interview, I spoke to Alex Kendall, co-founder and CEO of Wayve, about how the company has been training the model, the new fundraise, licensing plans, and the wider self-driving market.

(Note: The following interview has been edited for length and clarity)

TechCrunch: What tipped the balance to attain this level of funding?

Kendall: Seven years ago, we started the company to build an embodied AI. We have been heads-down building technology […] What happened last year was everything really started to work […] All the elements that are required to make this product dream a reality [came together], and, in particular, the first opportunity to get embodied AI deployed at scale.

Now production vehicles are coming out with GPUs, surrounding cameras, radar and, of course, the appetite to now bring AI onto, and enable, an accelerated journey from assisted to automated driving. So this fundraise is a validation of our technological approach and gives us the capital to go and turn this technology into a product and bring it to market.

Very soon you’ll be able to buy a new car and it’ll have Wayve’s AI on it […] Then this goes into enabling all kinds of embodied AI, not just cars, but other forms of robotics. I think what we want to achieve here is to go way beyond where AI is today with language models and chatbots. To really enable a future where we can trust intelligent machines that we can delegate tasks to, and of course, they can enhance our lives. Self-driving will be the first example of that.

TC: How have you been training your self-driving model these last couple of years?

Kendall: We partnered with Tesco and Ocado to collect data to trial autonomy. That’s been a great way for us to get this technology off the ground, and it continues to be a really important part of our growth story.

TC: What is the plan around licensing the AI to OEMs, to automotive manufacturers? What will be the benefits?

Kendall: We want to enable all the auto manufacturers around the world to work with our AI, of course, across a wide variety of sources. More importantly, we’ll get diverse data from different cars and markets, and that’s going to produce the most intelligent and capable embodied AI.

TC: Which car makers have you sold it to? Who have you landed?

Kendall: We’re working with a number of the top 10 automakers in the world. We’re not ready to announce who they are today.

TC: What moved the needle for Softbank and the other investors in terms of your technology? Was it because you’re effectively platform-independent and every car will now sport cameras around it?

Kendall: That’s largely correct. SoftBank has publicly commented on their focus on AI and robotics, and self-driving [tech] is just the intersection of that. What we’ve seen so far with the AV 1.0 approaches is where they throw all of the infrastructure, HD maps, etc., in a very constrained setting to prove out this technology. But it’s a very far journey from there to something that’s possible to deploy at scale.

We’ve found that — and this is where SoftBank and Wayve are completely aligned in the vision for creating autonomy at scale — by deploying this software and a diverse set of vehicles around the world, millions of vehicles, we can not only build a sustainable business, we can also get diverse data from around the world to train and validate the safety case to be able to deploy AV at scale through “hands off, eyes off” driving around the world.

This architecture operates with the intelligence onboard to make its own decisions. It’s trained on video as well as language, and we bring in general purpose reasoning and knowledge into the system, too. So it can deal with the long-tail, unexpected events that you see on the road. This is the path we’re on.

TC: Where do you see yourself in the landscape at the moment in terms of what’s deployed out there already?

Kendall: There have been a bunch of really exciting proof points, but self-driving has largely plateaued for three years, and there’s been a lot of consolidation in the AV space. What this technology represents, what AI represents, is that it’s completely game-changing. It allows us to drive without the cost and expense of Lidar and HD. That allows us to have the onboard intelligence to operate. It can handle the complexities of unclear lane markings, cyclists and pedestrians, and it’s intelligent enough to predict how others are going to move so it can negotiate and operate in very tight spaces. This makes it possible to deploy technology in a city without causing angst or road rage around you, and to drive in a way that conforms with the driving culture.

TC: You did your first experiments back in the day, peppering the Renault Twizy with cameras. What’s going to happen when car manufacturers put lots of cameras around their cars?

Kendall: Car manufacturers are already building vehicles that make this possible. I wouldn’t name brands, but pick your favorite brand, and particularly with the higher-end vehicles, they have surround cameras, surround radar, and an onboard GPU. All of that is what makes this possible. Also, they’ve now put in place software-defined vehicles, so we can do over-the-air updates and get data off the vehicles.

TC: What’s been your “playbook”?

Kendall: We built a company that has all the pillars required to build. Our playbook has been AI, talent, data and compute. On the talent front, we’ve built a brand that’s at the cross-section of AI and robotics, and we’ve been fortunate enough to bring some of the best minds around the world to come work on this problem. Microsoft’s been a long-standing partner of ours, and the amount of GPU compute they’re giving us in Azure is going to allow us to train a model at the scale of something that we haven’t seen before. A truly enormous, embodied AI model that can actually build the safe and intelligent behavior we need for this problem. And then NVIDIA, of course. Their chips are best-in-class in the market today and make it possible to deploy this technology.

TC: Will all of the training data you get from the brands you work with be mixed together into your model?

Kendall: That’s right. That’s exactly the model we’ve been able to prove. No single car manufacturer is going to produce a model that is safe enough on their own. Being able to train an AI on data from many different car manufacturers is going to be safer and more performant than just one. It’s going to come from more markets.

TC: So you’re effectively going to be the holder of probably the largest amount of training data around driving in the world?

Kendall: That’s certainly our ambition. But we want to make sure that this AI goes beyond driving — like a true embodied AI. It’s the first vision-language-action model that’s capable of driving a car. It’s not just trained on driving data, but also internet-scale text and other sources. We even train our model on the PDF documents from the U.K. government that tell you the Highway Code. We’re going to different sources of data.

TC: So it’s not just cars, but robots as well?

Kendall: Exactly. We’re building the embodied AI foundation model as a general-purpose system trained on very diverse data. Think about domestic robotics. The data [from that] is diverse. It’s not some constrained environment like manufacturing.

TC: How do you plan to scale the company?

Kendall: We continue to grow our AI, engineering and product teams both here [in the U.K.] and in Silicon Valley, and we just started a small team in Vancouver as well. We’re not going to ‘blitzscale’ the company, but use disciplined, purposeful growth. The HQ will remain in the U.K.

TC: Where do you think the centers of talent and innovation are in Europe for AI?

Kendall: It’s pretty hard to look anywhere outside London. I think London is by far the dominant place in Europe. We’re based in London, Silicon Valley and Vancouver — probably in the top five or six hubs in the world. London has been a great spot for us so far. We grew out of academic innovation in Cambridge to begin with. Where we are now to the next chapter is somewhat a road less well trodden. But in terms of where we are now, it’s been a brilliant ecosystem [in the U.K.].

There are a lot of good things to be said around cooperation, law and tax. On the regulation front, we’ve worked with the government for the last five years now on new legislation for self-driving in the U.K. It passed the House of Lords, it’s almost through the House of Commons, and should soon come into law and make all of this legal in the U.K. The ability for the government to lean into this to work with us […] we’ve really worked in the weeds for that and had over 15 ministers visit. It’s been a really great partnership so far, and we’ve certainly felt the support of the government.

TC: Do you have any comments on the EU’s approach to self-driving?

Kendall: Self-driving is not part of the AI act. It’s a specific vertical and should be regulated with subject matter experts and as a specific vertical. It’s not some uncoordinated catch-all, and I’m glad about that. It’s not the fastest way to innovate in specific verticals. I think we can do this responsibly by working with specific automotive regulatory bodies that understand the problem space. So sector-specific regulation is really important. I’m pleased the EU has taken that approach to self-driving.

Photo editing app VSCO launches marketplace to connect photographers with brands | TechCrunch

Working with brands is one of the primary ways artists and content creators can earn substantial income today. However, it’s not easy for creators to connect with brands, and companies looking for new or specialized talent also have a difficult time finding them. Photo editing app VSCO is trying to solve this problem for its primary audience, professional photographers, with a new marketplace called VSCO Hub that aims to connect them with brands.

VSCO’s platform is similar to other social platforms like Instagram, YouTube and TikTok, which already offer creator marketplaces to help businesses discover content creators and strike partnerships with them.

VSCO’s CEO, Eric Wittman, who joined the company last September, equates the new portal to LinkedIn. “The VSCO Hub is almost like LinkedIn. Brands act as recruiters, and they can easily find someone for their projects and connect with them,” he told TechCrunch over a call.

To get access to VSCO Hub, photographers will have to sign up for the $59.99 Pro plan, which has more than 160,000 subscribers. For businesses, the platform offers filters like location, category, price, and availability that they can use to narrow down their search for photographers.

The platform has a cool marquee feature, too: It allows creative directors to upload a reference image and search for photographers who might have similar work in their portfolios.

VSCO reference image upload on VSCO Hub

Image Credits: VSCO

VSCO doesn’t take a cut from the project payments and just acts as a connection layer.

Wittman blames social network algorithms for the lack of discoverability in the photography community, saying marketers spend many hours on Google or Instagram trying to find photographers whose work matches their creative vision.

“Because of how social networks have changed themselves and their algorithms, it is really hard for photographers to get discovered by potential clients. We saw the need of our photography community and decided to build VSCO Hub,” he said.

VSCO is also looking to bolster its search with AI. Wittman said the company is internally testing a way for brands to enter text queries and find images through semantic search, and this feature will make it to the platform soon. VSCO also plans to add more filters to help companies narrow down their searches.
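The article doesn’t describe VSCO’s implementation, but semantic search of this kind is typically built by embedding both images and text queries into a shared vector space and ranking results by cosine similarity. Here is a minimal, hypothetical sketch with made-up three-dimensional embeddings; a real system would get its vectors from a vision-language model, and all names here are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for an image index. In production these would come
# from a model that maps images and text into one shared space.
image_index = {
    "sunset_beach.jpg":    [0.9, 0.1, 0.0],
    "studio_portrait.jpg": [0.1, 0.9, 0.2],
    "city_night.jpg":      [0.3, 0.2, 0.9],
}

def search(query_embedding, index, top_k=2):
    """Return the top_k image names ranked by similarity to the query."""
    ranked = sorted(index, key=lambda name: cosine(query_embedding, index[name]),
                    reverse=True)
    return ranked[:top_k]

# A query embedding for, say, "golden hour landscape"
print(search([0.8, 0.2, 0.1], image_index))
# → ['sunset_beach.jpg', 'city_night.jpg']
```

The same ranking function covers both the text-query feature and the reference-image feature described above: a reference image is simply embedded and used as the query vector.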

Wittman and VSCO’s stance on the use of AI is similar to a lot of other platforms: The technology will be used to help artists, not replace them. The CEO said his company is looking to release more AI-powered tools to assist photographers in their workflows.

Notably, tools like Sequoia-backed Visual Electric and Facetune maker Lightricks’ LTX Studio help artists with ideation and focus on workflows. Wittman believes that ideation — not production — is where creatives will lean on AI the most.

Sperm whale ‘alphabet’ discovered, thanks to machine learning | TechCrunch

Researchers at MIT CSAIL and Project CETI believe that they have unlocked a kind of sperm whale “alphabet” with the aid of machine learning technologies. Results from the study, which were published under the title “Contextual and Combinatorial Structure in Sperm Whale Vocalizations,” point to key breakthroughs in our understanding of cetacean communication.

The study deals with codas — a series of clicks that serve different linguistic functions. “What we have discovered is that there is previously undescribed variation in the coda structure,” CSAIL director Daniela Rus told TechCrunch. “We have discovered that coda types are not arbitrary, but rather they form a newly discovered combinatory coding system.”

While whale vocalization has been a key subject of research for decades, the teams behind this new research suggest that they’ve uncovered a level of previously unknown nuance among the chatty sea mammals. The paper notes that previous research has identified 150 different sperm whale codas.

“A subset of these have been shown to encode information about caller and clan identity,” it explains. “However, almost everything else about the sperm whale communication system, including basic questions about its structure and information-carrying capacity, remains unknown.”

The teams drew on work from Roger Payne, the pioneering marine biologist who passed away last June. Payne’s most influential work involved the songs of humpback whales. “He has really inspired us to want to use our most advanced technologies to want to have a deeper understanding of the whales,” says Rus.

The teams deployed machine learning solutions to analyze a dataset of 8,719 sperm whale codas collected by researcher Shane Gero off the coast of Dominica, a small island in the eastern Caribbean.

“We would get the inputs, and then we adjust our machine learning, to visualize better and to understand more,” explains Rus. “And then we would analyze the output with a biologist.”

The team’s method marked a change from older analyses, which studied codas individually. A richer picture forms when the sounds are studied in context, as exchanges between whales. Contextual details are classified using music terminology, including tempo, rhythm, ornamentation and rubato. From there, the team isolated what it refers to as a sperm whale phonetic alphabet.
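To make those features concrete, here is a minimal, hypothetical sketch (the function name and numbers are illustrative, not from the study) of how a coda’s tempo and rhythm could be derived from its click times: tempo as the overall duration, and rhythm as the normalized pattern of inter-click intervals, so that the same rhythm can appear at different tempos.

```python
def coda_features(click_times):
    """Return (tempo, rhythm) for a coda given click times in seconds.

    tempo:  total coda duration
    rhythm: inter-click intervals normalized to sum to 1, so codas with
            the same click pattern at different speeds share a rhythm
    """
    intervals = [b - a for a, b in zip(click_times, click_times[1:])]
    tempo = click_times[-1] - click_times[0]
    total = sum(intervals)
    rhythm = tuple(round(i / total, 2) for i in intervals)
    return tempo, rhythm

# Two toy codas with the same rhythm at different tempos
fast = [0.0, 0.1, 0.2, 0.4]
slow = [0.0, 0.2, 0.4, 0.8]
print(coda_features(fast))  # → (0.4, (0.25, 0.25, 0.5))
print(coda_features(slow))  # → (0.8, (0.25, 0.25, 0.5))
```

Decomposing codas into such independent features is what lets a seemingly arbitrary inventory of coda types be explained as combinations of a few building blocks.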

“This phonetic alphabet makes it possible to systematically explain the observed variability in the coda structure,” says Rus. “We believe that it’s possible that this is the first instance outside of human language where a communication provides an example of the linguistic concept of duality of patterning. That refers to a set of individually meaningless elements that can be combined to form larger meaningful units, sort of like combining syllables into words.”

Those “words” take on different meanings depending on context. The paper adds:

Our results demonstrate that sperm whale vocalizations form a complex combinatorial communication system: the seemingly arbitrary inventory of coda types can be explained by combinations of rhythm, tempo, rubato, and ornamentation features. Sizable combinatorial vocalization systems are exceedingly rare in nature; however, their use by sperm whales shows that they are not uniquely human, and can arise from dramatically different physiological, ecological, and social pressures.

While the breakthrough is exciting for all involved, there’s still a lot of work to be done, first with sperm whales and then potentially broadening out to other species like humpbacks.

“We decided to go to sperm whales because we had an extensive dataset, and we have the possibility of collecting many more datasets,” says Rus. “Also, because the clicks form a kind of discrete communication system, it is much easier to analyze than a continuous communication system. But even Roger Payne’s work showed that the songs of humpback whales are not random. There are segments that get repeated and there is interesting structure there. We just haven’t gotten to do an in-depth study.”

Kevin Eisenfrats is developing the ‘male IUD’ | TechCrunch

Interest in male birth control has increased in the past few years, especially since the U.S. overturned Roe v. Wade, which protected a woman’s right to have an abortion. Since then, states have tried to make abortion nearly impossible, prompting an increased look at contraceptives to allow both men and women to have more control over family planning. This conversation has led to the topic of male birth control, something medicine hasn’t quite mastered — until now, perhaps.

Kevin Eisenfrats is the founder of Contraline, a company that has developed a male contraceptive in the form of a non-hormonal, sperm-blocking gel that’s injected into the scrotum. Eisenfrats discussed building this company, medical testing for it, and the medical innovation he had to create to make it all possible on TechCrunch’s Found podcast.

“Believe it or not, people have actually been working on male contraceptives since the female contraceptive pill came out in 1960,” Eisenfrats told Found. “So it’s not like this is a forgotten area of research. It’s just that the science is really, really difficult.”

Eisenfrats was inspired to launch his company after watching the MTV show “16 and Pregnant.” Years later, Contraline’s latest product, ADAM, is entering clinical trials in Australia, a country he says has so far been most receptive to the idea of male contraception. He plans to head to the U.S. soon and is gearing up for the long FDA approval process. So far, Eisenfrats hasn’t had the hardest time fundraising — he says there has been much support even given the U.S. political climate, and that the debates have only increased interest in his work.

“We attract a certain type of investor that is really here for the long run,” he continued.

He also spoke about the importance of hiring the right team when it comes to a product like this and broke down some of the challenges that come with being the founder of a medical startup. For him especially, there have been regulatory hurdles, fundraising challenges, and rounds of testing medical hypotheses before landing on the right one.

All of the challenges have made him and his team stronger, he said, and he hinted about one day wanting to expand into Europe and other markets. He also spoke about possibly wanting to find ways to use his technology to develop non-hormonal female contraceptives, tackling other sorts of reproductive health issues that remain unsolved.

“We want to go after these big unsolved reproductive health problems,” he said. “We’re willing to take that risk that others are not willing to take.”
