by tyler | Apr 25, 2024 | Tech
TikTok suspended a gamification feature in the European Union following an intervention by the bloc. With attention on TikTok’s growing pile of US legal woes, the announcement went mostly unnoticed when it occurred late local time Wednesday.
TikTok’s move came just two days after the EU opened an investigation into a so-called “task and reward” mechanism on the TikTok Lite app, citing concerns over an addictive design that could pose a mental health risk for young people. The feature allows users to earn points for doing things like watching and liking TikTok videos. ByteDance, TikTok’s parent, launched this version of TikTok Lite in France and Spain earlier this month.
Under the EU’s rebooted online governance and content moderation rulebook, the Digital Services Act (DSA), TikTok has a legal obligation to mitigate systemic risks in areas like child safety and mental health. Yet it failed to produce a risk assessment report on the feature when the bloc’s enforcers came knocking.
This is a big deal as the company could face large penalties under the DSA — of up to 6% of its global annual turnover — if it’s found to have broken the EU’s rules.
In a statement posted on X yesterday, TikTok claimed it’s “voluntarily suspending” the rewards feature in the region to address concerns. However, on Monday, the Commission signalled it was preparing to force TikTok’s hand, saying it was minded to use interim measures powers contained in the DSA to close down the app while it conducts an investigation into the feature.
The EU gave TikTok two days to provide arguments against an enforced shutdown. In the event, TikTok opted to preempt enforcement by announcing a “voluntary” suspension.
The development underlines how even the threat of interim enforcement can pack a punch that forces platform giants to rethink. (We’ve seen this sort of thing before in relation to similar powers contained in the bloc’s General Data Protection Regulation for example — such as a decision by Google, back in 2019 , to halt human review of audio snippets captured by its voice AI after a data protection authority had informed Google of an intention to use an urgency proceeding to order it to stop processing the data.)
This familiar crisis PR tactic aims to get ahead of the negative publicity associated with an enforced shutdown by taking action ahead of a formal order.
Nonetheless, the EU is taking the win: Responding to TikTok’s announcement with a counter post on X, the bloc’s internal-market-commissioner-cum-internet-sheriff, Thierry Breton, warned: “Our children are not guinea pigs for social media.”
Breton went on to write that he “takes note” of TikTok’s suspension of the reward program for the Lite app in the EU, adding: “The cases against TikTok on the risk of addictiveness of the platform continue.”
TikTok was contacted for confirmation on the status of the TikTok Lite app in France and Spain. As the name suggests, TikTok Lite is an alternative TikTok app for users who have older phones or who mostly connect to 2G or 3G networks.
The EU has two DSA probes open on TikTok: The first, announced back in February, is looking into a broad sweep of suspected non-compliance in areas including addictive design, child protection, ads transparency and data access for researchers. The second, announced earlier this week, is focused on TikTok Lite.
Still, Elon Musk-owned X was the first very large online platform to go under DSA investigation back in December, just a few months after the late August compliance deadline had kicked in. That investigation also remains ongoing.
by tyler | Apr 25, 2024 | Tech
Meta’s tracking ads business could be facing further legal blows in the European Union: An influential advisor to the bloc’s top court affirmed Thursday that the region’s privacy laws limit how long people’s data can be used for targeted advertising.
In the non-legally binding opinion, Advocate General Athanasios Rantos said use of personal data for advertising must be limited.
This is important because Meta’s tracking ads business relies upon ingesting vast amounts of personal data to build profiles of individuals to target them with advertising messages. Any limits on how it can use personal data could limit its ability to profit off of people’s attention.
A final ruling on the point remains pending — typically these arrive three to six months after an AG opinion — but the Court of Justice of the EU (CJEU) often takes a similar view to its advisors.
The CJEU’s role, meanwhile, is to clarify the application of EU law so its rulings are keenly watched as they steer how lower courts and regulators uphold the law.
Per AG Rantos, data retention for ads must take account of the principle of proportionality, a general principle of EU law that also applies to the bloc’s privacy framework, the General Data Protection Regulation (GDPR) — such as when determining a lawful basis for processing. A key requirement of the regulation is to have a legal basis for handling people’s information.
In a press release, the CJEU writes with emphasis: “Rantos proposes that the Court should rule that the GDPR precludes the processing of personal data for the purposes of targeted advertising without restriction as to time. The national court must assess, based inter alia on the principle of proportionality, the extent to which the data retention period and the amount of data processed are justified having regard to the legitimate aim of processing those data for the purposes of personalised advertising.”
The CJEU is considering two legal questions referred to it by a court in Austria. These relate to a privacy challenge, dating back to 2020, brought against Meta’s adtech business by Max Schrems, a lawyer and privacy campaigner. Schrems is well known in Europe as he’s already racked up multiple privacy wins against Meta — which have led to penalties that have cost the tech giant well over a billion dollars in fines since the GDPR came into force.
An internal memo by Meta engineers, obtained by Motherboard/Vice back in 2022, painted a picture of a company unable to apply policies to limit its use of people’s data after ingestion by its ads systems because it had “built a system with open borders”, as the document put it. Meta disputed the characterization, however, claiming at the time the document “does not describe our extensive processes and controls to comply with privacy regulations”.
But it’s clear Meta’s core business model relies on its ability to track and profile web users to operate its microtargeted advertising business. So any hard legal limits on its ability to process and retain people’s data could have big implications for its profitability. To wit: Last year, Meta suggested around 10% of its worldwide ad revenue is generated in the EU.
In recent months, European Union lawmakers and regulators have also notably been dialling up pressure on the adtech giant to ditch its addiction to surveillance advertising — with the Commission explicitly name-checking the existence of alternative ad models, such as contextual advertising, when it opened an investigation into Meta’s binary “consent or pay” user offer last month, under the market power-focused Digital Markets Act.
A key GDPR steering body, meanwhile, also put out guidance on “consent or pay” earlier this month — stressing that larger ad platforms like Meta must give users a “real choice” about decisions affecting their privacy.
In today’s opinion, AG Rantos has also opined on a second point that’s been referred to the court: Namely whether making “manifestly” public certain personal information — in this case, info related to Schrems’ sexual orientation — gives Meta carte blanche to retrospectively claim it can use the sensitive data for ad targeting.
Schrems had complained he received ads on Facebook targeting his sexuality. He subsequently discussed his sexuality publicly but had argued the GDPR principle of purpose limitation must be applied in parallel, referencing a core plank of the regulation that limits further processing of personal data (i.e. without a new valid legal basis such as obtaining the user’s consent).
AG Rantos’ opinion appears to align with Schrems’. Discussing this point, the press release notes (again with emphasis): “while data concerning sexual orientation fall into the category of data that enjoy particular protection and the processing of which is prohibited, that prohibition does not apply when the data are manifestly made public by the data subject. Nevertheless, this position does not in itself permit the processing of those data for the purposes of personalised advertising. ”
In an initial reaction to the AG’s views on both legal questions, Schrems, who is founder and chairman of the European privacy rights nonprofit noyb, welcomed the opinion via his lawyer for the case against Meta, Katharina Raabe-Stuppnig.
“At the moment, the online advertising industry simply stores everything forever. The law is clear that the processing must stop after a few days or weeks. For Meta, this would mean that a large part of the information they have collected over the last decade would become taboo for advertising,” she wrote in a statement highlighting the importance of limits on data retention for ads.
“Meta has basically been building a huge data pool on users for 20 years now, and it is growing every day. EU law, however, requires ‘data minimisation’. If the Court follows the opinion, only a small part of this pool will be allowed to be used for advertising — even if [users] have consented to ads,” she added.
On the issue of further use of sensitive data that’s been made public, she said: “This issue is highly relevant for anyone who makes a public statement. Do you retroactively waive your right to privacy for even totally unrelated information, or can only the statement itself be used for the purpose intended by the speaker? If the Court interprets this as a general ‘waiver’ of your rights, it would chill any online speech on Instagram, Facebook or Twitter.”
Reached for its own reaction to the AG opinion, Meta spokesman Matthew Pollard told TechCrunch it would await the court ruling.
The company also claims to have “overhauled privacy” since 2019, suggesting it’s spent €5BN+ on EU-related privacy compliance issues and expanding user controls. “Since 2019, we have overhauled privacy at Meta and invested over five billion Euros to embed privacy at the heart of our products,” wrote Meta in an emailed statement. “Everyone using Facebook has access to a wide range of settings and tools that allow people to manage how we use their information.”
On sensitive data, Pollard highlighted another claim by Meta that it “does not use sensitive data that users provide us to personalise ads”, as the statement puts it.
“We also prohibit advertisers from sharing sensitive information in our terms and we filter out any potentially sensitive information that we’re able to detect,” Meta also wrote, adding: “Further, we’ve taken steps to remove any advertiser targeting options based on topics perceived by users to be sensitive.”
In April 2021, Meta announced a policy change in this area — saying it would no longer allow advertisers to target users with ads based on sensitive categories such as their sexual orientation, race, political beliefs or religion. However, in May 2022, an investigation by the data journalism nonprofit The Markup found it was easy for advertisers to circumvent Meta’s ban by using “obvious proxies”.
A CJEU ruling back in August 2022 also looks very relevant here as the court affirmed then that sensitive inferences should be treated as sensitive personal data under the GDPR. Or, put another way, using a proxy for sexual orientation to target ads requires obtaining the same stringent standard of “explicit consent” as directly targeting ads at a person’s sexual orientation would need in order to be lawful processing in the EU.
by tyler | Apr 25, 2024 | Tech
Generative AI has captured the public imagination with a leap into creating elaborate, plausibly real text and imagery out of verbal prompts. But the catch — and there is often a catch — is that the results are often far from perfect when you look a little closer.
People point out strange fingers; floor tiles slip away; and math problems are precisely that: problems. Sometimes they simply don’t add up.
Now, Synthesia — one of the ambitious AI startups working in video, specifically custom avatars designed for business users to create promotional, training and other enterprise video content — is releasing an update that it hopes will help it leapfrog some of the challenges in its particular field. Its latest version features avatars — built from actual humans captured in its studio — that provide more emotion, better lip tracking, and what it says are more expressive, natural and humanlike movements when they are fed text to generate videos.
The release comes on the heels of some impressive progress for the company to date. Unlike other generative AI players like OpenAI, which has built a two-pronged strategy — raising huge public awareness with consumer tools like ChatGPT while also building out a B2B offering, with its APIs used by independent developers as well as giant enterprises — Synthesia is leaning into the approach that some other prominent AI startups are taking.
Much as Perplexity has focused on really nailing generative AI search, Synthesia is focused on really nailing how to build the most humanlike generative video avatars possible. More specifically, it is looking to do this only for the business market and use cases like training and marketing.
That focus has helped Synthesia stand out in what has become a very crowded AI market, one that runs the risk of getting commoditized once the hype settles down into more long-term concerns like ARR, unit economics, and the operational costs attached to AI implementations.
Synthesia describes its new Expressive Avatars, the version being released Thursday, as a first of their kind: “The world’s first avatars fully generated with AI.” The avatars are built on large, pretrained models; Synthesia says its breakthrough has been in how those models are combined to achieve multimodal distributions that more closely mimic how actual humans speak.
The new avatars are generated on the fly, Synthesia says, which is meant to be closer to the experience we go through when we speak or react in life. That stands in contrast to how a lot of avatar-based AI video tools work today: typically, many pieces of video are quickly stitched together to create facial responses that line up, more or less, with the scripts fed into them. The aim is to appear less robotic and more lifelike.
Previous version:
New version:
As you can see in the two examples here, one from Synthesia’s older version and one being released today, there is still a ways to go in development, something CEO Victor Riparbelli himself admits.
“Of course it’s not 100% there yet, but it will be very, very soon, by the end of the year. It’ll be so mind blowing,” he told TechCrunch. “I think you can also see that the AI part of this is very subtle. With humans there’s so much information in the tiniest details, the tiniest like movements of our facial muscles. I think we could never sit down and describe, ‘yes you smile like this when you’re happy but that is fake right?’ That is such a complex thing to ever describe for humans, but it can be [captured in] deep learning networks. They’re actually able to figure out the pattern and then replicate it in a predictable way.” The next thing it’s working on, he added, is hands.
“Hands are like, super hard,” he added.
The focus on B2B also helps Synthesia anchor its messaging and product more on “safe” AI usage. That is essential especially with the huge concern today over deepfakes and using AI for malicious purposes like misinformation and fraud. Even so, Synthesia hasn’t managed to avoid controversy on that front altogether. As we’ve pointed out before, Synthesia’s tech has previously been misused to produce propaganda in Venezuela and false news reports promoted by pro-China social media accounts.
The company today noted that it has taken further steps to try to lock down that usage. Last month , it updated its policies, it said, “to restrict the type of content people can make, investing in the early detection of bad faith actors, increasing the teams that work on AI safety, and experimenting with content credentials technologies such as C2PA.”
Despite those challenges, the company has continued to grow.
Synthesia was last valued at $1 billion when it raised $90 million. Notably, that fundraise was almost a year ago, in June 2023.
Riparbelli (pictured above, right, with other co-founders Steffen Tjerrild, Professor Lourdes Agapito, Professor Matthias Niessner) said in an interview earlier this month that there are currently no plans to raise more, although that doesn’t really answer the question of whether Synthesia is getting proactively approached. (Note: we are very excited to have the actual human Riparbelli speaking at an event of ours in London in May, where I’m definitely going to ask about this again. Please come if you’re in town.)
What we do know for sure is that AI costs a lot of money to build and run, and Synthesia has been building and running a lot.
Prior to the launch of today’s version, some 200,000 people had created more than 18 million video presentations across some 130 languages using Synthesia’s 225 legacy avatars, the company said. (It does not break out how many users are on its paid tiers, but there are a lot of big-name customers, including Zoom, the BBC, DuPont and more, and enterprises do pay.) The startup’s hope, of course, is that with the new version getting pushed out today, those numbers will go up even more.
by tyler | Apr 25, 2024 | Tech
ICICI Bank, one of India’s top private banks, exposed the sensitive data of thousands of new credit cards to customers who were not their intended recipients.
The Mumbai-based bank confirmed to TechCrunch Thursday that its digital channels “erroneously mapped” about 17,000 credit cards issued in the past few days to “wrong” users. The issue came to light after some customers raised concerns on social media about the bank’s iMobile Pay app exposing unknown customers’ credit card details, including their full number and card verification value (CVV).
“Our customers are our utmost priority, and we are wholeheartedly dedicated to safeguarding their interests,” Kausik Datta, corporate communications head at ICICI Bank, said in a statement emailed to TechCrunch. “We regret the inconvenience caused. No instance of misuse of a card from this set has been reported to us. However, we assure that the Bank will appropriately compensate a customer in case of any financial loss.”
The spokesperson added that the number of impacted credit cards constituted about 0.1% of the bank’s credit card portfolio.
As reported by the finance-related forum Technofino, sensitive data such as the full card number, expiry date and CVV of unknown customers’ credit cards suddenly appeared for some users on the iMobile Pay app.
“I have access to someone else’s Amazon Pay CC due to a security glitch on the iMobile app. Although OTP restricts domestic transactions, but I can do international transactions using the details from the iMobile app,” one of the users wrote on the forum.
The bank spokesperson told TechCrunch it blocked the affected cards and is issuing new cards to customers.
ICICI Bank, which has over 6,000 branches in India, operates in 17 countries worldwide. The iMobile Pay app, launched in 2008, has over 28 million users.
by tyler | Apr 25, 2024 | Tech
Two veteran security experts are launching a startup that aims to help other makers of cybersecurity products to up their game in protecting Apple devices.
Their startup is called DoubleYou, the name taken from the initials of its co-founder, Patrick Wardle, who worked at the U.S. National Security Agency between 2006 and 2008. Wardle then worked as an offensive security researcher for years before switching to independently researching Apple macOS defensive security. Since 2015, Wardle has developed free and open-source macOS security tools under the umbrella of his Objective-See Foundation, which also organizes the Apple-centric Objective By The Sea conference.
His co-founder is Mikhail Sosonkin, who was also an offensive cybersecurity researcher for years before working at Apple between 2019 and 2021. Wardle, who described himself as “the mad scientist in the lab,” said Sosonkin is the “right partner” he needed to make his ideas reality.
“Mike might not hype himself up, but he is an incredible software engineer,” Wardle said.
The idea behind DoubleYou is that, compared to Windows, there still are only a few good security products for macOS and iPhones. And that’s a problem because Macs are becoming a more popular choice for companies all over the world, meaning malicious hackers are also increasingly targeting Apple computers. Wardle and Sosonkin said there aren’t as many talented macOS and iOS security researchers, which means companies are struggling to develop their products.
Wardle and Sosonkin’s idea is to take a page out of the playbook of hackers that specialize in attacking systems, and apply it to defense. Several offensive cybersecurity companies offer modular products, capable of delivering a full chain of exploits or just one component of it. The DoubleYou team wants to do just that — but with defensive tools.
“Instead of building, for example, a whole product from scratch, we really took a step back, and we said ‘hey, how do the offensive adversaries do this?’” Wardle said in an interview with TechCrunch. “Can we basically take that same model of essentially democratizing security but from a defensive point of view, where we develop individual capabilities that then we can license out and have other companies integrate into their security products?”
Wardle and Sosonkin believe that they can.
And while the co-founders haven’t decided on the full list of modules they want to offer, they said their product will certainly include a core offering: analyzing all new processes to detect and block untrusted code (which on macOS means code that is not “notarized” by Apple), and monitoring for and blocking anomalous DNS network traffic, which can uncover malware when it connects to domains known to be associated with hacking groups. Wardle said that these, at least for now, will be primarily for macOS.
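The DNS side of that idea can be boiled down to a tiny sketch: flag any lookup whose domain, or any parent domain, appears on a blocklist of infrastructure known to belong to hacking groups. This is purely illustrative, not DoubleYou’s implementation; the blocklist entries and function name are made up, and a real product would hook DNS traffic system-wide rather than check strings in isolation.

```python
# Minimal sketch of anomalous-DNS detection: flag lookups whose domain
# (or any parent domain) appears on a blocklist of known bad infrastructure.
# The blocklist entries below are hypothetical placeholders.
BLOCKLIST = {"evil-c2.example", "bad-domain.example"}

def is_flagged(domain: str) -> bool:
    """Return True if the domain or any parent domain is blocklisted."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain itself and each parent, e.g. cdn.evil-c2.example
    # also matches a blocklist entry for evil-c2.example.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False
```

The parent-domain walk matters because malware rarely connects to the exact hostname a blocklist records; subdomains rotate while the registered domain stays constant.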
The founders also want to develop tools to monitor software that tries to become persistent (a hallmark of malware), to detect cryptocurrency miners and ransomware based on their behavior, and to detect when software asks for permission to use the webcam and microphone.
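Persistence monitoring can be sketched in a few lines: diff the contents of the standard macOS launch-item directories between scans, and treat anything that appears as a new persistence candidate. Again, this is a hand-rolled illustration under stated assumptions, not how DoubleYou or KnockKnock actually work; the helper names are invented, and a production tool would use filesystem event APIs rather than polling.

```python
import os

# Standard macOS directories where launch items (a common persistence
# mechanism) are installed. The diff logic itself is platform-agnostic.
PERSISTENCE_DIRS = [
    os.path.expanduser("~/Library/LaunchAgents"),
    "/Library/LaunchAgents",
    "/Library/LaunchDaemons",
]

def snapshot(dirs):
    """Map each directory to the set of entries currently present."""
    return {d: set(os.listdir(d)) if os.path.isdir(d) else set() for d in dirs}

def new_items(before, after):
    """Return (directory, filename) pairs that appeared since `before`."""
    return [(d, f) for d in after for f in sorted(after[d] - before.get(d, set()))]
```

A monitor would take a baseline snapshot at startup and alert on anything `new_items` reports on later scans, since legitimate software rarely installs launch items without the user noticing.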
Sosonkin described it as “an off-the-shelf catalog approach,” where every customer can pick and choose what components they need to implement in their product. Wardle described it as being like a supplier of car parts, rather than the maker of the whole car. This approach, Wardle added, is similar to the one he took in developing the various Objective-See tools such as Oversight, which monitors microphone and webcam usage; and KnockKnock, which monitors if an app wants to become persistent.
“We don’t need to use new technology to make this work. What we need is to actually take the tools available and put them in the right place,” Sosonkin said.
Wardle and Sosonkin’s plan, for now, is not to take any outside investment. The co-founders said they want to remain independent and avoid some of the pitfalls of getting outside investment, namely the need to scale too much and too fast, which will allow them to focus on developing their technology.
“Maybe in a way, we are kind of like foolish idealists,” Sosonkin said. “We just want to catch some malware. I hope we can make some money in the process.”
by tyler | Apr 25, 2024 | Tech
Carv, a data layer platform that lets web3 gaming and AI companies, as well as gamers, control and monetize their data, has raised a $10 million Series A round led by Tribe Capital and IOSG Ventures.
Carv’s new round comes approximately five months after it received a strategic investment led by HashKey Capital. The startup did not disclose its valuation or the total funding it has raised so far. In 2022, Carv was valued at roughly $40 million when it raised a seed round led by Temasek’s VC arm, Vertex Ventures.
Carv’s initial focus is on two key industries, gaming and AI, where it sees the biggest opportunity to help users control their data and monetize it. Users can choose to provide their data to Carv’s corporate customers in a way that preserves their privacy and is compliant with regulations, so that companies can use it for training AI models, market research and more.
“While user data has powered tremendous economic growth, individuals don’t share the value created when their information is leveraged to build billion-dollar businesses,” Victor Yu, co-founder and COO of Carv, told TechCrunch.
Carv offers three solutions: CARV Protocol, a modular data layer with cross-chain connectivity that connects web2 identities to web3 tokens; CARV Play, a cross-platform credentialing system and game distribution platform; and CARV’s AI agent, CARA, a personalized gaming assistant that integrates with web3 wallets and can recommend games, activities and projects.
“Carv differentiates itself by putting data ownership and monetization rights in the hands of users. Any revenue generated from leveraging users’ data gets shared back with the data creators and themselves,” Yu said. “Additionally, we’ve created a unified user ID standard (ERC-7231) that bridges web2 and web3, enabling seamless data portability versus today’s siloed solutions.”
Carv has been profitable since December 2023, and generates monthly recurring revenue of more than $1 million, Yu said, adding that the company is also seeing significant month-over-month growth.
The company now has 2.5 million registered users and over 350 integrated gaming and AI company partners.
With the new capital, Carv plans to enhance the design of its CARV Protocol to ensure it is scalable and can support a broader range of use cases. It will also launch CARV Link to improve on-chain identity and data authentication, and CARV Database to manage various types of user data.
Arweave, Consensys (developer of MetaMask and Linea), Draper Dragon, Fenbushi Capital, LiquidX, MARBLEX (the web3 arm of Korean gaming company Netmarble), No Limit Holdings, and OKX Ventures also participated in the Series A round.