Vietnamese luxury EV-maker VinFast files to go public on Nasdaq • TechCrunch

Vietnamese electric vehicle maker VinFast has filed for an initial public offering in the United States, the company said Tuesday. Shares will be listed on the Nasdaq under the ticker “VFS.”

VinFast, which was founded in 2017 and began operations in 2019, will convert to a Singapore public limited company for the IPO. The number of shares to be offered and the price range of the offering haven’t been disclosed.

The EV startup has been pursuing the U.S. market, most recently with a showcase of four SUVs at the LA Auto Show. Over the summer, VinFast received $1.2 billion in incentives to build a factory in North Carolina, where the automaker hopes to begin building cars by July 2024. VinFast has even promised a $7,500 discount to prospective American buyers who would otherwise hold out for an EV eligible for U.S. EV tax incentives.

No date was given for the IPO, which was originally slated for Q4 of this year. It’s more likely we’ll see the company go public sometime next year, given current market uncertainty.

Unlike many EV companies that have chosen to go public through a special purpose acquisition merger, VinFast has already begun producing and shipping vehicles. The automaker shipped its first batch of 999 vehicles to the U.S. late last month.

As Butterfield exits stage left, it’s fair to wonder what’s happening at Salesforce • TechCrunch

It’s been a pretty rough week for Salesforce co-founder and CEO Marc Benioff and the folks at his company: Three talented executives — co-CEO Bret Taylor, Tableau CEO Mark Nelson and Slack CEO and co-founder Stewart Butterfield — announced their resignations in quick succession.

It’s fair to ask what exactly is going on at Salesforce for it to lose three accomplished people so quickly, but it’s also important to parse each exit to determine whether the departures are part of a political battle or just some odd confluence of unconnected events.

The news seems to have spooked investors, with the company’s stock down nearly 17% over the last five days. But what do these departures mean for Salesforce and for the companies it spent so much money to acquire over the last several years? Further, how do they impact the executive depth that Benioff has worked so hard to build up? Finally, does he look for another co-CEO to help him run the company, or does he continue running it alone for the foreseeable future?

Let’s start with Nelson. He’s the least well-known of the three. Salesforce bought his company, Tableau, in 2019 for almost $16 billion. At the time, the company was run by Adam Selipsky, who left last year to become CEO at AWS when Andy Jassy was promoted to Amazon CEO after Jeff Bezos stepped back from that role.

For every action, there is an equal and opposite reaction in the C-suite, apparently.

Meta’s behavioral ads will finally face GDPR privacy reckoning in January • TechCrunch

Major privacy complaints targeting the legality of Meta’s core advertising business model in Europe have finally been settled via a dispute resolution mechanism baked into the EU’s General Data Protection Regulation (GDPR).

The complaints, which date back to May 2018, take aim at the tech giant’s so-called “forced consent”: continuing to track and target users by processing their personal data to build profiles for behavioral advertising. The outcome could have major ramifications for how Meta operates if regulators order the company to amend its practices.

The GDPR also allows for large fines for major violations — of up to 4% of global annual turnover.

The European Data Protection Board (EDPB), a steering body for the GDPR, confirmed today it has stepped in with three binding decisions on the three complaints against Meta platforms Facebook, Instagram and WhatsApp.

The trio of complaints were filed by European privacy campaign group noyb as soon as the GDPR entered into application across the EU. So it’s taken some 4.5 years just to get to this point.

The EU’s flagship data protection regulation has been much criticised for the slow pace of enforcement on major cross-border complaints against tech giants, and this clutch of strategic complaints is one of a handful of poster children for those gripes. But while decisions are now finally in sight, the wrangling could still continue: Meta may appeal against any enforcement, both in Irish courts and in front of the EU judiciary (in the case of the EDPB’s binding decisions), potentially putting any corrective orders on hold pending the outcome of its appeals.

What exactly has been decided? The EDPB is not disclosing that yet. The protocol it’s following means it passes its binding decisions back to the Irish Data Protection Commission (DPC), Meta’s lead privacy regulator in the EU, which must then apply them in the final decisions it will issue.

The DPC now has one month to issue final decisions and confirm any financial penalties. So we should get the full gory details by early next year.

The Wall Street Journal may offer a glimpse of what’s to come: It’s reporting that Meta’s ad model will face restrictions in the EU — citing “people familiar with the situation”.

It also reports the company will face “significant” fines for breaching the GDPR.

“The board’s rulings Monday, which haven’t yet been disclosed publicly, don’t directly order Meta to change practices but rather call for Ireland’s Data Protection Commission to issue public orders that reflect its decisions, along with significant fines,” the WSJ wrote, citing (unnamed) sources.

Covering the WSJ’s report, Reuters noted that shares in Meta fell 5.3% in morning trading following the development.

A spokeswoman for the EDPB confirmed it cannot comment on the substance of the binding decisions it’s taken.

“In line with Art. 65 (5) GDPR, we cannot comment on the content of the decisions until after the Irish DPC has notified the controller of its final decisions,” she told TechCrunch. “As indicated in our press release , the EDPB looked into whether or not the processing of personal data for the performance of a contract is a suitable legal basis for behavioural advertising, but at this point in time we cannot confirm what the EDPB’s decision in this matter was.”

The DPC also declined comment on the newspaper’s report — but deputy commissioner Graham Doyle confirmed to us that it will announce binding decisions on these complaints in early January.

We’ve also reached out to Meta for a response to the development.

The company was recently spotted in a filing setting aside €3BN for data protection fines in 2022 and 2023 — a large chunk of which has yet to land.

GDPR fines for Meta so far this year include a €265M penalty for a Facebook data-scraping breach last month; €405M for an Instagram violation of children’s privacy back in September; and €17M for several 2018 Facebook data breaches, issued in March. On top of that, France’s data protection watchdog hit Meta with a €60M penalty in January over Facebook cookie consent violations of the EU’s ePrivacy Directive, for a total of €747M in publicly disclosed EU data protection and privacy fines. So, per its filing, the tech giant appears to be expecting 2023 to be considerably more expensive for its European business.
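
That running total is easy to verify; a quick sanity check of the figures cited above (all in millions of euros):

```python
# Meta's publicly disclosed EU data protection and privacy fines
# cited above, in millions of euros.
fines = {
    "Facebook data-scraping breach (Nov)": 265,
    "Instagram children's privacy (Sep)": 405,
    "2018 Facebook data breaches (Mar)": 17,
    "Facebook cookie consent, CNIL (Jan)": 60,
}

total = sum(fines.values())
print(f"Total: €{total}M")  # → Total: €747M
```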

One thing is clear: A lot is at stake for the company.

As the EDPB’s press release confirms, its decisions “settle[s], among others, the question of whether or not the processing of personal data for the performance of a contract is a suitable legal basis for behavioural advertising, in the cases of Facebook and Instagram, and for service improvement, in the case of WhatsApp”.

So, depending on what’s been decided, Meta could finally be forced to ask users if they want to be tracked — a choice the adtech giant currently denies. On Facebook and Instagram it’s either agree to be profiled and targeted — or no service for you.

If Meta is forced to ask users whether they want “personalized” ads (its favored euphemism for surveillance ads), that is definitely big news, given that denial rates are typically very high when web users are actually given a choice over targeted ads. (See, for example, Apple’s App Tracking Transparency ‘request to track’ feature for third-party iOS apps, where denials were running at circa 75%, per Adjust data released earlier this year and covered by MediaPost.)

The crux of noyb’s original complaints against Meta services was that users were not offered a choice to deny its processing for advertising — despite the GDPR stipulating that if consent is the legal basis being claimed for processing personal data it must be specific, informed and freely given. (Not, er, bundled, manipulated and forced!)

However — plot twist! — it later emerged that as the GDPR came into application Meta had quietly switched from claiming consent as its legal basis for this behavioral advertising processing to saying it is necessary for the performance of a contract — and claiming users of Facebook and Instagram are in a contract with Meta to receive targeted ads.

This argument implies that Meta’s core service is not social networking; it’s behavioural advertising. noyb’s honorary chairman and long-time privacy law thorn in Facebook’s side, Max Schrems, has called this an exceptionally shameless attempt to bypass the GDPR.

A draft decision by Ireland’s DPC on the complaints, which was published by noyb last year (much to the DPC’s chagrin), revealed the Irish regulator had not been minded to object to Meta’s consent bypass. However other EU DPAs — which are able to lodge objections to a lead supervisor’s draft decision under the GDPR’s one-stop-shop mechanism for dealing with cross-border complaints — did object, and months of regulatory wrangling followed as different EU regulators slugged it out to see if they could agree.

Evidently, in this case, the DPAs could not find consensus between themselves — hence the EDPB stepping in with binding decisions now. And the Board’s decision is final.

Responding to this development — and citing the WSJ’s reporting — noyb writes in a press release that the EDPB has overturned the DPC’s much derided draft decision (which had also only proposed a paltry fine of €36M), saying the decision “requires that Meta may not use personal data for ads based on an alleged ‘contract’”.

“Users will therefore need to have a yes/no consent option,” it said — dubbing the outcome a “win” (even without knowing the exact size of the “substantial” fine it says was requested by the EDPB).

Other forms of advertising by Meta — like contextual ads where targeting is based on the content of the page being viewed — are not prohibited under the EDPB’s decision, per noyb, which predicts the decision will nonetheless “dramatically” limit Meta’s profits in the EU.

In a statement, Schrems said: “Instead of having a yes/no option for personalized ads, [Meta] just moved the consent clause in the terms and conditions. This is not just unfair but clearly illegal. We are not aware of any other company that has tried to ignore the GDPR in such an arrogant way.”

“This is a huge blow to Meta’s profits in the EU,” he added. “People now need to be asked if they want their data to be used for ads or not. They must have a ‘yes’ or ‘no’ answer and can change their mind at any time. The decision ensures a level playing field with other advertisers that also need to get opt-in consent.”

noyb’s take on the development also pours cold water on the prospect of any Meta appeal against this GDPR smackdown to its core business model — calling the chances of the company winning such an appeal “minimal” since the final decisions have been handed down by the EDPB, an expert body that’s responsible for ensuring harmonized application of the GDPR across the bloc (by, for example, providing guidance on how the rules should be applied in practice).

It also points to two similar cases already before the Court of Justice of the EU (CJEU) on Meta’s consent bypass — suggesting those “may settle the issue and all appeals for good”.

noyb further suggests Meta could face legal action from users — “over the illegal use of their data for the past 4.5 years”.

Meta is already facing a number of privacy-related class action suits in Europe. Further GDPR enforcement will only build more momentum for damages claims as litigation funders scent victory.

Transformer Table: 130M Video Views On Instagram – Incredible Table & Black Friday Special!

One simple video showcasing Transformer Table’s innovative extendable dining table & bench set is taking the world by storm after amassing a whopping 300M views combined across social media platforms, and now prices are better than ever to get your own for Black Friday!

Here is the link to the video: https://www.instagram.com/reel/CiF2H2EoqVy/

MONTREAL, Nov. 22, 2022 /PRNewswire/ – Transformer Table, a modular furniture company based in Canada, makes home furnishings that are more than meets the eye. In September, Instagram creator Rasha Abdel Reda (@mynameisrasha) posted a seemingly simple video showcasing her expandable dining table & bench, and the internet couldn’t get enough. Today, this social media sensation alone has been viewed by over 130M people and shared across the world by various other accounts, combining for a roaring 300M views across platforms. To put that into perspective, a Super Bowl commercial typically gets 106M views and costs $6.5M for just 30 seconds. The Transformer Table video, by contrast, was shot on an iPhone in Rasha’s dining room, cost virtually nothing to create, and is worth an impressive $16M in media value!

The famous clip has had a positive ripple effect on Transformer Table since its climb up the charts. Not only have the company’s sales tripled, but it also took the opportunity to expand to over 35 new markets around the world, complete with free shipping. With more than 150k new followers in its Instagram community since the overnight success, the aftermath of this viral phenomenon is definitely a case study for the books!

How did such a straightforward video go so viral? The answer is in the furniture piece it revealed. This company makes furniture like no other: its signature product, the Transformer Table, is a solid wood extendable table that starts off as small as just 18″ and can effortlessly grow all the way up to 10 ft long in seconds. It isn’t for nothing that Transformer Table is one of the fastest-growing furniture companies in the world right now, and this is just another piece of the puzzle in a success story that has been unfolding since 2016.

It’s easy to see why everyone is talking about this innovative furniture – no matter how much space you have at your disposal, or lack thereof, Transformer Table strives to offer unique multifunctional pieces that can thrive in any space. #EatTogether like space is no issue!

Transformer Table’s Black Friday deals are here! For a limited time, take advantage of the best sale of the year and get a free Transformer Bench with the purchase of the one and only solid wood extendable dining table. While quantities last, also get 25% off the matching Coffee Table which doubles as a panel storage solution. Last but not least, to mark the occasion, a brand new product is making its grand debut to complete this legendary line of transforming furniture.

Visit the Transformer Table website to discover their extraordinary modular furniture – they even offer free shipping across North America and to 35+ other countries worldwide.

SOURCE Transformer Table

UK watchdog warns against AI for emotional analysis, dubs ‘immature’ biometrics a bias risk • TechCrunch

The UK’s privacy watchdog has warned against the use of so-called “emotion analysis” technologies for anything more serious than kids’ party games, saying there’s a discrimination risk attached to applying “immature” biometric tech that makes pseudoscientific claims about being able to recognize people’s emotions by using AI to interpret biometric data inputs.

Such AI systems ‘function’, if we can use the word, by claiming to be able to ‘read the tea leaves’ of one or more biometric signals — such as heart rate, eye movements, facial expression, skin moisture, gait and vocal tone — and perform emotion detection or sentiment analysis to predict how the person is feeling, presumably after being trained on a bunch of visual data of faces frowning, faces smiling and so on. But you can immediately see the problem with trying to assign individual facial expressions to absolute emotional states: no two people, and often no two emotional states, are the same. Hence, hello pseudoscience!

The watchdog’s deputy commissioner, Stephen Bonner, appears to agree that this high tech nonsense must be stopped — saying today there’s no evidence that such technologies do actually work as claimed (or that they will ever work).

“Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever,” he warned in a statement. “While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.

“The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.”

In a blog post accompanying Bonner’s shot across the bows of dodgy biometrics, the Information Commissioner’s Office (ICO) said organizations should assess public risks before deploying such tech — with a further warning that those that fail to act responsibly could face an investigation. (So they could also be risking a penalty.)

“The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work,” added Bonner.

The watchdog has fuller biometrics guidance coming in the spring — which it said today will highlight the need for organizations to pay proper mind to data security — so Bonner’s warning offers a taster of more comprehensive steerage coming down the pipe in the next half year or so.

“Organisations that do not act responsibly, posing risks to vulnerable people, or fail to meet ICO expectations will be investigated,” the watchdog added.

Its blog post gives some examples of potentially concerning uses of biometrics — including AI tech being used to monitor the physical health of workers via wearable screening tools, or the use of visual and behavioural methods such as body position, speech, and eye and head movements to register students for exams.

“Emotion analysis relies on collecting, storing and processing a range of personal data, including subconscious behavioural or emotional responses, and in some cases, special category data. This kind of data use is far more risky than traditional biometric technologies that are used to verify or identify a person,” it continued. “The inability of algorithms which are not sufficiently developed to detect emotional cues, means there’s a risk of systemic bias, inaccuracy and even discrimination.”

It’s not the first time the ICO has had concerns over rising use of biometric tech. Last year the then information commissioner, Elizabeth Denham, published an opinion expressing concerns about what she couched as the potentially “significant” impacts of inappropriate, reckless or excess use of live facial recognition (LFR) technology — warning it could lead to a ‘big brother’ style surveillance of the public.

However that warning was targeting a more specific technology (LFR). And the ICO’s Bonner told the Guardian this is the first time the regulator has issued a blanket warning on the ineffectiveness of a whole new technology — arguing this is justified by the harm that could be caused if companies made meaningful decisions based on meaningless data, per the newspaper’s report.

The ICO may be feeling moved to make more substantial interventions in this area because UK lawmakers aren’t being proactive when it comes to biometrics regulation.

An independent review of UK legislation in this area, published this summer, concluded the country urgently needs new laws to govern the use of biometric technologies — and called for the government to come forward with primary legislation.

However the government does not appear to have paid much mind to such urging or these various regulatory warnings. A planned data protection reform, which it presented earlier this year, eschewed action to boost algorithmic transparency across the public sector, for example, while on biometrics specifically it offered only soft-touch measures aimed at clarifying the rules on police use of biometric data (talking about developing best practice standards and codes of conduct). So a far cry from the comprehensive framework called for by the Ada Lovelace research institute-commissioned independent law review.

In any case, the data reform bill remains on pause after a summer of domestic political turmoil that has led to two changes of prime minister in quick succession. A legislative rethink was also announced earlier this month by the (still in post) secretary of state for digital issues, Michelle Donelan — who used a recent Conservative Party conference speech to take aim at the EU’s General Data Protection Regulation (GDPR), aka the framework that was transposed into UK law back in 2018. She said the government would be “replacing” the GDPR with a bespoke British data protection system — but gave precious little detail on what exactly will be put in place of that foundational framework.

The GDPR regulates the processing of biometric data when it’s used for identifying individuals — and also includes a right to human review of certain substantial algorithmic decisions. So if the government is intent on ripping up the current rulebook, it raises the question of how — or even whether — biometric technologies will be regulated in the UK in the future.

And that makes the ICO’s public pronouncements on the risks of pseudoscientific biometric AI systems all the more important. (It’s also noteworthy that the regulator name-checks the involvement of the Ada Lovelace Institute (which commissioned the aforementioned legal review) and the British Youth Council which it says will be involved in a process of public dialogues it plans to use to help shape its forthcoming ‘people-centric’ biometrics guidance.)

“Supporting businesses and organisations at the development stage of biometrics products and services embeds a ‘privacy by design’ approach, thus reducing the risk factors and ensuring organisations are operating safely and lawfully,” the ICO added, in what could be interpreted as rather pointed remarks on government policy priorities.

The regulator’s concern about emotional analysis tech is not an academic risk, either.

For example, a Manchester, UK-based company called Silent Talker was one of the entities involved in a consortium developing a highly controversial ‘AI lie detector’ technology — called iBorderCtrl — that was being pitched as a way to speed up immigration checks all the way back in 2017. Ironically enough, the iBorderCtrl project garnered EU R&D funding, even as critics accused the research project of automating discrimination.

It’s not clear what the status of the underlying ‘AI lie detector’ technology is now. The Manchester company involved in the ‘proof of concept’ project — which was also linked to research at Manchester Metropolitan University — was dissolved this summer, per Companies House records. But the iBorderCtrl project was also criticized on transparency grounds, and has faced a number of freedom of information actions seeking to lift the lid on the project and the consortium behind it — with, apparently, limited success.

In another example, UK health startup Babylon AI demonstrated an “emotion-scanning” AI embedded into a telehealth platform back in a 2018 presentation — saying the tech scanned facial expressions in real time to generate an assessment of how the person was feeling and present that to the clinician to potentially act on.

Its CEO Ali Parsa said at the time that the emotion-scanning tech had been built and implied it would be coming to market — however the company later rowed back on the claim, saying the AI had only been used in pre-market testing and that development had been deprioritized in favor of other AI-powered features.

The ICO will surely be happy that Babylon had a rethink on using AI to claim its software could perform remote emotion-scanning.

Its blog post goes on to cite other current examples where biometric tech, more broadly, is being used — including in airports for streamlining passenger journeys; financial companies using live facial recognition tech for remote ID checks; and companies using voice recognition for convenient account access, instead of having to remember passwords.

The regulator doesn’t make specific remarks on the cited use-cases but it looks likely it will be keeping a close eye on all applications of biometrics given the high potential risks to people’s privacy and rights — even as its most special attention will be directed toward uses of the tech that slip their chains and stray into the realms of science fiction.

The ICO’s blog post notes that its look into “biometrics futures” is a key part of its “horizon-scanning function”. Which is technocrat speak for ‘scrutiny of this type of AI tech is being prioritized because it’s fast coming down the pipe at us all’.

“This work identifies the critical technologies and innovation that will impact privacy in the near future — its aim is to ensure that the ICO is prepared to confront the privacy challenges transformative technology can bring and ensure responsible innovation is encouraged,” it added.

Perceptron: AI saving whales, steadying gaits and banishing traffic • TechCrunch

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

Over the past few weeks, researchers at MIT have detailed their work on a system to track the progression of Parkinson’s patients by continuously monitoring their gait speed. Elsewhere, Whale Safe, a project spearheaded by the Benioff Ocean Science Laboratory and partners, launched buoys equipped with AI-powered sensors in an experiment to prevent ships from striking whales. Other aspects of ecology and academics also saw advances powered by machine learning.

The MIT Parkinson’s-tracking effort aims to help clinicians overcome challenges in treating the estimated 10 million people afflicted by the disease globally. Typically, Parkinson’s patients’ motor skills and cognitive functions are evaluated during clinical visits, but these can be skewed by outside factors like tiredness. Add to that the fact that commuting to an office is too overwhelming a prospect for many patients, and their situation grows starker.

As an alternative, the MIT team proposes an at-home device that gathers data using radio signals reflecting off of a patient’s body as they move around their home. About the size of a Wi-Fi router, the device, which runs all day, uses an algorithm to pick out the signals even when there are other people moving around the room.

In a study published in the journal Science Translational Medicine, the MIT researchers showed that their device was able to effectively track Parkinson’s progression and severity across dozens of participants during a pilot study. For instance, they showed that gait speed declined almost twice as fast for people with Parkinson’s compared to those without, and that daily fluctuations in a patient’s walking speed corresponded with how well they were responding to their medication.

Moving from healthcare to the plight of whales, the Whale Safe project — whose stated mission is to “utilize best-in-class technology with best-practice conservation strategies to create a solution to reduce risk to whales” — in late September deployed buoys equipped with onboard computers that can record whale sounds using an underwater microphone. An AI system detects the sounds of particular species and relays the results to a researcher, so that the location of the animal — or animals — can be calculated by corroborating the data with water conditions and local records of whale sightings. The whales’ locations are then communicated to nearby ships so they can reroute as necessary.

Collisions with ships are a major cause of death for whales — many species of which are endangered. According to research carried out by the nonprofit Friend of the Sea, ship strikes kill more than 20,000 whales every year. That’s destructive to local ecosystems, as whales play a significant role in capturing carbon from the atmosphere. A single great whale can sequester around 33 tons of carbon dioxide on average.

Image Credits: Benioff Ocean Science Laboratory

Whale Safe currently has buoys deployed in the Santa Barbara Channel near the ports of Los Angeles and Long Beach. In the future, the project aims to install buoys in other American coastal areas including Seattle, Vancouver and San Diego.

Conserving forests is another area where technology is being brought into play. Surveys of forest land from above using lidar are helpful in estimating growth and other metrics, but the data they produce aren’t always easy to read. Point clouds from lidar are just undifferentiated height and distance maps — the forest is one big surface, not a bunch of individual trees. Those tend to have to be tracked by humans on the ground.

Purdue researchers have built an algorithm (not quite AI but we’ll allow it this time) that turns a big lump of 3D lidar data into individually segmented trees, allowing not just canopy and growth data to be collected but a good estimate of actual trees. It does this by calculating the most efficient path from a given point to the ground, essentially the reverse of what nutrients would do in a tree. The results are quite accurate (after being checked with an in-person inventory) and could contribute to far better tracking of forests and resources in the future.
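
To make that path-to-the-ground idea concrete, here is a toy sketch of the principle — not the Purdue team’s actual algorithm — assuming points are linked to neighbors within a fixed radius and each point is claimed by whichever trunk base reaches it via the cheapest path (a multi-source Dijkstra):

```python
import heapq
import math

def segment_trees(points, trunk_ids, radius=1.5):
    """Assign each lidar point to the trunk base that reaches it by the
    cheapest path through the point cloud (multi-source Dijkstra).
    `points` are 3D coordinates; `trunk_ids` index the trunk/ground points."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # Neighbor graph: connect points closer than `radius` (O(n^2) for the toy).
    nbrs = {i: [j for j in range(n) if j != i and dist(i, j) <= radius]
            for i in range(n)}
    cost = {i: math.inf for i in range(n)}
    label = {i: None for i in range(n)}
    heap = []
    for t in trunk_ids:  # seed the search at each trunk base, cost 0
        cost[t], label[t] = 0.0, t
        heapq.heappush(heap, (0.0, t))
    while heap:
        d, i = heapq.heappop(heap)
        if d > cost[i]:
            continue  # stale queue entry
        for j in nbrs[i]:
            nd = d + dist(i, j)
            if nd < cost[j]:
                cost[j], label[j] = nd, label[i]
                heapq.heappush(heap, (nd, j))
    return label  # point index -> trunk index of its tree

# Two toy "trees": trunk bases at x=0 and x=5, canopy points above each.
pts = [(0, 0, 0), (0, 0, 1), (0, 0, 2), (5, 0, 0), (5, 0, 1)]
print(segment_trees(pts, trunk_ids=[0, 3]))
# → {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}
```

In a real system the neighbor graph would be built over millions of points with spatial indexing, and the path cost would be shaped to follow plausible trunk-and-branch routes rather than raw Euclidean distance.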

Self-driving cars are appearing on our streets with more frequency these days, even if they’re still basically just beta tests. As their numbers grow, how should policy makers and civic engineers accommodate them? Carnegie Mellon researchers put together a policy brief that makes a few interesting arguments.

Diagram showing how collaborative decision making in which a few cars opt for a longer route actually makes it faster for most. Image Credits: Carnegie Mellon University

The key difference, they argue, is that autonomous vehicles drive “altruistically,” which is to say they deliberately accommodate other drivers — by, say, always allowing other drivers to merge ahead of them. This type of behavior can be taken advantage of, but at a policy level it should be rewarded, they argue, and AVs should be given access to things like toll roads and HOV and bus lanes, since they won’t use them “selfishly.”

They also recommend that planning agencies take a real zoomed-out view when making decisions, involving other transportation types like bikes and scooters and looking at how inter-AV and inter-fleet communication should be required or augmented. You can read the full 23-page report here (PDF).

Turning from traffic to translation, Meta this past week announced a new system, Universal Speech Translator, that’s designed to interpret unwritten languages like Hokkien. As an Engadget piece on the system notes, thousands of spoken languages don’t have a written component, posing a problem for most machine learning translation systems, which typically need to convert speech to written words before translating the new language and reverting the text back to speech.

To get around the lack of labeled examples of language, Universal Speech Translator converts speech into “acoustic units” and then generates waveforms. Currently, the system is rather limited in what it can do — it allows speakers of Hokkien, a language commonly used in southeastern mainland China, to translate to English one full sentence at a time. But the Meta research team behind Universal Speech Translator believes that it’ll continue to improve.

Illustration for AlphaTensor. Image Credits: DeepMind

Elsewhere within the AI field, researchers at DeepMind detailed AlphaTensor, which the Alphabet-backed lab claims is the first AI system for discovering new, efficient and “provably correct” algorithms. AlphaTensor was designed specifically to find new techniques for matrix multiplication, a math operation that’s core to the way modern machine learning systems work.

To leverage AlphaTensor, DeepMind converted the problem of finding matrix multiplication algorithms into a single-player game where the “board” is a three-dimensional array of numbers called a tensor. According to DeepMind, AlphaTensor learned to excel at it, improving an algorithm first discovered 50 years ago and discovering new algorithms with “state-of-the-art” complexity. One algorithm the system discovered, optimized for hardware such as Nvidia’s V100 GPU, was 10% to 20% faster than commonly used algorithms on the same hardware.
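
For context on what an improved matrix multiplication algorithm looks like, the 50-year-old result referenced above is Strassen’s 1969 method, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8 — a saving that compounds when applied recursively to larger matrices. A minimal sketch:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's 7 multiplications
    (the naive method needs 8)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

AlphaTensor searches for decompositions like these automatically, which is how it found variants faster than the standard algorithms on specific hardware.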
