by tyler | Apr 11, 2024 | Tech
Patlytics, an AI-powered patent analytics platform, wants to help enterprises, IP professionals, and law firms speed up their patent workflows across discovery, analytics, comparisons, prosecution, and litigation.
The fledgling startup secured $4.5 million in seed funding, an oversubscribed round that closed in a few days, led by Google's AI-focused VC arm, Gradient Ventures.
Patlytics was co-founded by CEO Paul Lee, a former venture capitalist at Tribe, and CTO Arthur Jen, a serial entrepreneur who co-founded and served as CTO of the web3 wallet platform Magic. Their shared vision and complementary skills laid the foundation for Patlytics, driven by their firsthand experiences and a deep understanding of the industry’s pain points.
The co-founders told TechCrunch they witnessed many opportunities in the IP space. Lee, who spent most of his previous career investing in vertical SaaS and AI and a few legal tech startups, came across many IP companies that used antiquated techniques in a workflow that (he thought) should be digitalized. While working at Magic, Jen dealt intensively with filing and defending patents to protect company technology.
“The AI revolution in patent intelligence is not just about efficiency; it’s about transforming how patent professionals strategize and engage with the entire patent lifecycle,” Lee said in an exclusive interview with TechCrunch. “Recognizing the intricate blend of technical and legal expertise required for patent work, we’ve developed our platform to be an indispensable ally for patent professionals.”
Traditional patent prosecution and litigation workflows, which rely heavily on manual input, are complex and time-consuming, Lee continued. The research and discovery phase, which involves searching and analyzing large volumes of patent data, demands significant effort, encompassing internet searches, piecemeal manual investigations, and inherently inefficient procedures.
What sets the startup apart from industry peers like Anaqua, Clarivate, and Patsnap, Lee explained, is that Patlytics is "the sole provider offering end drafts and extensive chart solutions," thanks to its AI-first approach to insights and analytics.
Another differentiator: the platform doesn't rely entirely on software, but keeps humans in the loop throughout the process.
Image Credits: Patlytics co-founders: Arthur Jen (CTO) and Paul Lee (CEO) / Patlytics
The outfit recently launched its product, which is SOC 2 certified, and already counts some top-tier law firms and a few in-house legal teams at enterprises as customers. The company did not disclose the number of clients due to confidentiality agreements. Its target users include IP law firms and companies that hold multiple patents.
“Protecting intellectual property remains a major priority and business requirement for information technology, physical product, and biotechnology companies. As companies incorporate AI into their new products, companies from the automobile to the pharmaceutical industry are keen to protect new inventions and watch for infringement from competitors,” said Gradient’s general partner, Darian Shirazi. “We’re excited to partner with the team at Patlytics as they leverage the recent transformative innovations in AI to reinvent the intellectual property protection industry.”
The outfit will use the proceeds to invest in product and AI development and go-to-market function, aiming to cover all relevant workflows for patent prosecution and litigation. In addition, it plans to bolster its engineering team. The company has 11 employees.
“Knowing that navigating the intricate landscape of intellectual property can be laborious, our AI-integrated patent workflow aims to enhance the efficiency and provide insights, transforming IP protection into a dynamic force shaping the future technological landscape,” Jen said. “We build our technology with data security and privacy in mind, safeguarding sensitive information throughout the patent lifecycle.”
Other participants in the round included 8VC, Alumni Ventures, Gaingels, Joe Montana’s Liquid 2 Ventures, Position Ventures, Tribe Capital, and Vermilion Ventures. Notably, the round also attracted a host of angel backers, including partners at premier law firms, Datadog President Amit Agarwal, Fiscal Note founder Tim Hwang, and Tapas Media founder Chang Kim.

by tyler | Apr 11, 2024 | Tech
On Thursday, Apple announced that it has opened its iPhone repair process to include used components. Starting this fall, customers and independent repair shops will be able to fix the handset using compatible components.
Components that don’t require configuration (such as volume buttons) were already capable of being harvested from used devices. Today’s news adds all components — including the battery, display and camera — which Apple requires to be configured for full functionality. Face ID will not be available when the feature first rolls out, but it is coming down the road.
At launch, the feature will be available solely for the iPhone 15 line on both the supply and receiving ends of the repair. That caveat is due, in part, to limited interoperability between the models. In many cases, parts from older phones simply won't fit. The broader limitation that prohibited the use of components from used models comes down to a process commonly known as "parts pairing."
Apple has defended the process, stating that using genuine components is an important aspect of maintaining user security and privacy. Historically, the company hasn’t used the term “parts pairing” to refer to its configuration process, but it acknowledges that phrase has been widely adopted externally. It’s also aware that the term is loaded in many circles.
“‘Parts pairing’ is used a lot outside and has this negative connotation,” Apple senior vice president of hardware engineering, John Ternus, tells TechCrunch. “I think it’s led people to believe that we somehow block third-party parts from working, which we don’t. The way we look at it is, we need to know what part is in the device, for a few reasons. One, we need to authenticate that it’s a real Apple biometric device and that it hasn’t been spoofed or something like that. … Calibration is the other one.”
Right-to-repair advocates have accused Apple of hiding behind parts pairing as an excuse to stifle user-repairability. In January, iFixit called the process the "biggest threat to repair." The post paints a scenario wherein an iPhone user attempts to harvest a battery from a friend's old device, only to be greeted with a pop-up notification stating, "Important Battery Message. Unable to verify this iPhone has a genuine Apple battery."
It’s a real scenario and surely one that’s proven confusing for more than a few people. After all, a battery that was taken directly from another iPhone is clearly the real deal.
Today’s news is a step toward resolving the issue on newer iPhones, allowing the system to effectively verify that the battery being used is, in fact, genuine.
“Parts pairing, regardless of what you call it, is not evil,” says Ternus. “We’re basically saying, if we know what module’s in there, we can make sure that when you put our module in a new phone, you’re gonna get the best quality you can. Why’s that a bad thing?”
The practice gained added national notoriety when it was specifically targeted by Oregon's recently passed right-to-repair bill. Apple, which has penned an open letter in support of a similar California bill, heavily criticized the bill's parts pairing clause.
“Apple supports a consumer’s right to repair, and we’ve been vocal in our support for both state and federal legislation,” a spokesperson for the company noted in March. “We support the latest repair laws in California and New York because they increase consumer access to repair while keeping in place critical consumer protections. However, we’re concerned a small portion of the language in Oregon Senate Bill 1596 could seriously impact the critical and industry-leading privacy, safety and security protections that iPhone users around the world rely on every day.”
While aspects of today’s news will be viewed as a step in the right direction among some repair advocates, it seems unlikely that it will make the iPhone wholly compliant with the Oregon bill. Apple declined to offer further speculation on the matter.
Biometrics — including fingerprint and facial scans — continue to be a sticking point for the company.
“You think about Touch ID and Face ID and the criticality of their security because of how much of our information is on our phones,” says Ternus. “Our entire life is on our phones. We have no way of validating the performance of any third-party biometrics. That’s an area where we don’t enable the use of third-party modules for the key security functions. But in all other aspects, we do.”
It doesn’t seem coincidental that today’s news is being announced within weeks of the Oregon bill’s passage — particularly given that these changes are set to roll out in the fall. The move also appears to echo Apple’s decision to focus more on user-repairability with the iPhone 14, news that arrived amid a rising international call for right-to-repair laws.
Apple notes, however, that the processes behind this work were set in motion some time ago. Today’s announcement around device harvesting, for instance, has been in the works for two years.
For his part, Ternus suggests that his team has been focused on increasing user access to repairs independent of looming state and international legislation. “We want to make things more repairable, so we’re doing that work anyway,” he says. “To some extent, with my team, we block out the news of the world, because we know what we’re doing is right, and we focus on that.”
Overall, the executive preaches a right-tool-for-the-job philosophy when it comes to product design and self-repair.
“Repairability in isolation is not always the best answer,” Ternus says. “One of the things that I worry about is that people get very focused as if repairability is the goal. The reality is repairability is a means to an end. The goal is to build products that last, and if you focus too much on [making every part repairable], you end up creating some unintended consequences that are worse for the consumer and worse for the planet.”
Also announced this morning is an enhancement to Activation Lock, which is designed to deter thieves from harvesting stolen phones for parts. “If a device under repair detects that a supported part was obtained from another device with Activation Lock or Lost Mode enabled,” the company notes, “calibration capabilities for that part will be restricted.”
Ternus adds that, in addition to harvesting used iPhones for parts, Apple “fundamentally support[s] the right for people to use third-party parts as well.” Part of that, however, is enabling transparency.
“We have hundreds of millions of iPhones in use that are second- or third-hand devices,” he explains. “They’re a great way for people to get into the iPhone experience at a lower price point. We think it’s important for them to have the transparency of: was a repair done on this device? What part was used? That sort of thing.”
When iOS 15.2 arrived in November 2021, it introduced a new feature called "iPhone parts and service history." If your phone is new and has never been repaired, you simply won't see it. If your device has had parts replaced, however, the company surfaces a list of switched parts and repairs in Settings.
Ternus cites a recent UL Solutions study as evidence that third-party battery modules, in particular, can present a hazard to users.
“We don’t block the use of third-party batteries,” he says. “But we think it’s important to be able to notify the customer that this is or isn’t an authentic Apple battery, and hopefully that will motivate some of these third parties to improve the quality.”
While the fall update will open harvesting up to a good number of components, Apple has no plans to sell refurbished parts for user repairs.

by tyler | Apr 11, 2024 | Tech
We’re excited to reveal the complete agenda, packed with keynote stage speakers and interactive roundtable sessions. From fundraising insights to growth strategies, join us as we navigate the startup landscape together at TechCrunch Early Stage 2024 on April 25 in Boston.
Don’t miss out — secure your spot now for an unforgettable experience of learning, connection, and inspiration. Prices go up at the door!
Sessions are sponsored by Latham & Watkins LLP, Prepare 4 VC, Fidelity Private Shares, HomeHQ.ai, HAX, and Sand Technologies.

by tyler | Apr 11, 2024 | Tech
U.S. cybersecurity agency CISA is warning Sisense customers to reset their credentials and secrets after the data analytics company reported a security incident.
In a brief statement on Thursday, CISA said it was responding to a "recent compromise" at Sisense, which provides business intelligence and data analytics to companies around the world.
CISA urged Sisense customers to “reset credentials and secrets potentially exposed to, or used to access, Sisense services,” and to report any suspicious activity involving the use of compromised credentials to the agency.
The exact nature of the cybersecurity incident is not clear yet.
Founded in 2004, Sisense develops business intelligence and data analytics software for big companies, including telcos, airlines and tech giants. Sisense’s technology allows organizations to collect, analyze and visualize large amounts of their corporate data by tapping directly into their existing technologies and cloud systems.
Companies like Sisense rely on using credentials, such as passwords and private keys, to access a customer’s various stores of data for analysis. With access to these credentials, an attacker could potentially also access a customer’s data.
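CISA's guidance boils down to rotating every secret a compromised vendor integration could have touched. As a rough, hypothetical sketch of that hygiene (the `store` dictionary below stands in for a real secrets manager; it is not a Sisense or CISA API):

```python
import secrets


def rotate_credential(store: dict, name: str) -> str:
    """Replace a potentially exposed secret with a freshly generated one.

    `store` is a hypothetical stand-in for a secrets manager; real
    rotation also means updating the consuming service and revoking
    the old value with the provider that issued it.
    """
    new_value = secrets.token_urlsafe(32)  # cryptographically strong random token
    store[name] = new_value
    return new_value


# Rotate everything the third-party analytics service could access.
vault = {"warehouse_password": "old", "bi_api_key": "old"}
for key in list(vault):
    rotate_credential(vault, key)
```

The point of rotating broadly rather than selectively is that, as the article notes, the exact scope of the compromise isn't yet known.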
CISA said it is “taking an active role in collaborating with private industry partners to respond to this incident, especially as it relates to impacted critical infrastructure sector organizations.”
Sisense counts Air Canada, PagerDuty, Philips Healthcare, Skullcandy and Verizon as its customers, as well as thousands of other organizations globally.
News of the incident first emerged on Wednesday after cybersecurity journalist Brian Krebs published a note sent by Sisense chief information security officer, Sangram Dash, urging customers to “rotate any credentials that you use within your Sisense application.”
Neither Dash nor a spokesperson for Sisense responded to an email seeking comment.
Israeli media reported in January that Sisense had laid off about half of its employees since 2022. It is unclear if the layoffs impacted the company's security posture. Sisense has taken in close to $300 million in funding from investors, who include Insight Partners, Bessemer Venture Partners, and Battery Ventures.
Do you know more about the Sisense breach? To contact this reporter, get in touch on Signal and WhatsApp at +1 646-755-8849, or by email. You can also send files and documents via SecureDrop.

by tyler | Apr 11, 2024 | Tech
Meta said on Thursday that it is testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called “Nudity Protection in DMs,” which automatically blurs images detected as containing nudity.
The tech giant said it will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate images. Meta hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.
The company said it is also implementing changes that will make it more difficult for potential scammers and criminals to find and interact with teens. Meta said it is developing new technology to identify accounts that are “potentially” involved in sextortion scams, and will apply limits on how these suspect accounts can interact with other users.
In another step announced on Thursday, Meta said it has increased the data it is sharing with the cross-platform online child safety program, Lantern, to include more "sextortion-specific signals."
The social networking giant has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images. However, that doesn't stop these problems from occurring and causing misery for scores of teens and young people, sometimes with extremely tragic results.
We’ve rounded up the latest crop of changes in more detail below.
Nudity Protection in DMs aims to protect teen users of Instagram from cyberflashing by putting nude images behind a safety screen. Users will be able to choose whether or not to view such images.
“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.
The nudity safety-screen will be turned on by default for users under 18 globally. Older users will see a notification encouraging them to turn the feature on.
“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” the company added.
Anyone trying to forward a nude image will see the same warning encouraging them to reconsider.
The feature is powered by on-device machine learning, so Meta said it will work within end-to-end encrypted chats because the image analysis is carried out on the user’s own device.
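The architectural point here is that classification happens on the recipient's device, so message contents never need to reach a server for analysis. A highly simplified sketch of that flow (the `looks_like_nudity` function is a hypothetical stub, not Meta's actual model):

```python
from dataclasses import dataclass


@dataclass
class IncomingImage:
    data: bytes
    blurred: bool = False


def looks_like_nudity(data: bytes) -> bool:
    """Hypothetical stand-in for an on-device ML classifier."""
    return data.startswith(b"NSFW")  # placeholder heuristic, not a real model


def receive_image(img: IncomingImage) -> IncomingImage:
    # All analysis runs locally on the recipient's device, which is why
    # the feature can work inside end-to-end encrypted chats: the server
    # never sees the plaintext image.
    if looks_like_nudity(img.data):
        img.blurred = True  # shown behind a tap-to-view safety screen
    return img
```

The blur decision and the image itself stay client-side; only the user's choice to view, block, or report leaves the device.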
The nudity filter has been in development for nearly two years.
In another safeguarding measure, Instagram users who send or receive nudes will be directed to safety tips (with information about the potential risks involved), which, according to Meta, have been developed with guidance from experts.
"These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they're not who they say they are," the company wrote in a statement. "They also link to a range of resources, including Meta's Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18."
The company is also testing showing pop-up messages to people who may have interacted with an account that has been removed for sextortion. These pop-ups will also direct users to relevant resources.
“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues — such as nudity, threats to share private images or sexual exploitation or solicitation — we’ll direct them to local child safety helplines where available,” the company said.
While Meta says it removes sextortionists’ accounts when it becomes aware of them, it first needs to spot bad actors to shut them down. So, the company is trying to go further by “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.”
“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company said. “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”
It’s not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist (we’ve asked for more details). Presumably, the company may analyze patterns of communication to try to detect bad actors.
Accounts that get flagged by Meta as potential sextortionists will face restrictions on messaging or interacting with other users.
“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company wrote.
Users who are already chatting with potential scam or sextortion accounts will not have their chats shut down, but will be shown Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say ‘no’ to anything that makes them feel uncomfortable,” according to the company.
Teen users are already protected from receiving DMs from adults they are not connected with on Instagram (and also from other teens, in some cases). But Meta is taking this a step further: The company said it is testing a feature that hides the “Message” button on teenagers’ profiles for potential sextortion accounts — even if they’re connected.
“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” it added.
It's worth noting the company is under increasing scrutiny in Europe over child safety risks on Instagram, and enforcers have questioned its approach since the bloc's Digital Services Act (DSA) came into force last summer.
Meta has announced measures to combat sextortion before, most recently in February, when it expanded access to Take It Down. The third-party tool lets people generate a hash of an intimate image locally on their own device and share it with the National Center for Missing and Exploited Children, helping to create a repository of non-consensual image hashes that companies can use to search for and remove revenge porn.
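The privacy property Take It Down relies on can be illustrated with a deliberately simplified sketch: the image never leaves the device; only a fixed-length digest is shared. (This example uses a plain SHA-256 digest for clarity; the real service's hashing scheme may be more sophisticated, for instance robust to resizing or re-encoding, which a cryptographic hash is not.)

```python
import hashlib


def local_image_hash(image_bytes: bytes) -> str:
    """Compute a digest of an image entirely on-device.

    Only this fixed-length hex string would be submitted to a matching
    service; the image itself is never uploaded anywhere.
    """
    return hashlib.sha256(image_bytes).hexdigest()


# Hash some in-memory bytes standing in for an image file.
digest = local_image_hash(b"\x89PNG...stand-in image bytes...")
print(len(digest))  # 64 hex characters, regardless of image size
```

Because the digest is one-way, the repository can match future uploads of the same file without ever holding the image.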
The company's previous approaches to tackle that problem had been criticized, as they required young people to upload their nudes. In the absence of hard laws regulating how social networks need to protect children, Meta was left to self-regulate for years, with patchy results.
However, some requirements have landed on platforms in recent years — such as the UK’s Children Code (came into force in 2021) and the more recent DSA in the EU — and tech giants like Meta are finally having to pay more attention to protecting minors.
For example, in July 2021, Meta started defaulting young people's Instagram accounts to private just ahead of the UK compliance deadline. Even tighter privacy settings for teens on Instagram and Facebook followed in November 2022.
This January, the company announced it would set stricter default messaging settings for teens on Facebook and Instagram, shortly before the DSA's full compliance deadline arrived in February.
This slow and iterative feature creep at Meta concerning protective measures for young users raises questions about what took the company so long to apply stronger safeguards. It suggests Meta opted for a cynical minimum in safeguarding in a bid to manage the impact on usage and prioritize engagement over safety. That is exactly what Meta whistleblower Frances Haugen repeatedly denounced her former employer for.
Asked why the company is not also rolling out these new protections to Facebook, a spokeswoman for Meta told TechCrunch, “We want to respond to where we see the biggest need and relevance — which, when it comes to unwanted nudity and educating teens on the risks of sharing sensitive images — we think is on Instagram DMs, so that’s where we’re focusing first.”

by tyler | Apr 11, 2024 | Tech
Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe’s top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI’s robots will be deployed.
The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.
For its part, Magna invested in Sanctuary AI back in 2021 — right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. The company would later dub the system “Optimus.” Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5’7” (a pretty standard height for these machines) and weighs 155 pounds.
Phoenix isn't Sanctuary's first humanoid (an early model was deployed at a Canadian retailer), but it is the first to walk on legs, though most available videos only highlight the system's torso. The company has also focused some of its efforts on creating dexterous hands, an important addition if the system is expected to expand its functionality beyond moving totes around.
Sanctuary calls the pilot, “a multi-disciplinary assessment of improving cost and scalability of robots using Magna’s automotive product portfolio, engineering and manufacturing capabilities; and a strategic equity investment by Magna.”
As ever, these agreements should be taken as what they are: pilots. They’re not exactly validation of the form factor and systems — that comes later, if Magna gets what it’s looking for with the deal. That comes down to three big letters: ROI.
The company isn’t disclosing specifics with regard to the number of robots, the length of the pilot or even the specific factory where they will be deployed.
