Privacy will die to deliver us the thinking and knowing computer | TechCrunch

We’re getting a first proper look at Humane’s much-hyped “AI pin” (whatever that is) on November 9, and personalized AI memory startup Rewind is launching a pendant to track not only your digital but also your physical life sometime in the foreseeable future. Buzz abounds about OpenAI’s Sam Altman meeting with Apple’s longtime design deity Jony Ive about building an AI hardware gadget of some kind, and murmurs in the halls of VC offices everywhere herald the coming of an iPhone moment for AI in breathless tones.

Of course, the potential is immense: a device that takes what ChatGPT has been able to do with generative AI and extends it to many other aspects of our lives – hopefully with a bit more smarts and practicality. But the cost is considerable: not the financial cost, which is just more wealth transfer from the coal reserves of rich family offices and high-net-worth individuals to the insatiable fires of startup burn rates. No, I’m talking about the price we pay in privacy.

The death of privacy has been called, called off, countered and repeated many times over the years (just Google the phrase) in response to any number of technological advances, including mobile device live location sharing; the advent and eventual ubiquity of social networks and their resulting social graphs; satellite mapping and high-resolution imagery; massive credential and personally identifiable information (PII) leaks; and much, much more.

Generative AI – the kind popularized by OpenAI and ChatGPT, and the kind most people are referring to when they anticipate a coming wave of AI gadgetry – is another mortal enemy of what we think of as privacy, and one of its most voracious and indiscriminate killers yet.

At our recent TechCrunch Disrupt event in San Francisco, Signal President Meredith Whittaker – one of the only major figures in tech who seems willing and eager to engage with the specific, realistic threats of AI, rather than pointing to eventual doomsday scenarios to keep people’s eyes off the prize – said that AI is at heart “a surveillance technology” that “requires the surveillance business model” in terms of its capacity and need to hoover up all our data. It’s surveillant in use, too, in terms of image recognition, sentiment analysis and countless other similar applications.

All of these trade-offs are for a reasonable facsimile of a thinking and knowing computer, but not one that can actually think and know. The definitions of those things will obviously vary, but most experts agree that the LLMs we have today, while definitely advanced and clearly able to convincingly mimic human behavior in certain limited circumstances, are not actually replicating human knowledge or thought.

But even to achieve this level of performance, the models underpinning things like ChatGPT have required the input of vast quantities of data – data collected arguably with the ‘consent’ of those who provided it, in that they posted it freely to the internet without a firm understanding of what that would mean for collection and re-use, let alone in a domain that didn’t really exist when they posted it to begin with.

That’s taking into account digital information, which is in itself a very expansive collection of data that probably reveals much more than any of us individually would be comfortable with. But it doesn’t even include the kind of physical-world information that is poised to be gathered by devices like Humane’s AI pin, the Rewind pendant and others, including the Ray-Ban Meta smart glasses that the Facebook owner released earlier this month, which are set to add features next year that provide information on demand about real-world objects and places captured through their built-in cameras.

Some of those working in this emerging category have anticipated privacy concerns and provided what protections they can: Humane notes that its device will always indicate when it’s capturing via a yellow LED; Meta revamped the notification light on the Ray-Ban smart glasses versus the first iteration, and the glasses physically disable recording if they detect tampering with, or obfuscation of, the LED; and Rewind says it’s taking a privacy-first approach to all data use in hopes that will become the standard for the industry.

It’s unlikely that will become the standard for the industry. The standard, historically, has been whatever the minimum is that the market and regulators will bear – and both have tended to accept more incursions over time, whether tacitly or at least via absence of objection to changing terms, conditions and privacy policies.

A leap from what we have now to a true thinking and knowing computer that can act as a virtual companion, with at least as full a picture of our lives as we have ourselves, will require a forfeiture of as much data as we can ever hope to collect or possess – insofar as that’s something any of us can possess. And if we achieve our goals, whether this data ever leaves our local devices (and the virtual intelligences that dwell therein) actually becomes somewhat moot, since our information will then be shared with another – even if the other in this case happens not to have a flesh-and-blood form.

It’s very possible that by that point, the concept of ‘privacy’ as we understand it today will be outmoded or insufficient for the world in which we find ourselves, and maybe we’ll have something to replace it that preserves its spirit in light of this new paradigm. Either way, I think the path to AI’s iPhone moment necessarily requires the ‘death’ of privacy as we know it, which puts companies that enshrine and valorize privacy as a key differentiator – like Apple – in an odd position over the next decade or so.

Viso eyes no-code for the future of computer vision and scores funding to scale | TechCrunch

Computer vision has become commonplace across innumerable industries, but the methods of creating and controlling these visual AI models aren’t so easy. Viso is building a low/no-code end-to-end platform that lets companies roll their own computer vision stack, and they just pulled in $9.2M to scale up.

There are tons of computer vision models and services out there, of course, but a lot sort of fit the description of “model as API.” Say you want to do person recognition and rate whether they’re standing or sitting, so you can tell how busy a train station or restaurant is.

There are fully formed options out there for you for person and pose recognition, but they may not fit your use case or security model, or they’re too expensive to scale with. Building your own is an option, but the expertise required to train and deploy modern CV models is non-trivial: unless you have the time and money to stand up a real team, it may be out of your reach.
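
To make the “model as API” idea concrete, here’s a minimal sketch of what that busy detector might look like as a thin client around a hosted pose-recognition service. The endpoint, auth scheme and response fields below are invented for illustration; real vendors’ APIs differ in the details.

```python
# Hypothetical sketch: send one camera frame to a hosted pose model
# and tally postures. The endpoint and response schema are invented.
import requests

API_URL = "https://cv-vendor.example.com/v1/pose"  # invented endpoint
API_KEY = "YOUR_API_KEY"

def count_occupancy(image_path: str) -> dict:
    """Upload a frame, count people standing vs. sitting."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=10,
        )
    resp.raise_for_status()
    tally = {"standing": 0, "sitting": 0}
    for person in resp.json().get("people", []):  # assumed schema
        posture = person.get("posture")
        if posture in tally:
            tally[posture] += 1
    return tally

# e.g. {'standing': 12, 'sitting': 4} -> the station is getting busy
print(count_occupancy("platform_cam_frame.jpg"))
```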

That’s the type of situation that Viso wants to remedy, by providing a platform to create an enterprise-grade CV model of your own without dedicating the kind of time and resources that it often takes.

“Early in the adoption cycle, companies resort to buying/renting pre-made computer vision systems. However, they eventually need to bring all computer vision initiatives together (streamlining), and deeply integrate and customize them, and also ‘own’ them because the data is sensitive and the technology of strategic value. This is why companies across those industries are starting to hire AI engineers,” explained Viso’s co-founder and co-CEO, Gaudenz Boesch.

Examples of Viso-powered computer vision applications.

But unlike for many other enterprise-level needs, computer vision lacks a “specialized infrastructure” to efficiently build and deploy it.

“Companies have to build it from scratch, trying to assemble a plethora of disconnected software and hardware platforms (cameras, servers) across the organization,” he continued. This in turn requires expertise across numerous domains that quickly grows too expensive.

Viso’s approach will likely look familiar to anyone who has used no-code tools in other contexts. It amounts to a series of modules, both pre-built and customizable, that let a user select, train, and deploy computer vision models as needed.

One view of the model creation process.

Of course, you’ll still need some level of expertise – which object recognition model should it run? Where will training data be kept? How is inference handled? But a handful of engineers can do the work of far more, and all in one place rather than scattered across a dozen tools, APIs, and code notebooks.

Viso says it’s end-to-end, and that doesn’t seem to be an exaggeration. Computer vision requires data to start with, and training processes, and then implementation, hosting, compliance work, and so on — and it seems to really be a “soup to nuts” solution that puts all of that in one place.

That’s a big list!

So if you were making that “busy detector” from earlier, you could conceivably come into it with nothing but a hundred hours of footage and come out the other end a week or two later with a complete product. That would include low-level analysis and storage of the raw data, annotation and labeling, training and testing of the base model, product integration, deployment online or offline, analytics, updates and backups, as well as access and security… all without leaving Viso, and probably without touching the semicolon or bracket keys. (There are various case studies here.)
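
As a rough illustration of what “soup to nuts” means in practice, here is a guess at how the busy detector’s pipeline might be expressed end to end on such a platform. Every module name and configuration key below is hypothetical (the article doesn’t document Viso’s actual schema), but the stages mirror the list above: ingestion, annotation, training, deployment, analytics and access control.

```python
# Illustrative only: an invented declarative pipeline definition for
# the "busy detector," covering each stage named in the article.
busy_detector = {
    "source": {"type": "rtsp_camera", "url": "rtsp://station-cam-01/stream"},
    "ingest": {"sample_fps": 2, "raw_retention_days": 30},
    "annotation": {"labels": ["person"], "tool": "built_in_labeler"},
    "training": {
        "task": "pose_estimation",
        "base_model": "pretrained_pose_net",  # fine-tuned on the footage
        "classes": ["standing", "sitting"],
        "test_split": 0.2,
    },
    "deployment": {"target": "edge_device", "fallback": "cloud"},
    "analytics": {"metric": "occupancy_count", "dashboard": True},
    "security": {"roles": ["ops_team"], "encrypt_at_rest": True},
    "maintenance": {"auto_update": True, "backups": "daily"},
}
```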

Though there are other computer vision platforms out there, Boesch said none were “built to manage highly complex computer vision applications at scale, and maintain them continuously,” instead being more focused on a handful of tasks from the above list. Viso aims to support as many models and methods, hardware, and use cases as possible, while ensuring the customer owns the end result.

Not being a developer myself, I can’t speak to how difficult or easy different use cases might be, but certainly there is a fundamental attraction (as evidenced by the popularity of other low-code and end-to-end tools) to using fewer and more comprehensive platforms rather than stitching together a series of disconnected ones.

Viso’s investors seem to think so, and the company has raised $9.2 million in seed-stage funding, led by Accel with various angels participating. Interestingly, the company had been bootstrapped since its founding in Switzerland in 2018.

Boesch said exploding demand prompted the raise, which by AI-company standards is quite modest given the products on offer and existing customers. He said Viso has already been adopted by several large companies, including PricewaterhouseCoopers, DHL and Orange, and has seen 6x growth in new customers since 2022.

Kibsi raises $9.3M for its no-code computer vision platform | TechCrunch

Kibsi is an Irvine, California-based startup that is building a no-code computer vision platform that allows businesses to build and deploy computer vision applications. Among the things that set Kibsi apart from many other players in this space is that it lets businesses reuse their existing cameras to create insights into virtually anything they want to track, be that in a warehouse, restaurant or on an airport ramp.

The company today announced that it has raised a total of $9.3 million in pre-seed and seed funding. The participants in these rounds were GTMfund, NTTVC (which led the $4 million pre-seed round), Preface Ventures, Ridge Ventures, Secure Octane and Wipro Ventures.

Image Credits: Kibsi

Tolga Tarhan, who is now Kibsi’s CEO, joined Rackspace in 2019 after the founding team sold AWS-focused consulting business Onica to Rackspace. He later became the company’s CTO. After the Rackspace IPO, he decided that his journey in the company had come full circle. Together with co-founders and former Onica execs Amanda McQueen, Amir Kashani and Eric Miller, the team looked at what they could build next.

“I wanted to go get out and go create again,” Tarhan told me. “We wanted something that had an IoT orientation to it — because we’ve done a lot of IoT at Onica, was a big part of our business. We wanted it to be software — we had done enough consulting for multiple lifetimes. And we wanted it to be something involving AI, because we thought IoT by itself was almost old news. How do we combine these things? And as we thought about that space and our experience and where we got into roadblocks with customers, we realized that many customers are having trouble implementing computer vision.”

Image Credits: Kibsi

He noted that too often, computer vision projects in large enterprises fail even though they have the cameras and the talent to work on models. But ingesting streams from their cameras to then run the models takes a lot of undifferentiated work — and integrating all of this with downstream applications presents another set of integration challenges. So the team decided to build a computer vision platform that enables businesses to use their existing cameras, combined with a user experience that quickly lets users gain real business value from this data. The platform lets users run their own computer vision models or Kibsi’s own, and it then presents the results in a way that matches the business intent, both in Kibsi’s own user interface and through an API.

“We don’t return X and Y coordinates of people and objects,” said Tarhan. “If you’re thinking about a business analyst’s job, they don’t really care that a person is standing at this coordinate — what they want to know is: did that person interact with that merchandise?”
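
A minimal sketch of that abstraction, assuming a simple proximity rule: raw detections go in, business-level events come out. The data structures, event name and threshold here are invented for illustration and are not Kibsi’s actual event model.

```python
# Hypothetical sketch: translate raw detections (labels + coordinates)
# into a business-level event like "person interacted with merchandise."
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "person" or "merchandise"
    x: float     # bounding-box center, in pixels
    y: float

def interaction_events(detections: list[Detection], radius: float = 50.0):
    """Emit an event whenever a person is within `radius` px of an item."""
    people = [d for d in detections if d.label == "person"]
    items = [d for d in detections if d.label == "merchandise"]
    return [
        "person_interacted_with_merchandise"
        for p in people
        for m in items
        if ((p.x - m.x) ** 2 + (p.y - m.y) ** 2) ** 0.5 <= radius
    ]

frame = [Detection("person", 120, 80), Detection("merchandise", 140, 90)]
print(interaction_events(frame))  # -> ['person_interacted_with_merchandise']
```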

Image Credits: Kibsi

Kibsi has already attracted attention from customers like Owens Corning. Yet while manufacturing is a natural environment for computer vision, the Kibsi team also counts the likes of Whisker, which is embedding Kibsi’s tech into its litter robot, and the Woodland Park Zoo among its customers.

“Tracking animal behavior and interactions requires our professionals to sift through hours of camera footage,” said Bonnie Baird, animal welfare scientist at Woodland Park Zoo. “We are excited to add Kibsi’s computer vision capabilities to our existing cameras to gain valuable insights about our animals and their well-being.”

While there are also obvious use cases for Kibsi in the smart city space (and its investor NTT is a major player there), Tarhan noted that the sales cycles there are quite slow. “As a startup, it’s not the best place to play day one,” he said.

Kibsi offers developers a free trial of its platform, with a more comprehensive developer plan starting at $99/month. For premium and enterprise pricing plans, potential customers need to contact the company directly.

Segway partners with Drover AI, Luna to bring computer vision to e-scooters | TechCrunch

Segway-Ninebot is partnering with Drover AI and Luna Systems — two startups that build computer vision technology to detect and correct improper electric scooter riding — to integrate their technologies into its AI-enabled e-scooters.

The partnerships, announced at the Micromobility Europe event in Amsterdam, are something of a pivot for Segway. This time last year, the scooter manufacturer launched an AI-powered scooter, the S90L, as the vertically integrated solution for scooter advanced rider assistance systems (ARAS). Rather than retrofitting third-party hardware and software systems onto scooters, shared micromobility operators were offered a unified platform that included everything from the scooter itself to intelligent sensors to computer vision models.

Segway’s offering came as almost every major e-scooter operator began implementing some form of scooter ARAS that would prevent sidewalk riding in a bid to win over cities.

While Segway has managed to sell about 20,000 S90Ls to shared micromobility operators (mainly to Lyft), the company realized that it was spreading itself a bit thin, according to Tony Ho, Segway’s vice president of business development. Like many tech companies, Segway spent the last year re-strategizing and came away with a reaffirmed commitment to focus on core competencies. For Segway, that means building the hardware and working with partners to provide the software.

Los Angeles-based Drover and Dublin-based Luna have been leading the camera-focused scooter ARAS movement by testing and selling attachable IoT modules to companies like Spin, Voi, Helbiz, Beam, Fenix and others.

“When you deploy AI-based scooters in new cities, you need to train the computer vision system to learn the city, the pavements, the parking systems, the bike lanes,” Ho told TechCrunch. “So in every city where you deploy these scooters, there’s actually a huge amount of data you need to collect, and you also have to build the model in a way that’s suitable for each city.”

Ho said Segway didn’t have the bandwidth and resources to deal with that scale. Which is where Luna and Drover come in.

The non-exclusive partnerships with the startups will work in two ways. Customers that purchase S90L models — which are equipped with cameras and processors (CPU and GPU) — can choose to implement either Drover’s or Luna’s scooter ARAS algorithms from the factory floor. Segway’s software will also be available, but it won’t be something the company focuses on or spends much time promoting.

“It’s almost like we’re building a mini app store for our scooters,” said Ho. “We’re opening the platform to developers or startups or operators, so they can basically take our vehicles, train them with their algorithm, and we become the computer on which they can run their algorithm.”

Operators that don’t already have S90L models but want scooter ARAS capability will have the option to retrofit Segway’s new modular AI system called Pilot Edge. Pilot Edge is essentially an add-on box with all of the necessary sensors for scooter ARAS, into which Drover or Luna’s tech can be integrated.

While Segway is open to any software that its operator customers choose, the company has a deal with Drover as its preferred software partner. Similarly, Drover will also recommend that customers purchase Segway’s hardware platform both for integrated vehicles and stand-alone computer vision modules.

For Drover and Luna, partnering with Segway means they no longer have to worry about building and deploying hardware, and can instead focus on software development and charging monthly fees for scooter ARAS. Which is good because, as Ho says, “hardware is hard.” It’s expensive to build and to store inventory that might not sell, and for small startups, dealing with supply chains can really cut into profits.

“It’s dirty work, but we can do it because we have huge scale,” said Ho. “So if we aggregate everybody’s demand, we can afford to do this at a bigger scale and therefore a lower cost.”
