
Will the future of work be ethical? Founder perspectives


Greg Epstein

Contributor

Greg M. Epstein is the Humanist Chaplain at Harvard and MIT, and the author of The New York Times bestselling book “Good Without God.” Described as a “godfather to the [humanist] movement” by The New York Times Magazine in recognition of his efforts to build inclusive, inspiring and ethical communities for the nonreligious and allies, Greg was also named “one of the top faith and moral leaders in the United States” by Faithful Internet, a project of the United Church of Christ and the Stanford Law School Center for Internet and Society.


In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, “Will the Future of Work Be Ethical?”, reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.

Below, Greg speaks to two founders of innovative startups whose work provoked much discussion at the EmTech Next conference. Moxi, the robot assistant created by Andrea Thomaz of Diligent Robotics and her team, was a constant presence in the Media Lab reception hall immediately outside the auditorium in which all the main talks took place. And Prayag Narula of LeadGenius was featured, alongside leading tech anthropologist Mary Gray, in a panel on “Ghost Work” that sparked intense discussion throughout the conference and beyond.

Andrea Thomaz is the Co-Founder and CEO of Diligent Robotics. Image via MIT Technology Review

Could you give a sketch of your background?

Andrea Thomaz: I was always doing math and science, and did electrical engineering as an undergrad at UT Austin. Then I came to MIT to do my PhD. It really wasn’t until grad school that I started doing robotics. I went to grad school interested in doing AI and was starting to get interested in this new machine learning that people were starting to talk about. In grad school, at the MIT Media Lab, Cynthia Breazeal was my advisor, and that’s where I fell in love with social robots and making robots that people want to be around and are also useful.

Say more about your journey at the Media Lab?

My statement of purpose for the Media Lab, in 1999, was that I thought that computers that were smarter would be easier to use. I thought AI was the solution to HCI [human-computer interaction]. So I came to the Media Lab because I thought that was the mecca of AI plus HCI.

It wasn’t until my second year as a student there that Cynthia finished her PhD with Rod Brooks and started at the Media Lab. And then I was like, “Oh wait a second. That’s what I’m talking about.”

Who is at the Media Lab now that’s doing interesting work for you?

For me, it’s kind of the same people. Pattie Maes has kind of reinvented her group since those days and is doing Fluid Interfaces; I always really appreciate the kind of things they’re working on. And Cynthia, her work is still very seminal in the field.

So now, you’re a CEO and Founder?

CEO and Co-Founder of Diligent Robotics. I had twelve years in academia in between. I finished my PhD, then went to Georgia Tech as a professor in computing, teaching AI and robotics, and I had a robotics lab there.

Then I got recruited away to UT Austin, in electrical and computer engineering. Again, teaching AI and having a robotics lab. Then at the end of 2017, I had a PhD student who was graduating and also interested in commercialization: my Co-Founder and CTO, Vivian Chu.

Let’s talk about the purpose of the human/robot interaction. In the case of your company, the robot’s purpose is to work alongside humans in a medical setting, humans whose work is not necessarily going to be replaced by a robot like Moxi. How does that work exactly?

One of the reasons our first target market [is] hospitals is that it’s an industry where they’re looking for ways to elevate their staff. They want their staff to be performing “at the top of their license.” You hear hospital administrators talking about this because there are record numbers of physician burnout, nurse burnout, and turnover.

They really are looking for ways to say, “Okay, how can we help our staff do more of what they were trained to do, and not spend 30% of their day running around fetching things, or doing things that don’t require their license?” That for us is the perfect market [for] collaborative robots. You’re looking for ways to automate things that the people in the environment don’t need to be doing, so they can do more important stuff. They can do all the clinical care.

In a lot of the hospitals we’re working with, we’re looking at their clinical workflows and identifying places where there’s a lot of human touch, like nurses making an assessment of the patient. But then the nurse finishes making an assessment [and] has to run and fetch things. Wouldn’t it be better if as soon as that nurse’s assessment hit the electronic medical record, that triggered a task for the robot to come and bring things? Then the nurse just gets to stay with the patient.

Those are the kind of things we’re looking for: places you could augment the clinical workflow with some automation and increase the amount of time that nurses or physicians are spending with patients.
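Thomaz’s description amounts to an event-driven handoff: a record saved in the electronic medical record triggers a fetch task for the robot. As a rough illustration only, here is a minimal sketch of that pattern in Python; the names (AssessmentEvent, RobotTaskQueue, on_emr_assessment_saved) are hypothetical and are not Diligent Robotics’ actual software.

```python
# Hypothetical sketch of an EMR-triggered robot task. All names here are
# illustrative; this is not Diligent Robotics' actual system.
from dataclasses import dataclass
from queue import Queue


@dataclass
class AssessmentEvent:
    """A nurse's assessment, as saved to the electronic medical record."""
    patient_room: str
    supplies_needed: list[str]


class RobotTaskQueue:
    """Holds fetch-and-deliver tasks for the robot to work through."""

    def __init__(self) -> None:
        self._tasks: Queue = Queue()

    def enqueue_fetch(self, event: AssessmentEvent) -> None:
        # The robot fetches supplies so the nurse can stay with the patient.
        self._tasks.put({
            "action": "fetch_and_deliver",
            "items": event.supplies_needed,
            "destination": event.patient_room,
        })


def on_emr_assessment_saved(event: AssessmentEvent, robots: RobotTaskQueue) -> None:
    """Callback fired when a nurse's assessment hits the EMR."""
    if event.supplies_needed:
        robots.enqueue_fetch(event)


# A saved assessment for Room 12 triggers a supply run.
queue = RobotTaskQueue()
on_emr_assessment_saved(AssessmentEvent("Room 12", ["IV kit", "gauze"]), queue)
```

In a real deployment the trigger would come from the hospital’s EMR integration rather than an in-process call, but the division of labor is the same: the nurse stays with the patient while the robot handles the supply run.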

So your robots, as you said before, do need human supervision. Will they always?

We are working on autonomy. We do want the robots to be doing things autonomously in the environment. But we like to talk about care as a team effort; we’re adding the robot to the team, and there are parts of it that the robot’s doing and parts of it that the human’s doing. There may be places where the robot needs some input or assistance, and that’s fine, because it’s part of the clinical team. That’s how we like to think about it: if the robot is designed to be a teammate, it wouldn’t be very unusual for the robot to need some help or supervision from a teammate.

That seems different than what you could call Ghost Work.

Right. In most service robots being deployed today, there is this remote supervisor that is either logged in and checking in on the robots, or at least the robots have the ability to phone home if there’s some sort of problem.

That’s where some of this Ghost Work comes in. People are monitoring and keeping track of robots in the middle of the night. Certainly that may be part of how we deploy our robots as well. But we also think that it’s perfectly fine for some of that supervision or assistance to come out into the forefront and be part of the face-to-face interaction that the robot has with some of its coworkers.

Since you could potentially envision a scenario in which your robots are monitored from off-site, in a kind of Ghost Work setting, what concerns do you have about the ways in which that work can be kind of anonymized and undercompensated?

Currently we are really interested in our own engineering staff having high-touch customer interaction that we’re really not looking to anonymize. If we had a robot in the field and it was phoning home about some problem that was happening, at our early stage of the company, that is such a valuable interaction that it wouldn’t be anonymous in our company. Maybe the CTO would be the one phoning in and saying, “What happened? I’m so interested.”

I think we’re still at a stage where all of the customer interactions and all of the information we can get from robots in the field are such valuable pieces of information.

But how are you envisioning best-case scenarios for the future? What if your robots really are so helpful that they’re very successful and people want them everywhere? Your CTO is not going to take all those calls. How could you do this in a way that could make your company very successful, but also handle these responsibilities ethically?

I think this is a job, and remote supervisors would be a full-fledged part of our community-at-work. I don’t think we would devalue that work.

Ethically, I think we would be up front about the hours. Part of remotely supervising robots is probably tedious. We don’t do that yet, and we don’t have a lot of this kind of staff. So it’s a little hard for me to say exactly what it would be like. But I hope we would create a workforce that’s doing this task, and it would be just like any other division of our company.

Content moderators for Facebook, for example, typically deal with more stressful and controversial content than what you guys are doing, but at least with regard to team-building, what you’re describing is a very different approach than the one they’ve taken. I see a lot of influencer-types fleeing Facebook in droves right now. I think a lot of them would probably want to be on it for the rest of their lives if they didn’t think it was responsible for ethical violations that may have saved the company billions in the short term.

Right. Another way our workforce would differ from one Facebook might build: our robots are operating in very private spaces, in hospitals. So every member of my staff that’s going to be working in a clinical space has to go through a lot of training about HIPAA compliance and privacy. If you accidentally see a patient’s name, you have to be very careful not to take any pictures, not to save anything, and to make sure it doesn’t get shared broadly. So those kinds of regulations and privacy concerns would definitely apply to our remote supervisor workforce.

In that sense, I personally, [and we] as a company, have to take responsibility for these people that I’m asking to remotely supervise the robot and say, “Yes, I am ensuring that all of these people are HIPAA compliant. They’re going to be good stewards of the privacy information and private information that they might see in this space.”

You said an initial setting for your robot products is the hospital/health care setting. Do you have other settings in mind that you’re able to talk about right now? Even as a long-term vision for where you see this type of robot serving?

We aren’t actively pursuing any other markets. We’re very excited about hospitals and making good progress with them. One of the benefits of coming to an event like this is getting exposure, having people come up to us all day long saying, “Oh, what about manufacturing? What about retail? Do you want to put Moxi in our restaurants in the UK?” And it’s like, “Maybe.” But in the end what Moxi is doing is indoor materials management. It’s a pretty generic skillset you could see applying to other markets.

How optimistic are you about our shared human future?

Like a shared human future-

Yes.

… with humans and technology? Or just all of us humans together?

Oh, that’s a good question because I typically ask the question thinking mainly in terms of humans. But your work raises a possible double meaning: our shared human future in the sense of the future we humans will be sharing with robots.

I’m very optimistic. I mean, I think we have some of the smartest people in the world working on amazing advances in robotics that are going to change the way we live and the way we work, taking on chores of life that we don’t have to do anymore.

I’m excited to see what our new jobs are like. I’m excited to see what nurses think about being a nurse 5-10 years from now when they’re like, “Oh man, remember when we used to have to take out the trash? That was terrible.”

Anytime I hear somebody talking about the smartest people in the world these days, especially having spent 15 years at Harvard and now being at MIT as well, it brings up concerns about inequality. We’re so good at educating people at these great schools that what we’re doing, without even meaning to, is setting up people like you… I wouldn’t go so far as to say you and me, I’ll just say you and a lot of the other students and folks that I work with… to just be able to dominate the workforce and the economy because you’re just so well-trained. You’re so well prepared.

What concerns do you have about inequality and how are you going to be thinking about the ways in which having access to robots will probably start with relatively wealthy institutions and places?

I think it’s already gotten better, because when I was in grad school, from 2000 to 2006, you definitely had to be at MIT or CMU, [or] at one of the top five places in this country, to even get your hands on a state-of-the-art robot. Now, costs are coming down such that there’s a lot of access to hardware and sensors in ways there never was before. That’s just in the last 10 to 15 years.

I think it’s always the case that the more wealthy universities are going to have more resources and more interesting new toys. But I do think that in robotics in particular, the commercial success and the commercial drivers of robotics have also driven hardware and sensors, the kind of basic building blocks of robotics. The costs have come down so much that I think it is creating a lot more access in universities that probably only ever were able to have, like, a LEGO robotics class. Now they could have a class that is really using a more realistic robot arm, something more like what you might program if you went and got a job at BMW.


Prayag Narula is the founder, President, and Chairman of the Board at LeadGenius. Image via MIT Technology Review

Tell me about what LeadGenius does.

Prayag Narula: LeadGenius helps large sales and marketing companies gather and manage their data. I’ll give you a very simple example: if you’re a sales leader, you want your sales team to be selling into a new market. But how do you sell into a new market? You find out, basically, who are the companies in that market, who are the decision-makers, and you start sending them emails or phone calls, right? That process is tedious and it’s not something that can be fully automated.

We combine our technology and input from our people in about 35 different countries, [doing] gig economy-like work, to provide this data to our customers. The idea is to use a combination of technology and people to solve large-scale data problems.

That points to where you got connected with Mary Gray and her coauthor, Siddharth Suri: the work you are creating and coordinating could very easily end up as “ghost work.”

It already is kind of ghost work, right? The people we work with are contractors, they’re gig economy-like workers. They could work on a one-off project basis. That’s not our model, but it is very close to ghost work, if not exactly ghost work.

As a founder, you seem to have been more thoughtful about the ethics of ghost work than the average company that might employ anything along those lines. I would imagine others in your company have been as well. So, how have you approached thinking about and structuring policies around these issues?

Great question. The company was started based on this idea that contract labor, especially online work, can be used to generate employment opportunities that would be the work of the future. So, we started out with this paradigm that we have to follow the ethics of online work, and we need to basically be the company that establishes some of those ethics. That is in the DNA of the company.

Fascinating. You were getting into an industry that, even beyond what your company in particular would be doing, was going to grow almost exponentially. You wanted to try to do it right, and it wasn’t necessarily clear what doing it right would look like at the founding. When did you found the company?

The company was founded in 2011. So you’re right. And it was, to a certain extent, founded in response to what we saw as not entirely ethical practices in what we are calling ghost work today. I remember meeting a founder of one of the competitors and he said, “My goal is to pay nothing for the work people are doing, right? That’s my goal, right?” It seemed so wrong.

He intended to persuade people to work for free?

Yeah, yeah, yeah.

As an “opportunity” for them.

As an opportunity. In mobile gaming you get free tokens to do some of the work, right? So, the goal was to not have any financial transaction associated with [the] work. I don’t want to live in that kind of future, where work is so cheap that it’s free.

One of the biggest and most controversial ethical decisions you’ve had to make was the idea of paying people a living wage, even if that meant paying different people in different places different amounts of money for the same kind of work.

Yes, that is one of the core tenets of LeadGenius. It has received a lot of pushback, both from the business model point of view (well, why not just pay the minimum amount available, or the lowest possible wage?) and from the community itself. And I don’t blame them. The idea that somebody who reports to you, somebody you are mentoring and who has less experience than you, sometimes makes even twice as much as you because they just happen to live in a place where the cost of living is higher seems unfair to a large extent.

As I said at the talk, I don’t think there is another solution yet. We’re still searching. And this is something that we as the community of online work, and “future of work” in general, will have to really figure out. I don’t see the cost of living in India or Vietnam or Eastern Europe being the same as Australia, the UK or US anytime soon. So, what happens?

How do you calculate what the living wage is? Do you use a specific index across the board or are there other factors involved?

For some states and countries, MIT publishes a living wage. We follow that. In other places where we don’t have data, we rely on word of mouth from our community. Typically one person will find a job and then tell 10 other people. Once we have established there is a large enough group there, we start asking around and [doing] research. We always pay more than minimum wage: sometimes 15 to 20% more, mostly 100% more. It’s not an exact science.
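As a back-of-the-envelope illustration of the pay rule Narula describes (use published living-wage data where it exists, otherwise pay a premium over the local minimum wage), here is a sketch in Python. The location keys, wage figures, and function are all invented for illustration and are not LeadGenius’s actual system.

```python
# Back-of-the-envelope sketch of the pay policy described above. The data,
# locations, and figures are invented; this is not LeadGenius's system.

# Hourly living wages where published data exists (e.g., MIT's figures).
LIVING_WAGE = {"austin_tx": 18.50}
# Local hourly minimum wages used as a fallback.
MINIMUM_WAGE = {"austin_tx": 7.25, "manila_ph": 1.50}


def hourly_rate(location: str, premium: float = 1.0) -> float:
    """Prefer published living-wage data; otherwise pay the local
    minimum wage plus a premium (Narula cites 15-20% up to 100% more)."""
    if location in LIVING_WAGE:
        return LIVING_WAGE[location]
    return MINIMUM_WAGE[location] * (1 + premium)


print(hourly_rate("austin_tx"))               # 18.50, from living-wage data
print(hourly_rate("manila_ph", premium=1.0))  # 3.00, 100% above minimum wage
```

The default premium of 100% reflects Narula’s “mostly 100% more than minimum wage”; as he says, it’s not an exact science.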

Any kind of index would force us to get into very complex debates about what it is to make a living. What kind of life are we really talking about?

Right.

Complex at the very least. What do you think a living wage is?

A living wage is something that someone who is working approximately 40 hours a week should be able to support a small family on. That is my definition. There’s actually a term for it, called family living wage or family minimum wage. That is the least that you should be able to do. I am not an expert on this. As I said, we just kind of scramble to figure it out every time we have a new location. We look at a lot of government sources, but a lot of times they are not enough.

You may be doing better than companies like Amazon.

Sure.

How do you see your company doing in comparison with comparable work done by bigger companies?

Mechanical Turk, one of the biggest ghost work platforms, has a systematic problem of not paying people minimum wage. The [Ghost Work] book talks about people on Mechanical Turk in the US who make $2 or $3 an hour. Amazon can’t wash their hands of this. And Amazon has basically said, “Oh, these people are just contractors. We are just connectors.” That stuff doesn’t fly anymore. I would hold them absolutely responsible for decreasing the value of work. And I think they are absolutely in violation of labor laws around the country.

I suspect you’re right. If companies like Facebook or Google or Amazon can’t make money without paying the people involved in that work fairly, maybe they just shouldn’t be able to make so much money.

Absolutely.

Did God come down from heaven and determine that everybody at Facebook and Google needs to be mega-rich?

I hear you. I think this has more to do with brushing under the carpet the human impact of the kind of technology platform you’re building. So when Facebook or Twitter or Amazon talk about content moderation, they say, “Oh, a lot of it is automated.” At various big companies, I would be surprised if anybody wants to even acknowledge that these workers exist and that their work is of value, especially the low-end workers (and I mean, our work is still considered middle- to high-end). That is so dehumanizing, so violently against the ethos that I, and most people in tech, hold dear. Somehow when it comes to humans and people we don’t see, when it comes to non-programmers and non-technology people, we just turn a blind eye, and that’s not acceptable.

The online gig economy can make a lot of positive change in the world. It can provide employment opportunities to people at a scale never seen before. I grew up in India, and youth unemployment there is at serious levels. More and more people joining the workforce don’t have those opportunities. You can make a big impact on that through the gig economy. But while we do it, we cannot and should not forget that there are people behind it. You don’t take advantage of these people. You make sure you do the right thing while you’re providing jobs to them.

I’ve been excited to talk to you about the human side of venture capital. You said to me at the conference yesterday that AI and machine learning have become so dominant in the venture capital mind that it’s hard to get venture capitalists interested in human-centered projects these days. Is that right?

I think that’s not just today. This is a problem that has plagued Silicon Valley for a long time. Anything human, anything related to people, anything that’s not computer science, whether that’s anthropology or sociology or political science, is somehow considered soft tech, not worth as much, and somehow a lesser intellectual pursuit than computer science.

When you’re building a company, when you’re pitching a company, you have to constantly brush under the rug that there is a lot of human impact and human input involved, especially in AI and machine learning. Basically, the pitch you end up making is, it’s done by people today, but we’ll automate it in the future. If you say, “this will perpetually be done by people,” nobody would give you any money. [In platforms like LeadGenius,] there is a lot of tech involved in building collaboration tools for hundreds of thousands of people. That’s hard tech. But somehow it’s not considered as hard as building new algorithms.

I’ll give you an example. One time I was talking to my friend who has a very similar model to us, and they don’t talk about the human factor at all. And he was actually being very successful in fundraising. I mean, we were having success but from niche pockets. He was definitely more successful than me in fundraising. And he said, “What I pitched to them is ‘Hey, we are doing this today using people, but it will get automated in the future. And then because we have all this data, we will be the ones automating it.’” [I asked,] “do you really believe that this can be automated?” And [he said,] “that doesn’t really matter.”

This is symptomatic of Facebook or Google, or any of these companies [not] talking about their content moderators and thinking about them enough. Or Amazon talking about the Mechanical Turk people. You cannot bring the human aspect, the human input of your business, especially the kind of low-end human input, into the conversation, because our economy punishes companies that rely on people. As long as they’re programmers, that’s okay. But as soon as you go to any other part of the equation, it’s just considered non-scalable and soft tech and easy, and that’s bullshit.

Do you think part of that is the people who are making the decisions not really having a good sense of the value of humans as humans? Of humans and humanity in and of themselves? That is a theme that’s been coming up in some of my other interviews.

I think it’s symptomatic of the way science, especially computer science, is taught in the industry today. It starts with how people are educated. There’s this holier-than-thou, very tech-centric outlook toward anything that involves less math, numbers, and algorithms. That manifests itself in how investments are made and how public companies are run. The education system and thought leaders have created this cascade effect of technologists not taking the human aspect of technology seriously.

How optimistic are you about our shared human future?

I’m neither optimistic nor pessimistic about it. This idea that technology is going to solve all our human problems and AI is going to somehow come and make us a society of abundance where nobody has to worry about anything and we’ll have enough resources, I don’t think that’s going to happen. So call me a pessimist in that sense. But on the other side, I think there are enough people thinking about the human impact of technology that I feel like we are going through this rough time where technology as an industry is growing up. And once it grows up, once it kind of starts to recognize the real impact and the real cost to humanity that it’s having, everybody that I know and work with does want to do the right thing. And that gives me a lot of hope.

That’s one of my favorite answers that I’ve received to that question.
