Deep Science: Keeping AI honest in medicine, climate science and vision

A deep learning artificial neural network rendered in the shape of a human brain, taking in data and producing a result as output.
Image Credits: Andrii Shyp / Getty Images

Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the more interesting recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This week we have a number of entries aimed at identifying or confirming bias or cheating behaviors in machine learning systems, or failures in the data that support them. But first, a purely visually appealing project from the University of Washington, being presented at the Conference on Computer Vision and Pattern Recognition.

The researchers trained a system that recognizes and predicts the flow of water, clouds, smoke and other fluid features in photos, animating them from a single still image. The result is quite cool:

Animation showing how a system combined guesses at previous and forthcoming moments to animate a waterfall.
Image Credits: Hołyński et al./CVPR

Why, though? Well, for one thing, the future of photography is code, and the better our cameras understand the world they’re pointed at, the better they can accommodate or recreate it. Fake river flow isn’t in high demand, but accurately predicting movement and the behavior of common photo features is.

An important question to answer in the creation and application of any machine learning system is whether it’s actually doing the thing you want it to. The history of “AI” is riddled with examples of models that found a way to look like they’re performing a task without actually doing it — sort of like a kid kicking everything under the bed when they’re supposed to clean their room.

This is a serious problem in the medical field, where a system that’s faking it could have dire consequences. A study, also from UW, finds that models proposed in the literature have a tendency to do this, in what the researchers call “shortcut learning.” These shortcuts could be simple — basing an X-ray’s risk assessment on the patient’s demographics rather than on the image data, for instance — or more idiosyncratic, like relying heavily on conditions specific to the hospital the training data came from, making it impossible to generalize to other hospitals.

The team found that many models basically failed when used on datasets that differed from their training ones. They hope that advances in machine learning transparency (opening the “black box”) will make it easier to tell when these systems are skirting the rules.
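
To make the idea concrete, here is a minimal sketch of one common check for shortcut learning, using made-up stand-in data rather than the study's actual models or datasets: compare performance on held-out data from the training site against data from a different site.

```python
# Minimal sketch (synthetic stand-in data, hypothetical features): compare a
# model's accuracy on held-out data from its own "hospital" against data from
# a different one. A steep drop suggests it leaned on site-specific shortcuts.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for image-derived features and diagnosis labels from two hospitals.
X_train, y_train = rng.normal(size=(800, 32)), rng.integers(0, 2, 800)
X_internal, y_internal = rng.normal(size=(200, 32)), rng.integers(0, 2, 200)
X_external, y_external = rng.normal(size=(200, 32)), rng.integers(0, 2, 200)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

auc_internal = roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1])
auc_external = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])

# If internal AUC far exceeds external AUC, the model probably isn't learning
# what you think it is.
print(f"internal AUC: {auc_internal:.2f}, external AUC: {auc_external:.2f}")
```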

An MRI machine in a hospital.
Image Credits: Siegfried Modola / Getty Images

An example of the opposite can be found in climate modeling, which involves such complex systems that supercomputers can spin their bits for months just to simulate the movements of air and water in a tiny volume. Simplified models can be created by feeding the appropriate data into a machine learning system, which may, based on 23 hours of data, predict the 24th. But is the system actually modeling the climate factors, or just making statistically probable guesses at the outcomes?
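
As a rough illustration of what such an emulator looks like in code (on synthetic numbers, not actual climate simulations), here is a minimal sketch of a regression trained to map the first 23 hourly readings of a series to the 24th:

```python
# Minimal sketch (synthetic data): an emulator in this sense is just a model
# trained to map recent simulator output to the next time step.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Pretend each row is 24 consecutive hourly readings from a climate simulation.
series = np.cumsum(rng.normal(size=(500, 24)), axis=1)
X, y = series[:, :23], series[:, 23]  # hours 1-23 in, hour 24 out

emulator = Ridge().fit(X[:400], y[:400])
print("held-out R^2:", emulator.score(X[400:], y[400:]))
```

Whether a model like this has actually captured the underlying dynamics or is just pattern-matching is exactly the question the next study set out to answer.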

A study started at the University of Reading had the happy outcome of finding, by looking into these systems very carefully, that they actually do what they claim to. “In some sense, it means the data-driven method is intelligent. It is not an emulator of data. It is a model that captures the dynamical processes. It is able to reconstruct what lies behind the data,” said co-author Valerio Lucarini.

That kind of confidence will be useful in applications like this flood prediction project from Lancaster University, earlier versions of which suffered from the same lack of assurance. Professor Plamen Angelov is embarking on an improved flooding model that is not only faster and more accurate, but also explainable. You can probably expect this kind of “here’s how we know what we know” upgrade to become increasingly common wherever AI systems could cause harm.

Some situations are not so easily quantifiable, such as an algorithm meant to detect whether a student is likely to drop out of college. There’s the opportunity for shortcuts here, too, if the system picks up on correlations that aren’t meaningful. Cornell researchers looked into whether including protected demographic information such as race, gender and income might affect these models, and found that, fortunately, those attributes were not throwing off the estimates one way or the other. In fact, the team recommended including that data, since it gives the models a more holistic view of each student.
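
One rough way to picture the kind of check involved, using synthetic data and hypothetical features rather than the Cornell team's actual models, is to train the same dropout predictor with and without protected attributes and see whether the predicted risks diverge:

```python
# Minimal sketch (synthetic data, hypothetical feature names): fit the same
# dropout model with and without protected attributes and compare predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
gpa = rng.normal(3.0, 0.5, n)
credits = rng.integers(6, 18, n).astype(float)
protected = rng.integers(0, 2, (n, 3)).astype(float)  # stand-in attributes
dropout = (gpa < 2.7).astype(int)

X_base = np.column_stack([gpa, credits])
X_full = np.column_stack([X_base, protected])

m_base = LogisticRegression(max_iter=1000).fit(X_base, dropout)
m_full = LogisticRegression(max_iter=1000).fit(X_full, dropout)

# If the added attributes aren't throwing off the estimates, the two models'
# predicted risks should track each other closely.
gap = np.abs(m_base.predict_proba(X_base)[:, 1] - m_full.predict_proba(X_full)[:, 1])
print("mean absolute difference in predicted risk:", round(gap.mean(), 4))
```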

Simulating neural networks — that is, the ones in our heads — may seem like an obvious application of neural networks — that is, the ones in our computers — but it’s hardly as straightforward as it sounds. The latter are inspired by the former, but it doesn’t mean they’re naturally good at simulating them.

Diagram of an optical nerve monitoring device.
Image Credits: EPFL

That said, networks of neurons in the brain can be monitored and their behavior predicted much as in any other complex system. That’s the hope EPFL researchers have in a new project aiming to build fundamentals for visual prosthetics by modeling how the visual cortex of a blind person reacts to certain stimuli. If that response can be predicted well, potential users won’t have to be tested as frequently or invasively, since researchers should be able to take a few telltale early signals and simulate how the cortex will adapt from there.

Folks aging into conditions like dementia require a lot of oversight, but there are rarely enough caregivers to provide it. Smart home devices and a touch of machine learning could help with that, though, suggests a recent study by researchers at UC Berkeley.

Sensors used to smarten up a person's home, including motion sensors, humidity sensors, etc.
Image Credits: Robert Levenson / UC Berkeley

The homes of people suffering from dementia and other conditions were kitted out with sensors to tell when a faucet was left on, when someone was in bed, when a door was left open and so on, and this information was monitored closely to establish a baseline of activity. Then, if the person deviates from that baseline, indicating an episode of confusion or physical distress, their caregiver can be alerted. The approach lessened worry among caregivers and added a layer of responsive tech that can be flexibly applied. Dealing with a handful of low-bandwidth data streams isn’t exactly a made-for-AI problem, but machine learning can help deploy and monitor these systems in a standard way.
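
Here is a minimal sketch of how a baseline-and-deviation alert like this might work, with synthetic sensor counts rather than the Berkeley study's data:

```python
# Minimal sketch (synthetic data): learn a per-hour baseline of sensor activity
# and flag hours that deviate sharply from it, which could trigger an alert.
import numpy as np

rng = np.random.default_rng(3)

# Rows are past days, columns are hours; values are sensor event counts
# (motion, faucet, door and so on) aggregated per hour.
history = rng.poisson(lam=5, size=(30, 24))
baseline_mean = history.mean(axis=0)
baseline_std = history.std(axis=0) + 1e-6

today = rng.poisson(lam=5, size=24)
today[3] = 40  # e.g., a faucet left running at 3 a.m.

z_scores = (today - baseline_mean) / baseline_std
alert_hours = np.where(np.abs(z_scores) > 3)[0]
print("hours to flag for the caregiver:", alert_hours)
```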

Older folks, among others, are better represented in a large image dataset from Google that a team revisited to look into fairness metrics. In the case of this dataset, comprising 9 million images of which 100,000 had people, that meant considering whether labels and bounding boxes were applied fairly and consistently. Turns out that wasn’t quite the case!

Examples of new boxes in MIAP. In each subfigure the magenta boxes are from the original Open Images dataset, while the yellow boxes are additional boxes added by the MIAP Dataset. Image Credits: left: Boston Public Library; middle: jen robinson; right: Garin Fons; all used with permission under the CC BY 2.0 license.

In a second pass at these labels, the team identified tens of thousands of new people in the photos, and updated how age and gender are represented. Instead of asking labelers to draw boxes around any “boy” or “woman” they see, they now box up any “person” and then add labels of their gender and age presentation as they perceive it. This more inclusive process is also more practical, since it’s far more likely that a system will want to look for “people” and not just people with a certain gender presentation. If, after a person is identified, their age, gender or appearance matters for whatever reason, that data sits on top of their personhood rather than standing in for it.
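
As a rough illustration (the schema below is hypothetical, not the dataset's actual format), a person-first annotation keeps the box label generic and attaches perceived attributes separately:

```python
# Minimal sketch (hypothetical schema): box a "person," then record perceived
# age and gender presentation as separate, optional attributes.
person_annotation = {
    "label": "person",
    "bbox": [0.12, 0.30, 0.45, 0.88],  # normalized [x_min, y_min, x_max, y_max]
    "age_presentation": "older",        # as perceived by the labeler
    "gender_presentation": "predominantly feminine",
}

# A detector that only cares about people can ignore the extra attributes,
# while a fairness audit can slice results by them.
print(person_annotation["label"], person_annotation["bbox"])
```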

As the researchers note, the resulting dataset is more inclusive and much better for it, streamlining processes and reducing the risk of siphoning human biases into ML systems.
