
Deep Science: Keeping AI honest in medicine, climate science and vision


A deep learning artificial neural network taking the shape of a human brain; the network processes data at its input and produces a result at its output.
Image Credits: Andrii Shyp / Getty Images

Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the more interesting recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This week we have a number of entries aimed at identifying or confirming bias or cheating behaviors in machine learning systems, or failures in the data that support them. But first a purely visually appealing project from the University of Washington being presented at the Conference on Computer Vision and Pattern Recognition.

They trained a system that recognizes and predicts the flow of water, clouds, smoke and other fluid features in photos, animating them from a single still image. The result is quite cool:

Animation showing how a system combined guesses at previous and forthcoming moments to animate a waterfall.
Image Credits: Hołyński et al./CVPR

Why, though? Well, for one thing, the future of photography is code, and the better our cameras understand the world they’re pointed at, the better they can accommodate or recreate it. Fake river flow isn’t in high demand, but accurately predicting movement and the behavior of common photo features is.

An important question to answer in the creation and application of any machine learning system is whether it’s actually doing the thing you want it to. The history of “AI” is riddled with examples of models that found a way to look like they’re performing a task without actually doing it — sort of like a kid kicking everything under the bed when they’re supposed to clean their room.

This is a serious problem in the medical field, where a system that's faking it could have dire consequences. A study, also from UW, finds that models proposed in the literature have a tendency to do this, in what the researchers call "shortcut learning." These shortcuts could be simple — basing an X-ray's risk assessment on the patient's demographics rather than on the image data itself, for instance — or subtler, like relying heavily on conditions specific to the hospital the training data came from, making it impossible to generalize to other hospitals.

The team found that many models basically failed when used on datasets that differed from their training ones. They hope that advances in machine learning transparency (opening the “black box”) will make it easier to tell when these systems are skirting the rules.
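To make "shortcut learning" concrete, here's a minimal toy sketch (not from the study; the features and numbers are invented for illustration). A plain logistic-regression classifier is trained on two features: a spurious "hospital artifact" signal that happens to correlate perfectly with the label in the training set, and a genuinely predictive but noisy feature. The model latches onto the shortcut, so when the spurious correlation breaks in a new dataset, accuracy collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_correlates):
    y = rng.choice([-1.0, 1.0], size=n)
    # "shortcut" feature: e.g., a hospital-specific artifact in the scan
    if spurious_correlates:
        x_spurious = y.copy()
    else:
        x_spurious = rng.choice([-1.0, 1.0], size=n)
    # genuinely predictive feature, but noisy (agrees with y ~80% of the time)
    flip = rng.choice([1.0, -1.0], size=n, p=[0.8, 0.2])
    x_real = y * flip
    return np.stack([x_spurious, x_real], axis=1), y

X_train, y_train = make_data(500, spurious_correlates=True)
X_test, y_test = make_data(500, spurious_correlates=False)

# plain logistic regression fit by gradient descent
w = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X_train @ w))
    grad = X_train.T @ (p - (y_train + 1) / 2) / len(y_train)
    w -= 0.5 * grad

def accuracy(X, y):
    return float(np.mean(np.sign(X @ w) == y))

print(f"weights: {w}")  # the shortcut feature gets the larger weight
print(f"train accuracy: {accuracy(X_train, y_train):.2f}")
print(f"test accuracy:  {accuracy(X_test, y_test):.2f}")
```

The model looks excellent on data like its training set and near-useless once the shortcut disappears — exactly the failure mode evaluation on external datasets is meant to catch.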

An MRI machine in a hospital.
Image Credits: Siegfried Modola / Getty Images

An example of the opposite can be found in climate modeling, which involves such complex systems that supercomputers can spin their bits for months just to simulate the movements of air and water in a tiny volume. Simplified models can be created by feeding the appropriate data into a machine learning system, which may, based on 23 hours of data, predict the 24th. But is the system actually modeling the climate factors, or just making statistically probable guesses at the outcomes?
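As a purely illustrative toy (nothing like the scale or method of real climate emulators), a one-step "emulator" of this kind can be as simple as a least-squares linear map from the previous 23 hours of a signal to the 24th. Here the "climate" data is a stand-in periodic process with noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "climate" signal: a daily cycle plus noise (purely illustrative).
hours = np.arange(24 * 50)
series = np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(len(hours))

# Build (23-hour window -> 24th hour) training pairs.
window = 23
X = np.stack([series[i : i + window] for i in range(len(series) - window)])
y = series[window:]

# Least-squares linear emulator: predict the next hour from the previous 23.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"one-step RMSE: {rmse:.3f}")
```

A model like this can fit the numbers well without knowing anything about the underlying physics, which is precisely why the question of whether a data-driven model captures the dynamics or merely the statistics matters.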

A study started at the University of Reading had the happy outcome of finding, by looking into these systems very carefully, that they actually do what they claim to. “In some sense, it means the data-driven method is intelligent. It is not an emulator of data. It is a model that captures the dynamical processes. It is able to reconstruct what lies behind the data,” said co-author Valerio Lucarini.


That kind of confidence will be useful in applications like this flood prediction project from Lancaster University, earlier versions of which suffered from a similar lack of assurance. Professor Plamen Angelov is embarking on an improved flooding model that is not only faster and more accurate, but also explainable. You can probably expect this kind of "here's how we know" upgrade to become increasingly common wherever AI systems have the possibility of causing harm.

Some situations are not so easily quantifiable, such as an algorithm meant to detect whether a student is likely to drop out of college. There’s the opportunity for shortcuts here, too, if the system picks up on correlations that aren’t meaningful. Cornell researchers looked into whether including protected demographic information such as race, gender and income might affect these models, and found that, fortunately, they were not throwing off the estimates one way or the other. In fact, the team recommended including that data because it produces a more holistic view inclusive of these factors.

Simulating neural networks — that is, the ones in our heads — may seem like an obvious application of neural networks — that is, the ones in our computers — but it's hardly as straightforward as it sounds. The latter are inspired by the former, but that doesn't mean they're naturally good at simulating them.

Diagram of an optical nerve monitoring device.
Image Credits: EPFL

That said, networks of neurons in the brain can be monitored and their behavior predicted much as in any other complex system. That's the hope of EPFL researchers in a new project aiming to build fundamentals for visual prosthetics by modeling how the visual cortex of a blind person reacts to certain stimuli. If that reaction can be predicted well, potential users won't have to be tested as frequently or invasively, since researchers should be able to simulate, from a few telltale early signs, how the cortex will adapt going forward.

Folks aging into conditions like dementia require a lot of oversight, but there are rarely enough caregivers to provide it. Smart home devices and a touch of machine learning could help with that, though, suggests a recent study by researchers at UC Berkeley.

Sensors used to smarten up a person's home, including motion sensors, humidity sensors, etc.
Image Credits: Robert Levenson / UC Berkeley

The homes of people living with dementia and other conditions were kitted out with sensors that detect when a faucet was left on, when someone was in bed, when a door was left open and so on, and this information was monitored closely to establish a baseline of activity. If the person then deviates from that baseline, indicating an episode of confusion or physical distress, their caregiver can be alerted. The approach lessened worry among caregivers and adds a layer of responsive tech that can be flexibly applied. Handling a handful of low-volume data streams isn't exactly a made-for-AI problem, but machine learning can help deploy and monitor these systems in a standardized way.
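The study's actual pipeline isn't public in this column, but the baseline-and-deviation idea can be sketched very simply: record typical sensor readings, then flag any day that lands far outside the normal range. Here's a hypothetical example (the sensor values and the z-score threshold are invented for illustration):

```python
import statistics

# Hypothetical daily sensor readings: minutes the faucet ran each day
# during the baseline-gathering period.
baseline_days = [3.1, 2.8, 3.4, 3.0, 2.6, 3.2, 2.9, 3.3, 2.7, 3.0]

mean = statistics.mean(baseline_days)
stdev = statistics.stdev(baseline_days)

def is_anomalous(todays_value, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from baseline."""
    z = abs(todays_value - mean) / stdev
    return z > threshold

print(is_anomalous(3.1))   # a typical day
print(is_anomalous(25.0))  # faucet left running for 25 minutes
```

A real system would track many streams at once and learn per-person, time-of-day baselines, but the core logic — learn what's normal, alert on deviation — is the same.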

Older folks, among others, are better represented in a large image dataset from Google that a team revisited to look into fairness metrics. In the case of this dataset, comprising 9 million images of which 100,000 had people, that meant considering whether labels and bounding boxes were applied fairly and consistently. Turns out it wasn’t quite the case!

Examples of new boxes in MIAP. In each subfigure the magenta boxes are from the original Open Images dataset, while the yellow boxes are additional boxes added by the MIAP Dataset. Image Credits: left: Boston Public Library; middle: jen robinson; right: Garin Fons; all used with permission under the CC BY 2.0 license.

In a second pass at these labels, the team identified tens of thousands of new people in the photos, and updated how age and gender are represented. Instead of asking labelers to draw boxes around any “boy” or “woman” they see, they now box up any “person” and then add labels of their gender and age presentation as they perceive it. This more inclusive process is also more practical since it’s far more likely that a system will want to look for “people” and not just people with a certain gender presentation. If after a person is identified their age, gender or appearance matter for whatever reason, that data is over and above personhood.

As the researchers note, the resulting dataset is more inclusive and much better for it, streamlining processes and reducing the risk of siphoning human biases into ML systems.
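The person-first scheme described above can be sketched as a small annotation record — this is a hypothetical illustration of the idea, not the dataset's actual schema, and all field names are invented:

```python
from dataclasses import dataclass

@dataclass
class PersonBox:
    """Hypothetical person-first annotation: every box is a 'person' first,
    with perceived age and gender presentation as optional extra attributes."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str = "person"
    age_presentation: str = "unknown"     # e.g. perceived as "young" / "older"
    gender_presentation: str = "unknown"  # perceived presentation, not identity

# A downstream system that only needs "people" can ignore the extra fields:
boxes = [PersonBox(0.1, 0.2, 0.4, 0.9, age_presentation="older")]
people = [b for b in boxes if b.label == "person"]
print(len(people))
```

Structuring labels this way keeps personhood as the primary category, with demographic attributes layered on only where an application genuinely needs them.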
