Deep Science: Dog detectors, Mars mappers and AI-scrambling sweaters

Dogs are detected and their positions estimated by a computer.
Image Credits: Microsoft Research

Research papers come out at far too rapid a rate for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.

This week in Deep Science spans the stars all the way down to human anatomy, with research concerning exoplanets and Mars exploration, as well as understanding the subtlest habits and most hidden parts of the body.

Let’s proceed in order of distance from Earth. First is the confirmation of 50 new exoplanets by researchers at the University of Warwick. It’s important to distinguish this process from discovering exoplanets among the huge volumes of data collected by various satellites. These planets had been flagged as candidates, but no one had yet determined whether the data was conclusive. The team built on previous work that ranked planet candidates from least to most likely, creating a machine learning agent that could make precise statistical assessments and say with conviction: here is a planet.

“A prime example when the additional computational complexity of probabilistic methods pays off significantly,” said the university’s Theo Damoulas. It’s an excellent example of a field where marquee announcements, like the Google-powered discovery of Kepler-90i, represent only the earliest results rather than a final destination, emphasizing the need for further study.
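
The mechanics of that kind of vetting are easy to gesture at, even if the real models are far more sophisticated. Here’s a minimal sketch, assuming scikit-learn and entirely invented features, of a classifier that only “validates” a candidate when its probability estimate clears a strict bar (the threshold here is illustrative, not the paper’s actual criterion):

```python
# Toy sketch of probabilistic exoplanet vetting: a classifier that outputs
# probabilities, with a strict cutoff for calling a candidate "statistically
# validated." Features and threshold are invented for illustration; the real
# pipeline uses detailed transit statistics and purpose-built models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fake training set: rows are candidates, columns are hypothetical features
# (say, transit depth, duration, signal-to-noise); label True = planet.
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(scale=0.5, size=500)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

X_candidates = rng.normal(size=(10, 3))
p_planet = clf.predict_proba(X_candidates)[:, 1]

# "Validation" demands high confidence, not just a 50/50 call.
VALIDATION_THRESHOLD = 0.99  # illustrative, not the paper's criterion
for i, p in enumerate(p_planet):
    status = "validated" if p > VALIDATION_THRESHOLD else "still a candidate"
    print(f"candidate {i}: P(planet) = {p:.3f} -> {status}")
```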

In our own solar system, we are getting to know our neighbor Mars quite well, though even the Perseverance rover, currently hurtling through the void in the direction of the red planet, is, like its predecessors, a very resource-limited platform. With a small power budget and years-old radiation-hardened CPUs, there’s only so much in the way of image analysis and other AI-type work it can do locally. But scientists are preparing for when a new generation of more powerful, efficient chips makes it to Mars.

Automatically labeled landscape imagery from Mars.
Image Credits: JPL

Automatically classifying terrain, autonomously identifying and navigating to objects of interest, and local hosting and processing of scientific data are all on the table, as proposed by the Machine Learning-based Analytics for Autonomous Rover Systems (MAARS) program. Though the capabilities of a future rover may be orders of magnitude greater than what we have headed there now, efficiency and reliability will always be paramount — it’s the ultimate in edge deployment. You can even help train a Mars-bound navigation algorithm right now.
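
MAARS is a research program rather than a piece of software, but the terrain classification it proposes is often prototyped as a compact image classifier, since every watt counts on a rover. Here’s a minimal PyTorch sketch with invented class names and a deliberately tiny, untrained network; a flight version would be heavily compressed and validated for radiation-hardened hardware:

```python
# Minimal sketch of onboard terrain classification: a tiny CNN that labels
# image patches as sand, rock or bedrock. Class names, sizes and input data
# are invented for illustration.
import torch
import torch.nn as nn

TERRAIN_CLASSES = ["sand", "rock", "bedrock"]  # hypothetical labels

class TinyTerrainNet(nn.Module):
    def __init__(self, n_classes: int = len(TERRAIN_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one 16-dim descriptor
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyTerrainNet().eval()
patch = torch.rand(1, 3, 64, 64)  # stand-in for a navcam image patch
with torch.no_grad():
    probs = model(patch).softmax(dim=1)
print(dict(zip(TERRAIN_CLASSES, probs[0].tolist())))
```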

In orbit, the proliferation of communications satellites in constellations like SpaceX’s Starlink is leading to much worry on the part of astronomers, whose Earth-based telescopes must look past those pesky objects to observe the sky. A recent multiorganization study simulating a satellite-filled future night sky concludes that it will “fundamentally change astronomical observing,” and that “no combination of mitigations can fully avoid the impacts.”

Among the recommendations, software to “identify, model, subtract and mask satellite trails in images on the basis of user-supplied parameters” is foremost for observatories. This kind of task is highly suitable for ML agents, as we’ve seen in other digital media manipulation tools. I would be astonished if there were fewer than a dozen concurrent projects in private and public R&D to address this need, as it will be a persistent part of all astronomical observation going forward.
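
The no-learning baseline for that software is classical line detection. Here’s a sketch using OpenCV’s Hough transform, with threshold values standing in for those “user-supplied parameters”; a real pipeline would model and subtract the trail’s brightness rather than simply painting over it:

```python
# Sketch of the simplest trail masking: detect long straight streaks with a
# Hough transform, then mask and inpaint them. Thresholds are placeholders
# for the study's "user-supplied parameters."
import cv2
import numpy as np

def mask_satellite_trails(image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=200, maxLineGap=10)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(mask, (x1, y1), (x2, y2), color=255, thickness=7)
    # Masked pixels can be excluded from photometry or, as here, inpainted.
    return cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

frame = np.zeros((512, 512, 3), dtype=np.uint8)
cv2.line(frame, (0, 40), (511, 300), (200, 200, 200), 2)  # fake bright trail
cleaned = mask_satellite_trails(frame)
```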

Starlink satellites streak through a telescope’s observations. Image Credits: IAU

One more space-surface interaction we need to be aware of: Turns out quantum computers may be extremely sensitive to natural radiation, including the minute amounts emitted by metals in soil and of course those rascally cosmic rays. Just one more thing to isolate those fragile qubits from.

Another quick note for those of us here in the atmosphere: Berkeley National Lab tested a handful of consumer-grade air quality monitors to see if they actually do what they’re supposed to. Surprisingly, they do, but they consistently overestimate the level of particulates in the air, by a factor of as much as 2.4. That makes sense from a liability point of view — better to overreport than under.
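
A consistent overestimate is actually the good kind of error, since a systematic bias can be calibrated out. Here’s a sketch, with fabricated readings, of fitting a linear correction against a co-located reference monitor:

```python
# If a cheap sensor consistently overreads PM2.5, a linear fit against a
# trusted reference monitor recovers a usable correction. Readings below
# are fabricated for illustration.
import numpy as np

reference = np.array([5.0, 12.0, 20.0, 35.0, 50.0])    # ug/m3, trusted
cheap     = np.array([11.0, 27.0, 46.0, 82.0, 118.0])  # ug/m3, overreads

slope, intercept = np.polyfit(cheap, reference, deg=1)
print(f"corrected = {slope:.2f} * raw + {intercept:.2f}")

raw = 60.0
print(f"raw {raw} ug/m3 -> corrected {slope * raw + intercept:.1f} ug/m3")
```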

The study suggests that a network of these cheaper sensors, while their readings are not to be relied on at face value, could prove an invaluable resource for tracking air quality trends across scales. If these devices are not already contributing to environmental and climate research, they should be and probably will be soon. But like other IoT devices, they’ll face privacy questions. The benefits of establishing clear rules and permissions for this kind of thing are becoming clearer by the day.

A robot in the forest scans the environment while a soldier watches using a head-mounted display.
Image Credits: U.S. Army

On the ground, the U.S. Army Research Lab has come up with an interesting way to promote a sort of symbiosis between humans and robots, each limited in their own way. A robot buddy traveling alongside a human can scan the environment more quickly and thoroughly than a person, but lacks the ability to tell whether changes it observes around it are important. The ARL and UC San Diego put together a system that watches for discrepancies in what its lidar systems detect, such as movement or a new or absent object, and highlights them in a heads-up display worn by a human. It skips the whole problem of “understanding” what’s happening by passing that on to a human, while leveraging the robot’s superiority in superficial sensing. This paradigm could be a very helpful one — and a relief to those rightly worried that robots aren’t really smart enough to make judgments like this.
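
The core trick, flagging discrepancies rather than understanding them, can be sketched as simple occupancy-grid differencing. The point clouds and voxel size below are invented, and the actual ARL system is considerably more involved:

```python
# Toy version of "flag what changed, let the human judge": voxelize two
# lidar scans into occupancy grids and report cells that appeared or
# vanished between them.
import numpy as np

VOXEL = 0.5  # meters per voxel; a tuning choice, not from the paper

def occupied_voxels(points: np.ndarray) -> set:
    return set(map(tuple, np.floor(points / VOXEL).astype(int)))

rng = np.random.default_rng(1)
scan_before = rng.uniform(0, 20, size=(2000, 3))            # earlier sweep
scan_after = np.vstack([scan_before, [[25.0, 5.0, 0.5]]])   # a new object

before, after = occupied_voxels(scan_before), occupied_voxels(scan_after)
appeared, vanished = after - before, before - after

# These voxels are what would be highlighted in the soldier's HUD.
print("appeared voxels:", sorted(appeared))
print("vanished voxels:", sorted(vanished))
```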

Lastly, some news inside the body that’s been made unfortunately timely by the tragic passing of Chadwick Boseman. Colorectal cancer is deadly and difficult to detect early, and one shortcoming of colonoscopies is that it is difficult to say with certainty that the doctor has inspected every square inch of the tract. Google’s Colonoscopy Coverage Deficiency via Depth algorithm observes video of the procedure and builds a 3D map of the colon, noting as it goes which parts of it weren’t adequately visible to the camera.

A simulated colon, analysis of the image and resulting map of the virtual colon. Image Credits: Google
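
Google’s system does the hard part, estimating depth from real video with a neural network, but the bookkeeping at the end is simple enough to sketch: track which parts of the organ were adequately seen and report the gaps. The segments and per-frame observations below are invented:

```python
# Toy sketch of coverage-deficiency tracking: model the colon as a sequence
# of segments, record which span each video frame covered well enough to
# inspect, and report the gaps. All numbers are fabricated.
import numpy as np

N_SEGMENTS = 40
covered = np.zeros(N_SEGMENTS, dtype=bool)

# Each "frame" reports the segment span it viewed adequately.
frames_seen = [(0, 6), (5, 14), (13, 22), (27, 33), (32, 40)]
for start, stop in frames_seen:
    covered[start:stop] = True

coverage = covered.mean()
gaps = np.flatnonzero(~covered)
print(f"coverage: {coverage:.0%}; uninspected segments: {gaps.tolist()}")
# A report like this could prompt the doctor to revisit segments 22-26.
```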

Ideally the procedure could capture enough to be sure, but this would be a helpful tool to protect against malpractice, or simply to improve efficiency so that a second procedure isn’t necessary. Colorectal cancer is a major risk, especially for men, and especially for Black men, who tend to develop it earlier and more often. More and better tools may help detect it earlier and save thousands of lives.

ECCV

The European Conference on Computer Vision took place in late August, and as always, a lot of interesting papers came out of it.

Facebook has a fun new database and tool called ContactPose, a collection of grips of everyday objects by a variety of people either “using” or “handing off” whatever it is. Gripping an object in an intelligent way is a remarkably difficult problem and the best source for how to do it is human technique.

3D render of a banana and the way it was held by human hands.
Image Credits: Facebook/Georgia Tech

ContactPose provides joint and hand poses for things like game controllers, cameras, bananas and sunglasses, showing contact heat maps and other information useful to a computer trying to figure out how to hold something. You can play around with it here.
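
Those contact maps amount to per-point intensity values over each object’s surface. As a rough illustration of how a grasp planner might consume one, with entirely fabricated data (the real dataset ships with its own formats and tooling):

```python
# Sketch of consuming a contact heat map: given per-vertex contact intensity
# on an object mesh, pick the hottest regions as candidate finger placements.
# Vertices and intensities are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
vertices = rng.uniform(-0.1, 0.1, size=(5000, 3))  # fake object surface (m)
contact = rng.beta(0.5, 5.0, size=5000)            # fake contact intensity

CONTACT_THRESHOLD = 0.5  # arbitrary cutoff for "humans grip here"
hot = contact > CONTACT_THRESHOLD
print(f"{hot.sum()} of {len(contact)} vertices are high-contact")
print("example grasp point (xyz):", vertices[np.argmax(contact)])
```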

The company is also, predictably, concerned that tools used to identify and analyze individuals in photos may be disrupted somehow. We’ve seen studies showing how taking advantage of a machine learning model’s biases can cause a turtle to be classified as a gun, and so on, but a harder problem is tricking the AI into thinking that there’s nothing instead of something.

Image of a man wearing a sweatshirt with a pattern that confuses the computer vision system.
Image Credits: Facebook

This paper shows that it is indeed possible to engineer patterns that, worn or otherwise shown to a computer vision system, seem to confound it and make it think that the wearer is not a person but part of the background. The resulting clothing isn’t exactly haute couture, but more attractive scrambler patterns are probably on the way.
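
The underlying recipe is standard adversarial optimization: render a patch into the image, then run gradient descent to push the detector’s “person” score down. Here’s a toy version against an untrained stand-in detector, purely to show the mechanics; the paper’s attack targets real detectors and has to survive printing, lighting and pose changes:

```python
# Toy adversarial-patch loop: paste a learnable patch onto an image and
# descend the gradient of a detector's "person" logit. The detector is an
# untrained stand-in CNN, not a real person detector.
import torch
import torch.nn as nn

detector = nn.Sequential(              # stand-in for a person detector
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.Conv2d(8, 8, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),    # output: "person" logit
)

image = torch.rand(1, 3, 64, 64)       # fake photo of a "person"
patch = torch.rand(1, 3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for _ in range(100):
    adv = image.clone()
    adv[:, :, 24:40, 24:40] = patch.clamp(0, 1)  # "wear" the patch
    person_logit = detector(adv).mean()
    opt.zero_grad()
    person_logit.backward()                      # minimize the person score
    opt.step()

print("final person logit:", person_logit.item())
```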

Microsoft is addressing an old favorite: estimating the positions of multiple people on camera. This kind of research goes back to the Kinect, and while it never really took off in terms of gaming, it has proven useful in countless other ways.

A computer estimates the body positions of several people on camera.
Image Credits: Microsoft Research

This paper looks at a new way of identifying and analyzing the body positions of multiple people from the perspectives of multiple cameras simultaneously. It can be tough to figure that out from one 2D image, but with two or three of them it becomes a solvable problem — just a computationally complex one. But they’re confident in this approach, which will eventually be documented at this GitHub page.
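
The geometric core, recovering a 3D joint from its 2D projections in calibrated cameras, is classical triangulation via the direct linear transform. Here’s a sketch with made-up camera matrices; the paper’s contribution is doing this jointly for many people and many views, with learned components on top:

```python
# Triangulate a 3D joint from its 2D image positions in two calibrated
# cameras (direct linear transform). Camera matrices are invented.
import numpy as np

def triangulate(P1, P2, xy1, xy2):
    """Solve for X with x1 ~ P1 @ X and x2 ~ P2 @ X via SVD."""
    A = np.stack([
        xy1[0] * P1[2] - P1[0],
        xy1[1] * P1[2] - P1[1],
        xy2[0] * P2[2] - P2[0],
        xy2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # null-space vector of A
    return X[:3] / X[3]

# Two toy projection matrices: identity camera and one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

joint = np.array([0.3, -0.2, 4.0, 1.0])  # ground-truth 3D point
project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
print(triangulate(P1, P2, project(P1, joint), project(P2, joint)))
# -> approximately [0.3, -0.2, 4.0]
```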

Just because a problem is a little wacky doesn’t mean it isn’t worth solving. Such is the case with this paper describing “end-to-end dog reconstruction from monocular images.” Maybe the utility of being able to tell the exact shape and position of a dog from a single image isn’t obvious to you. Or me. Indeed, perhaps there is no “utility” as the concept is commonly understood.

3D models of dogs with various changes to tail, ear, and body size.
Just a few of the many shapes dogs can, and should, take. Image Credits: Microsoft Research

But think about it this way: Humans can recognize dogs instantly no matter how they’ve folded their furry bodies, or whether they’re a small dog with long floppy ears or a big one with pointy triangular ears. If computer vision systems are to meet or exceed the capabilities of humans, shouldn’t they at least be able to do that?

Seriously: Being able to identify an object (in this case an animal) despite that object having numerous unpredictable variations is a powerful and fundamental vision task, one we do every day almost automatically. Pursuing it as an abstract goal is an important line of inquiry and while “reconstructing a 3D dog mesh” won’t save any lives, it’s important basic research that happens to involve a lot of very good girls and boys.

Google’s ECCV spread had fewer obvious standouts, though this paper points to a feature I would appreciate and have secretly wished for from Maps: live shadows. Or not live exactly, but reasonably accurate predictions. Using multiple images taken of the same location at street level, the team can create a good model of how the sun and other lights affect the scene, allowing them to rerender it arbitrarily for different sun positions or sky conditions.

An image of an intersection has its lighting artificially adjusted.
Image Credits: Google

If this doesn’t end up in Google Maps within a year or two I’ll be very surprised. Having Street View reflect current weather patterns, or being able to tell whether a cafe is in the sun or shade at a given time on a given day, is a hugely useful feature and the kind of wizardry the company loves to pack into one of the few products where it is truly still a leader. (Here’s a longer video on how it works.)
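
Predicting whether that cafe table is sunlit is, at bottom, a sun-geometry problem. Here’s a sketch using the pvlib library for the solar ephemeris, an assumption on my part purely for illustration; Google’s method infers lighting from imagery rather than computing it from first principles:

```python
# Where is the sun over a given street corner at a given time, and how long
# a shadow does a building cast? Location, time and building height are
# hypothetical.
import numpy as np
import pandas as pd
from pvlib import solarposition

lat, lon = 47.606, -122.332  # downtown Seattle
when = pd.DatetimeIndex(["2020-09-04 15:00"], tz="America/Los_Angeles")

sun = solarposition.get_solarposition(when, lat, lon)
elevation = float(sun["apparent_elevation"].iloc[0])
azimuth = float(sun["azimuth"].iloc[0])

building_height = 20.0  # meters, hypothetical cafe wall
shadow_length = building_height / np.tan(np.radians(elevation))
print(f"sun at {elevation:.1f} deg elevation, {azimuth:.1f} deg azimuth")
print(f"a {building_height:.0f} m wall casts a {shadow_length:.1f} m shadow")
```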

Another area where it excels is computational photography, and a lot of its ECCV papers are the kind of thing that leads to products down the line there as well: pose estimation, detection of objects and actions in videos, accelerating lidar analysis, that sort of thing. Anyone with a competing product could probably make a lot of informed speculation about Google’s roadmap from them. But since few hold more general interest, I’ll leave that to the competitors.
