
Reality Check: The marvel of computer vision technology in today’s camera-based AR systems


Image Credits: Busakorn Pongparnit / Getty Images

Alex Chuang

Contributor

Alex Chuang is the managing partner of Shape Immersive, a leading VR/AR agency that drives innovation for the world’s top brands and enterprises.


British science fiction writer, Sir Arthur C. Clarke, once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like our smartphone.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists landed the Curiosity rover on Mars, they needed a way for the robot to navigate on a different planet without the use of a global positioning system (GPS). They used a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts:

  • The Optical System
  • The Inertial System or Inertial Measurement Unit (IMU)

The optical system comprises the camera stack, which includes the lens, shutter and image sensors. The inertial system is made up of an accelerometer, which measures acceleration, and a gyroscope, which measures orientation. Together, they help your device determine its position (x, y, z) and orientation (pitch, yaw, roll), which together are known as its 6-degrees-of-freedom (6DoF) pose.
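To make the 6DoF pose concrete, here is a minimal Python sketch (a toy illustration, not how any phone SDK actually represents poses) that applies a pose's rotation and translation to a point, mapping it from the device's frame into the world frame:

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from yaw (Z), pitch (Y), roll (X) in radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_pose(pose, point):
    """Transform a point from the device frame into the world frame."""
    x, y, z, yaw, pitch, roll = pose
    R = rotation_matrix(yaw, pitch, roll)
    px, py, pz = point
    return (
        R[0][0] * px + R[0][1] * py + R[0][2] * pz + x,
        R[1][0] * px + R[1][1] * py + R[1][2] * pz + y,
        R[2][0] * px + R[2][1] * py + R[2][2] * pz + z,
    )

# A device at (1, 2, 0), rotated 90 degrees about the vertical (yaw) axis.
pose = (1.0, 2.0, 0.0, math.pi / 2, 0.0, 0.0)
print(apply_pose(pose, (1.0, 0.0, 0.0)))  # a point 1 m "ahead" of the device
```

Rotating 90 degrees about the vertical axis turns a point one meter ahead of the device into a point one meter to its side, offset by the device's own position: approximately (1.0, 3.0, 0.0).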

Six degrees of freedom general definitions. Source: https://sensing.honeywell.com/honeywell-sensing-inertial-measurement-unit-6df-applicationnote.pdf

As you move your smartphone to look at the AR content, your phone is essentially capturing many photos of the environment and comparing them to figure out its position. For each photo it captures, it also identifies key features in the environment that are visually distinctive, such as the edges, corners and ridges of objects in the scene. By comparing the key features of two images and using the sensor data from the phone’s IMU, your phone can figure out its position through stereoscopic calculation, much as our eyes infer depth.

The features of the two images are detected and matched through a robust and accurate algorithm called SIFT (scale-invariant feature transform). Source: https://www.cc.gatech.edu/~hays/compvision/proj2/
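The matching step can be sketched in a few lines, assuming toy low-dimensional descriptors (real SIFT descriptors are 128-dimensional vectors): each feature in the first image is matched to its nearest neighbour in the second, and kept only if it passes Lowe's ratio test, which rejects ambiguous matches.

```python
import math

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B using nearest-neighbour
    search with Lowe's ratio test: accept a match only if the best
    candidate is clearly closer than the second-best one."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 4-dimensional descriptors standing in for real SIFT descriptors.
image_a = [(0.0, 0.1, 0.9, 0.2), (0.8, 0.8, 0.1, 0.0)]
image_b = [(0.8, 0.7, 0.1, 0.1), (0.1, 0.1, 0.9, 0.2), (0.5, 0.5, 0.5, 0.5)]
print(match_features(image_a, image_b))  # → [(0, 1), (1, 0)]
```

Feature 0 of image A matches feature 1 of image B and vice versa; the third descriptor in image B is too ambiguous to match anything.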

How does mapping work?

When I am lost in a foreign city, the first thing I do is open Google Maps and look around for visual clues (landmarks, a Starbucks, road signs, etc.) to figure out where I am on the map.

For your phone to understand where it is in space, it first needs to build and memorize a map of its surroundings by “looking” around. This machine-readable map is essentially a graph of all the interesting points your phone has identified, along with their descriptions (e.g. colors and lighting). Together, these points, or features, form a sparse point cloud like the one shown below.

A sparse point cloud map generated for localization and 3D reconstruction purposes. Source: https://www.youtube.com/watch?v=RbOcpOmEbiI&t=15s

This map is very important for your phone to relocalize itself when it loses tracking. Your phone can lose tracking if you cover the camera, drop the phone, or move it so fast that it captures only blurry images. When that happens, your phone needs to relocalize itself. The relocalization process starts when your phone looks at the scene again and identifies its key features. It then compares those features with the features on the map it previously memorized. When a match is found, the phone can recover its spatial position.
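The relocalization logic can be sketched as a lookup against the memorized map. This is a toy illustration: production SLAM systems use far more robust matching, and feed the resulting 2D-3D correspondences into a pose solver (such as PnP) to recover the camera pose.

```python
import math

def relocalize(map_points, observed, max_dist=0.2, min_matches=3):
    """Try to relocalize: match observed feature descriptors against the
    stored map; succeed only if enough map points are re-recognized.
    map_points: list of (descriptor, world_xyz) pairs saved while tracking.
    observed:   descriptors extracted from the current camera frame."""
    matched = []
    for desc in observed:
        best_dist, best_xyz = min(
            (math.dist(desc, map_desc), xyz) for map_desc, xyz in map_points
        )
        if best_dist <= max_dist:
            matched.append(best_xyz)  # we know where this feature lives in 3D
    # With enough correspondences, a pose solver could recover the camera
    # pose; here we simply report success or failure.
    return matched if len(matched) >= min_matches else None

# Toy map: 2-D descriptors paired with the 3-D positions they were seen at.
world_map = [
    ((0.1, 0.9), (0.0, 0.0, 1.0)),
    ((0.8, 0.2), (1.0, 0.0, 1.5)),
    ((0.5, 0.5), (0.5, 1.0, 2.0)),
]
frame = [(0.12, 0.88), (0.79, 0.21), (0.51, 0.52)]  # slightly noisy re-observations
print(relocalize(world_map, frame) is not None)      # → True: tracking recovered
print(relocalize(world_map, [(9.0, 9.0)]) is None)   # → True: still lost
```

A frame that re-observes enough known features recovers tracking; a frame full of unfamiliar features does not.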

What is SLAM (Simultaneous Localization and Mapping)?

To bring everything together, SLAM refers to the broader system that allows your phone to construct and update a map of an unknown environment while simultaneously keeping track of its own location within that map. SLAM systems include the subsystems we already mentioned: the phone’s optical system, inertial system and mapping system. Through tight integration between hardware and software, your phone now has the incredible ability to understand where it is in the world and track itself within its environment.

Why isn’t GPS good enough?

GPS can give you a rough estimate of your latitude and longitude, but it is not accurate enough to pinpoint your precise location. GPS also doesn’t work in many underground or indoor environments, because the signal from the satellites weakens or distorts as it travels through solid material.

Part 2: How do computers understand what the world looks like? (Geometry)

When Pokémon Go first took the world by storm in 2016, we were charmed by the iconic yellow furry monster that we could see in the real world. However, we quickly realized that Pikachu did not have a clue what the world looked like. To our disappointment, Pikachu was simply a computer-generated graphic overlaid on top of the real world.

Expectation VS Reality. The Pikachu on the right does not have any idea of what the world looks like. Source: http://papagamedev.com/wp-content/uploads/2016/08/pokemon_pikachu_ar.jpg

Fast forward to 2019: your phone now has an incredible ability to map your environment spatially (3D reconstruction) with the help of 6D.ai’s software. This means it can understand the shape and structure of real objects in the scene, making occlusion and collision possible. Occlusion is the ability of virtual objects to hide behind real-world objects, and collision is the ability of virtual objects to collide with them. When virtual objects respond to real-world physics as if they were real, the AR experience becomes much more believable.

6D.ai is making huge technological advancements in the field of 3D reconstruction for mobile phones. Through its software, the monocular RGB camera on your phone gains the power of a depth sensor: it can scan the environment and capture a dense point cloud, which is later converted into a mesh through computational geometry.

Think of the mesh as a thin, invisible blanket that drapes over the scene, outlining the external surfaces of objects. As you move your phone around, this mesh is updated in real time, giving your device an accurate, up-to-date spatial understanding of your physical environment. With this new information, a virtual Pikachu can hop onto your couch, go under a table and run behind your kitchen counter.
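Occlusion itself boils down to a per-pixel depth test between the reconstructed mesh and the virtual object: the virtual content is only drawn where it is closer to the camera than the real surface. A toy sketch, with strings standing in for pixel colors:

```python
def composite(camera_rgb, mesh_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion test: draw the virtual object only where it is
    closer to the camera than the real surface reconstructed in the mesh."""
    height, width = len(camera_rgb), len(camera_rgb[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            vd = virtual_depth[y][x]
            if vd is not None and vd < mesh_depth[y][x]:
                row.append(virtual_rgb[y][x])  # virtual object is in front
            else:
                row.append(camera_rgb[y][x])   # real world occludes it
        out.append(row)
    return out

# 1x3 toy frame: real surfaces at depths 2.0, 1.0 and 3.0 metres.
cam   = [["real", "real", "real"]]
depth = [[2.0, 1.0, 3.0]]
virt  = [["pika", "pika", "pika"]]
vdep  = [[1.5, 1.5, None]]  # virtual character at 1.5 m, absent at last pixel
print(composite(cam, depth, virt, vdep))  # → [['pika', 'real', 'real']]
```

The middle pixel stays "real" because the reconstructed surface there (1.0 m) is closer than the virtual character (1.5 m), so the character is hidden behind it.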

Spatial mapping in action with 6D.ai’s software, no depth sensor needed! Source: https://www.6d.ai/
Spatial mapping in action with ZED’s 3D camera. Source: https://www.youtube.com/watch?v=HnXnBKaCqpU

In the demo below, we used 6D.ai to rapidly generate a textured 3D mesh of the physical environment and let virtual alien plants grow on the surfaces of walls, floors and tables.

Through the lens of our camera, we can now step into a magical alternate dimension that is parallel to our world, just like “The Upside Down” in Stranger Things.

Will enters “The Upside Down”, a parallel alternate dimension in Stranger Things. Source: https://www.netflix.com/

Part 3: How can computers understand the world as we do? (Semantics)

Cuteness alert! Tell me, what do you see in the following photo?

Source: https://www.alamy.com

Some of you might say that you see 2 dogs and 2 cats. Some of you might say you see 2 puppies and 2 kittens. And for those of you who are really good, you would say you spot 2 Dachshund puppies and 2 Russian Blue kittens.

When computers see this image, all they see is a bunch of 1’s and 0’s. But with a convolutional neural network (CNN) model, computers can be trained to localize, detect, classify and segment objects. At the simplest level, a convolutional neural network is a system that takes a source image like the one above and figures out the different patterns it sees in the photo through a series of specialized layers. Each layer has filters that are trained to recognize a specific pattern, such as edges, shapes, textures and corners, or even sophisticated objects like dogs, cats, humans, cars or stop signs.
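The core operation of every such layer is convolution: sliding a small filter over the image and summing the element-wise products. A minimal pure-Python sketch with a hand-written vertical-edge kernel (a real CNN learns kernels like this from data rather than having them hand-coded):

```python
def convolve(image, kernel):
    """Slide a small kernel over a 2-D image (valid padding): the core
    operation every CNN layer performs, using learned kernels."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[y + i][x + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            ))
        out.append(row)
    return out

# A classic hand-written vertical-edge detector (Sobel).
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# 5x5 toy image: dark on the left, bright on the right.
img = [[0, 0, 10, 10, 10]] * 5
print(convolve(img, sobel_x))  # → [[40, 40, 0], [40, 40, 0], [40, 40, 0]]
```

The filter responds strongly (40) exactly where the dark region meets the bright one and stays silent (0) over the uniform area, which is how early CNN layers light up on edges.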

With a CNN as the backbone, the computer can now perform other computer vision tasks such as object detection and classification, semantic segmentation, and instance segmentation.

Source: https://www.youtube.com/watch?v=nDPWywWRIRo

Object Detection + Classification

Object detection and classification is the process of drawing a bounding box around the object(s) in an image and giving each a class label such as dog, cat or person. There are two types of algorithms to consider:

  1. Algorithms based on classification work in two stages. First, the model selects interesting regions; then it attempts to classify each of those regions using a CNN. Predictions are run for every selected region until the model is confident that it has detected the object it is looking for. This is a computationally expensive method because you’re essentially processing the entire image to look for one thing.
  2. Algorithms based on regression predict classes and bounding boxes for the whole image in a single run. The most well-known example of this type of algorithm is YOLO (You Only Look Once), which is commonly used for real-time object detection.
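Both families of detectors score their predicted boxes, and suppress duplicates, using intersection-over-union (IoU), which is simple to compute:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    Detectors use IoU both to score predictions against ground truth and
    to drop duplicate boxes (non-maximum suppression)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlapping region (clamped at zero).
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))    # half-overlapping boxes ≈ 0.333
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # disjoint boxes → 0.0
```

Two boxes that share half their area score one third (50 shared pixels out of 150 total), while disjoint boxes score zero; a detection is typically counted as correct when its IoU with the ground-truth box exceeds a threshold such as 0.5.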

Semantic Segmentation 

Semantic segmentation is a process that aims to recognize and understand what’s in the image at the pixel level. Each pixel of the image is associated with a class label such as grass, cat, tree, and sky. Each class label is also highlighted by a unique color.

Source: https://tariq-hasan.github.io/concepts/computer-vision-semantic-segmentation/

However, semantic segmentation does not highlight individual instances of a class differently. For example, if there are 2 cows in the photo, it will highlight the collective area of the 2 cows and we won’t be able to distinguish one cow from another. This is where instance segmentation comes into play.
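The limitation is easy to see in code: a semantic label map only records which class each pixel belongs to, so all we can recover is a per-class pixel count, never a per-animal one. A toy sketch:

```python
from collections import Counter

# A tiny semantic label map: each pixel holds a class name, not an instance id.
labels = [
    ["sky",   "sky", "sky",   "sky"],
    ["grass", "cow", "grass", "cow"],
    ["grass", "cow", "grass", "cow"],
    ["grass", "grass", "grass", "grass"],
]

# Per-class pixel counts -- the only thing recoverable from this map.
area = Counter(pixel for row in labels for pixel in row)
print(area["cow"])  # → 4
```

Those four "cow" pixels actually belong to two separate cows, but nothing in the label map says so; separating them is precisely what instance segmentation adds.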

Instance Segmentation

Instance segmentation is actually a combination of object detection and semantic segmentation. First, the model uses object detection to draw a bounding box around each of the two dogs. Then it performs semantic segmentation within each bounding box to segment out the individual instances.
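That two-step pipeline can be sketched as follows. This is a toy illustration: the per-pixel foreground probabilities are hard-coded here, whereas Mask R-CNN predicts them with a small CNN head for each detected box.

```python
def instance_masks(foreground_prob, boxes, threshold=0.5):
    """Two-step sketch in the spirit of Mask R-CNN: for each detected box,
    segment foreground pixels *inside that box only*, yielding one mask
    per instance. `foreground_prob` is a toy per-pixel probability map."""
    height, width = len(foreground_prob), len(foreground_prob[0])
    masks = []
    for (x1, y1, x2, y2) in boxes:
        mask = [[False] * width for _ in range(height)]
        for y in range(y1, y2):
            for x in range(x1, x2):
                mask[y][x] = foreground_prob[y][x] > threshold
        masks.append(mask)
    return masks

# Two "dogs": blobs of high foreground probability, one detected box each.
prob = [
    [0.9, 0.9, 0.0, 0.8],
    [0.9, 0.1, 0.0, 0.8],
]
boxes = [(0, 0, 2, 2), (3, 0, 4, 2)]  # (x1, y1, x2, y2) per detection
m1, m2 = instance_masks(prob, boxes)
print(sum(map(sum, m1)), sum(map(sum, m2)))  # → 3 2
```

Because segmentation runs separately inside each box, the result is one mask per dog (three pixels and two pixels, respectively) instead of one merged "dog" region.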

This particular model is called Mask R-CNN (mask regional convolutional neural network), which was built by the Facebook AI research team in 2017.

Can Mask R-CNN be used in real-time for Augmented Reality?

The short answer is yes, but there is a trade-off between quality and speed. Niantic used a similar deep neural network to infer 3D information about the surrounding world so that occlusion of virtual characters by real objects becomes possible. In the demo below, dynamic objects like humans are clearly segmented and masked in real time so Pikachu and Eevee can run behind them.

Here is another example of how real-time instance segmentation is applied to let you virtually try on new hair color.

And here is a demo of a context-aware AR shooting game. The neural network is able to identify the different objects in the scene as well as their materials (e.g. wood, glass, fabric). As a result, when a virtual bullet passes through each material, a different animation effect takes place. For example, when a bullet passes through the fabric chair, feathers explode out.

What does all of this mean for the future of Augmented Reality?

As computers learn to localize themselves and to see and understand the world as we do, we move one step closer to merging the virtual and physical worlds.

One day, we will create a machine-readable, 1:1 scale model of the world known as “The AR Cloud”. The AR Cloud has many alternative names such as “the world’s digital twin”, “the mirror world” or “magicverse”. Personally, I’d like to think of it as a digital replica of our world that perfectly overlays on top of our real world.

 “The AR Cloud is going to become the single most important software infrastructure in the history of computing, far bigger than Facebook’s Social Graph or Google’s Search Index.”- Ori Inbar, Super Ventures

Not only will The AR Cloud enable everyone to have a shared experience, but its applications also extend well into self-driving cars, the Internet of Things, automation, smart cities and self-navigating delivery drones.

Soon we will be able to program context-aware media to interact with our real world. In 2015, Niantic released a Pokémon Go concept trailer showing hundreds of people using their Pokémon to fight Mewtwo in Times Square. This type of experience will become possible as key technologies such as The AR Cloud, 5G, AI and AR glasses mature over time.

Source: https://www.youtube.com/watch?v=2sj2iQyBTQs

J.K. Rowling once said, “We do not need magic to change the world, we carry all the power we need inside ourselves already: we have the power to imagine better.” With augmented reality, our world becomes a canvas on which to paint our imagination. Hopefully, this article has inspired you to experiment and create with AR!
