Why image recognition is about to transform business

Ken Weiner

Contributor

Ken Weiner is the CTO of GumGum.

At Facebook’s recent annual developer conference, Mark Zuckerberg outlined the social network’s artificial intelligence (AI) plans to “build systems that are better than people in perception.” He then demonstrated an impressive image recognition technology for the blind that can “see” what’s going on in a picture and explain it out loud.

From programs that help the visually impaired and safety features in cars that detect large animals to auto-organizing untagged photo collections and extracting business insights from socially shared pictures, the benefits of image recognition, or computer vision, are only just beginning to make their way into the world — but they’re doing so with increasing frequency and depth.

It’s busy enough that the upcoming LDV Vision Summit, an annual conference dedicated to all things visual tech, from VR and cameras to medical imaging and content analysis, is already in its third year. “The advancements in computer vision these days are creating tremendous new opportunities in analyzing images that are exponentially impacting every business vertical, from automotive to advertising to augmented reality,” says Evan Nisselson of LDV Capital, which organizes the summit.

As with other forms of AI — natural language processing, bioinformatics, gaming — the field of computer vision has benefited greatly from the expansion of open-source, deep learning technology, user-friendly programming tools and faster and more affordable computing.

Many a headline references deep learning and artificial intelligence as the next big thing, but how exactly do these different tools work, and in what ways are businesses using them to offer image tech to the world? Is Google’s TensorFlow the same thing as Facebook’s DeepFace or Microsoft’s Project Oxford? Not exactly. To help clarify things, here’s a quick breakdown of current image technology tools and how businesses are using them.

Training material: Open data

Thanks to deep learning, a machine learning technique loosely modeled after the human brain, computers can be taught to accurately identify what’s in pictures faster than ever, but they need massive amounts of data to do it.

Enter ImageNet and Pascal VOC. Years in the making, these massive and free-to-anyone databases contain millions of images tagged with keywords about what’s inside the pictures — everything from cats and mountains to pizza and sports activities. These open datasets are the basis for machine learning around images (the only way computers can accurately identify cats in photos is because they have already learned what cats look like by analyzing millions of pictures tagged with the word “cat”).

Best known for its annual visual recognition challenge, ImageNet was launched by computer scientists at Stanford and Princeton in 2009 with 80,000 tagged images. It has since grown to include more than 14 million tagged images, any of which are up for grabs at any time for machine training purposes.

Powered by various universities in the U.K., Pascal VOC has fewer pictures, but each one has richer annotations. This improves the accuracy and breadth of the machine learning and, for some applications, speeds up the overall process, because it allows for the omission of cumbersome computer subtasks.

Now, everyone from Google and Facebook to startups and universities uses these open-source picture sets to feed their machine learning beasts, but the big technology companies have the advantage of access to millions of user-labeled images from apps such as Google Photos and Facebook. Have you ever wondered why Google and Facebook let you upload so many pictures for free? It’s because those pictures are used to train their deep learning networks to become more accurate.
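
To make “tagged images” concrete, here is a minimal, framework-agnostic sketch of how such a collection is typically laid out and turned into training pairs: one folder per keyword, with the folder name serving as the label. The folder names and paths below are illustrative placeholders, not part of ImageNet or Pascal VOC themselves.

```python
# Minimal sketch: an open, tagged image collection laid out one folder per keyword,
# e.g. open_data/cat/*.jpg, open_data/pizza/*.jpg (paths are illustrative only).
from pathlib import Path

def load_labeled_paths(root="open_data"):
    """Return a list of (image_path, label) pairs, one label per subfolder."""
    pairs = []
    for label_dir in sorted(Path(root).iterdir()):
        if not label_dir.is_dir():
            continue
        for image_path in label_dir.glob("*.jpg"):
            pairs.append((image_path, label_dir.name))  # e.g. (.../cat/0001.jpg, "cat")
    return pairs

examples = load_labeled_paths()
print(f"{len(examples)} labeled images across "
      f"{len({label for _, label in examples})} tags")
```

Every downstream training pipeline, whatever the framework, starts from pairs like these: a picture and the keyword a human attached to it.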

Building blocks: Open-source software libraries and frameworks

Once you have the data, it’s time to build a machine that can learn from it. Enter open-source software libraries. Freely available, these frameworks serve as starting points for building machine learning systems to service different kinds of computer vision functions, from facial and emotion recognition to medical screening and large obstacle (read: deer) detection in cars. These machine learning systems are then fed pictures from ImageNet and its ilk, proprietary images (aka Google Photos) or other sources (like anonymized, indexed clinical records).

Google TensorFlow is one of the better-known libraries, if only because it was covered widely when selected parts were open sourced late last year. TensorFlow, some of which is still proprietary to Google, is used to develop many of the company’s AI initiatives, from autonomous cars and translation to Google Now and Google Photos.

But TensorFlow is hardly the first — or only — open-source framework. UC Berkeley’s Caffe has been around since 2013, and remains popular because of its ease of customization and large community of innovators, not to mention heavy use by Pinterest and Yahoo!/Flickr. Even Google turns to Caffe for certain projects such as DeepDream.

Created in 2002, Torch is also popular, owing to its use by Facebook AI Research (FAIR), which open sourced some of its modules in early 2015. Some of these tools are optimized to run on more than one graphics processor or computer to amplify capacity and speed up the deep learning process. Similarly, NVIDIA’s cuDNN is a freely available library that tunes a computer’s graphics processing unit (GPU) for deep learning workloads, making machine learning even faster.
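
As an illustration of how these frameworks are used in practice, the sketch below defines and trains a small image classifier with TensorFlow’s Keras API on a folder of labeled pictures like the one described above. It is a toy model under assumed paths, not a production pipeline; serious systems would typically start from a pretrained network and far more data.

```python
# Hedged sketch: a small convolutional classifier trained with TensorFlow/Keras.
# "data/train" is an assumed folder laid out one subfolder per label.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),             # scale pixels to [0, 1]
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

The general workflow (load labeled images, define a network, train and evaluate) is the same whether the framework is TensorFlow, Caffe or Torch; they differ mainly in language, performance characteristics and community.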

These tools, while flexible and robust, require teams of computer vision engineers and hardware, so only companies that want to make computer vision a major part of their product strategy, and to own the software themselves, need apply.

Ready-to-wear: Hosted APIs

Not every company has the resources, or wants to invest in the resources, to build out a computer vision engineering team. Even if you’ve found the right team, it can be a lot of work to get it just right, which is where hosted API services come in. Carried out in the cloud, these solutions offer menus of out-of-the-box image recognition services that can be easily integrated with an existing app or used to build out a specific feature or an entire business.

Say the Travel Channel needs “landmark detection” to show relevant photos on landing pages for specific landmarks, or eHarmony wants to filter out “unsafe” profile images uploaded by their users. Neither of these companies needs or wants to get into the deep learning image recognition development business, but both can still benefit from its capabilities.

Google Cloud Vision, for example, offers a series of image detection services from facial and optical character recognition (text) to landmark and explicit content detection, and charges on a per-photo basis. Microsoft Cognitive Services (née Project Oxford) offers a collection of visual image recognition APIs, including emotion, celebrity and face detection, and charges a specific rate per 1,000 transactions. Meanwhile, startups like Clarifai offer computer vision APIs that help companies organize their content, filter out unsafe user-generated images and videos and make purchasing recommendations based on viewed or taken photos.
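
For a sense of how little code a hosted API requires, here is an illustrative sketch of a label-detection request to Google Cloud Vision’s public v1 REST endpoint. The API key and image file are placeholders, and the request shape follows Google’s published documentation, which should be checked before relying on it.

```python
# Illustrative sketch: asking a hosted API (Google Cloud Vision v1) what is in a photo.
# YOUR_API_KEY and photo.jpg are placeholders, not real credentials or data.
import base64
import requests

API_KEY = "YOUR_API_KEY"
URL = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

with open("photo.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(URL, json=payload)
for label in response.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```

Swapping the feature type (for example, to face, landmark or explicit-content detection) changes what the service returns, without any machine learning work on the caller’s side.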

Custom computer vision technology

Of course, it doesn’t have to be an either/or choice. Computer vision engineering teams don’t need to be Google-sized, and companies big and small that don’t want to build their own AI systems may still want robust, custom image recognition solutions. If a beauty or cosmetics company wants to find, say, pictures of people with high-volume hair to serve ads about body-minimizing shampoo, it’ll need someone to create a custom algorithm to search for high-volume hair, since that isn’t the first thing that the more commoditized solutions offer out of the box.

Same with logos or car make and model, which are still niche commercial applications that currently aren’t available in the open-source arena. And if a closed dataset isn’t readily available, no matter, because a good percentage of the images shared on social media these days are public, anyway, making for a rich source of images with which to feed the machine learning beast.
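
A common way such custom solutions are built is to fine-tune a network that has already learned general visual features from an open dataset like ImageNet on a smaller, purpose-built set of labeled pictures. The sketch below shows that pattern with a pretrained backbone; the “high-volume hair” label and folder layout are purely illustrative assumptions, not a real product.

```python
# Hedged sketch: adapting an ImageNet-pretrained backbone to a niche custom class.
# "custom_data/train" (subfolders: high_volume_hair/, other/) is an assumed layout.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "custom_data/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse general visual features learned from ImageNet

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # target class vs. everything else
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```

Because the heavy lifting was done on the open dataset, a relatively small custom collection of labeled pictures can be enough to teach the model the new, niche concept.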

Some companies combine open data with open-source frameworks, provided they have a team of engineers; others simply use hosted APIs if computer vision is not something on which they are staking their entire business.

And for companies with a wide range of very specific needs, there are custom solutions. No matter how it’s approached, though, it’s clear that image recognition rarely exists in isolation; it’s made stronger by access to more and more pictures, real-time big data, unique applications and speed. The businesses that make the most of these connections are the ones that will be best poised for success.
