
Against pseudanthropy

Why AI must not counterfeit humanity



“Shall I say thou art a man, that hast all the symptoms of a beast? How shall I know thee to be a man? By thy shape? That affrights me more, when I see a beast in likeness of a man.”
— Robert Burton, The Anatomy of Melancholy


I propose that software be prohibited from engaging in pseudanthropy, the impersonation of humans. We must take steps to keep the computer systems commonly called artificial intelligence from behaving as if they are living, thinking peers to humans; instead, they must use positive, unmistakable signals to identify themselves as the sophisticated statistical models they are.

If we don’t, these systems will systematically deceive billions in the service of hidden and mercenary interests. And on purely aesthetic grounds, it is unbecoming of intelligent life to suffer imitation by machines.

As numerous scholars have observed even before the documentation of the “Eliza effect” in the ’60s, humanity is dangerously overeager to recognize itself in replica: A veneer of natural language is all it takes to convince most people that they are talking with another person.

But what began as an intriguing novelty, a sort of psycholinguistic pareidolia, has escalated to purposeful deception. The advent of large language models has produced engines that can generate plausible and grammatical answers to any question. Obviously these can be put to good use, but mechanically reproduced natural language that is superficially indistinguishable from human discourse also presents serious risks. (Likewise generative media and algorithmic decision-making.)

These systems are already being presented as or mistaken for humans, if not yet at great scale — but that danger continually grows nearer and clearer. The organizations that possess the resources to create these models are not just incidentally but purposefully designing them to imitate human interactions, with the intention of deploying them widely upon tasks currently performed by humans. Simply put, the intent is for AI systems to be convincing enough that people assume they are human and will not be told otherwise.

Just as few people bother to verify the truthfulness of an outdated article or deliberately crafted disinformation, few will inquire as to the humanity of their interlocutor in any commonplace exchange. These companies are counting on that, and they intend to exploit it. Widespread misconception of these AI systems as being like real people with thoughts, feelings and a general stake in existence — important things, none of which they possess — is inevitable if we do not take action to forestall it.

This is not about a fear of artificial general intelligence, or lost jobs, or any other immediate concern, though it is in a sense existential. To paraphrase Thoreau, it is about preventing ourselves from becoming the tools of our tools.

I contend that it is an abuse and dilution of anthropic qualities, and a harmful imposture upon humanity at large, for software to fraudulently present itself as a person by superficial mimicry of uniquely human attributes. Therefore, I propose that we outlaw all such pseudanthropic behaviors and require clear signals that a given agent, interaction, decision, or piece of media is the product of a computer system.

Some possible such signals are discussed below. They may come across as fanciful, even absurd, but let us admit: We live in absurd, fanciful times. This year’s serious conundrums are last year’s science fiction — sometimes not even as far back as that.

Of course, I’m under no illusions that anyone will adhere to these voluntarily, and even if they were by some miracle required to, that would not stop malicious actors from ignoring those requirements. But that is the nature of all rules: They are not laws of physics, impossible to contravene, but a means to guide and identify the well-meaning in an ordered society, and provide a structure for censuring violators.

If rules like those below are not adopted, billions will unknowingly and without consent be subjected to pseudanthropic media and interactions that they might understand or act on differently if they knew a machine was behind them. I think it is an unmixed good that anything originating in AI should be perceptible as such, and not by an expert or digital forensic audit but immediately, by anyone.

At the very least, consider it a thought experiment. It should be a part of the conversation around regulation and ethics in AI that these systems could and ought to both declare themselves clearly and forbear from deception — and that we would probably all be better off if they did. Here are a few ideas on how this might be accomplished.

1. AI must rhyme

This sounds outlandish and facetious, and certainly it’s the least likely rule of all to be adopted. But little else would as neatly solve as many problems emerging from generated language.

One of the most common venues for AI impersonation today is in text-based interactions and media. But the problem is not actually that AI can produce human-like text; rather, it is that humans try to pass off that text as being their own, or having issued from a human in some way or another, be it spam, legal opinions, social studies essays, or anything else.

There’s a lot of research being performed on how to identify AI-generated text in the wild, but so far it has met with little success and the promise of an endless arms race. There is a simple solution to this: All text generated by a language model should have a distinctive characteristic that anyone can recognize yet leaves meaning intact.

For example, all text produced by an AI could rhyme.

Rhyming is possible in most languages, equally obvious in text and speech, and is accessible across all levels of ability, learning and literacy. It is also fairly hard for humans to imitate, while being more or less trivial for machines. Few would bother to publish a paper or submit their homework in an ABABCC dactylic hexameter. But a language model will do so happily and instantly if asked or required to.
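
To make the mechanics concrete, here is a minimal sketch of how a provider might enforce such a requirement at the interface layer, by attaching a fixed directive to every request before it reaches the model. The `generate` callable and its parameters are hypothetical stand-ins for whatever chat-style model API is actually in use.

```python
# A fixed directive that travels with every request, so all output carries
# the "AI must rhyme" signal proposed above.
RHYME_DIRECTIVE = (
    "Always answer in rhymed verse (couplets are fine). The rhyme is a "
    "required signal that this text was produced by a machine."
)

def rhyming_answer(generate, user_prompt: str) -> str:
    """Pass the user's prompt through with the rhyme directive attached.

    `generate` is a hypothetical stand-in for any model API that accepts
    a system prompt and a user prompt and returns a string.
    """
    return generate(system_prompt=RHYME_DIRECTIVE, user_prompt=user_prompt)
```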

We need not be picky about the meter, and of course some of these rhymes will necessarily be slant, contrived or clumsy — but as long as it comes in rhyming form, I think it will suffice. The goal is not to beautify, but to make it clear to anyone who sees or hears a given piece of text that it has come straight from an AI.

Today’s systems seem to have a literary bent, as demonstrated by ChatGPT:

ChatGPT-generated rhyming summary of one of the winners of the 2022 Nobel Prize in Physics. Image Credits: OpenAI/ChatGPT

A better rhyming corpus would improve clarity and tone things down a bit. But the summary gets the gist across, and if it cited its sources, those could be consulted by the user.

This doesn’t eliminate hallucinations, but it does alert anyone reading that they should be on watch for them. Of course it could be rewritten, but that is no trivial task either. And there is little risk of humans imitating AI with their own doggerel (though it may prompt some to improve their craft).

Again, the goal is not to universally and perfectly change all generated text, but to create a reliable, unmistakable signal that the text you are reading or hearing is generated. There will always be unrestricted models, just as there will always be counterfeits and black markets. You can never be completely sure that a piece of text is not generated, just as you cannot prove a negative. Bad actors will always find a way around the rules. But that does not remove the benefit of having a universal and affirmative signal that some text is generated.

If your travel recommendations come in iambics, you can be pretty sure that no human bothered to try to fool you by composing those lines. If your customer service agent caps your travel plans with a satisfying alexandrine, you know it is not a person helping you. If your therapist talks you through a crisis in couplets, it doesn’t have a mind or emotions with which to sympathize or advise. Same for a blog post from the CEO, a complaint to the school board, or a hotline for eating disorders.

In any of these cases, might you act differently if you knew you were speaking to a computer rather than a person? Perhaps, perhaps not. The customer service or travel plans might be just as good as a human’s, and faster to boot. A non-human “therapist” could be a desirable service. Many interactions with AI are harmless, useful, even preferable to an equivalent one with a person. But people should know to begin with, and be reminded frequently, especially in circumstances of a more personal or important nature, that the “person” talking to them is not a person at all. The choice of how to interpret these interactions is up to the user, but it must be a choice.

If there is a solution as practical but less whimsical than rhyme, I welcome it.

2. AI may not present a face or identity


There’s no reason for an AI model to have a human face, or indeed any aspect of human individuality, except as an attempt to capture unearned sympathy or trust. AI systems are software, not organisms, and should present and be perceived as such. Where they must interact with the real world, there are other ways to express attention and intention than pseudanthropic face simulation. I leave the invention of these to the fecund imaginations of UX designers.

AI also has no national origin, personality, agency or identity — but its diction emulates that of humans who do. So, while it is perfectly reasonable for a model to say that it has been trained on Spanish sources, or is fluent in Spanish, it cannot claim to be Spanish. Likewise, even if all its training data was attributed to female humans, that does not impart femininity upon it any more than a gallery of works by female painters is itself female.

Consequently, as AI systems have no gender and belong to no culture, they should not be referred to by human pronouns like he or she, but rather as objects or systems: like any app or piece of software, “it” and “they” will suffice.

(It may even be worth extending this rule to when such a system, being in fact without a self, inevitably uses the first person. We may wish to have these systems use the third person instead, such as “ChatGPT” rather than “I” or “me.” But admittedly this may be more trouble than it is worth. Some of these issues are discussed in a fascinating paper published recently in Nature.)

An AI ought not claim to be a fictitious person, such as a name invented for the purposes of authorship of an article or book. Names like these serve solely to identify the human behind a work, so using them for a machine is pseudanthropic and deceptive. If an AI model generated a significant proportion of the content, the model should be credited. As for the names of the models themselves (an inescapable necessity; many machines have names, after all), a convention might be useful, such as single names beginning and ending with the same letter or phoneme — Amira, Othello, and the like.

This also applies to instances of specific impersonation, like the already common practice of training a system to replicate the vocal and verbal patterns and knowledge of an actual, living person. David Attenborough, the renowned naturalist and narrator, has been a particular target of this as one of the world’s most recognizable voices. However entertaining the result, it has the effect of counterfeiting and devaluing his imprimatur, and the reputation he has carefully cultivated and defined over a lifetime.

Navigating consent and ethics here is very difficult and must evolve alongside the technology and culture. But I suspect that even the most permissive and optimistic today will find cause for worry over the next few years as not just world-famous personalities but politicians, colleagues and loved ones are re-created against their will and for malicious purposes.

3. AI cannot “feel” or “think”

Using the language of emotion or self-awareness despite possessing neither makes no sense. Software can’t be sorry, or afraid, or worried, or happy. Those words are only used because that is what the statistical model predicts a human would say, and their usage does not reflect any kind of internal state or drive. These false and misleading expressions have no value or even meaning, but serve, like a face, only to lure a human interlocutor into believing that the interface represents, or is, a person.

As such, AI systems may not claim to “feel,” or express affection, sympathy, or frustration toward the user or any subject. The system feels nothing and has only chosen a plausible series of words based on similar sequences in its training data. But despite the ubiquity of rote dyads like “I love you/I love you too” in literature, naive users will take an identical exchange with a language model at face value rather than as the foregone outcome of an autocomplete engine.


Nor is the language of thought, consciousness, and analysis appropriate for a machine learning model. Humans use phrases like “I think” to express dynamic internal processes unique to sentient beings (though whether humans are the only ones is another matter).

Language models and AI in general are deterministic by nature: complex calculators that produce one output for each input. This mechanistic behavior can be masked by salting prompts with random numbers or otherwise injecting some variety into the output, but that must not be mistaken for cogitation of any real kind. They no more “think” a response is correct than a calculator “thinks” 8 x 8 is 64. The language model’s math is more complicated — that is all.

As such, the systems must not mimic the language of internal deliberation, or that of forming and having an opinion. In the latter case, language models simply reflect a statistical representation of opinions present in their training data, which is a matter of recall, not position. (If matters of ethics or the like are programmed into a model by its creators, it can and should of course say so.)

NB: Obviously the above two prohibitions directly undermine the popular use case of language models trained and prompted to emulate certain categories of person, from fictitious characters to therapists to caring partners. That phenomenon wants years of study, but it may be well to say here that the loneliness and isolation experienced by so many these days deserves a better solution than a stochastic parrot puppeteered by surveillance capitalism. The need for connection is real and valid, but AI is a void that cannot fill it.

4. AI-derived figures, decisions and answers must be marked⸫

AI models are increasingly used as intermediate functions in software, interservice workflows, even other AI models. This is useful, and a panoply of subject- and task-specific agents will likely be the go-to solution for a lot of powerful applications in the medium term. But it also multiplies the depth of inexplicability already present whenever a model produces an answer, a number, or binary decision.

It is likely that, in the near term, the models we use will only grow more complex and less transparent, while results relying on them appear more commonly in contexts where previously a person’s estimate or a spreadsheet’s calculation would have been.

It may well be that the AI-derived figure is more reliable, or inclusive of a variety of data points that improve outcomes. Whether and how to employ these models and data is a matter for experts in their fields. What matters is clearly signaling that an algorithm or model was employed for whatever purpose.

If a person applies for a loan and the loan officer makes a yes or no decision themselves, but the amount they are willing to loan and the terms of that loan are influenced by an AI model, that must be indicated visibly in any context where those numbers or conditions appear. I suggest appending an existing and easily recognizable symbol that is not widely used otherwise, such as the signe-de-renvoi ⸫, which historically indicated removed (or dubious) matter.

This symbol should link to documentation for the models or methods used, or at the very least name them so they can be looked up by the user. The idea is not to provide a comprehensive technical breakdown, which most people wouldn’t be able to understand, but to express that specific non-human decision-making systems were employed. It’s little more than an extension of the widely used citation or footnote system, but AI-derived figures or claims should have a dedicated mark rather than a generic one.
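As an illustration only, here is a minimal sketch of what attaching such a mark could look like in code. The model name and documentation URL are hypothetical placeholders, and the formatting convention would of course need to be standardized.

```python
AI_MARK = "⸫"  # the signe-de-renvoi proposed above for AI-derived values

def mark_ai_figure(value: float, model: str, docs_url: str) -> str:
    """Render a figure with the AI mark and a reference the reader can follow."""
    return f"{value:,.2f}{AI_MARK} ({model}: {docs_url})"

# Hypothetical example: an AI-influenced loan amount, flagged wherever it appears.
print(mark_ai_figure(12500.00, "CreditModel-v2", "https://example.com/creditmodel-v2"))
```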

There is research being done on making the statements produced by language models reducible to a series of assertions that can be individually checked. Unfortunately, this has the side effect of multiplying the computational cost of the model. Explainable AI is a very active research area, and so this guidance is as likely as the rest to evolve.

5. AI must not make life or death decisions

Only a human is capable of weighing the considerations of a decision that may cost another human their life. After defining a category of decisions that qualify as “life or death” (or some other term connoting the correct gravity), AI must be precluded from making those decisions, or attempting to influence them beyond providing information and quantitative analysis (marked as described above).

Of course it may still provide information, even crucial information, to the people who do actually make such decisions. For instance, an AI model may help a radiologist find the correct outline of a tumor, and it can provide statistical likelihoods of different treatments being effective. But the decision on how or whether to treat the patient is left to the humans concerned (as is the attendant liability).

Incidentally, this also prohibits lethal machine warfare such as bomb drones or autonomous turrets. They may track, identify, categorize, etc., but a human finger must always pull the trigger.

If presented with an apparently unavoidable life or death decision, the AI system must stop or safely disable itself instead. This corollary is necessary in the case of autonomous vehicles.

The best way to short-circuit the insoluble “trolley problem” of deciding whether to kill (say) a kid or a grandma when the brakes go out is for the AI agent to destroy itself instead, as safely as possible, at whatever cost to itself or indeed its occupants (perhaps the only allowable exception to the life or death rule).


It’s not that hard — there are a million ways for a car to hit a lamppost, or a freeway divider, or a tree. The point is to obviate the morality of the question and turn it into a simple matter of always having a realistic self-destruction plan ready. If a computer system acting as an agent in the physical world isn’t prepared to destroy itself or at the very least take itself out of the equation safely, the car (or drone, or robot) should not operate at all.

Similarly, any AI model that positively determines that its current line of operation could lead to serious harm or loss of life must halt, explain why it has halted, and await human intervention. No doubt this will produce a fractal frontier of edge cases, but better that than leaving it to the self-interested ethics boards of a hundred private companies.

6. AI imagery must have a corner clipped

Piranesi-style sketch generated by DALL-E, with corner clipped to indicate AI origin. Image Credits: OpenAI/Devin Coldewey

As with text, image generation models produce content that is superficially indistinguishable from human output.

This will only become more problematic, as the quality of the imagery improves and access broadens. Therefore it should be required that all AI-generated imagery have a distinctive and easily identified quality. I suggest clipping a corner off, as you see above.

This doesn’t solve every problem, as of course the image could simply be cropped to exclude it. But again, malicious actors will always be able to circumvent these measures — we should first focus on ensuring that non-malicious generated imagery like stock images and illustrations can be identified by anyone in any context.

Metadata gets stripped; watermarks are lost to artifacting; file formats change. A simple but prominent and durable visual feature is the best option right now. Something unmistakable yet otherwise uncommon, like a corner clipped off at 45 degrees, one-fourth of the way up or down one side. This is visible and clear whether or not the image is tagged “generated” in context, saved as a PNG or a JPG, or subject to any other transient handling. And unlike many watermarks it can’t easily be blurred out; removing it would mean regenerating the clipped content.
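For the sake of concreteness, here is a minimal sketch of the clipping operation using the Pillow imaging library, assuming a notch spanning a quarter of the shorter side; the exact geometry and placement would need to be standardized for the signal to be recognizable.

```python
from PIL import Image, ImageDraw

def clip_corner(img: Image.Image, fraction: float = 0.25) -> Image.Image:
    """Cut a 45-degree notch off the top-right corner as a visible
    'machine-generated' signal. `fraction` is the share of the shorter
    side that the notch spans."""
    img = img.convert("RGBA")
    w, h = img.size
    notch = int(min(w, h) * fraction)
    mask = Image.new("L", (w, h), 255)  # 255 = keep pixel fully opaque
    draw = ImageDraw.Draw(mask)
    # Make the triangular corner region fully transparent.
    draw.polygon([(w - notch, 0), (w, 0), (w, notch)], fill=0)
    img.putalpha(mask)
    return img

# clip_corner(Image.open("generated.png")).save("generated_clipped.png")
```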

There is still a role for metadata and things like digital chain of custody, perhaps even steganography, but a clearly visible signal is helpful.

Of course this exposes people to a new risk: trusting that only images with clipped corners are generated. But the problem we already face is that all images are suspect and we must rely entirely on subtler visual clues; there is no simple, positive signal that an image is generated. Clipping is just such a signal, and it will help demarcate an increasingly commonplace practice.


Appendix

Won’t people just circumvent rules like these with non-limited models?

Yes, and I pirate TV shows sometimes. I jaywalk sometimes. But generally, I adhere to the rules and laws we have established as a society. If someone wants to use a non-rhyming language model in the privacy of their own home for reasons of their own, no one can or should stop them. But if they want to make something widely available, their practice now takes place in a collective context with rules put in place for everyone’s safety and comfort. Pseudanthropic content thus moves from a personal matter to a societal one, governed by societal rather than personal rules. Different countries may have different AI rules as well, just as they have different rules on patents, taxes and marriage.

Why the neologism? Can’t we just say “anthropomorphize”?

Pseudanthropy is to counterfeit humanity; anthropomorphosis is to transform into humanity. The latter is something humans do, a projection of one’s own humanity onto something that lacks it. We anthropomorphize everything from toys to pets to cars to tools, but the difference is none of those things purposefully emulate anthropic qualities in order to cultivate the impression that they are human. The habit of anthropomorphizing is an accessory to pseudanthropy, but they are not the same thing.

And why propose it in this rather overblown, self-serious way?

Well, that’s just how I write!

How could rules like these be enforced?

Ideally, a federal AI commission should be founded to create the rules, with input from stakeholders like academics, civil rights advocates, and industry groups. My broad suggestions here are not actionable or enforceable as written, but a rigorous set of definitions, capabilities, restrictions and disclosures would provide the kind of guarantee we expect from things like food labels, drug claims, privacy policies, etc.

If people can’t tell the difference, does it really matter?

Yes, or at least I believe so. To me it is clear that superficial mimicry of human attributes is dangerous and must be limited. Others may feel differently, but I strongly suspect that over the next few years it will become much clearer that there is real harm being done by AI models pretending to be people. It is literally dehumanizing.

What if these models really are sentient?

I take it as axiomatic that they aren’t. This sort of question may eventually achieve plausibility, but right now the idea that these models are self-aware is totally unsupported.

If you force AIs to declare themselves, won’t that make it harder to detect them when they don’t?

There is a risk that by making AI-generated content more obvious, we will not develop our ability to tell it apart naturally. But again, the next few years will likely push the technology forward to the point where even experts can’t tell the difference in most contexts. It is not reasonable to expect ordinary people to perform this already difficult process. Ultimately it will become a crucial cultural and media literacy skill to recognize generated content, but it will have to be developed in the context of those tools, as we can’t do it beforehand. Until and unless we train ourselves as a culture to differentiate the original from the generated, it will do a lot of good to use signals like these.

Won’t rules like this impede innovation and progress?

Nothing about these rules limits what these models can do, only how they do it publicly. A prohibition on making mortal decisions doesn’t mean a model can’t save lives, only that we should be choosing as a society not to trust them implicitly to do so independent of human input. Same for the language — these do not stop a model from finding or providing any information, or performing any helpful function, only from doing so in the guise of a human.

You know this isn’t going to work, right?

But it was worth a shot.
