AI’s Fallibilities Cast Doubt Over Reliance on the Technology

The technology industry is constantly looking to push AI into new products and services, and is especially keen to exploit AI that can predict users’ preferences, learn from their behavior and enhance their lives. However, there is mounting evidence that the technology has been unable to deliver on those expectations and that in many cases it can be detrimental to the end user.

Facebook and other social media sites have been using artificial intelligence, or AI, to understand their users’ interests and preferences and to deliver more relevant content.

“The first thing I do when I wake up in the morning is intentionally go to a website I have zero interest in,” said Swagatam Sen, a 39-year-old who works in the financial sector in the U.K.

Around the clock, whether at work or during his personal time, Sen switches back and forth between websites he likes and those he does not, all to trick the artificial intelligence algorithms that track his online activity.

Social media companies such as Facebook have already been under fire for their privacy policies, and the latest controversy over the use of AI looks like another blow.

Facebook said it does not use the content of private messages for targeted advertising, but it did not rule out the possibility of using data gathered from private messages for AI research. Facebook said it is broadly using AI to improve its services, but that it is doing so responsibly.

Tracking You

Social media platforms and other online services use AI to follow each user’s day-to-day internet searches and browsing habits and determine what ads, search results or posts would be most appropriate for them. Worried that his knowledge would skew toward certain favored fields, Sen about a year ago began his quest to baffle the algorithms.
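
The broad shape of that profiling loop can be sketched in a few lines. The toy example below is purely illustrative and not any platform’s actual system: every visit raises an interest score, content is ranked by those scores, and deliberately visiting unrelated sites, as Sen does, dilutes whatever profile the algorithm builds. All names and data here are hypothetical.

```python
# Toy sketch of an interest-profiling loop; not any real platform's system.
from collections import Counter

class InterestProfile:
    def __init__(self):
        self.scores = Counter()

    def record_visit(self, topic):
        # Every page view nudges the profile toward that topic.
        self.scores[topic] += 1

    def rank_content(self, candidate_posts):
        """candidate_posts: (title, topic) tuples, returned most 'relevant' first."""
        return sorted(candidate_posts, key=lambda post: self.scores[post[1]], reverse=True)

profile = InterestProfile()
for topic in ["finance", "finance", "chess", "gardening"]:  # mixed, Sen-style browsing
    profile.record_visit(topic)

print(profile.rank_content([("Rate outlook", "finance"), ("Rose care", "gardening")]))
```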

Tech companies have tried to harness AI to leverage their troves of data, but the digital pollution this generates is making people’s real lives worse. The industry stands at a turning point that could bring a “great reset” toward a brighter future, if it can realize the technology’s full potential.

Research company IDC estimates that investment in AI will more than double, from $50.1 billion in 2020 to $110 billion in 2024, nearly as much as the auto industry spends on research and development.

All that spending is expected to have a major impact. According to professional services group PwC, AI could make as much as 30% of today’s jobs redundant by the mid-2030s.

Yet for all the talk of AI ushering in a new era, what seems most evident right now are its limitations. A study by research company Gartner found that 47% of AI projects stall in the research phase.

One of AI’s biggest features is its ability to learn, but even as the technology has become used far more broadly, failures are easy to find.

Ask the hundreds of high school students who last August protested outside the U.K. Department for Education in London. Their university entrance exams had been canceled due to the pandemic, and the government used an AI tool to estimate grades. The tool based its judgments on students’ previous exam results, schools’ historical grade distributions and other factors. Nearly 40% of students received grades lower than their teachers had estimated, and a disproportionate share of them came from working-class backgrounds.
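
The general mechanics of such a standardization approach can be illustrated with a short, hypothetical sketch; it is not the regulator’s actual model, and all names and numbers are made up. It simply maps a teacher-supplied rank order of students onto a school’s historical grade distribution, which makes clear why a strong pupil at a school with weak past results could be marked down regardless of individual ability.

```python
# Illustrative sketch only: assign grades by mapping a ranked cohort onto a
# school's historical grade distribution. Not the actual exam-grading model.

def assign_grades(ranked_students, historical_distribution):
    """ranked_students: student names, best first.
    historical_distribution: grade -> share of past cohorts (shares sum to 1)."""
    n = len(ranked_students)
    grades = {}
    cutoff = 0.0
    index = 0
    for grade, share in historical_distribution.items():
        cutoff += share
        # Students whose rank falls within this cumulative share get this grade.
        while index < n and (index + 1) / n <= cutoff + 1e-9:
            grades[ranked_students[index]] = grade
            index += 1
    for student in ranked_students[index:]:
        grades[student] = grade  # rounding leftovers fall into the lowest grade listed
    return grades

# Hypothetical school whose past results skewed toward middle grades.
history = {"A": 0.10, "B": 0.30, "C": 0.40, "D": 0.20}
cohort = [f"student_{i:02d}" for i in range(1, 11)]  # ranked best to worst by teachers
print(assign_grades(cohort, history))
```

In a scheme like this, the top grades available to a cohort are capped by the school’s past performance, no matter how well individual students were expected to do.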

“We would encourage the Government to apologize for risking the future careers of so many students,” the Welsh Youth Parliament said.

Self-driving startup Drive.ai wound down in 2019, with much of its staff absorbed by Apple, and Uber last year shut the AI research lab it had built around Geometric Intelligence, a firm co-founded by AI researcher Gary Marcus. And in the past year, several chatbots have been caught out by the tricky nature of language.

Facebook recently pulled its chatbot after it turned into an antisemitic conspiracy theorist, while Microsoft’s chatbot was found making racist statements. Google suffered a similar embarrassment when its chatbot was found asking inappropriate questions about the Holocaust.

AI has its limits

But it’s not just the complexity of language that can be problematic. AI also has its limits when it comes to creative thinking and common sense.

Last year, for example, one of Google’s DeepDream image generators produced a picture of a dog sitting on a sofa, yet could label its own creation only as a “probable dog.”

Similarly, Google’s AlphaGo, the AI that famously beat top professional Lee Sedol at the ancient board game Go, still lost one of the five games in their 2016 match after Lee played an unexpected move the system had not anticipated.

Perhaps the biggest barrier, however, is the technology’s limited grasp of the real world.

AI still struggles to understand the real world

Take the example of an AI deciding whom to hire. If a candidate’s CV closely resembles those of people the company has hired before, the AI is likely to favor them. But if a candidate offers something outside that pattern, such as an unusual qualification or a recommendation from someone inside the company, an AI recruiter has little idea how to weigh those factors.
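
A hypothetical sketch shows how narrow that pattern matching can be; this is not any vendor’s actual product, and the data and function names are invented. The screener below scores a CV purely by its word overlap with the CVs of past hires, so a signal that lives outside that text, such as an internal recommendation, has nowhere to enter the score.

```python
# Naive CV screener sketch: similarity to past hires is the only signal it can use.
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

past_hires = [
    "python developer five years fintech experience sql",
    "java developer banking experience sql python",
]

def screen(cv_text, recommended_internally=False):
    # The recommendation flag is accepted but never used: the model has no way
    # to weigh it, which is exactly the gap described above.
    return max(cosine_similarity(cv_text, hire) for hire in past_hires)

print(screen("python developer fintech sql", recommended_internally=True))
print(screen("physics phd unusual background strong internal recommendation"))
```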

The same is true for smart cities.

AI-powered systems will control everything from traffic lights to waste collection. But there’s no guarantee that they’ll be able to deal with everything that comes their way.

One of the most dramatic examples came in 2018, when a self-driving car struck and killed a pedestrian in Arizona. Although the vehicle was equipped with a number of safety mechanisms, it failed to properly identify a woman crossing the road in front of it at night.

Despite this, the technology is still advancing at a rapid pace.

‘AI will be the most powerful technology’

“I think we’re on the verge of a new era,” said Microsoft CEO Satya Nadella. “In fact, I think it’s already begun and I think it will accelerate even more. Just as electricity transformed industry after industry, I think AI will be the most powerful technology to transform all industries.”

But while he believes that AI will become a widespread technology, Nadella still thinks that there’s a long way to go.

“AI is going to do things that we’ve never thought of,” he said. “The real drama is the combination of human and machine capabilities. I think AI is the right technology, but it is not the only one. If anything, we’re seeing a Cambrian era in AI.”

Nadella said that he sees this combination of technologies as something that will become widespread in the future. “It will make humans more powerful,” he said. “Every human endeavor will be enhanced.”

In the U.S., AI has made life-altering errors. Black men have been arrested after being misidentified by facial recognition software. Following pressure from rights groups, HireVue, which provides a platform for online interviews, in January said it had removed facial analysis screening from its service.

Nick Bostrom, a professor at the University of Oxford, has sounded the alarm, warning that AI is a technology with the potential to destroy civilization. Being dazzled by the ability of algorithms to process vast amounts of data while failing to control their negative aspects could prove devastating.

There are also growing concerns that as AI develops, the diversity of voices shaping it is disappearing.

AI ethics researcher Timnit Gebru was forced out of Google late last year over a paper warning of the risk of bias in language models. The move raised concerns among other researchers that excluding critical voices like Gebru’s flies in the face of good AI development.

Repercussions from the U.S.-China conflict are also spreading. Chinese search giant Baidu last year withdrew from the Partnership on AI, an international alliance that addresses AI development issues; other members include Apple, Intel, IBM and Sony.

If companies and countries prioritize their own interests, AI that can make fair and unbiased decisions will become an even more distant prospect.

AI is a sharp-edged tool born of the modern era, but it cannot correct failures on its own; only humans are capable of carrying out that task.

A European Union draft proposal of what would be the world’s first set of restrictions governing the use of AI surfaced in April. It calls for banning governmental bodies from using the tech to attach social credit scores to individuals.

The rules would police corporations’ use of high-risk AI by requiring the programs to undergo prior screening by regulators. Violators would face stiff fines.

Efforts are gaining steam to keep technological overreach in check. At Princeton University, researchers developed a tool that identifies possible bias in image sets used to train AI programs.

For example, if the tool finds that images of women in a data set disproportionately show them with flowers, or images of men disproportionately show them playing baseball, it alerts developers to potential stereotypes.
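
The underlying idea can be sketched without reproducing the Princeton tool itself. The hypothetical snippet below counts how often each label co-occurs with each gender annotation in a data set and flags heavy skews for a human to review; the function name, thresholds and data format are all assumptions for illustration.

```python
# Illustrative co-occurrence check for annotated image sets; not the Princeton tool.
from collections import defaultdict

def flag_skewed_labels(annotations, ratio_threshold=4.0, min_count=20):
    """annotations: dicts like {"gender": "woman", "labels": ["flowers", "vase"]}."""
    genders = {item["gender"] for item in annotations}
    counts = defaultdict(lambda: {g: 0 for g in genders})
    for item in annotations:
        for label in item["labels"]:
            counts[label][item["gender"]] += 1
    flagged = []
    for label, by_gender in counts.items():
        if sum(by_gender.values()) < min_count:
            continue  # too few examples to draw any conclusion
        most = max(by_gender.values())
        least = max(min(by_gender.values()), 1)  # avoid division by zero
        if most / least >= ratio_threshold:
            flagged.append((label, by_gender))
    return flagged  # a human reviewer decides what, if anything, to follow up on

# Hypothetical usage: "flowers" appears almost exclusively alongside women.
sample = [{"gender": "woman", "labels": ["flowers"]}] * 9 + [{"gender": "man", "labels": ["flowers"]}]
print(flag_skewed_labels(sample, min_count=5))
```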

Of course, humans will be the ones to make final decisions on what to do with suspect data. Princeton has open-sourced the tool and made it available for collaborative upgrades.

U.S. businessman Tim Kendall sees hope in how individual consumers embrace technology. In 2018, Kendall became chief executive of Moment, which offers an app that guards against excessive social media browsing.

Kendall likens the cravings produced by algorithms to those of nicotine addiction. Ironically, he once assisted in Facebook’s growth as director of monetization. Under Kendall’s watch, the platform’s algorithms pushed postings and ads that match a user’s preferences, helping keep eyes right where Facebook wanted them.

Kendall turned over a new leaf after realizing he had become a social media junkie himself, often finding himself glued to his smartphone right before going to sleep.

“Products that compete with our sleep for profit are unsustainable and dangerous,” he said. “I hope that we see a paradigm shift toward more of these human-centric services designed to benefit users, not exploit them.”

Ever since the discovery of fire, the human race has wrestled with the good and evil of new technology while pursuing further innovation.

The same applies to AI: The invention is not meant to be perfect, but it can serve as a tool for our betterment. Through trial and error, and by resolving the most pressing issues, the great reset that promises to maximize the dividends of AI’s development has begun.
