
Industry - 7 minute read

Why do we have a fear of AI?

By Milo Hobsbawm

This post is the first in a series focused on AI and how we can harness its capabilities and potential to advance and optimize our creative practices.

In recent years, the rapid advancements in artificial intelligence and machine learning have sparked both excitement and concern across various sectors of society. From self-driving cars and virtual assistants to AI-powered tools for creating art and analyzing vast amounts of data, the applications of AI seem limitless. However, alongside the immense potential of these technological developments, there is a growing body of concern about the unintended consequences and potential negative impacts of AI on humanity.

AI and machine learning have been hotly debated topics recently, particularly around AI's potential applications and the issues raised by its current uses.

It is widely agreed that modern AI technology has proven itself to be one of the most transformative, door-opening inventions in the history of technology, let alone the history of mankind. But with great power comes great responsibility. Its transformative nature means the doors it opens swing both ways, offering the chance for both positive and negative outcomes.

Those who aren't part of the AI world, or who don't work closely with it - which happens to be a large chunk of society - have developed a negative perception of artificial intelligence. Afraid, skeptical, wary, distrustful, hesitant - the applicable terms are endless. This leads many to want to regulate AI.

Ironically, these same people use AI every day: to unlock their iPhone with facial recognition, to check their bank account on an app, or to find the best route on their SatNav.

Why are we scared of AI?

There are many reasons why we might fear AI:

Reason 1 - Bad people do bad things

Stephen Hawking himself said that future developments of AI "could spell the end of the human race."

It isn't unreasonable to think that, at some point in the future, AI could be weaponised. With computer science progressing every decade, the systems we have today will be incomparable to what humans develop in the future - and so will the potential threats. If future artificial intelligence technology fell into volatile hands, the consequences could be catastrophic.

One major concern is the potential for AI systems to be used for malicious purposes, such as cyberattacks, surveillance, or even the creation of autonomous weapons. As these systems become more advanced and capable of performing complex tasks, the risks associated with their misuse increase. For example, AI-powered deepfakes and disinformation campaigns could be used to manipulate public opinion and undermine trust in institutions, while AI-driven cyberattacks could target critical infrastructure and cause widespread disruption.

Reason 2 - No shared base level of understanding

Not understanding something leads to an inherent distrust of it. Our self-preservation instincts immediately view any novum ("new thing") from the worst possible angle, our point of view only changing once it's proven not to be a threat. The fear of the unknown is a powerful thing, and as both human knowledge and artificial intelligence advance, so does the uncertainty.

Artificial intelligence is a complex concept that many struggle to fully understand, and yet the sheer amount of data and research available is useless if you don't know how to interpret or contextualise it.

The lack of a shared understanding of AI among the general public is compounded by the rapid pace of technological advances and the complexity of the underlying systems, such as neural networks and deep learning algorithms. This knowledge gap contributes to the anxiety and mistrust surrounding AI, as people feel they have little control over or insight into the technology that is increasingly shaping their lives.

Reason 3 - Superintelligence

This is perhaps the most classic AI concern - that, one day, the computers and robots of the world will rise up against and surpass humans.

Andy Hobsbawm, chairman of Loops, says, "Thirty years ago the futurist Peter Cochrane said it's ok if computers land our planes safely but we get all emotional when they beat us at chess. These same fears are being massively amplified by AI today as machines get exponentially more clever and increasingly start to apply their intelligence like human beings."

It's still unknown what happens if AI continues to improve to the point where it no longer needs human involvement and has learned to invent and optimize itself.

Artificial General Intelligence (AGI) is still just a concept at this point, but could humans find themselves falling behind computers, unable to keep up with the rapid advancement? This would raise difficult-to-answer questions, such as: how do we measure intelligence?

The notion of superintelligence, an AI system that surpasses human intelligence in virtually all domains, is a daunting prospect. As AI continues to evolve and perform tasks that were once considered uniquely human, the line between human and machine capabilities becomes increasingly blurred. This blurring of boundaries raises profound questions about the nature of intelligence and what it means to be human.

While the development of artificial general intelligence (AGI) remains a distant possibility, the rapid progress in narrow AI applications, such as language models, image generators, and autonomous vehicles, has led many to speculate about the potential consequences of superintelligent AI. The fear is that once AI surpasses human intelligence, it may pursue goals that are misaligned with human values, leading to unintended and potentially catastrophic outcomes for society.

Reason 4 - We don't want to be replaced by computers

The threat of being told that computers could do a better job than us, that we could easily be replaced, may have hit closer to home than intended. Previous waves of automation, going back to the Industrial Revolution, have already shown this kind of takeover of human jobs.

This is a fear shared by employees across all industries, from engineering to marketing to service, but a fervent discussion has broken out regarding the relationship between AI tools and creative professions, with some AI now capable of composing music using reinforcement learning techniques, with little direct input from humans.

The job market can be competitive enough when you're up against other humans - who would want to compete against machines too?

The anxiety of job displacement due to AI is a growing concern as these systems become more advanced and capable of performing a wide range of tasks. From data entry and customer service to more complex roles in healthcare, software development, and creative industries, AI has the potential to automate many jobs, leading to significant disruptions in the labor market.

For example, the advent of generative AI, such as large language models and image generators, has raised concerns about the future of creative professions. These AI systems can create content, such as articles, artwork, and even music, with minimal human input, potentially rendering some creative jobs obsolete.

Reason 5 - The entertainment industry's portrayal of artificial intelligence

Hal from "2001: A Space Odyssey". Ultron. AUTO from "Wall-E".

Undoubtedly, the entertainment industry has played a large part in influencing the public's attitude towards AI development, framing computers as the antagonists in many science fiction narratives, as threats to humanity.

While viewers are able to distinguish fact from fiction, the current facts around AI are still murky to most, leaving us with images of a bloodied and determined Terminator stuck in our minds rather than the reality of what AI actually is.

The psychology behind being afraid of AI

Fear is a learned behaviour, and we may be our own worst enemies when trying to overcome it.

With daily exposure to apps like Twitter and TikTok, and content falling into popular formats like Top 10 lists, our demand for easily digestible content has grown while our attention spans have shrunk.

Artificial intelligence and machine learning are not straightforward concepts to explain, nor are they regularly made easily digestible. This makes them unpopular topics to learn about casually if there is no pre-existing interest. AI takes a lot of effort to learn about, and it can be a pain to find reputable information and comprehend it. For many, this effort outweighs the benefits of casual interest, and so they simply don't bother.

This lack of effort leads to a lack of understanding, which ultimately leads to a lack of control - over the concept itself or over our personal knowledge of it. Humans are territorial by nature, meaning we like to feel in control in order to feel safe. If something is unknown to us, and therefore outside our control, like AI, then we tend to fear it.

The psychology behind this is rooted in the human need for control and understanding. When faced with a new technology that is difficult to comprehend, people often resort to mental shortcuts and heuristics to make sense of it. In the case of AI, the lack of accessible and easily digestible information about its workings and implications can lead to the formation of misconceptions and fears.

Moreover, the so-called "black box" nature of many AI systems, particularly those based on deep learning, can make it difficult for even experts to fully understand how these systems arrive at their decisions. This lack of transparency and interpretability can further fuel public distrust and anxiety about the potential risks and unintended consequences of AI.

Will we ever get over the fear of AI?

"The fear will eventually subside to caution, and then collaboration, like most things as we learn to live side by side and augment our lives with the power of AI," says Loops co-founder, Scott Morrison.

Sometimes, we find some comfort in the inevitable. Whether we like it or not, artificial intelligence is here to stay, already too ingrained in our everyday jobs and lives to simply cut it out.

Ensuring the ethical development and deployment of AI systems is crucial in building public trust. This involves establishing guidelines and regulations that prioritize transparency, accountability, and fairness in AI decision-making processes. Collaborations between AI developers, ethicists, and policymakers can help create a framework for the responsible use of AI that addresses societal concerns and mitigates potential negative impacts.

As AI becomes more integrated into our daily lives, society will likely adapt and become more comfortable with the technology. Just as we have learned to trust and rely on other once-novel technologies, such as smartphones and the internet, we may eventually come to accept and embrace AI as a tool that can enhance our lives and solve complex problems. However, this process will take time and require ongoing dialogue and collaboration between all stakeholders to ensure that the development and use of AI aligns with human values and benefits society as a whole.

Loops aims to be part of a positive shift

Artificial intelligence can help solve some of the most damaging issues on the macro scale - disease, climate change, terrorism - but issues such as these are solved in baby steps, and usually not in the public eye. This means the positive capabilities of AI-enabled technologies aren't observed by most, going unappreciated and unacknowledged.

At Loops, we have an interest in applying machine learning to support creative strategy, building the foundation of what we like to call the "modern creative process".

Loops uses a form of natural language processing (NLP) to understand how a large audience feels about an idea.

Imagine a huge focus group that you can spin up from your laptop, with any group of people in the world, and understand consensus, outliers, and blindspots in a few clicks. This doesn't hinder creativity - it enables it because you are empowered to explore new ideas, test brave thinking, and get weird, all with zero risk and minimal effort.
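To make the idea of "consensus, outliers, and blindspots" concrete, here is a deliberately simplified sketch of how free-text reactions could be scored and aggregated. This is not Loops' actual method - the lexicon, responses, and thresholds below are invented for illustration, and a real NLP pipeline would use a trained sentiment model rather than a word list:

```python
from statistics import mean, stdev

# Toy sentiment lexicon (illustrative only; a real system would use a trained model).
LEXICON = {"love": 2, "great": 1, "like": 1, "meh": 0,
           "boring": -1, "confusing": -1, "hate": -2}

def score(response: str) -> float:
    """Average lexicon score of the recognised words in a free-text response."""
    hits = [LEXICON[w] for w in response.lower().split() if w in LEXICON]
    return mean(hits) if hits else 0.0

def summarise(responses: list[str]) -> dict:
    """Consensus = mean score; outliers = responses more than one
    standard deviation away from the consensus."""
    scores = [score(r) for r in responses]
    consensus = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    outliers = [r for r, s in zip(responses, scores)
                if spread and abs(s - consensus) > spread]
    return {"consensus": consensus, "outliers": outliers}

responses = [
    "love this idea",
    "great concept, like it",
    "meh",
    "hate it, confusing and boring",
]
print(summarise(responses))
```

Running this surfaces the two extreme reactions as outliers around a mildly positive consensus - the same shape of output a virtual focus group would give you, just at a fraction of the sophistication and scale.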

In the next piece in this series, we'll outline and discuss how to address the ethical concerns and fears around AI, and the benefits of working in tandem with it.

