The Danger of “Downgrading” Artificial Intelligence (AI) to ChatGPT

Chuah Kee Man
Feb 1, 2023



I know it's kind of ironic to be talking about how "intelligent" artificial intelligence (AI) should be. After all, humans were the ones who went on a "crusade" to find other intelligent beings in the universe (and beyond), only to spend more time and resources creating an "artificial" one. Science fiction made the idea even more popular with the masses (like the works of my favourite author, Isaac Asimov), and by the late 1940s we had scientists, mathematicians, and philosophers promoting their concepts and applied prototypes of how AI would work (and that's almost 80 years ago!). One of the most famous of these scientists is of course Alan Turing, who proposed the idea of creating intelligent machines, and how to test their intelligence, in his 1950 paper Computing Machinery and Intelligence.

The Good Old AI

Of course, it was too good to be true then, largely because computers were not even capable of storing commands, and the cost of constant testing was massive! It made more sense to feed the poor than to invest in the "fantasized" AI. Eventually, AI was formally accepted as an academic discipline in 1956, and more research was done on modeling the human capacity for solving problems, forming logic, and dealing with large databases of knowledge. Natural language processing (NLP) began to gain popularity around then too.

When I was introduced to ELIZA in 1998, I was kind of surprised to learn that it was actually initiated in 1964, as for my entire life prior to that I had been under the impression that such an "intelligent" system could not have worked (perhaps also due to my limited access to such things within my confined exposure to the computing world at the time). Obviously, there was a "slowdown" in AI (known as the AI Winter, due to a lack of funding as well as the failure of some major projects) between the 1980s and 1990s, causing its development to be perceived as lacking in practicality. Thanks to the efforts of scientists from various disciplines (computing, psychology, statistics, mathematics, cognitive science), AI revived its reputation by the late 1990s, as advances in neural networks, fuzzy systems, and mathematical optimization, coupled with the rise of computing infrastructure, made it possible to create "more intelligent" AI, away from the symbolic approach of "faking intelligence through certain rigid rules" (just like ELIZA and co.).

(ELIZA the NLP psychotherapist, Source: Wikimedia)

The Cooler and Trendier AI

Fast forward (like really, really fast forward) to the year 2023: we're now in an era where the internet is a necessity, and powerful computing devices are everywhere, in all kinds of forms and functions. The power and affordability of computing technology have vastly opened up a multitude of opportunities for AI to expand. Machine learning and its related fields (including NLP) became the trend of the century. With huge amounts of data being exchanged every second, it has become even more convenient to train AI. Users all over the world are connected and constantly contributing to this larger-than-life database for AI to "consume and teach itself". The more we use it, the better it gets!

It's no surprise that more and more organizations are telling you "data is the new oil". Big data opens more doors to expand the use of AI in various industries, from the simplest cases of the search engine (yes, AI is doing its trick to let you see what you want to see in those search results) and your feeds on various social media platforms, to more advanced use cases like autonomous vehicles and cyberweapons.

As machine learning became more sophisticated, thanks to access to large amounts of data and algorithmic improvements, more systems and frameworks began to be introduced by the late 2010s. One example is the generative adversarial network (GAN), developed by Ian Goodfellow and his team in 2014. The GAN was truly game-changing, and it helped usher in the wave of AI-generated media (generative AI) algorithms and tools, from images to music, that gave us the likes of DeepDream, DALL-E, and Stable Diffusion (the latter two built on related generative techniques, such as diffusion models, rather than GANs themselves).

Then we have what is known as “the transformer”, which was first introduced by Google in 2017.

It’s a new neural network architecture that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. (Uszkoreit, 2017).

In other words, a transformer is capable of weighing the relationships between all the words in a sentence and deciding "what to respond" without having to process one word at a time. You can understand why Google was the one to introduce this: it has access to a super-large corpus of language data, in many languages (hint: Google Translate)!
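To make that idea concrete, here is a minimal sketch of the scaled dot-product self-attention at the heart of the transformer, written in plain NumPy. This is purely illustrative, not Google's actual implementation: the toy "sentence", the dimensions, and the random weight matrices are all made-up assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the same input into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every word scores its relationship with every other word at once.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = softmax(scores, axis=-1)  # how much each word "attends" to the others
    return weights @ V                  # a weighted blend of the whole sentence

# Toy "sentence" of 4 words, each embedded as an 8-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one context-aware vector per word
```

Notice that the attention weights for all word pairs come out of a single matrix multiplication, which is exactly why a transformer doesn't have to crawl through the sentence one word at a time.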

But it was OpenAI (initially a non-profit organisation set up to promote friendly AI that benefits all of humanity) that introduced the generative pre-trained transformer (GPT) language model in 2018, showing promising results in how a model can acquire world knowledge and process long-range dependencies by pre-training on a large corpus. The GPT-2 language model was then introduced in 2019, and it began to trigger the interest of developers and researchers; websites and tools promoting "write with transformers" began to mushroom. Then the current latest version, GPT-3, was released in May 2020, trained with some 175 billion parameters to create a truly large language model (LLM). Its fully trained model was not accessible to the public at first, on the grounds of possible abuse, though developers could still use it through an application programming interface (API). Ironically, however, OpenAI then licensed GPT-3 exclusively to Microsoft: a rapid jump from "non-profit" to "for-profit", thanks to a massive injection of funds by Microsoft.
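For a sense of what that API access looked like, here is a rough sketch using OpenAI's Python client as it worked in the GPT-3 era; the prompt, the key placeholder, and the parameter values are illustrative assumptions, not a recommended configuration.

```python
import openai  # the official OpenAI Python client: pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key comes from your OpenAI account

# Completion-style call as exposed in the GPT-3 era; parameters are illustrative.
response = openai.Completion.create(
    engine="davinci",  # one of the GPT-3 engines offered through the API
    prompt="Explain long-range dependencies in language in one sentence.",
    max_tokens=60,     # cap on the length of the generated continuation
    temperature=0.7,   # higher values make the output more varied
)

print(response.choices[0].text.strip())
```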

(Source: MIT Technology Review)

And then ChatGPT takes all the spotlight…

(ChatGPT Interface — Source: Wikimedia)

With the advancement of GPT, OpenAI decided to introduce a "conversational interface" for the transformer, known as ChatGPT, in November 2022, allowing even kids to experience its capability of understanding and responding in natural language. It proved to be a success, reaching 1 million users within 5 days of launch, and it has now become the talk of the town, from concerns over its abuse to its potential to be embedded with other generative AI models.

Sadly, ChatGPT has become the "spokesperson" of AI, with a reputation as the "most intelligent" one, blamed for threatening humanity with everything from ethical issues to security risks. Those who have followed AI in the field will have seen its gradual growth over close to 80 years, but to common users trying ChatGPT for the first time, it's practically "magical". True enough, OpenAI still upholds its mission of creating "friendly AI for the benefit of humanity", and ChatGPT is a good example of a "friendly AI".

But let's not forget that before ChatGPT, we already had many AI tools benefitting us, even though many people didn't realise they were powered by AI, from the likes of Siri, Alexa, Google Translate, Google Lens, and WolframAlpha to even the humble grammar checkers.

The "danger" now is that many people assume ChatGPT is "THE ONE" for AI. It's definitely not. It's essentially one part of the larger and more powerful field of AI. It's kind of disappointing to hear people "downgrading" AI to ChatGPT or other generative AI tools. But on the bright side, it has started to get people more interested in AI. Hopefully, we will be able to motivate the younger ones to venture into this field (and its subfields), improving various aspects of AI rather than just being end-users. That's surely more exciting!
