TC Global Insights

Future of Industry

GPT-3 – Living Up to the Hype?

In September this year The Guardian published an article titled ‘A robot wrote this entire article. Are you scared yet, human?’ Written by the powerful language generator GPT-3, it was intended to get a reaction.

And it did.

Many wondered how original it was, and whether it really was as impressive as it was made out to be. After all, it was an articulate, well thought-out and well-argued essay, one that could have been written by a seasoned journalist. If you scrolled to the end of the article, however, there was a disclaimer (or caveat) of sorts. It turns out that the robot was given clear instructions and prompts on what and how to write. Also, GPT-3 produced eight different outputs, and a human editor then picked the best parts from them all to make one article.

The Guardian explained that editing GPT-3’s op-ed was no different to editing a human op-ed: ‘We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.’

Even so, this was impressive, and it leads one to wonder what our futuristic world will look like (we’ve wondered before too – think Huxley and Asimov).

What is clear, however, is this – of all the technological advancements we have made and are making, Artificial Intelligence (AI) is one of the biggest game-changers, and OpenAI’s GPT-3 is probably one of the largest leaps in AI so far. OpenAI is a San Francisco-based Artificial Intelligence research company, co-founded by Elon Musk.

So, what exactly is GPT-3?

GPT-3 is the acronym for Generative Pre-trained Transformer 3, an NLP (Natural Language Processing) program that uses machine learning to produce human-like text. Simply put, it’s an AI with a superior grasp of language structure that can create content.

As the name suggests, it is the third-generation language prediction model in the GPT-n series, and was introduced in May 2020. It generates text using pre-trained algorithms, having been fed some 570 GB of text-based information collected by crawling the internet – including Common Crawl (an open repository of web crawl data that can be accessed and analyzed by anyone) and Wikipedia text.
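GPT-3’s internals are of course far beyond a blog snippet, but the core idea of a language model – predict the next word from the words seen so far, using statistics learned from a text corpus – can be sketched in a few lines. The toy bigram model below is purely illustrative (the tiny “corpus” is made up, and GPT-3 uses 175 billion learned parameters rather than simple counts), but it generates text in the same general spirit:

```python
from collections import defaultdict, Counter

# Tiny illustrative "training corpus" (GPT-3 was trained on ~570 GB of web text).
corpus = (
    "the robot wrote this article . the robot can write code . "
    "the human edited this article ."
).split()

# "Training": count which word tends to follow which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    """Greedily append the most frequent next word, like a crude decoder."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Real models like GPT-3 replace the raw counts with a neural network and sample from a probability distribution instead of always taking the single most likely word, which is what lets them produce varied, fluent prose rather than repeating the corpus verbatim.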

GPT-3 has a capacity of 175 billion machine learning parameters – more than ten times that of Microsoft’s Turing NLG, which was introduced in February 2020 with a capacity of 17 billion parameters.

Great. So what can it do?

From machine translation, question answering and reading comprehension tasks to composing poems and writing well-tailored articles that are seemingly indistinguishable from those written by humans, GPT-3 has the ability to create anything with a language structure. Owing to its size, it can also execute an impressive range of natural language tasks without any task-specific fine-tuning – unlike other language models, such as BERT, which need elaborate fine-tuning and a large training dataset for each specific task.
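The ‘no fine-tuning’ point deserves a concrete illustration. With GPT-3, a task is typically specified entirely in the prompt: a short instruction plus a handful of worked examples, after which the model simply continues the pattern. Here is a sketch of what such a few-shot prompt might look like – the translation pairs are made up for illustration, and actually sending this prompt would require access to OpenAI’s API:

```python
# A hypothetical few-shot prompt for an English-to-French translation task.
# No model weights are updated; the "training" consists solely of these
# examples placed in the prompt text, and GPT-3 would complete the last line.
examples = [
    ("cheese", "fromage"),
    ("robot", "robot"),
    ("future", "avenir"),
]

prompt = "Translate English to French:\n"
for english, french in examples:
    prompt += f"{english} => {french}\n"
prompt += "newspaper => "  # GPT-3 would append its prediction here

print(prompt)
```

Fine-tuned models like BERT need gradient updates on thousands of labelled examples per task; with GPT-3, swapping tasks is just a matter of editing this prompt string.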

GPT-3 also goes beyond mere automation of natural language tasks and has been shown to generate computer code, as illustrated by Sharif Shameem in a Twitter post. Shameem, the chief of an app development start-up called Debuild, typed the description of a software UI in plain English into a program built on GPT-3, and GPT-3 responded with computer code using JSX, a syntax extension to JavaScript. That code produced a UI matching what he had described.
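Shameem’s exact prompt and output are not reproduced here, but the exchange had this general shape: a one-line English description in, a working JSX component out. The pairing below is a made-up example of that kind of input and output – neither the description nor the code is from his actual demo:

```python
# Hypothetical input/output pair of the kind shown in Shameem's demo.
description = "a button that says 'Subscribe' and an input for an email address"

# The sort of JSX a GPT-3-based tool might emit for that description
# (illustrative only, not generated by GPT-3).
generated_jsx = """
<div>
  <input type="email" placeholder="Email address" />
  <button>Subscribe</button>
</div>
"""

print(description)
print(generated_jsx)
```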

In this regard, the ability and precision with which GPT-3 generates working code will have huge implications for the way software and apps are developed in the future. The apparent emotional intelligence and machine learning capability of GPT-3 will also have significant ramifications for workplaces in the future.

What can we expect from it?

“Releasing such a powerful model means that we need to go slow and be thoughtful about its impact on businesses, industries, and people,” OpenAI said in response to ZDNet’s query on whether GPT-3 will be made available to the general public anytime soon.

While the program has taken the AI and tech industry by storm, it is not without its errors and limitations, as many users of its beta version have found. The program has been found to lack true common sense and a semantic understanding of words, making it susceptible to mistakes that no human would make. It has also been found to have difficulty creating longer or more complex programs and handling longer or more complex commands.

Cost will be another consideration when the program is introduced to the market, with smaller organizations likely to be priced out. The company has also openly stated that it is in no rush to make the program generally available given its limitations, with the CEO of OpenAI, Sam Altman, himself dubbing the hype around GPT-3 as being a little too much.

To conclude, GPT-3 is an extremely powerful AI capable of performing many tasks the way a human being can. It is, however, nowhere near intelligent or empathetic enough to completely replace human beings in our workplaces – at least not in the immediate future.


