Dr. Bashayer Al-Majed
Several months after artificial intelligence research lab OpenAI launched its ChatGPT chatbot, built on the GPT-3.5 model, and its earlier InstructGPT, Google has unveiled its own AI program and interface, called Bard.
These systems use neural-network-style AI, based on the idea of a mammalian brain: lots of small “cells” that individually process data but form part of a whole, interactive network. They use natural language processing to better understand human speech, so as to more accurately comprehend a user’s question or request, and then return data in a form recognizable as human speech or prose.
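The “small cell” idea can be illustrated with a minimal sketch: each artificial neuron weighs its inputs, sums them and fires through an activation function, and networks chain many of these units together. The weights and values below are purely illustrative, not taken from any real system.

```python
def neuron(inputs, weights, bias):
    """One artificial 'cell': a weighted sum of inputs passed through
    a simple threshold activation (fire if the total is positive)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# Tiny example: two inputs, hand-picked weights (illustrative only).
# 1.0*0.6 + 0.5*(-0.4) + 0.1 = 0.5 > 0, so the cell "fires".
print(neuron([1.0, 0.5], [0.6, -0.4], 0.1))  # prints 1.0
```

Real language models stack millions of such units and learn the weights from data, but the principle, many simple cells cooperating in a network, is the same.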
Therein arises the controversial question of plagiarism. Students can input an essay question and the bot will return a completed essay on the subject. Similarly, the technology could be used to complete job applications or write a Shakespearean sonnet.
But can the program understand a question well enough and truly assemble an answer that has enough depth and is in a natural enough “human” language to pass as, for example, the work of a student? From what I have seen, the responses returned by the chatbot are impressive but do not yet have enough depth or clarity. Perhaps a teacher unfamiliar with their student, and with a lot of essays to mark in a hurry, might not notice. But in all likelihood, ChatGPT is not quite there yet.
However, Microsoft clearly believes it will eventually get there and that it is the future, given that it has just invested a reported $10 billion in OpenAI, in a deal said to value the lab at about $29 billion, and is working with the company to upgrade all of its platforms, including its search engine, Bing.
Google’s Bard, meanwhile, suffered a setback last week when, in a video demo, it responded with incorrect information to a question it was asked. This highlights a major flaw in current AI: The information it shares is only as good as the information it is able to access.
In Google’s case, the information comes from whatever web pages its search engine can find. There are a lot of highly accurate pages out there but also a lot of fallacious information, whether unintentionally erroneous or deliberately deceptive.
A human might be able to tell (though sometimes might not) which sources are reliable and which are spoofs or merely inaccurate. A computer program might not have the information required to make that determination, however, and would therefore fail to screen out false information.
Information can also contain biases, including historic omissions. Consider, for example, an AI program assessing job candidates to select one for the role of CEO of an IT company. If we assume no Arab woman has previously held the role of CEO of an IT company, then the AI might reach the conclusion that Arab women would not be successful in the job because, for example, they do not feature on a list of the top 10 most successful people to hold such a role.
The error made by Google Bard this month was not catastrophic; it is still undergoing rigorous testing and will improve over time as information is updated, errors are identified and it “learns.” But it serves as an important reminder to all of us that computers are not infallible.
The situation is akin to listening to an overconfident person telling you about “facts” they are convinced are right, when they are not. You might end up believing them simply because they sound so sure of themselves, and perhaps even doubt yourself if you question their information. Many people still believe computers cannot possibly be wrong — but they are only as accurate as the information that is fed into them.
This particular AI technology is mostly concerned with language: analyzing a question asked by the user, finding all the loosely relevant information, collating and filtering it to be more directly specific to the question asked, then summarizing that information in what appears to be natural language.
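The pipeline described above, retrieve loosely relevant material, filter it against the question, then produce an answer, can be sketched in miniature. Everything here is a toy assumption: the corpus, the word-overlap "retrieval" and the join-the-documents "summary" stand in for the far more sophisticated components of a real system.

```python
import re

# Illustrative stand-in for the web pages a search engine might find.
CORPUS = [
    "Bard is Google's conversational AI interface.",
    "Bing is Microsoft's search engine.",
    "Neural networks process data in many small interconnected units.",
]

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, corpus):
    """Keep documents sharing at least one word with the question."""
    q_words = tokens(question)
    return [doc for doc in corpus if q_words & tokens(doc)]

def summarize(docs):
    """Toy 'summary': join the filtered documents into one answer."""
    return " ".join(docs) if docs else "No relevant information found."

answer = summarize(retrieve("What is Bard?", CORPUS))
```

Note that if the corpus contains a false statement, this pipeline will repeat it verbatim, which is precisely the garbage-in, garbage-out problem the column describes.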
At the end of the day, the quality of the information it outputs depends on how good its search engine is. However realistically Microsoft’s new AI-powered bot can speak, if the quality of the information it gathers comes down to a direct comparison between Bing and Google then, regardless of Google’s error in its demo video, there will be no competition unless Bing massively upgrades the ways in which it finds information. Google is simply a much more effective search engine than Bing. There is a reason why more people “Google” what they want to know, rather than “Bing” it.
It is difficult to predict which platform will lead the way into the future of AI. Sometimes a single innovator truly claims that pioneering mantle in an industry; look at Xerox, Hoover or Google itself, for example, all of which are brand names that became verbs for the very function of their main products.
But technology can evolve and change in ways and directions that none but the most tech-savvy among us might imagine, and even major brands, technology and activities can quickly become obsolete. Take, for example, pioneering brands such as Kodak, Nokia and AOL, which were once commonplace in many homes, yet few Gen Z adults would recognize them now. And in the 1960s, a computer took up an entire room; now you can wear one that is much more powerful on your wrist.
Regardless of which company ultimately prevails and enjoys greater success, AI is here to stay and it is clear it will revolutionize everyday technology, the ways in which we interact with it, and the ways in which we interact with one another. However, privacy and intellectual property laws will still need to be addressed.