AI (Artificial Intelligence) Is More Popular, but Still Needs the Human Element

Since the AI tool ChatGPT was introduced months ago, AI has become increasingly popular. However, as AI use has grown, more and more people are recognizing that AI tools like ChatGPT can often be wrong. That doesn’t mean a tool like ChatGPT isn’t useful. But it does indicate that AI often still requires human involvement in order to produce accurate, meaningful results.

The many recent mentions of AI in the business press signal widespread interest in, and increasing use of, the technology, especially since the introduction of ChatGPT, produced by the company OpenAI. In just a single day (June 29, 2023), the Wall Street Journal ran multiple articles reflecting that heightened interest. For example, the article “US AI Export Curbs Threaten Nvidia, China” describes the current situation as “the artificial intelligence frenzy.” The same issue carried “Insurer Shows Limits of Digital Revolution,” which mentions “the mania unleashed by OpenAI’s ChatGPT,” as well as “Press Eyes Impact of Artificial Intelligence,” which reported that “Several large news and magazine publishers are discussing the formation of a coalition to address the impact of artificial intelligence on the industry.”

Yet, amid all the emphasis on AI’s increasing popularity, there is also recognition of AI’s weaknesses. This is seen, for example, in the June 9, 2023 Wall Street Journal article “Mattel Adopts ChatGPT in Experiment to Fight Hacks” by Catherine Stupp, whose subtitle reads “Toy company warns the risks of inaccuracy are still high in the generative AI tool.”

The article points out AI’s limitations, saying “The potential for inaccuracies from generative AI brings risks for companies hoping to use it for important decisions without human supervision, said Ilia Kolochenko, chief architect at cybersecurity company Immuniweb.” He went on to say, “If we give AI too much freedom, it will probably cause a lot of trouble.”

As someone who spent my early career working as that era’s version of what today is called a data scientist, I’ll elaborate on this. AI is essentially a very sophisticated and data-intensive approach to data analysis, and thus it follows a principle that applies to all data analysis.

It doesn’t matter whether you are doing AI, conducting a market research survey, analyzing an Excel spreadsheet, running randomized controlled trials, evaluating anecdotal data from a relatively small number of observations, or collecting and analyzing data in any other way. Data is data, and every form of data analysis follows the same basic principle: what goes in is what comes out.

In other words, if you collect data about men, the results apply strictly to men and may not apply to women, as has happened in many of the bias problems reported with AI. Likewise, if you collect data about small children, the results apply strictly to small children and may not apply to 80-year-olds. If you collect data about high-speed expressways, it may not apply to back-country gravel roads, and it may not even apply to traffic jams on the expressway. After thinking things through thoroughly, you may sometimes be willing to assume that the data applies more broadly than the population it was collected from. But it is important to recognize that making such assumptions can lead to errors.

These kinds of errors are what lead to incorrect answers from AI. Even though AI is trained on massive quantities of data, if there is not enough data about a particular subgroup, the AI will not necessarily give correct answers about that subgroup.
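To make that concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is hypothetical: the data is synthetic and the two “subgroups” (A and B) are invented for illustration. The point is simply that a model trained almost entirely on one subgroup can look accurate on that subgroup while being badly wrong about an underrepresented subgroup whose pattern differs.

```python
# Hypothetical illustration: a model trained mostly on subgroup A
# gets subgroup B wrong, because B is barely present in the training data
# and B's feature-label relationship is different (here, reversed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, slope):
    # One feature; the label depends on the feature with a
    # subgroup-specific slope, plus a little noise.
    x = rng.normal(size=(n, 1))
    y = (slope * x[:, 0] + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return x, y

# Training set: 1,000 examples from group A, only 10 from group B.
x_a, y_a = make_group(1000, slope=+1.0)
x_b, y_b = make_group(10, slope=-1.0)
model = LogisticRegression().fit(np.vstack([x_a, x_b]),
                                 np.concatenate([y_a, y_b]))

# Fresh test data from each subgroup.
x_a_test, y_a_test = make_group(500, slope=+1.0)
x_b_test, y_b_test = make_group(500, slope=-1.0)
print("accuracy on well-represented group A:", model.score(x_a_test, y_a_test))
print("accuracy on underrepresented group B:", model.score(x_b_test, y_b_test))
```

Because group B makes up only about 1% of the training data, the model fits group A’s pattern and gets group B systematically wrong, scoring worse than chance on it. Nothing in the training process itself flags this, which is exactly where a human who knows the subgroup exists has to step in and check.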

That’s why human intervention can be helpful. Humans are able to think in ways that today’s AI cannot, so they often can recognize situations where today’s AI is incorrect. The AI knows only what it can conclude from the data that was fed into it. Humans, on the other hand, often have a much broader understanding of the situation, particularly in a specialized area the human has been working in.

So, in conclusion: especially since the introduction of ChatGPT, AI has become increasingly popular. However, even though AI is more popular, and more of the general public is aware of it, it still requires human intervention. What AI comes up with on its own may not be right.
