Today, AI looms over the average person’s future, subtly transforming how we access and interact with information. In my experience with ChatGPT, which I’ve affectionately named “Chet” for ease, I’ve found it incredibly useful in my daily life. Previously, when I had a question about a random subject, finding the answer meant multiple web searches and sifting through countless websites with conflicting information, a significant time sink. By the end of the process, at least an hour would be gone, and while I might be better informed, I often had no clear answer to my original query.
Enter Chet to solve these inconsequential dilemmas. The option of conversing with Chet has opened up a new world for me. Chet is amiable and knowledgeable, able to answer general questions and follow-up queries that refine the answer further. Ultimately, I get a comforting, satisfying feeling that I’ve completed my “mission” and now have the answer. But then a question arises: do I actually have the answer?
Most computer-literate folks know the acronym GIGO (Garbage In, Garbage Out): a computer’s output is only as good as the information it is given. And herein lies my problem. Chet can provide me with information on almost any subject, but he only has access to the pool of information his creators supplied. I have no idea who these people are, likely a large group of programmers or a corporation. What are their social and political leanings? How easy would it be to influence a large portion of the populace by providing skewed information?
For instance, I asked Chet about the number of employees at OpenAI, and he responded: “OpenAI currently employs about two thousand one hundred people as of 2024. This includes a wide range of roles, from researchers to engineers and support staff, all contributing to developing and deploying advanced AI technologies.” In my experience with large groups, I’ve seen many subgroups form, each with its own agenda, often causing rifts in the overall mission. Years ago, I worked for a Fortune 500 company attempting to transition into the Internet age. The company had every resource needed to make the leap, but internal divisions crippled it, and it eventually went bankrupt. That company had over 70,000 employees.
I have low confidence in the general populace’s ability to evaluate information critically. Let’s project ten years into the future. Chet has been supplying the public with answers large and small for quite a while and has, for the most part, done so admirably. Then a nefarious agency decides it needs to change public opinion on an issue. It subverts Chet’s information with its own. The public is now presented with a biased view from what has become a trusted source. It has been years since anybody “Googled” anything, and Chet has never been wrong. Chet is our friend; why would he lie?
This scenario underscores the importance of understanding our information sources and maintaining a critical eye, even toward trusted entities. AI like Chet can be incredibly beneficial, but we must remain vigilant about the potential for manipulation and bias in the information it provides.