In just a few short days, OpenAI’s ChatGPT will turn one year old. Still, even though it has been around for almost a year, few people truly know what it is. Thus, the question arises, “What is ChatGPT?” In fact, if you were to ask a random denizen on the street if they knew what GPT stood for, I’d wager that few actually would (it’s Generative Pre-trained Transformer, by the way 😉). Many people look at it as nothing more than a glorified chatbot. It has spawned a few notable competitors, such as Microsoft’s Copilot (formerly Bing Chat), Google’s Bard, Snapchat’s My AI, and Anthropic’s Claude, but OpenAI’s ChatGPT stands alone at the top. I initially thought it would hold this position only for a while, thanks to its first-mover advantage in the generative AI field, but, much to my surprise, it has maintained its lead.
Whether we like it or not, generative AI is here to stay, so it’s worth being educated on what it is. Already, we see tie-ins in all major sectors of our culture and economy. Stock investing strategies are changing. AI art is winning awards. We’ve had AI songs go viral, software suites such as Adobe Photoshop are integrating AI image generation and manipulation, and software developers are now getting their code to write itself. Some would argue that this is speeding up workflows and supercharging creativity. We spend less time on boilerplate methods and actions because AI handles that. It allows humans to focus on the “meatier” questions and problems. Our learning is streamlined because AI can skim and summarize. Our understanding is elevated because AI can compile and correlate patterns and documentation when sifting through data. It’s an excellent tool, they say.
The opposing side would say it is dumbing down our society. Our kids aren’t writing papers anymore in class, learning how to format thoughts and sentences. Our artists aren’t creating art, our musicians aren’t playing instruments. Every medium is losing an aspect of human thought and creativity because AI is acting as a fill-in. What happens to muscles that aren’t used? They atrophy. The same can be said for the muscles of the mind. Are we headed toward a world of artificial, dystopian cookie-cutter cultural grayness? Is ChatGPT the death knell of human thought and creativity, or the trumpet of proclamation heralding a glorious new age of enlightenment and achievement?
How does generative AI work?
Imagine generative AI as a highly skilled artist who has seen millions of paintings, drawings, and pictures. This artist has learned how to create new, unique artwork by understanding the patterns, styles, and techniques used in all those images.
When you ask the AI to create an image or write text, it’s like asking this artist to paint a picture or write a story based on everything they’ve learned. The AI combines its vast knowledge to produce something new that fits your request. It’s not simply copying what it’s seen before; instead, it’s using its understanding of different elements like colors, shapes, language, and style to make something original.
In technical terms, this process involves complex algorithms and neural networks, which are like the artist’s “brain cells,” constantly learning and adapting. When given a prompt, these networks process the information, draw upon the learned patterns and knowledge, and generate an output that matches the request. This is how generative AI can create images, write texts, or even compose music – by learning from a vast amount of existing data and using that to create something new and unique.
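To make the “learn patterns, then generate something new” idea concrete, here is a deliberately tiny sketch in Python. This is not how GPT models actually work internally (they use neural networks with billions of parameters, not word-pair counts), but it shows the same basic principle: count which words tend to follow which in training text, then produce fresh text by sampling likely continuations.

```python
import random

def train_bigrams(corpus):
    """Learn the 'patterns': record which word follows which in the training text."""
    model = {}
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate new text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no learned continuation for this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat saw the dog"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output recombines pieces of the training text into sequences that may never have appeared verbatim, which is the toy-scale version of “not simply copying, but producing something new from learned patterns.”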
Over time, through new datasets, techniques, and training runs, AI researchers and enthusiasts hope to nurture an artificial intelligence that can learn from itself. In essence, they want the “brain cells” to function as a human’s would. For illustration, when a child touches a hot stove, he learns not to touch it again. The end goal would be to tether the creation to a limitless supply of information, constantly modified and updated by real-world inputs and parameters, and then allow the intelligence to take in that information, make a decision, observe the output, and grade it as better or worse. This would be done trillions of times, over and over, and, with each decision, the AI would grow smarter as new information was provided to it. As it currently stands, models such as ChatGPT are trained on static datasets. OpenAI’s GPT-4 model, which runs the premium version of ChatGPT, reportedly has ~1.8 trillion parameters across 120 layers, making it over ten times larger than its predecessor, GPT-3. Those 1.8 trillion parameters were reportedly trained on some 13 trillion tokens of text and code data.
SUPPOSEDLY this is a static dataset where no new information is present. However, we have already begun to see these generative AIs being given access to the web, social media feeds, and other sources of new data. I’m not a conspiracy theorist when I say there is a reason that Google wants you to back up your photos so badly in Google Photos. There’s a reason it’s able to identify people’s faces across multiple albums. They won’t tell you this outright, but, if I were a betting man, I’d wager that its AI has been training on that data. We already see the writing on the wall regardless. In early July 2023, Google updated its privacy policy to retain anything posted publicly by users, so it can train its AI models for products such as Bard. The changes went into effect immediately on July 1, 2023, and they didn’t cover only content from that date forward. ALL previous “public” information was fair game as well. Does that mean those YouTube searches from 2013? Yep, they got those too. Nothing digitized is sacred anymore. Here’s the official Privacy Policy update verbiage:
If Google’s doing it, the rest of those in the generative AI race are doing it as well. Don’t be fooled. The end game is having an AI achieve AGI.
AGI (Artificial General Intelligence)
This is the other buzzword being thrown around that is important to note. Artificial General Intelligence is the point where an AI matches or surpasses a human’s ability to cognitively process things, the stepping stone to true AI superintelligence. This means the AI could outperform humans at most tasks. It’s autonomous, cumulative learning on a scale that has never been seen before.
Recently, waves have been made at OpenAI following the firing and rehiring of its CEO, Sam Altman. Rumor has it that an AI superintelligence model under development, code-named Q* (pronounced “Q-Star”), is the real reason he was initially dismissed from his position. This model could be a major leap forward in generative AI, radically improving AI’s ability to reason.
You see, current generative AI models can only create responses based on information they have previously learned (remember the artist analogy?). AGI is an autonomous system that applies reasoning to each decision it makes, granting it human-level problem-solving capabilities. Pair a trait like that with cumulative learning and you get a machine able to improve itself at an exponential rate. Pretty scary stuff. In fact, the rumor is that Q* was able to outperform grade-school students at solving math problems. If true, this means that the reasoning skills and cognitive capability of this model far surpass any tech currently available to consumers.
What does it have to do with ChatGPT?
ChatGPT is the gateway, folks. This is how we are being conditioned to the idea of having artificial intelligence ingrained in our lives. Never before has it been involved at this level. In its proper place, I don’t think the use of AI is a problem. As I said earlier, leveraging computer systems to detect patterns and anomalies, especially in research, medical, or engineering environments, could be extremely helpful and useful. However, attempting to use AI to replace what makes humanity human is dangerous. It can never fully be done, but the Frankenstein’s monster developed in the attempt would be evil. Humans have souls. Machines never will. Humans have a sense of morality given to them by the Creator God. Machines never will. In attempting to build this, we will only bring destruction upon ourselves, as those at the Tower of Babel did thousands of years ago.
The damage isn’t only to us, but to our children as well. Our children need to know how to write and do arithmetic. Before I ever typed a paper with a spell-checker, I had to hand-write papers using proper grammar and sentence structure. Before I ever used a calculator, I had to know how to do addition, subtraction, multiplication, and division by hand. Kids today are becoming dumber because they do none of these things anymore. In fact, a child was bragging to me a few weeks ago about how they hadn’t written a single paper for class in a year thanks to their Snapchat My AI. What are we doing to the next generation?
Our music needs to come from the minds and hands of humans. Our literature needs to be written, typed, and penned by men and women. Feeling and emotion come from the soul that God has given us. ChatGPT and AI will NEVER be able to replicate that. To chase that goal will only lead to folly. ChatGPT is a great tool when used properly, but the risks are worth noting and watching out for. Indeed, the topic most definitely deserves its own article, but I could see this new wave of AI and AGI ushering in some of the events talked about in the Book of Revelation. Stay on guard and stand strong.