For the past two months, I've been experimenting with OpenAI's ChatGPT. I signed up for a monthly subscription to access ChatGPT-4 and continue my experiments. This experimentation was a natural extension of what I'd been doing with text generation using the available models on Hugging Face.
I wrote a bunch of Python scripts, manually entered prompts, and generated output. At first, my results were poor, interesting but poor. Then I got distracted with work and kept the text generation experiments in the background until ChatGPT-3.5 came out.
I was floored when I started playing with ChatGPT-3.5. The newer version of ChatGPT was a vast improvement and I began to see how this technology could make a huge impact on our daily lives, for better or for worse.
To make use of the newer ChatGPT, I learned about prompt engineering. I started using it as a writing assistant, having it generate titles and descriptions for my essays. I even had it write full 400-to-800-word articles. I use it to write inspirational quotes and post them to my LinkedIn profile every Monday.
It's a wonderful technology that has wormed its way into my daily tasks. But what the heck is it? What is the GPT part of ChatGPT? How did it get so "smart?"
GPT is an abbreviation that stands for Generative Pre-trained Transformer, a complex neural network whose behavior is encoded in the weights of its structure. GPTs have proven to be good at modeling language and text, and the generative part of the name means the large language model (LLM) can use what it learned to generate text that reads like a human wrote it.
The goal researchers are trying to reach with LLMs is artificial general intelligence (AGI). Wikipedia defines AGI as a system that can learn to accomplish any intellectual task that human beings or other animals can perform.
In a nutshell, researchers want to create true artificial intelligence, not just a bunch of algorithms to optimize your revenues or reduce your churn rate. Achieving AGI would be amazing but would alter the fabric of society and the environment. We might be on the precipice of a major societal change if GPT technologies are deployed into the mainstream.
Will ChatGPT take your job?
This is the question and fear for the day. Will ChatGPT and the flood of LLMs behind it take your job? The answer is, it depends on your job.
OpenAI released a research report suggesting that several white-collar jobs are in danger of being made redundant by ChatGPT: programming, software engineering, paralegals, legal assistants, market researchers, and many more. Ironically, the jobs not affected by ChatGPT will be the ones that require physical labor, like food preparation or plumbing.
I've used ChatGPT-4 to write generic code and it's been fantastic. I asked it to write me a Go web server that displays a chart created in Python. In seconds I had working code ready to go.
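I can't reproduce ChatGPT's exact Go-and-Python output here, but here's a rough sketch of the kind of thing it produced, reimagined entirely in Python's standard library (the chart data and port are made up for illustration): a tiny web server that renders a bar chart as inline SVG.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def bar_chart_svg(values, width=400, height=200):
    """Build a simple SVG bar chart from a list of numbers."""
    bar_w = width // len(values)
    peak = max(values)
    bars = []
    for i, v in enumerate(values):
        h = int(height * v / peak)
        bars.append(
            f'<rect x="{i * bar_w}" y="{height - h}" '
            f'width="{bar_w - 2}" height="{h}" fill="steelblue"/>'
        )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{"".join(bars)}</svg>')

class ChartHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the chart for any path; a real server would route properly.
        body = bar_chart_svg([3, 7, 2, 9, 5]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "image/svg+xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    # Blocks forever; visit http://localhost:8000/ to see the chart.
    HTTPServer(("", 8000), ChartHandler).serve_forever()
```

Call `main()` to start the server. It's boilerplate like this, the plumbing around a simple idea, that GPT generates almost instantly.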
My experience with ChatGPT-4 tells me that a lot of routine jobs will disappear quickly once these models are deployed. Wendy's wants to replace their order takers at the drive-thru with a Google-powered AI chatbot.
The potential benefits and risks of GPT are vast, and it is important to be aware of both before using this technology.
Among my creative friends, there is a fear that they'll be replaced as well. My photographer friends are panicking because GPT is being applied to image generation. Soon their clients will just enter a prompt of an image they need and out pops the digital artifact.
Many of my writer friends are watching the current Writers Guild strike. Why? Because the Alliance of Motion Picture and Television Producers wanted to be able to replace them with GPT software to write movies and TV episodes.
Companies are placing huge bets on GPT-related software to transform their businesses and lower labor costs; after all, it's their job to maximize shareholder wealth.
The bigger questions are: will this bet pay off, and what are the longer-term ramifications of replacing all these workers in the marketplace?
GPTs will make some things better
I would be remiss not to mention the positive sides of GPT. It does amazing things and I use it as an assistant of sorts for my daily writing tasks. I don't use GPT to replace my writing because what I create is unique. Not because I'm a special snowflake but because what I write about can't be easily automated. I write about my experiences, something an LLM can't ever mimic.
There's no doubt that GPTs will make some industries better. They will free workers from mundane tasks and let them focus on harder, more important work. For example, if you ask GPT to generate a REST server in Java, it will do it in seconds flat. That REST server will work wonderfully out of the box and you can put it to work.
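The point holds in any language, not just Java. Here's a hedged sketch of the sort of generic REST endpoint GPT happily spits out, in Python's standard library (the /tasks route and the in-memory data are invented for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory "database" standing in for real storage.
TASKS = [{"id": 1, "title": "write report"}, {"id": 2, "title": "review PR"}]

def list_tasks_json():
    """Serialize the task list the way a GET /tasks endpoint would."""
    return json.dumps({"tasks": TASKS})

class TaskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/tasks":
            body = list_tasks_json().encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def run(port=8080):
    # Blocks forever; GET http://localhost:8080/tasks returns the JSON.
    HTTPServer(("", port), TaskHandler).serve_forever()
```

Nothing here is clever, and that's exactly why a model trained on millions of such servers can generate one on demand.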
Automatically writing code will speed up an organization's backend work and its go-to-market timelines, provided the work is routine. The application of this technology will boost productivity across all industries. Every Fortune 1000 company needs a binary classification model. Every startup needs DevOps code. Every organization needs a ton of generic code for its generic day-to-day functions.
GPT is a tool that can be used for good or evil. It is up to us to decide how we will use it.
This is why GPTs will be a huge boon. I'm talking to people in industries that run the gamut from manufacturing to healthcare to finance.
Every one of these companies wants to use GPT to summarize their vast corpora of information. They want to use GPT to do initial triage of symptoms. They want GPT to write proposals faster. They want GPT to help their customers find answers to their questions faster.
There's no doubt in my mind that GPTs will make life better for all of us, but I wonder if it's too good to be true. What are the dark sides of GPT? What are the debts we incur without even being aware of them?
The climate crisis and GPU usage
One of the big issues I heard bandied about with respect to Bitcoin mining was the amount of GPU processing it needed to keep mining Bitcoins and how that translated to excessive energy usage. Excessive energy usage generates greenhouse emissions that make our climate crisis worse.
Granted, crypto-assets see some energy savings when switching from a proof-of-work to a proof-of-stake system, but they still use a lot of energy to accomplish their "mining." So much so that the Biden administration released a memo in September of 2022 with some staggering findings.
Crypto-assets can require considerable amounts of electricity usage, which can result in greenhouse gas emissions, as well as additional pollution, noise, and other local impacts to communities living near mining facilities. - via Climate and Energy Implications of Crypto-Assets in the United States Fact Sheet
As of August 2022, published estimates of the total global electricity usage for crypto-assets are between 120 and 240 billion kilowatt-hours per year, a range that exceeds the total annual electricity usage of many individual countries, such as Argentina or Australia. (Emphasis mine)
It's staggering the amount of energy and resources these crypto-assets consume and I can't help but wonder how much energy AI uses. How many kilowatt hours of energy are consumed daily when training AI models and large language models (LLMs)?
What we can do is estimate training times and convert them to energy consumption, much like one author did with respect to NVIDIA's MegatronLM. Based on his analysis, it took 512 V100 GPUs running for 9 days to train the MegatronLM model, a competitor of GPT-3 (not GPT-4).
Nine days with an estimated consumption of 27,649 kilowatt hours (kWh). That's the equivalent of the annual energy usage of three average US households.
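Those numbers hang together if you assume roughly 250 watts of average draw per V100 and about 10,000 kWh of annual usage per US household (both are back-of-the-envelope assumptions, not measurements; real draw depends on utilization and cooling overhead):

```python
# Back-of-the-envelope check on the MegatronLM training-energy estimate.
gpus = 512
days = 9
kw_per_gpu = 0.25  # assumed ~250 W average draw per V100

gpu_hours = gpus * days * 24      # 110,592 GPU-hours
kwh = gpu_hours * kw_per_gpu      # ~27,648 kWh, matching the cited figure

# At ~10,000 kWh per US household per year, that's roughly
# three households' worth of annual electricity for one training run.
households = kwh / 10_000
print(round(kwh), round(households, 1))  # 27648 2.8
```

One run, one model, three households' worth of power. The cited 27,649 kWh figure is consistent with these assumptions to within a rounding error.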
That's a huge consumption rate for one model. Now imagine every large company on the planet building its own LLMs. Imagine all the GPUs used to do that, in addition to the growing crypto-asset mining. Energy consumption will skyrocket in the coming years as we race towards extinction.
We're replacing well-paying white-collar jobs and burning a lot of energy in the process. Have we thought through the consequences of our actions? How does that make sense for our world and future society?
This is madness.
The great filter
The Great Filter is a term used in the search for extraterrestrial life. The Fermi paradox observes that our universe should be teeming with life, even filled with advanced civilizations. Yet, we haven't seen any evidence of life beyond our planet or any advanced civilizations.
One explanation is that these civilizations destroyed themselves, and that destruction is referred to as the Great Filter. Are we about to "Great Filter" ourselves with GPT? Some prominent AI researchers aren't sure, but they are sounding the alarm.
The alarm grew louder when Geoffrey Hinton, the godfather of neural networks, left Google and issued a dire warning:
Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
This is my greatest concern. The misuse of this technology while the world burns.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Some people will argue with me that we humans will make the right choice but I call bullshit on that. We can't even protect our children in school with sensible gun laws, how would we even protect our society from malicious GPT videos, images, and words?
This is what's frightening. Imagine a GPT-generated video of a world leader saying they're going to launch nuclear missiles at some country. We could end up in a nuclear war or cause massive panic. Either way, people will get killed.
I like to believe in the goodness of mankind but my experience has taught me to expect disappointment more than satisfaction.
Technology and society
I'm a technologist at heart and I love to tinker and make things. I'm passionate about figuring things out and learning new things. I believe that education and research are wonderful endeavors that we all should pursue throughout our lives.
One of the newer facets of my learning has been exploring the world of philosophy. While some people seek meaning in religion, I look at the world through the lens of a technologist and often wonder if what we create will be a boon or a detriment to us.
For example, a key point in the philosopher Heidegger's essay "The Question Concerning Technology" is that we need to be careful about how we use technology: we need to make sure it's beneficial in nature and that it doesn't control us in the long run.
Is GPT of a beneficial nature? Yes. Will it control us? Yes. Will we adjust? Probably. The biggest question we face is how we pay our debt. How do we pay the societal, environmental, and human debt this shiny new technology brings, in addition to all the technologies that came before it?
First, it's easy to overlook the accumulated debt our technology incurs. GPT and LLMs will only get bigger and better from here on out. We'll be bombarded by fake videos, articles, and news reports until we won't know whether we're living in the real world or not. Documentary filmmaker Adam Curtis called this hypernormalization.
Second, its environmental impact continues to grow. We'll need to mine more rare earth minerals and use more electricity, whether for Bitcoin or LLMs, while the earth continues to heat up. We can't hide from this any longer because this debt is physically manifesting itself. Heatwaves are scorching parts of the world earlier than ever before and scientists are forecasting global famine in the coming decade.
Third, people who lose their jobs and can't be redeployed elsewhere will need assistance to survive. Our government will need to step up here. I've always believed that technology should liberate us and free up our time to become better humans, but instead, it keeps us shackled. I always wonder why we're still working 40 hours a week when we could be working 24. Will our elected officials make the necessary changes to help us Americans? I doubt it.
We humans are up against a wall right now and we're distracting ourselves with GPT. Yes, GPT is cool and will change the way we work, but at what cost?
I have a chat group with friends, and a few weeks ago we were discussing GPT and how it'll affect software developers. Several members of the group are long-time software programmers, and the topic of GitHub's Copilot came up. I'm not a classically trained developer, I'm more a "hack" than anything else. This particular conversation brought up an interesting point about GPT-generated code and programming.
I piped in and said I love it. It gives me access to new languages and generates good code that works 99% of the time out of the box. It's helped me optimize my Python code and figure out how to merge Python with Go. For a "noob" in the development world like me, GPT-assisted code completion is a massive boon, especially in the startup world.
After much interesting insight from the group, we realized that GPT is good at optimizing and writing general, everyday useful code: REST servers, basic AI modeling scripts, and web servers. It can help create dashboards and basic web applications. It builds the structure and body of something you want to create, and that's about it.
It automates the code and tasks that should be automated. How many times do programmers and AI modelers recycle code they already wrote for a similar task? They do that to speed up development time and this GPT technology will only turbocharge things.
Where developer jobs are safe is in custom development, where clients want some unique implementation of a wild and crazy idea. GPT is akin to building an assembly-line robot to drill a 2" hole in a metal plate 24 hours a day.
What if you need to drill a 2.3" hole at an angle in a random location on that metal plate? That's where GPT can't compete.
The same can be said for writers and other creative artists. Several of my writer friends expressed concern that GPT would kill their livelihood, until they started playing with it. Will GPT kill SEO writing? Yes, it already is, and it's taking some writers' income with it.
While scary on the surface, it can be a boon for good writers. The writers who will excel in this new GPT environment are the ones who have a unique voice and who share their experiences and expertise with readers. You can't fake authenticity! Will they take a financial hit at first? Yes, but now would be a good time for all creative people to adjust their go-to-market strategies.
While there are perils and risks associated with using GPT, the promise of leveling everything up is very compelling. GPT can be a wonderful assistant if you don't rely on it too heavily. Use it for what it is, a tool to get stuff done.
If you liked this article then please share it with your friends and community. If you'd like a free weekly briefing of curated links and commentary, please consider becoming a subscriber. Thank you!