ChatGPT: Looking Beyond the Hype
Unless you have been under a rock for the first part of 2023, you have likely seen, played with, and experienced the cultural tipping point phenomenon known as ChatGPT.
OpenAI’s ChatGPT is billed as an AI-powered chatbot that can engage in conversation about almost anything under the sun, while also generating text-based content, including emails, papers, and even lengthy stories. In the time that ChatGPT has been unleashed on the public, everything from advertising campaigns to high school research papers and even legal briefs has been written in the voice of Voltaire, Pitbull, and everything in between. ChatGPT has also tackled computer code: debugging it, then going back to check the work and correct any mistakes.
And it’s also—or should be—prompting companies to figure out now what such artificial intelligence applications should be used for, how they should be used, and, perhaps most critically, why they should be used.
But first, the obligatory primer.
At the center of ChatGPT is the generative pretrained transformer (get it—GPT!) itself. It is, at the end of the day, a language model designed to ingest vast amounts of text and then create (a.k.a. generate) new content, presented in accessible, natural language. Think of it like this: Traditional AI predicts, while generative AI creates. This GPT model isn’t just fed a ton of data; it also leverages a method called Reinforcement Learning from Human Feedback (RLHF) to continuously improve by feeding transcripts of human interactions with the chatbot back to the AI for analysis and improvement. By having humans rate a response as thumbs up or down, ChatGPT can continuously learn and adjust parameters based on the best, or worst, answers.
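To make the thumbs-up/thumbs-down loop concrete, here is a minimal sketch in Python. This is an illustration only: OpenAI’s actual RLHF pipeline trains a separate reward model and fine-tunes the language model with reinforcement learning, so the class names and the simple rating average below are hypothetical stand-ins for that far larger process.

```python
# Sketch of a human-feedback loop: humans rate responses thumbs up or down,
# and accumulated ratings steer which candidate response is preferred.
# Hypothetical names; not OpenAI's actual RLHF implementation.

from collections import defaultdict

class FeedbackStore:
    """Accumulates human ratings (+1 thumbs up, -1 thumbs down) per response."""
    def __init__(self):
        self.totals = defaultdict(int)   # response -> summed ratings
        self.counts = defaultdict(int)   # response -> number of ratings

    def rate(self, response: str, thumbs_up: bool) -> None:
        self.totals[response] += 1 if thumbs_up else -1
        self.counts[response] += 1

    def score(self, response: str) -> float:
        """Average rating; an unrated response scores 0 (no preference signal)."""
        n = self.counts[response]
        return self.totals[response] / n if n else 0.0

def pick_best(candidates, store):
    """Prefer the candidate that humans have rated most favorably."""
    return max(candidates, key=store.score)

# Two ratings in favor of a helpful answer, one against an unhelpful one.
store = FeedbackStore()
store.rate("Sure, here's a clear, step-by-step answer.", thumbs_up=True)
store.rate("Sure, here's a clear, step-by-step answer.", thumbs_up=True)
store.rate("As an AI, I cannot help with that.", thumbs_up=False)

best = pick_best(["Sure, here's a clear, step-by-step answer.",
                  "As an AI, I cannot help with that."], store)
```

In the real pipeline, these preference signals train a reward model rather than a lookup table, but the intuition is the same: human judgments, aggregated at scale, shape which answers the system learns to favor.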
But don’t assume that prediction isn’t still central to this AI model. The capacity to predict and comprehend is based on parameters. The more parameters AI has within this type of language model, the better it can perform. To give a sense of scale, the first iteration of GPT had over 100 million parameters. GPT-2, which was available in 2019, had 1.5 billion. The ChatGPT that people have asked to write breakup emails in the tone of Shakespeare is GPT-3.5 and has an estimated 175 billion parameters. Many predict that GPT-4 will have in excess of 100 trillion parameters.
So why do parameters matter? Will a parameter count just become yet another data point in the value proposition wars? Don’t take my word on the importance of parameters. Below is what ChatGPT had to say…in the voice of Jay-Z.
Whether or not you think this response sounds anything like Young Hova, it illustrates exactly what makes ChatGPT and generative AI models the step change the industry has been waiting for. ChatGPT takes a massive amount of complex data and boils it down to a conversation that anyone can follow and consume. It humanizes and uncomplicates the impossibly complex and complicated, but doesn’t veer into the overly simplistic.
ChatGPT is not the only generative AI in town, nor is text-based output the only asset or experience being generated. Images, video, and audio are also on the output menu, and they are making waves. Case in point: the momentary craze of using Lensa.ai to reimagine self-portraits and avatars. The Lensa app and similar AI image applications had a moment; according to data from Apptopia, these apps hit a peak of 4.3 million daily downloads and about $1.8 million in revenue from in-app purchases in early November. And then, just like that, the decline began, with these same apps analyzed by Apptopia seeing 952,000 combined downloads and around $507,000 in consumer spending by February.
A similar hype explosion is happening now with ChatGPT and generative AI in general. Across technology vendors, everyone from HubSpot to Salesforce to Microsoft is amplifying its own achievements in conversational and generative AI. While this cycle feels a bit like the NFTs-are-the-metaverse explosion, there is an important difference. The introduction of generative AI and tools like OpenAI’s ChatGPT and the GPT-3 model has two critical streams of impact: the use of AI to fundamentally shift how work is done, and the interest of the average person in experiencing AI-powered tools as a sidekick and partner. As we sit at this new intersection, where people are open to AI and vendors are excited to roll AI into almost every workflow, data analysis, or process, new and long overdue conversations have to start that have nothing to do with speeds, feeds, features, and functions.
It is time to have AI strategy discussions to outline strategic imperatives, guardrails, and intentional applications, especially across employee and customer experience environments. That involves asking these questions:
What? What are the processes and workflows that can and should be influenced, impacted, or automated thanks to AI? What impacts do you expect due to this new automation or influx of actionable intelligence? What will meet, exceed, or destroy customer trust or relationships? Will your customers appreciate an account update written by ChatGPT? Will they want to get a “personalized” failure-to-pay message in the voice of your CEO?
How? How will these new AI applications be trained? How have they already been trained, and with whose data? How will the model continuously improve? What data is being brought to the new graph on which the insatiable beast known as AI will feed? How will accuracy, legitimacy, and dare I say ethical AI manifest in this new model? How will the value of AI be communicated and shared across both customer and employee cohorts? How will success be quantified outside of efficiencies or time saved? How will AI be measured?
Why? Why are these AI tools being deployed? To what end? Who will these improvements and accelerations impact and impress? How will stakeholders in the processes and output be impacted—and how will they react and why? Why are people hesitant to use AI tools? Why will they embrace them? Why will your teams lean into these co-pilots and assistants? Why will they feel threatened? Where and how can early or better communication about tools and their proposed impact change this reaction? Will the processes and automations touched by generative AI be recognized as AI-generated? Does it matter to your people or your customers?
You get the idea. There are a lot of questions!
Establishing AI strategies will only become more important as more tools hit the market and turn the heads of employees and customers alike. So far, most organizations have looked at AI as a grand experiment, making selective and limited introductions in highly controlled and specific spaces. But that luxury of lengthy experimentation will quickly come to a close, as both customers and employees will expect more tools and solutions that add real value to their digital experiences. That’s why you need to bring the right teams together now to talk through these key “what,” “how,” and “why” questions.
The cautionary tales are plentiful: that time a major company unleashed its AI onto the world only for it to become a grouchy, foulmouthed racist, or the time that generative AI users started to get self-portrait avatars that were more boudoir than business portrait, or that time that AI outed a pregnant teenager to the outrage of her in-denial parents. At the same time, the wins and benefits are impossible to ignore: process improvements, documented gains in employee satisfaction and performance, and revenue growth thanks to better experiences. This is why these strategic conversations aren’t something that can happen once the plane is in the air. The time is now. The cost is experience. It is just that simple. And not even ChatGPT can rewrite that truth.
Liz Miller is vice president and principal analyst at Constellation Research, covering the broad landscape of customer experience strategy and technologies.