“Hey ChatGPT, it’s not you, it’s really us.” | Stratascale

“Hey ChatGPT, it’s not you, it’s really us.”


You visit your favorite media outlet or online streaming service, say YouTube. You notice the trailer for Dune 2 is out and excitedly click on the link. An advertisement flashes on your screen before the real video begins. A pleasing voice calmly narrates a hypothetical situation: 

Are you a content generator for marketing and advertisements looking for that elusive jingle that becomes part of folklore, or writing the next Oscar-worthy screenplay? Are you suffering from writer’s block and, try as you may, cannot convince your creative faculties to ‘get a move on’? Look no further than your browser. ChatGPT is your friend and, for the low cost of free, will turn you into the next Ogilvy or Affleck. 

You are intrigued (who wouldn’t want to be the next Ben Affleck, after all?). The advertisement ends with the same calm voice speeding into a rapid-fire disclosure of possible side effects: ‘Users may experience having more time on their hands due to increased productivity, or loss of their jobs. Please consult with your managers to find out more.’ 

Is November 2022 the month that will live in infamy? 

ChatGPT was released in November of 2022 and has been mentioned non-stop ever since. A quick scan of the web shows that ChatGPT can do many things: some not surprising, others truly mind-boggling. Can it write a love sonnet? Check. What about a letter using only Shakespearean English? Check and check. A book? Been there, done that. What about writing a Kubernetes deployment manifest that bypasses out-of-the-box DNS (Domain Name System) features and creates custom redirection policies? Yawn. 

If ChatGPT can do all this, what is left for us mere humans? Are we on the verge of a massive labor market restructuring in which bots constructed of bits and bytes replace flesh and bones? So much fear, confusion, and doubt abound. While we wrestle with these thoughts and marvel at what ChatGPT and Generative AI (Artificial Intelligence) can produce, we cannot deny that Generative AI technologies (ChatGPT and its friends) are far more than the typical chatbots we’ve grown accustomed to. 

An episode of The Simpsons shows Homer thinking about Maggie not speaking. He ponders the fact for a second and realizes that ‘the sooner she talks, the sooner she talks back’. With Generative AI, intelligent machines are now talking back. 

ChatGPT: The best known but not the most powerful Generative AI tool. 

First, ChatGPT is essentially a wrapper around OpenAI’s GPT language models. These generative models are extremely powerful, and Generative AI more broadly is not limited to producing text-based responses; it can produce art as well. ChatGPT is leading the pack in terms of reach and familiarity, but that should not in any way be taken to imply it is in a field of its own. 

The table below lists some of the other Generative AI tools that will likely start ruling the airwaves (and the hype cycles) in time: 

GPT-2, 3.x and 4.x 

Large Language Models developed by OpenAI. 

Bard 

Google’s chat-based search assistant; not yet open to the public but on its way to becoming a more publicly consumable option. 

Stable Diffusion 

Developed by Stability AI; generates images based on textual inputs. 

Kosmos-1 

Developed by Microsoft and trained on both images and text. 

PaLM-E 

Developed by Google and more powerful than Google’s LaMDA; works with images and text and has been used to control robots. 

Transformers: The technology behind the current set of Generative AI tools. 

A deep understanding of Transformers is beyond the needs of the average business user; it is a very sophisticated technology developed by Google Research and Google Brain. Transformers take an input and produce an output, all the while maintaining context by not forgetting previously provided inputs. 

Transformer-based models can hold billions of parameters (GPT-3 has 175 billion, and GPT-4 is rumored to be 3 to 4 times larger) and, using highly advanced parallelism and inference schemes, can be trained efficiently on large data sets. 
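The mechanism that keeps context intact is called attention: each output position is computed as a weighted blend of every input position. A toy numerical sketch of self-attention follows (illustrative only; real GPT models add learned projections, multiple attention heads, and many stacked layers):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: each row becomes a set of probability weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors:
    each output row is a context-weighted mix of every input row, so no
    earlier input is 'forgotten'."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)       # token-to-token similarity
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ x                  # blend the inputs by those weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))        # 3 toy tokens, 4-dimensional embeddings
out = self_attention(tokens)
print(out.shape)                        # (3, 4): one context-aware vector per token
```

Because every token's output depends on all the others, the model carries context across the whole input, which is the property the paragraph above describes.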

ChatGPT: Not a Jekyll or a Hyde. 

At least at this moment in its short life, ChatGPT has been glorified for its achievements (and their positive effect on our business models) and vilified for the dangers it poses to the labor market. 

It has generated more bad press than it perhaps deserves, not because of what it does, but because of how some of us have ‘Milgram-ed’1 it into our business operations. When we read about ChatGPT, what it does, and who is behind its development, we are awed. The tone of its responses is authoritative and, more than that, it was produced by some of the brightest human minds in modern history – facts that make it stand out as a ‘superior intellect’ to our own. This human tendency to become obedient to entities that ‘sound’ like they know more than we do is exactly what Stanley Milgram discovered. We are once again confirming his hypothesis. 

First, let us be clear on one simple fact: ChatGPT is not an oracle and does not know the answers to everything. It sounds authoritative, but some estimates put its inaccurate responses at a whopping 30%. The remaining 70%, though it sounds accurate, is still nuanced and rooted in the quality of the corpus it was exposed to during training. Fake news, misinformation, and downright lies have infiltrated the fabric of our society and WILL influence ChatGPT’s outputs (and that is just the social determinant of its accuracy).

From a purely business model enhancement point of view, organizations that want to include ChatGPT in their business model, mixing in their own private and specific data corpus to get some a-ha realizations, MUST beware that the age-old principle of ‘garbage in, garbage out’ still applies. If a business does not have a strong grasp of its data quality, it should not be surprised if ChatGPT provides erroneous and, potentially, risky suggestions. 

Limitations of ChatGPT: 

  1. Lacks repeatability: The same input may not produce the same output. Extrapolate this and it becomes obvious that the same input from the same person at different times may yield differing responses, which may be great for brainstorming ideas but dangerous in cases where consistency of responses is critical, e.g., developing a business expansion plan in a foreign country or providing an HR (Human Resources) strategy for employee retention. 

  2. Missing current context: ChatGPT’s and GPT-4’s training data stops at September 2021. Events that occurred after that date are not included in its intelligence, and ChatGPT has been known to fabricate responses to questions asked about such events. 

  3. AI bias: This has been the fly in AI’s ointment since before LLMs and ChatGPT. The fix for this problem lies not in ChatGPT but with the producers of the data that makes its way into the training sets for all LLMs, GPT-4 included. 
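The repeatability point follows from how these models generate text: each next token is sampled from a probability distribution, and a sampling ‘temperature’ above zero lets identical prompts diverge. A minimal sketch with made-up token scores (not a real model):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw model scores. Temperature near 0
    approaches greedy decoding (always the top-scoring token); higher
    values flatten the distribution, so the same input can yield
    different picks on different runs."""
    rng = rng if rng is not None else np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for four candidate tokens
creative = {sample_next_token(logits, temperature=1.2) for _ in range(100)}
greedy = {sample_next_token(logits, temperature=0.01) for _ in range(100)}
# The high-temperature set typically contains several distinct tokens;
# the near-zero-temperature set collapses to the single top token.
print(creative, greedy)
```

In the real OpenAI API the analogous knob is the `temperature` parameter: turning it down trades creative variety for more consistent responses, which is the trade-off the limitation above describes.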


Where it shines: 

  1. Content creation: ChatGPT takes the cake when it comes to generating fresh-sounding creative work, but be aware that its outputs must be thoroughly vetted for logical coherence. The more information one includes in the prompts used for generating content, the likelier a reasonably decent outcome, but don’t print those thoughts without a proper edit. 

  2. Law: GPT-4 scored in the 90th percentile on a bar exam. Law is complex, but there is a structure inherent in the profession, and it is perhaps this defined set of legal edicts that made GPT-4 so successful. 

  3. Healthcare: Like lawyers, healthcare professionals follow some form of decision tree to advise patients on their healthcare needs. Granted, these decision trees can explode into a decision forest, but if diagnosis A leads to prescription B and outcome C, LLMs (Large Language Models) should be able to step in and be of some value. However, and this must be mentioned because the smallest error in diagnosis can lead to adverse outcomes, Clayton Christensen, a foremost academic who developed the theory of disruptive innovation, famously warned that “health care innovation CANNOT be good enough, it needs to be precise”. This rule of thumb must be kept foremost as LLMs and other Generative AI technologies seep their way into our healthcare systems. 

“It’s not you, ChatGPT, it’s us” 

Is ChatGPT the ogre that will consume our jobs, the precursor to SkyNet? Should we jump for joy because it can increase our productivity and reduce our cost of operations, or should scared employees make effigies of it and burn them in protest? The jury is out, but for now, till the fog lifts, common sense and human intuition should be used to assess its importance, particularly as it pertains to our social fabric and long-term economic gains. 


Disclaimer: This content was NOT generated by ChatGPT. 

1 - The Milgram Experiment showed, in general, that people are more likely to obey authoritative-sounding sources, even when they cannot verify the claims those sources make (https://en.wikipedia.org/wiki/Milgram_experiment)

Technical Advisor – Automation DevOps

Usman Lakhani is a Technical Advisor on Stratascale’s Innovation Advisory team. Usman has a background in Software Delivery Automation and is an ardent supporter of merging RPA tools with other automation technologies.
