The digital airwaves and social media feeds have recently gone wild with examples of how the AI-driven chatbot ChatGPT has solved riddles, generated high school essays and explained why the Croatian football team has outperformed similarly sized nations at recent World Cup tournaments. Understandably, it has again raised important questions about the impact of AI on our lives, enterprises, and broader society.
First and foremost, let’s start with definitions. What is Generative AI and where does OpenAI/ChatGPT fit within all of this? Generative AI is a branch of computer science that involves unsupervised and semi-supervised algorithms that enable computers to create new content using previously created content, such as text, audio, video, images and code.
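The core idea behind generative AI, learning statistical patterns from existing content and then sampling new content from those patterns, can be illustrated with a toy example. The sketch below is a minimal bigram text generator in Python; it is a deliberately simplified stand-in for the vastly larger neural models discussed in this article, not a description of how they are actually built.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Learn which word tends to follow which from previously created text."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8, seed=0):
    """Sample new text from the learned word-transition statistics."""
    random.seed(seed)
    output = [start_word]
    for _ in range(length - 1):
        candidates = model.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = "the model writes text and the model writes code and the model writes music"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Even this toy captures the essential property: the output is new (no sentence in the training text need match it), yet every transition it makes was learned from previously created content.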
ChatGPT (Chat Generative Pre-Trained Transformer) is a chatbot developed by OpenAI. It is built on top of OpenAI’s GPT-3.5 family of large language models (LLMs) and is fine-tuned with both supervised and reinforcement learning techniques. It is being hailed as the smartest chatbot ever developed. OpenAI was founded in 2015 (initially as a non-profit organization), with early investors including Elon Musk and Peter Thiel. In 2019, it became a “capped-profit” organization and inked a $1bn deal with Microsoft. The deal allowed OpenAI to use Microsoft’s Azure cloud platform for its research and development; in return, Microsoft was given the first opportunity to commercially leverage early results of OpenAI’s research. OpenAI has a stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole and is viewed as the leading competitor to DeepMind (acquired by Google in 2014 for a reported $500M).
It is important to understand that while ChatGPT is a good example of generative AI technology, the market segment is much broader. The transformer architecture underpinning today’s LLMs emerged from Google in 2017, where it was initially applied to machine translation while preserving context. Since then, large language and text-to-image models have proliferated at leading tech firms such as Google (BERT and LaMDA), Facebook (OPT-175B and BlenderBot) and OpenAI (GPT-3 for text, DALL-E 2 for images and Whisper for speech). Online communities (e.g. Midjourney), open-source providers (e.g. Hugging Face) and startups such as Stability AI have also created generative models. In Q4 2022, a spate of text-to-video models from Google, Meta and others emerged. Generative models have largely been confined to larger tech companies because training them requires massive amounts of data and computing power. But once a generative model is trained, it can be “fine-tuned” for a particular content domain with much less data. Today, generative AI applications largely exist as plugins within software ecosystems.
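The economics of fine-tuning can be sketched with a toy analogy in Python. This is not how neural fine-tuning works numerically (real fine-tuning adjusts model weights via gradient descent on new examples), but it captures the cost asymmetry the paragraph describes: the expensive pre-training step consumes a large generic corpus, while adaptation to a domain needs only a small, up-weighted one. The corpora, the `weight` parameter, and the frequency model itself are all illustrative inventions.

```python
from collections import Counter

def pretrain(corpus_tokens):
    """'Pre-train': learn token frequencies from a large generic corpus."""
    return Counter(corpus_tokens)

def fine_tune(base_counts, domain_tokens, weight=5):
    """Adapt the pre-trained counts using a much smaller domain corpus,
    up-weighting the domain data so it shifts the model's behaviour."""
    tuned = base_counts.copy()
    for token in domain_tokens:
        tuned[token] += weight
    return tuned

generic = ["the", "cat", "sat"] * 100 + ["invoice"]    # large general corpus
legal = ["invoice", "contract", "liability"] * 3       # tiny domain corpus

base = pretrain(generic)
tuned = fine_tune(base, legal)

# Before fine-tuning, "invoice" is rare; after it, domain terms dominate
# even though the domain corpus is a fraction of the size of the base one.
print(base["invoice"], tuned["invoice"])
```

The point is the ratio: hundreds of generic tokens versus nine domain tokens, yet the tuned model now strongly favours domain vocabulary. That is the property that lets smaller firms build on models the large tech companies trained.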
The questions that technology and business leaders should be asking in terms of what Generative AI means for the enterprise are outlined below:
How will it be incorporated into existing enterprise technology environments?
- Code Generation – GPT-3 has proven to be an effective generator of computer program code. OpenAI’s Codex model, a descendant of GPT-3, is specifically trained for code generation and works well when given a small, well-scoped function. Microsoft’s GitHub offers a Codex-based code-generation service called Copilot. The latest versions of Codex can identify bugs, fix mistakes in its own code, and occasionally explain what the code does. The goal of these tools is not to eliminate programmers, but to pair assistants like Codex and Copilot with humans to improve their speed and effectiveness.
- Enterprise Content Management – Vendors in the headless content management space are incorporating these types of generative AI tools for both content generation and recommendations. This helps them deal with increased content velocity, as additional forms of content are derived from a single source generated by AI with human oversight. The technology is not being used to write whole copy, but rather to produce an outline for the content author to use as a draft. In addition, it is likely to impact GUI design in the form of “generative design,” with the likes of Figma or Stackbit potentially including generative AI capabilities as part of their collaborative interface design engines.
- Marketing and CX Applications – Beyond content generation for advertising and marketing and the automation of marketing campaigns, the primary application for early versions of generative AI has been AI-driven chatbots and agents for contact centers and customer self-service, such as those employed by Salesforce and Genesys; these have initially delivered mixed results. However, the next generation of capabilities will mean a broader range of interactions, more accurate answers, and less required human intervention, which will drive higher adoption and, eventually, more training data for the models. In the near future, generative AI will become more prevalent in personalized product recommendations driven by insight analytics, in better and deeper customer segmentation as a stepping stone to true personalization and contextualization of experiences, and in a better understanding of customer satisfaction and performance.
- Product Design & Engineering – Generative AI will also affect technologies in the product lifecycle management (PLM) and innovation space, with the likes of Autodesk, Dassault Systèmes, Siemens, PTC and Ansys continuing to build capabilities that enable design engineers and R&D teams to automate and expand the ideation and optioning process during early-stage product design, simulation, and development. Generative design produces options for engineering and R&D teams to consider in terms of structure, materials, and optimal manufacturing/production tooling; for example, it might suggest a part design that optimizes against factors like cost, load bearing, and weight. Generative design can also enable a reimagining of product look and feel, often resulting in unique aesthetics and forms that are not only more compelling to end users, but more practical and environmentally sustainable. Many of these vendors have attached their generative design offerings to the additive manufacturing capabilities needed to realize these unique products. Opportunities exist across multiple industries: automotive, aerospace, and machinery organizations can improve product quality, sustainability, and success, while life sciences, healthcare, and consumer products companies can improve patient outcomes and customer experiences.
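The “pair programmer” workflow described in the Code Generation bullet can be made concrete. The snippet below shows a typical exchange: a small buggy function of the kind a developer might hand to a Codex-style assistant, and the corrected version such a tool could plausibly suggest. Both functions and the specific fix are illustrative inventions, not actual Codex or Copilot output.

```python
def sum_first_n_buggy(values, n):
    """Intended to sum the first n items, but an off-by-one in range()
    silently drops the last one; this is exactly the kind of small,
    well-scoped bug code assistants have been shown to spot and explain."""
    total = 0
    for i in range(n - 1):  # bug: should be range(n)
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    """Corrected version an assistant might propose, using an idiomatic
    slice and adding the bounds check the original lacked."""
    if n > len(values):
        raise ValueError("n exceeds the number of available values")
    return sum(values[:n])

print(sum_first_n_buggy([1, 2, 3, 4], 3))  # 3, silently wrong
print(sum_first_n_fixed([1, 2, 3, 4], 3))  # 6, as intended
```

Note that the human still reviews and accepts the suggestion; the tool speeds up the loop rather than replacing the programmer, which is the point made above.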
What are the pitfalls?
Generative AI, while providing lower-cost, higher-value solutions, has significant ethical and perhaps legal implications. There are significant open questions around copyright, trust and safety. Organizations must consider issues such as privacy and consent around data, reproduction of biases and toxicity, generation of harmful content, sufficient security against third-party manipulation, and accountability and transparency of processes. Neglect of AI ethics isn’t just a moral quandary – it is a significant business risk that means less trust, less control, and less ability to advance the models in an optimal way. Businesses must take a multi-pronged approach to AI, from developer to end user, guided first and foremost by a framework of principles that considers all the ramifications of AI. They should also choose models built with techniques such as adversarial input (training against bad or manipulated data), benchmark dataset training (checking for biases via label tests), and explainable AI (XAI). Finally, concerns with AI ethics are intrinsically linked to how accountability measures are enacted. Businesses should take a human-in-the-loop (HITL) approach to ensure minimal model drift, rigorous monitoring of output, and continuous improvement. AI must not be viewed as an independent, black-box entity; rather, it should be seen as a human-computer interaction in which optimal usage comes from deep understanding, meticulous monitoring, and a constant striving for accuracy.
How will it affect jobs?
At the end of 2020, the World Economic Forum (WEF) predicted that AI would displace 85 million jobs by 2025. The jobs it identified as most under threat included data entry clerks, administrative assistants, and accounting and auditing professionals, among others. Over the same timeframe, it predicted that 97 million new jobs would be created as AI becomes more mainstream in the enterprise, with growing demand for data scientists, process automation specialists, and digital marketing and strategy experts, among many other roles. Generative AI means that we can add a new role to that list: prompt engineers. This role focuses on working out what to type into AI chatbots to get the best out of them. These individuals may also be expected to deal with so-called ‘hallucinations’ – cases where generative AI confidently gets it completely wrong. Such entirely new job descriptions highlight how an emerging technology not only displaces activities but also creates new ones: the classic creative destruction principle outlined by Schumpeter. For business and technology leaders, it requires a dynamic and ongoing assessment of required digital skills, including continuous gap analysis and roadmaps, to ensure that the necessary capabilities are available to support the digital business of the future.
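What a prompt engineer actually produces can be sketched in a few lines. The helper below is hypothetical (its name, parameters, and the sample prompt are all inventions for illustration), but it shows the craft in miniature: the same underlying request is framed with an explicit role, output constraints, and optional worked examples (so-called few-shot prompting), with a constraint aimed at discouraging hallucinated answers.

```python
def build_prompt(role, task, constraints, examples=None):
    """Hypothetical helper that assembles a structured chatbot prompt
    from a role, a task, a list of constraints, and optional examples."""
    lines = [f"You are {role}.", f"Task: {task}"]
    lines += [f"Constraint: {c}" for c in constraints]
    if examples:
        lines.append("Examples:")
        lines += [f"- {e}" for e in examples]
    return "\n".join(lines)

prompt = build_prompt(
    role="a careful financial analyst",
    task="summarize the attached quarterly report in three bullet points",
    constraints=[
        "cite the figures you use",
        "say 'I don't know' rather than guess",  # hedge against hallucination
    ],
    examples=["Revenue grew 4% quarter on quarter (p. 2)"],
)
print(prompt)
```

The value of the role is less in the templating code than in knowing, from trial and error, which roles, constraints, and examples reliably steer a given model toward accurate output.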
Moving forward, the best place to watch for new and interesting generative AI use cases is the start-up and scale-up space. The likes of Jasper (copywriting), Stability AI (visual art), DoNotPay (legal services), Omneky (creative content), Paige.ai (cancer diagnostics) and Mostly.ai (synthetic data) showcase how quickly this space is fueling a range of game-changing innovations – and hint at what’s around the corner for so many industries. It is incumbent on all of us to approach this fascinating space with the right balance of curiosity and skepticism.