Generative AI models are at the forefront of technological innovation, holding conversations, answering queries, crafting narratives, generating code, and producing images and videos with striking realism. But what exactly is generative AI, and how does it function? What are its applications, and why might it be more limited than it first appears?
Generative AI: A New Era of Creation
Generative AI, a subset of artificial intelligence, creates new content, including text, images, audio, and video, by learning patterns from existing content. Today’s generative AI models are trained on vast amounts of data using deep learning techniques built on deep neural networks. These models can conduct dialogues, respond to questions, write narratives, generate code, and create visual content from brief text inputs, or “prompts.”
The term “generative” in generative AI refers to the AI’s ability to create something that didn’t exist before. This distinguishes it from discriminative AI, which differentiates between various kinds of input. In simple terms, while discriminative AI answers questions like “Is this image a drawing of a rabbit or a lion?”, generative AI responds to prompts like “Draw me a picture of a lion and a rabbit sitting next to each other.”
This article will explore generative AI, its applications with popular models like ChatGPT and DALL-E, and consider the technology’s limitations, including why “too many fingers” has become a telltale sign of artificially generated art.
The Rise of Generative AI
Generative AI has been around for decades, arguably since ELIZA, a chatbot created at MIT in 1966 that imitated a psychotherapist. However, recent advancements in AI and machine learning have led to a new generation of generative AI systems. You’ve likely heard of ChatGPT, a text-based AI chatbot known for its human-like prose. DALL-E and Stable Diffusion have also gained attention for their ability to create vivid and realistic images from text prompts.
The output from these systems is so lifelike that it has sparked philosophical debates about consciousness and concerns about the economic impact of generative AI on human employment. But despite the hype, there might be less happening beneath the surface than some believe. Before diving into these broader questions, let’s examine the underlying mechanics.
How Does Generative AI Function?
Generative AI employs machine learning to process vast amounts of visual or textual data, often scraped from the internet. It then identifies what objects are most likely to appear near others. Much of the programming work in generative AI focuses on creating algorithms that can recognize the “things” of interest to the AI’s creators, such as words and sentences for chatbots like ChatGPT or visual elements for DALL-E.
At its core, generative AI creates output by evaluating a massive corpus of data and responding to prompts within the realm of probability as determined by that corpus. Autocomplete on your cell phone or Gmail is a basic form of generative AI. ChatGPT and DALL-E elevate this concept to more sophisticated levels.
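To make the "realm of probability" concrete, here is a toy sketch of the autocomplete idea: a bigram model that predicts the next word purely from how often words follow each other in a tiny, made-up corpus. This is an illustration of the principle only; real systems like ChatGPT operate on far larger corpora and far richer statistics.

```python
# Toy bigram "autocomplete": pick the next word by frequency in a tiny
# hand-made corpus. Purely illustrative, not how any production model works.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" (follows "the" twice, others once)
print(predict_next("sat"))   # "on"
```

The prediction is nothing but corpus statistics: "cat" wins only because it appears after "the" more often than any other word. Scaled up enormously, that is the intuition behind probabilistic text generation.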
Understanding AI Models
ChatGPT and DALL-E serve as interfaces to underlying AI functionality known as a model. An AI model is a mathematical representation, implemented as an algorithm, that generates new data resembling a pre-existing data set. AI developers assemble that corpus of data, known as the model’s training set, and the process of fitting the model to it is called training.
The GPT models were trained on a vast corpus of text from the internet, resulting in natural language responses in idiomatic English or other languages. AI models treat different characteristics of the data as vectors, mathematical structures comprising multiple numbers. The unique aspect of these models is their ability to translate real-world information into vectors meaningfully and determine similarities between them.
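A common way to "determine similarities" between vectors is cosine similarity: how closely two vectors point in the same direction. The sketch below uses made-up 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions and are learned, not hand-written.

```python
# Hypothetical 3-dimensional "embeddings" with invented values, purely to
# illustrate the idea that related concepts end up close in vector space.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat   = [0.9, 0.8, 0.1]
dog   = [0.8, 0.9, 0.2]
plane = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, dog))    # about 0.99: semantically close
print(cosine_similarity(cat, plane))  # about 0.30: semantically distant
```

The meaningful part is the comparison, not the absolute numbers: a model that maps real-world information into vectors well will place "cat" nearer to "dog" than to "plane."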
Types of AI Models
Several types of AI models exist, and some systems fit into multiple categories. Large language models (LLMs) are currently receiving the most public attention. LLMs are based on the transformer architecture, introduced in a 2017 paper by Google researchers. A transformer’s attention mechanism lets it weigh the relationships between words (or other semantic components) across long sequences of text, however far apart they appear.
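The attention computation itself is compact. The sketch below shows scaled dot-product attention for a single query over three key/value pairs, with tiny hand-picked vectors; real transformers learn these vectors and run many attention heads in parallel.

```python
# Minimal sketch of scaled dot-product attention, the core transformer
# operation. All vectors here are invented for illustration.
import math

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    d = len(query)
    # Similarity score between the query and each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax turns scores into weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the weighted average of the values.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

query  = [1.0, 0.0]
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention(query, keys, values)  # leans toward values whose keys align with the query
```

Because the first and third keys align with the query, their values dominate the output, which is how attention lets one token "look at" the most relevant other tokens.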
Diffusion is commonly used in generative AI models that produce images or video. During training, a diffusion model repeatedly adds noise to an image and learns to reverse the process; at generation time it starts from pure noise and gradually removes it, steering toward an image that is semantically similar to the prompt based on patterns learned from its training set. Diffusion is central to models like Stable Diffusion and DALL-E.
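The forward half of that process, drowning an image in noise step by step, is easy to sketch. Here an "image" is just a short list of pixel values and the noise schedule is invented; a real diffusion model's job is to learn the much harder reverse direction.

```python
# Conceptual sketch of diffusion's forward (noising) process only.
# The schedule and "image" are made up; real models learn to reverse this.
import random

random.seed(0)  # reproducible noise for the demo

def add_noise(pixels, noise_level):
    """Blend each pixel with Gaussian noise; at level 1.0 only noise remains."""
    keep = (1 - noise_level) ** 0.5
    mix = noise_level ** 0.5
    return [keep * p + mix * random.gauss(0, 1) for p in pixels]

image = [0.2, 0.8, 0.5, 0.9]        # a tiny stand-in for an image
schedule = [0.1, 0.4, 0.7, 1.0]     # increasing noise levels
for level in schedule:
    image = add_noise(image, level)  # the signal fades, noise takes over
```

After the last step the original signal is gone entirely; generation runs this movie backward, denoising from random static toward something that matches the prompt.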
Generative adversarial networks (GANs) pit two algorithms against each other. One generates text or images based on probabilities derived from a large data set, while the other judges whether that output is real or AI-generated. The generative model keeps adapting to “trick” the discriminative model, favoring whatever outputs succeed in fooling it, and both improve as the contest continues.
Is Generative AI Conscious?
The complexity of creating and training generative AI models is immense. Interacting with these models can be uncanny, as they produce art-like works and engage in human-like conversations. But have we truly created a thinking machine?
Chris Phipps, a former IBM natural language processing lead, argues that ChatGPT is merely a “very good prediction machine.” It excels at predicting what humans find coherent, but it doesn’t truly “understand.” Humans interpret the output, making implicit assumptions to make sense of it.
Testing the Boundaries of AI Intelligence
Certain prompts reveal the limitations of AI models. For example, ChatGPT answers the riddle “What weighs more, a pound of lead or a pound of feathers?” correctly, not because it reasons its way to the answer, but because its training set makes the correct reply the most probable output.
However, if asked whether two pounds of feathers are heavier than a pound of lead, it may incorrectly state that they weigh the same. This illustrates the model’s reliance on its training set rather than logical reasoning.
Generative AI is a fascinating and rapidly evolving field that continues to push the boundaries of technology and creativity. Its applications are vast, and its potential is enormous. However, understanding its underlying mechanisms, limitations, and ethical considerations is crucial for responsible development and utilization.
As we continue to explore and innovate within the realm of generative AI, we must remain mindful of its capabilities and constraints, ensuring that we harness its power for positive impact while mitigating potential risks and challenges.