Artificial intelligence (AI) refers to the ability of machines and computer systems to perform tasks that typically require human intelligence. These tasks can include understanding language, recognizing images or patterns, making decisions, learning from experience, and solving problems. AI's roots trace back to the 1950s with Alan Turing's work on machine intelligence. The term "artificial intelligence" was coined in 1956 at a workshop at Dartmouth College.
At its core, AI uses large amounts of data and algorithms to mimic human thought processes. Some AI systems are designed to follow rules and make decisions based on them, while others, like machine learning models, improve over time as they process more data.
AI can be found in everyday tools like voice assistants, recommendation systems, chatbots, image recognition apps, and self-driving cars. It can be narrowly focused on one task or more general in its abilities.
While AI can improve efficiency and uncover insights, it also raises questions about accuracy, fairness, privacy, bias, and the role of human creativity and judgment.
Neural Networks: Inspired by the human brain, these are algorithms structured in layers of interconnected "nodes" that process data. Deep learning, a subset, uses many-layered neural networks for complex tasks like image or speech recognition.
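The layered structure described above can be sketched in a few lines of Python. This is a toy illustration only: the weights below are made up for demonstration, whereas real networks learn theirs from data during training.

```python
# Toy forward pass through a two-layer neural network in pure Python.
# The weights and biases here are invented for illustration; real
# networks learn these values from data (e.g., via backpropagation).

def relu(x):
    """A common activation: pass positives through, clip negatives to 0."""
    return max(0.0, x)

def dense(inputs, weights, biases, activation):
    """One layer: each output node is a weighted sum of all inputs."""
    return [
        activation(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# Two inputs -> hidden layer of three nodes -> one output node.
hidden = dense([0.5, -1.0],
               weights=[[0.2, 0.8], [-0.5, 0.3], [0.7, -0.1]],
               biases=[0.1, 0.0, -0.2],
               activation=relu)
output = dense(hidden,
               weights=[[1.0, -0.6, 0.4]],
               biases=[0.05],
               activation=relu)
print(output)  # a single number: the network's "prediction"
```

Deep learning stacks many such layers, which is what lets networks handle complex tasks like image or speech recognition.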
Large Language Models (LLMs) are a subset of Natural Language Processing (NLP) within AI, specifically under Narrow AI. They are advanced machine learning models, typically based on neural networks (e.g., transformer architectures), trained on massive datasets of text to understand, generate, and interact with human language. LLMs power many modern conversational and text-based applications.
Computer Vision: Allows AI to interpret visual information, like identifying objects in images or videos, used in facial recognition or autonomous vehicles.
Robotics and Automation: AI powers robots to navigate environments or automate tasks, like warehouse robots or self-driving cars. AI-driven robotics is transforming industries, with advancements enabling robots to perform complex tasks like navigation and manipulation.
Multimodal AI: Integrates multiple data types (e.g., text, images, audio) for comprehensive task performance. (An AI might analyze a video call by processing speech, facial expressions, and on-screen text simultaneously.)
Expert Systems: Use rule-based logic to mimic human expertise in specific fields, such as medical diagnosis or tax preparation.
Edge AI: Runs AI on local devices for speed and privacy. (A smart doorbell uses edge AI to recognize faces without sending data to the cloud.)
AI Explainability: Ensures AI decisions are transparent and understandable. (A hospital AI explains why it flagged a patient’s X-ray, helping doctors trust its diagnosis.)
Types of AI by Function
Narrow AI (Weak AI, or ANI)
Designed to perform a specific task, these systems operate under a limited set of constraints and do not possess consciousness or genuine intelligence.
Examples: Voice assistants like Siri and Alexa, and recommendation systems on streaming platforms.
General AI (Strong AI, or AGI) (Hypothetical)
This AI would have the ability to perform any intellectual task that a human can do, with understanding and reasoning capabilities. It remains a goal for researchers and has not yet been achieved.
Superintelligent AI (ASI) (Hypothetical)
A form of AI that would surpass human intelligence across all fields, including creativity, general wisdom, and social skills. ASI remains a distant, speculative concept discussed in academic and philosophical circles; it does not currently exist, and reliable sources emphasize that the focus is on ensuring safety and control if it ever emerges.
Note: The ANI-AGI-ASI framework classifies AI by its cognitive and functional capabilities, from task-specific to general to superhuman. Its roots trace to early AI research, such as the 1956 Dartmouth workshop.
Large Language Models (LLMs)
LLMs fall under Narrow AI, within the field of Natural Language Processing (NLP): they learn from vast text datasets to understand and generate human-like language. They work specifically with text: reading it, writing it, and interpreting it. Well-known examples include:
ChatGPT (based on OpenAI’s GPT models)
Claude (by Anthropic)
Gemini (by Google)
LLaMA (by Meta)
When OpenAI released ChatGPT in November 2022, it was free, web-based, and easy to try. Millions of people used it, including those with no technical background. It was not just a chatbot. It became a writing assistant, tutor, translator, therapist, and research helper all in one tool. It arrived at the right time, when interest in AI was growing, and social media helped it go viral.
ChatGPT is designed to understand natural language and respond in a way that feels natural and conversational. It is built on a complex neural network that’s been trained on an enormous amount of information, helping it recognize patterns and generate thoughtful replies.
When someone types in a question or statement, ChatGPT works through a process called natural language processing, or NLP, for short. It essentially breaks the text into smaller parts, figures out how those pieces relate to one another, and then crafts a response that sounds more like how a person might talk.
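The pipeline described above (break text into tokens, learn how pieces relate, then generate a response) can be sketched with a toy word-level model. Real LLMs use subword tokens and billions of learned parameters; this is only an illustration of the idea, with a made-up corpus.

```python
# Toy sketch of the NLP pipeline: tokenize text, count which word
# tends to follow which (a "bigram" model), then generate output by
# repeatedly picking the most common continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept"
tokens = corpus.split()                      # step 1: tokenize

follows = defaultdict(Counter)               # step 2: learn relations
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def generate(word, length=4):
    """Step 3: craft a response one most-likely word at a time."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the"
```

An LLM does essentially this at enormous scale, predicting the next token from patterns learned across vast amounts of text rather than from simple word counts.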
Its training data comes from various sources, including books, articles, websites, and other texts representing many different writing styles. This allows it to generate text on various subjects, from scientific and technical topics to social and cultural issues.
One of the advantages of its architecture is that it can be improved over time. As more data becomes available, the model can be retrained to strengthen its language processing and generation capabilities. User feedback is also collected and used to fine-tune future versions so that responses better meet users' needs and expectations, although the model does not learn in real time from individual conversations.
What Does GPT Mean?
GPT stands for "Generative Pre-trained Transformer." It is a type of neural network architecture used in natural language processing tasks such as text generation, translation, and question answering. GPT models are designed to generate text similar to everyday human language by applying complex algorithms (mathematical procedures) to large amounts of data. They belong to a family of neural network models that has become increasingly popular in recent years and will no doubt reach more and more areas of everyday life.
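The "transformer" part of the name refers to an architecture built around an operation called attention: each position in the text scores every other position, turns the scores into weights, and takes a weighted average. The sketch below shows that core calculation in pure Python with made-up two-dimensional vectors; real models learn these vectors and repeat this operation across many layers.

```python
# Minimal sketch of (scaled dot-product) attention, the core operation
# of the transformer architecture. Vectors are invented for
# illustration; real models learn them from data.
import math

def softmax(scores):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale
              for key in keys]
    weights = softmax(scores)
    # Weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)  # attends mostly to key 0
print(out)
```

Because the query matches the first key more closely, the output is pulled toward the first value vector; this is how transformers let each word "pay attention" to the most relevant context.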
Note: Do not confuse General AI and Generative Pre-trained Transformer. Generative Pre-trained Transformer falls under Narrow AI.
After ChatGPT, a whole ecosystem exploded, with tools designed for writing, search, code, design, productivity, and even humor. Most of them still use LLMs (large language models), but they are fine-tuned for different goals.
Impact
ChatGPT made AI accessible to millions of everyday users, not just tech experts. Within two months of its launch, it had reached over 100 million users. AI went from a research concept to a household tool almost overnight.
Writing, coding, data analysis, and research tasks are now often AI-assisted. Tools like Microsoft Copilot, Gemini, and GrammarlyGo brought AI into Word, Excel, and daily workflows. Employees in marketing, education, law, medicine, and customer support began using AI to save time.
Open-source models (like Meta’s LLaMA and Mistral) gave developers tools to build their own chatbots, tutors, assistants, or apps.
AI raised new concerns, forcing schools and libraries to adapt fast. At the same time, it opened doors to new teaching methods and personalized tutoring. Many libraries now teach AI literacy alongside digital literacy.
Concerns about bias, misinformation, data privacy, and job displacement grew louder. Institutions, governments, and companies are now drafting AI policies and safeguards.
Possible Downsides
There are several potential downsides to consider:
Hallucinations
AI tools can generate confident but incorrect information, known as hallucinations. This is especially risky when users assume the answers are always factual.
Overreliance and skill decay
When AI handles tasks like writing, problem-solving, or researching, users may gradually lose essential skills such as critical thinking, analysis, and creativity. This can lead to long-term dependency.
Privacy and data use
Some AI platforms collect user input and files, which may be stored or used for training. Even anonymized data can carry risks of exposure, misuse, or unintended sharing, particularly with sensitive information.
Job displacement or deskilling
AI is automating many roles, especially entry-level and repetitive tasks. While it can improve efficiency, it can also reduce the need for certain jobs or shift responsibilities away from people and toward systems.
Bias and discrimination
AI systems can reflect or reinforce biases in their training data. This can lead to unfair or discriminatory outcomes in hiring, lending, policing, and other decisions that affect people's lives and opportunities.
Lack of transparency
It is often unclear how AI systems arrive at specific conclusions or outputs. This lack of transparency makes it difficult to audit, trace, or explain decisions, particularly when they are high-stakes or contested.
Creativity issues
When people rely too heavily on AI to generate ideas, text, or artwork, it can weaken their own creative processes. Over time, this may lead to formulaic thinking, reduced originality, and a lack of personal voice in writing or design. In academic or artistic settings, this can undermine the development of authentic expression and the ability to solve problems in novel ways.
The future of AI will bring smarter, faster, and more human-like systems that are woven into everyday life. AI will move beyond chatbots to assist with decisions, creativity, and daily tasks, often quietly running in the background of apps, devices, and workplaces. At the same time, society will face big challenges around trust, privacy, misinformation, and fairness. New laws and ethical frameworks will likely shape how AI can be used. While AI won’t replace humans, it will change how we work, learn, and connect, pushing us to rethink what it means to be intelligent, creative, and responsible.
Resources to consider:
The AI Index Report, published annually by Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), provides the latest data and analysis on global AI trends.
Source: Microsoft Corporate Blogs. (2023, February 7). Microsoft and OpenAI extend partnership. The Official Microsoft Blog.
AI Index Report - Stanford University
Source: Stanford Institute for Human-Centered Artificial Intelligence. (n.d.). Artificial Intelligence Index.
Ask a Techspert: What is generative AI?
Source: Nye, B. (2023, June 6). What is generative AI? Google.
Stanford U & Google’s generative agents produce believable proxies of human behaviours.
Source: Synced. (2023, April 12). Stanford U & Google’s generative agents produce believable proxies of human behaviours. Synced.
OpenAI buys iPhone designer Ive's hardware startup, names him creative head.
Source: Reuters. (2025, May 21). OpenAI buys iPhone designer Ive's hardware startup, names him creative head. Reuters.
Source: OpenAI. (2025, May 21). Sam and Jony.