AI Concepts & Fundamentals (25%)
Domain 1 of the CompTIA AI+ exam — understand AI and ML fundamentals, types of learning, neural networks, NLP, computer vision, and generative AI.
What Is Artificial Intelligence?
Artificial Intelligence is the broad field of creating systems that can perform tasks that normally require human intelligence. This includes understanding language, recognizing images, making decisions, and learning from experience.
The AI Hierarchy
- Artificial Intelligence (AI) — The broadest category. Any system that simulates intelligent behavior.
- Machine Learning (ML) — A subset of AI where systems learn from data instead of being explicitly programmed.
- Deep Learning (DL) — A subset of ML using multi-layered neural networks to learn complex patterns.
- Generative AI — A subset of DL that creates new content (text, images, code, audio).
Types of Machine Learning
Supervised Learning
The model learns from labeled data where both the input and correct output are provided. The goal is to learn the mapping from inputs to outputs.
- Classification — Predict a category: spam detection, image classification, disease diagnosis
- Regression — Predict a continuous value: price prediction, temperature forecasting, demand estimation
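The learned input-to-output mapping can be sketched with a tiny 1-nearest-neighbour classifier. This is a minimal illustration, not an exam topic in itself; the "spam detector" features and labels below are made up for the example.

```python
# Minimal supervised classification sketch: a 1-nearest-neighbour
# "spam detector" trained on tiny hypothetical labeled data.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, query):
    # Return the label of the closest labeled training example
    nearest = min(train, key=lambda pair: distance(pair[0], query))
    return nearest[1]

# Features: (num_links, num_exclamation_marks); labels are provided up front
train = [((5, 8), "spam"), ((4, 6), "spam"), ((0, 1), "ham"), ((1, 0), "ham")]

label = predict(train, (6, 7))   # lands near the spam cluster
```

The key supervised-learning ingredient is that every training example carries the correct output label.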
Unsupervised Learning
The model finds hidden patterns in unlabeled data without being told what to look for.
- Clustering — Group similar items: customer segmentation, document grouping
- Dimensionality reduction — Reduce features while preserving information: PCA, t-SNE
- Anomaly detection — Find unusual patterns: fraud detection, network intrusion
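Clustering can be sketched with a tiny one-dimensional k-means (k=2). The "customer spend" values are invented for illustration; note that no labels are involved anywhere.

```python
# Minimal unsupervised clustering sketch: 1-d k-means with k=2
# on made-up "customer spend" values.

def kmeans_1d(points, iters=10):
    # Initialise the two centroids at the min and max of the data
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Move each centroid to the mean of its assigned points
        c1, c2 = sum(a) / len(a), sum(b) / len(b)
    return sorted(a), sorted(b)

spend = [10, 12, 11, 95, 98, 102]
low, high = kmeans_1d(spend)
```

The algorithm discovers the two spending groups purely from the structure of the data.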
Reinforcement Learning
An agent learns by interacting with an environment, receiving rewards for good actions and penalties for bad ones. Used in robotics, game playing, and autonomous systems.
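The reward-and-penalty loop can be sketched with a toy two-action environment and an epsilon-greedy value update. The environment, actions, and reward values are all hypothetical.

```python
# Minimal reinforcement-learning sketch: a value table updated from
# reward signals in a toy two-action environment (entirely made up).
import random

random.seed(0)
q = {"left": 0.0, "right": 0.0}   # estimated value of each action
alpha = 0.5                        # learning rate

def reward(action):
    # Hypothetical environment: "right" is the better action
    return 1.0 if action == "right" else -1.0

for _ in range(100):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(q, key=q.get)
    # Move the estimate toward the observed reward
    q[action] += alpha * (reward(action) - q[action])

best = max(q, key=q.get)
```

Through trial and error, the agent's value estimates converge on the action that earns rewards.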
Semi-Supervised and Self-Supervised Learning
- Semi-supervised — Small amount of labeled data + large amount of unlabeled data. Practical when labeling is expensive.
- Self-supervised — The model creates its own labels from the data structure. GPT learns by predicting the next word; BERT learns by filling in masked words.
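The self-supervised idea — labels derived from the data itself — can be shown with GPT-style next-word prediction pairs built from raw text:

```python
# Sketch of how self-supervised learning creates its own labels:
# next-word prediction pairs derived purely from raw text (GPT-style).

text = "the model creates its own labels"
tokens = text.split()

# Each (context, target) pair uses the following word as the label --
# no human annotation is needed.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
```

Every word in the corpus becomes a free training label, which is why self-supervised models can train on enormous unlabeled datasets.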
Neural Networks
Neural networks are inspired by the human brain. They consist of layers of interconnected nodes (neurons) that process information.
Key Components
- Input layer — Receives the raw data (features)
- Hidden layers — Process information; more layers make a "deeper" network that can learn more complex patterns
- Output layer — Produces the prediction or classification
- Weights and biases — Parameters adjusted during training to minimize error
- Activation functions — Non-linear functions (ReLU, sigmoid, softmax) that enable the network to learn complex patterns
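The components above can be sketched as a single forward pass through a tiny network with one hidden layer. The weight and bias values are arbitrary; in a real network they would be learned during training.

```python
# Minimal forward-pass sketch: one hidden layer with ReLU, showing
# weights, biases, and an activation function. Numbers are arbitrary.

def relu(x):
    # Non-linear activation: keeps positives, zeroes out negatives
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias
    return sum(i * w for i, w in zip(inputs, weights)) + bias

inputs = [1.0, 2.0]                       # input layer (features)
hidden = [relu(neuron(inputs, w, b))      # hidden layer of two neurons
          for w, b in [([0.5, -1.0], 0.2), ([1.0, 1.0], -0.5)]]
output = neuron(hidden, [1.0, 1.0], 0.0)  # output layer (prediction)
```

Training would adjust the weights and biases to reduce the error between `output` and the known target.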
Common Neural Network Architectures
- CNN (Convolutional Neural Network) — Designed for images. Uses convolutional layers to detect features like edges, textures, and objects.
- RNN (Recurrent Neural Network) — Designed for sequential data (text, time series). Has memory of previous inputs.
- Transformer — The architecture behind GPT, BERT, and modern LLMs. Uses attention mechanisms to process all tokens in parallel.
- GAN (Generative Adversarial Network) — Two networks (generator and discriminator) compete to generate realistic synthetic data.
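The attention mechanism behind Transformers can be sketched in a few lines: raw relevance scores are turned into weights via softmax, and the output is a weighted average of the other tokens' values. The scores and values below are invented for illustration.

```python
# Sketch of the attention idea behind Transformers: scores become
# softmax weights (summing to 1), which blend the value vectors.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical attention scores of one token against three others
scores = [2.0, 1.0, 0.1]
weights = softmax(scores)

# The attended output is a weighted average of the values
values = [10.0, 0.0, 5.0]
attended = sum(w * v for w, v in zip(weights, values))
```

Because every token computes its attention weights independently, all tokens can be processed in parallel, unlike an RNN's sequential steps.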
Natural Language Processing (NLP)
NLP enables machines to understand, interpret, and generate human language.
- Tokenization — Breaking text into words, subwords, or characters
- Sentiment analysis — Determining if text is positive, negative, or neutral
- Named Entity Recognition (NER) — Identifying names, dates, locations in text
- Machine translation — Translating text between languages
- Text generation — Creating new text (chatbots, content generation)
- Embeddings — Representing words as vectors in high-dimensional space where similar words are close together
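Two of these building blocks, tokenization and embeddings, can be sketched together. The tiny 2-dimensional "embeddings" below are invented for illustration; real embeddings have hundreds or thousands of dimensions.

```python
# Sketch of two NLP building blocks: tokenization and embeddings.
# The 2-d embedding values here are made up for the example.
import math

def tokenize(text):
    # Simplest word-level tokenization
    return text.lower().split()

embeddings = {"king": [0.9, 0.8], "queen": [0.85, 0.82], "apple": [0.1, 0.95]}

def cosine(a, b):
    # Cosine similarity: closer to 1 means more similar directions
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

tokens = tokenize("King Queen apple")
sim_royal = cosine(embeddings["king"], embeddings["queen"])
sim_fruit = cosine(embeddings["king"], embeddings["apple"])
```

Related words ("king", "queen") sit close together in the vector space, while unrelated words ("apple") are farther apart.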
Computer Vision
Computer vision enables machines to interpret and analyze visual information from images and video.
- Image classification — Categorizing an entire image (cat, dog, car)
- Object detection — Locating and identifying multiple objects within an image (bounding boxes)
- Image segmentation — Classifying each pixel of an image (medical imaging, autonomous driving)
- OCR (Optical Character Recognition) — Extracting text from images
- Facial recognition — Identifying or verifying people from facial features
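The core operation inside a CNN's convolutional layers can be sketched directly: slide a small kernel over the image and compute weighted sums. The tiny grayscale "image" below has a vertical edge, and the classic vertical-edge kernel responds strongly where the brightness changes.

```python
# Sketch of what a convolutional layer does: slide a 3x3 kernel over
# an image and sum the element-wise products. This kernel detects
# vertical edges; the image values are made up.

image = [
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            # Multiply the 3x3 patch by the kernel element-wise and sum
            row.append(sum(img[r + i][c + j] * ker[i][j]
                           for i in range(3) for j in range(3)))
        out.append(row)
    return out

fmap = convolve(image, kernel)
```

The feature map is zero over the flat region and large where the left-to-right brightness change (the edge) occurs; a trained CNN learns many such kernels automatically.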
Generative AI
Generative AI creates new content rather than just analyzing existing content.
- Large Language Models (LLMs) — Generate text by predicting the next token (GPT-4, Claude, Llama)
- Image generation — Create images from text descriptions (DALL-E, Midjourney, Stable Diffusion)
- Code generation — Write code from natural language descriptions (GitHub Copilot)
- Hallucination — When AI generates plausible-sounding but factually incorrect information
- Prompt engineering — Crafting effective inputs to get desired outputs from generative AI
- RAG (Retrieval-Augmented Generation) — Combining retrieval of relevant documents with generation to improve accuracy
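The RAG flow can be sketched with a toy retriever that scores documents by word overlap and injects the best match into the prompt. The documents and query are made up; production systems use embedding-based similarity search instead of word overlap.

```python
# Sketch of the RAG idea: retrieve the most relevant document, then
# prepend it to the prompt so generation is grounded in it.
# Documents and query are hypothetical.

docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Our office is open Monday through Friday, 9am to 5pm.",
    "Shipping is free on orders over fifty dollars.",
]

def retrieve(query, documents):
    # Score each document by how many query words it shares
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

query = "what is the refund policy for returns"
context = retrieve(query, docs)

# The retrieved text is injected into the prompt before generation
prompt = f"Context: {context}\n\nQuestion: {query}"
```

Because the model's answer is grounded in retrieved text rather than its parametric memory alone, hallucinations are reduced without any fine-tuning.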
Practice Questions
A company wants to automatically route incoming emails into billing, support, or sales categories, and it has thousands of past emails already labeled with the correct category. Which type of machine learning should it use?
A) Unsupervised learning - clustering
B) Supervised learning - classification
C) Reinforcement learning
D) Unsupervised learning - dimensionality reduction
Answer:
B) Supervised learning - classification. The company has predefined categories (billing, support, sales) and labeled examples of each. This is a classification task where the model learns to assign emails to the correct category based on training data with known labels.
Which neural network architecture is best suited for classifying and analyzing images?
A) RNN (Recurrent Neural Network)
B) CNN (Convolutional Neural Network)
C) GAN (Generative Adversarial Network)
D) Transformer
Answer:
B) CNN (Convolutional Neural Network). CNNs are specifically designed for image processing. They use convolutional layers to automatically detect features like edges, textures, and shapes, making them ideal for image classification, object detection, and image segmentation.
An AI chatbot confidently gives an answer that sounds plausible but is factually incorrect. What is this phenomenon called?
A) Overfitting
B) Underfitting
C) Hallucination
D) Data drift
Answer:
C) Hallucination. Hallucination occurs when a generative AI model generates content that sounds plausible and confident but is factually wrong. It is a well-known limitation of LLMs. Mitigation strategies include RAG, fact-checking, and guardrails.
Which technique reduces hallucinations by grounding an LLM's responses in documents retrieved from a knowledge base, without retraining the model?
A) Transfer learning
B) Fine-tuning
C) RAG (Retrieval-Augmented Generation)
D) Data augmentation
Answer:
C) RAG (Retrieval-Augmented Generation). RAG retrieves relevant documents from a knowledge base and includes them in the prompt context, grounding the model's response in factual information. This reduces hallucination without the cost of fine-tuning the entire model.
A warehouse robot learns to navigate by receiving rewards for successful deliveries and penalties for collisions. Which type of machine learning is this?
A) Supervised learning
B) Unsupervised learning
C) Semi-supervised learning
D) Reinforcement learning
Answer:
D) Reinforcement learning. The robot (agent) interacts with the warehouse (environment) and receives rewards (successful delivery) and penalties (collision). It learns the optimal strategy through trial and error. This is the defining characteristic of reinforcement learning.
Lilly Tech Systems