Introduction to Google Vertex AI

Google Vertex AI is Google Cloud's unified machine learning platform, consolidating Google Cloud's ML services into a single, cohesive environment. It provides tools for the entire ML workflow, from data preparation and model training to deployment and monitoring.

What is Vertex AI?

Vertex AI is Google Cloud's next-generation ML platform, launched in 2021 as the successor to AI Platform (formerly Cloud ML Engine). It unifies AutoML and custom model training under one roof, providing a streamlined experience for data scientists and ML engineers.

Key Insight: Vertex AI combines the simplicity of AutoML (no-code ML) with the flexibility of custom training (bring your own code). This means both beginners and experienced ML practitioners can use the same platform effectively.

Vertex AI vs AI Platform

If you have used Google Cloud's previous ML offerings, here is how Vertex AI compares:

| Feature | AI Platform (Legacy) | Vertex AI |
| --- | --- | --- |
| AutoML | Separate products (AutoML Vision, Tables, etc.) | Unified AutoML under one service |
| Custom Training | AI Platform Training | Integrated custom training with pre-built containers |
| Prediction | AI Platform Prediction | Unified endpoints with traffic splitting |
| Pipelines | AI Platform Pipelines (Kubeflow) | Vertex AI Pipelines (managed Kubeflow + TFX) |
| Feature Store | Not available | Built-in Feature Store |
| Model Registry | Basic model versioning | Full Model Registry with lineage |
| Experiments | Limited tracking | Vertex AI Experiments with TensorBoard |

Core Components

Datasets

Vertex AI Datasets provide managed storage for your training data. You can import tabular, image, text, and video data, then use it across AutoML and custom training jobs.
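
When you import data into a managed dataset, Vertex AI can also handle the train/validation/test split for you (AutoML's default split is roughly 80/10/10). As a rough mental model of what a managed split does, here is a minimal sketch in plain Python; the function name and fractions are illustrative, not the Vertex AI SDK:

```python
import random

def split_dataset(rows, fractions=(0.8, 0.1, 0.1), seed=42):
    """Shuffle rows and split into train/validation/test by fraction.

    Illustrative only: mimics the kind of managed split Vertex AI
    applies on import; not an SDK call.
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * fractions[0])
    n_val = int(len(shuffled) * fractions[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```

In the real service you can accept this default split or assign rows to splits explicitly at import time.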

Training

Choose between AutoML (automatic model architecture search and hyperparameter tuning) or Custom Training (bring your own code with TensorFlow, PyTorch, scikit-learn, or XGBoost). Custom training supports GPUs, TPUs, and distributed training.
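
To make the AutoML vs. custom-training distinction concrete, here is a toy sketch of the kind of hyperparameter sweep AutoML automates for you. The candidate grid and scoring function are invented for illustration; a real job would train and evaluate a model for each combination:

```python
from itertools import product

def evaluate(learning_rate, depth):
    """Stand-in validation score; peaks at lr=0.1, depth=6 by construction."""
    return -((learning_rate - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

def grid_search(param_grid):
    """Try every hyperparameter combination and keep the best-scoring one."""
    best_params, best_score = None, float("-inf")
    for lr, depth in product(param_grid["learning_rate"], param_grid["depth"]):
        score = evaluate(lr, depth)
        if score > best_score:
            best_params, best_score = {"learning_rate": lr, "depth": depth}, score
    return best_params, best_score

params, score = grid_search({"learning_rate": [0.01, 0.1, 0.3], "depth": [3, 6, 9]})
print(params)  # {'learning_rate': 0.1, 'depth': 6}
```

With AutoML this search (plus architecture selection) happens behind the scenes; with custom training you write the loop yourself or hand it to Vertex AI's hyperparameter tuning service.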

Model Registry

The Model Registry stores trained models with version tracking, metadata, and lineage information. It integrates with model evaluation and deployment workflows.
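
The core idea can be sketched in a few lines of plain Python: each model name maps to an ordered list of versions, and each version carries its artifact location plus lineage metadata. This is a conceptual toy, not the Vertex AI SDK:

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    version: int
    artifact_uri: str
    metadata: dict = field(default_factory=dict)  # e.g. source dataset, training job

class ModelRegistry:
    """Toy in-memory registry: one ordered version list per model name."""

    def __init__(self):
        self._models = {}

    def register(self, name, artifact_uri, **metadata):
        versions = self._models.setdefault(name, [])
        version = ModelVersion(len(versions) + 1, artifact_uri, metadata)
        versions.append(version)
        return version

    def latest(self, name):
        return self._models[name][-1]

registry = ModelRegistry()
registry.register("churn", "gs://my-bucket/churn/v1", dataset="customers_2023")
registry.register("churn", "gs://my-bucket/churn/v2", dataset="customers_2024")
print(registry.latest("churn").version)  # 2
```

The managed registry adds what this sketch omits: evaluation results attached to versions, aliases like "default", and direct hand-off to endpoint deployment.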

Endpoints

Deploy models to managed endpoints for real-time predictions. Endpoints support auto-scaling, traffic splitting between model versions, and A/B testing.
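
Traffic splitting simply means each incoming request is routed to one of the deployed model versions in proportion to its assigned percentage. A minimal sketch of that routing behavior in plain Python (illustrative, not the SDK):

```python
import random

def route(traffic_split, rng):
    """Pick a deployed model version according to its traffic percentage."""
    versions = list(traffic_split)
    weights = [traffic_split[v] for v in versions]
    return rng.choices(versions, weights=weights, k=1)[0]

# A 90/10 split, as you might configure on an endpoint for an A/B test.
split = {"model_v1": 90, "model_v2": 10}
rng = random.Random(0)
counts = {"model_v1": 0, "model_v2": 0}
for _ in range(10_000):
    counts[route(split, rng)] += 1
print(counts["model_v1"] > counts["model_v2"])  # True
```

On a real endpoint you set these percentages at deploy time, then shift traffic gradually toward the new version once it proves itself.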

Pipelines

Orchestrate ML workflows using Vertex AI Pipelines, which supports both Kubeflow Pipelines and TFX. Automate data processing, training, evaluation, and deployment.
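
At its simplest, a pipeline is a sequence of steps where each step's output artifact feeds the next. Here is a toy linear version in plain Python; real Vertex AI Pipelines run a full DAG of containerized components with caching and lineage tracking, and the step functions below are invented stand-ins:

```python
def run_pipeline(steps):
    """Execute steps in order, passing each step's output to the next."""
    artifact = None
    for name, fn in steps:
        artifact = fn(artifact)
        print(f"step {name} done")
    return artifact

pipeline = [
    ("ingest",     lambda _: [3, 1, 2]),                          # pretend data load
    ("preprocess", lambda data: sorted(data)),                    # pretend cleaning
    ("train",      lambda data: {"model": "stub", "n": len(data)}),
    ("evaluate",   lambda model: {**model, "accuracy": 0.9}),
]
result = run_pipeline(pipeline)
print(result)  # {'model': 'stub', 'n': 3, 'accuracy': 0.9}
```

The same ingest-preprocess-train-evaluate shape, written as Kubeflow Pipelines or TFX components, is what you hand to Vertex AI Pipelines to run on a schedule or trigger.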

Feature Store

A centralized repository for organizing, storing, and serving ML features. Feature Store ensures consistency between training and serving, and supports both batch and online serving.
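
The consistency guarantee is the key point: training and serving read features through the same lookup path, so a model never sees one definition of a feature offline and another online. A toy in-memory sketch of an online feature lookup (conceptual only, not the SDK):

```python
class FeatureStore:
    """Toy online feature store keyed by entity id.

    Training and serving both read through the same lookup, which is
    the consistency property a real feature store provides.
    """

    def __init__(self):
        self._features = {}  # entity_id -> {feature_name: value}

    def ingest(self, entity_id, **features):
        self._features.setdefault(entity_id, {}).update(features)

    def read(self, entity_id, feature_names):
        row = self._features[entity_id]
        return {name: row[name] for name in feature_names}

store = FeatureStore()
store.ingest("user_42", avg_order_value=31.5, orders_30d=4)

# Same lookup path at training time and at serving time:
training_row = store.read("user_42", ["avg_order_value", "orders_30d"])
serving_row = store.read("user_42", ["avg_order_value", "orders_30d"])
print(training_row == serving_row)  # True
```

The managed service adds batch serving for building training sets and low-latency online serving for prediction, both backed by the same feature definitions.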

Supported ML Frameworks

| Framework | Pre-built Container | Custom Container |
| --- | --- | --- |
| TensorFlow | Yes (1.x and 2.x) | Yes |
| PyTorch | Yes | Yes |
| scikit-learn | Yes | Yes |
| XGBoost | Yes | Yes |
| JAX | Yes | Yes |
| Custom | N/A | Yes (any framework via Docker) |

Use Cases

  • Computer Vision: Image classification, object detection, and video analysis using AutoML Vision or custom models
  • Natural Language Processing: Text classification, entity extraction, and sentiment analysis
  • Tabular Data: Classification, regression, and forecasting with AutoML Tables
  • Generative AI: Access foundation models through Model Garden, including PaLM 2 and Gemini
  • Recommendation Systems: Build and deploy personalized recommendation engines

Note: Vertex AI now includes Vertex AI Studio and Model Garden for accessing and fine-tuning foundation models such as Gemini, PaLM 2, Imagen, and Codey. This makes it a comprehensive platform for both traditional ML and generative AI.

Ready to Get Started?

In the next lesson, you will set up your Google Cloud project, enable the Vertex AI APIs, and configure your development environment.

Next: Setup →