Enhancements & Deployment
Take your AI dashboard to production with auto-refresh, Streamlit Cloud deployment, authentication, performance optimization, and answers to common questions.
Enhancement 1: Auto-Refresh
A monitoring dashboard should update automatically without requiring manual page refreshes. The community streamlit-autorefresh component provides this:
# Add to app.py for auto-refresh
import streamlit as st
from streamlit_autorefresh import st_autorefresh

# Auto-refresh every 60 seconds (60000 milliseconds).
# The component returns the number of times it has refreshed.
refresh_count = st_autorefresh(interval=60000, limit=None, key="data_refresh")

# Alternative: manual refresh button alongside auto-refresh
col1, col2 = st.columns([3, 1])
with col2:
    if st.button("Refresh Now"):
        st.cache_data.clear()
        st.rerun()
Install the auto-refresh component:
pip install streamlit-autorefresh
Alternatively, add st.cache_data(ttl=60) to your data-fetching functions and the data will refresh automatically when the cache expires on the next user interaction.
Enhancement 2: Streamlit Cloud Deployment
Streamlit Cloud provides free hosting for public apps and affordable hosting for private apps. Here is how to deploy:
Step 1: Prepare requirements.txt
# requirements.txt
streamlit==1.32.0
plotly==5.18.0
pandas==2.2.0
numpy==1.26.0
scipy==1.12.0
psycopg2-binary==2.9.9
prometheus-api-client==0.5.3
boto3==1.34.0
python-dotenv==1.0.0
streamlit-autorefresh==1.0.1
Step 2: Push to GitHub
git init
git add .
git commit -m "Initial commit: AI monitoring dashboard"
git remote add origin https://github.com/your-org/ai-dashboard.git
git push -u origin main
Step 3: Deploy on Streamlit Cloud
- Go to share.streamlit.io
- Click New app
- Select your GitHub repository and branch
- Set the main file path to app.py
- Add your environment variables in Advanced settings → Secrets
- Click Deploy
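For local development, the same secrets can live in a .streamlit/secrets.toml file, which must stay out of version control; Streamlit Cloud reads equivalent keys from the Secrets UI. A sketch with placeholder values (the hash and webhook URL below are illustrative, not real):

```toml
# .streamlit/secrets.toml  (add this file to .gitignore)
DASHBOARD_PASSWORD_HASH = "paste-your-sha256-hash-here"
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
```

Keys defined here are read in code as st.secrets["DASHBOARD_PASSWORD_HASH"], identically in local and deployed environments.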
Never commit .env files to version control. On Streamlit Cloud, add secrets in the dashboard UI under Settings → Secrets. They are available at st.secrets["KEY_NAME"] or as environment variables.
Enhancement 3: Authentication
Protect your dashboard from unauthorized access. Streamlit Cloud supports native authentication, or you can add a simple password gate:
# utils/auth.py
import streamlit as st
import hashlib
import hmac

def check_password():
    """
    Simple password authentication for the dashboard.
    Returns True if the user is authenticated.
    """
    def password_entered():
        entered = st.session_state.get("password", "")
        hashed = hashlib.sha256(entered.encode()).hexdigest()
        expected = st.secrets.get("DASHBOARD_PASSWORD_HASH", "")
        # Constant-time comparison to avoid leaking information via timing
        if expected and hmac.compare_digest(hashed, expected):
            st.session_state["authenticated"] = True
            del st.session_state["password"]
        else:
            st.session_state["authenticated"] = False

    if st.session_state.get("authenticated"):
        return True

    st.title("AI Model Dashboard")
    st.text_input("Password", type="password", key="password", on_change=password_entered)
    if st.session_state.get("authenticated") is False:
        st.error("Incorrect password")
    return False

# Usage in app.py:
# from utils.auth import check_password
#
# if not check_password():
#     st.stop()
# main()  # Only runs if authenticated
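To produce the value stored in st.secrets["DASHBOARD_PASSWORD_HASH"], you can use a one-off helper like this (not part of the dashboard itself): run it locally, paste the output into your secrets, and never store the plain password anywhere.

```python
import hashlib

def hash_password(password: str) -> str:
    """Return the hex SHA-256 digest expected by check_password()."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Run locally, then copy the printed hash into your Streamlit secrets
print(hash_password("my-dashboard-password"))
```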
For production teams, consider Streamlit Cloud Teams, which provides SSO with Google, GitHub, or SAML authentication out of the box.
Enhancement 4: Performance Optimization
Large datasets can make Streamlit dashboards slow. Apply these optimizations:
# Performance tips for Streamlit dashboards

# 1. Cache aggressively with TTL
@st.cache_data(ttl=300)  # 5-minute cache
def get_model_metrics(days=30):
    return fetch_from_database(days)

# 2. Use st.cache_resource for expensive connections
@st.cache_resource
def get_db_connection():
    return psycopg2.connect(**config)

# 3. Limit data volume in queries
# Bad:  SELECT * FROM metrics WHERE date > '2025-01-01'
# Good: SELECT date, model, accuracy FROM metrics WHERE date > '2025-01-01' LIMIT 10000

# 4. Use Plotly's WebGL renderer for large datasets
fig = px.scatter(df, x="x", y="y", render_mode="webgl")  # much faster for many points

# 5. Lazy-load views with st.fragment (Streamlit 1.33+)
@st.fragment
def expensive_chart():
    data = compute_heavy_aggregation()
    st.plotly_chart(create_chart(data))

# 6. Compress DataFrames before caching
def optimize_dtypes(df):
    for col in df.select_dtypes(include=["float64"]).columns:
        df[col] = df[col].astype("float32")
    for col in df.select_dtypes(include=["int64"]).columns:
        if df[col].abs().max() < 2**31:  # ensure values (incl. negatives) fit in int32
            df[col] = df[col].astype("int32")
    return df
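The dtype-compression helper roughly halves the memory footprint of numeric columns, which matters when large frames sit in st.cache_data. A self-contained run (the helper is re-defined here, with an abs() guard so large negative integers are not narrowed unsafely):

```python
import numpy as np
import pandas as pd

def optimize_dtypes(df):
    """Downcast float64 -> float32 and int64 -> int32 where values fit."""
    for col in df.select_dtypes(include=["float64"]).columns:
        df[col] = df[col].astype("float32")
    for col in df.select_dtypes(include=["int64"]).columns:
        if df[col].abs().max() < 2**31:
            df[col] = df[col].astype("int32")
    return df

# Illustrative frame: 10k rows of request counts and latencies
df = pd.DataFrame({
    "requests": np.arange(10_000, dtype="int64"),
    "latency_ms": np.random.rand(10_000) * 100,
})
before = df.memory_usage(deep=True).sum()
df = optimize_dtypes(df)
after = df.memory_usage(deep=True).sum()
```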
Enhancement 5: Alerting Integration
Send alerts to Slack or email when drift or latency thresholds are crossed:
# utils/alerts.py
import os
import json
import urllib.request

def send_slack_alert(message, severity="warning"):
    """Send an alert to a Slack webhook."""
    webhook_url = os.getenv("SLACK_WEBHOOK_URL")
    if not webhook_url:
        return

    color_map = {
        "critical": "#ef4444",
        "warning": "#f59e0b",
        "info": "#6366f1",
    }
    payload = {
        "attachments": [{
            "color": color_map.get(severity, "#6366f1"),
            "title": f"ML Dashboard Alert ({severity.upper()})",
            "text": message,
            "footer": "AI Model Dashboard",
        }]
    }
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)  # fail fast if Slack is unreachable

def check_and_alert(metrics_df, cost_df, reference_df, current_df, feature_names):
    """Run all alert checks and send notifications."""
    from utils.stats import calculate_psi, get_drift_severity

    alerts = []

    # Check accuracy drop
    latest = metrics_df[metrics_df["date"] == metrics_df["date"].max()]
    for _, row in latest.iterrows():
        if row["accuracy"] < 0.85:
            alerts.append(
                f"Model {row['model']} accuracy dropped to {row['accuracy']:.4f}"
            )

    # Check drift
    for feat in feature_names:
        psi = calculate_psi(reference_df[feat].values, current_df[feat].values)
        severity, _ = get_drift_severity(psi, 0.05)
        if severity == "critical":
            alerts.append(f"Critical drift detected in {feat} (PSI={psi:.4f})")

    # Check latency SLA
    if cost_df is not None:
        p99 = cost_df["inference_latency_ms"].quantile(0.99)
        if p99 > 200:
            alerts.append(f"P99 latency {p99:.1f}ms exceeds 200ms SLA")

    # Send combined alert
    if alerts:
        message = "\n".join(f"- {a}" for a in alerts)
        severity = "critical" if any("Critical" in a or "dropped" in a for a in alerts) else "warning"
        send_slack_alert(message, severity)
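One caveat when combining alerting with auto-refresh: the dashboard reruns check_and_alert on every refresh cycle, so an unresolved condition would ping Slack every 60 seconds. A hypothetical cooldown guard (not part of utils/alerts.py above) sketches one way to deduplicate:

```python
import time

# Remembers when each alert key last fired (per-process state)
_last_sent = {}

def should_send(alert_key, cooldown_s=3600, now=None):
    """Return True only if this alert has not fired within the cooldown window."""
    now = time.time() if now is None else now
    last = _last_sent.get(alert_key)
    if last is not None and now - last < cooldown_s:
        return False
    _last_sent[alert_key] = now
    return True
```

In check_and_alert, each message could be filtered with should_send(message) before being added to the combined Slack payload, so a persistent SLA breach alerts once per hour instead of once per refresh.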
Frequently Asked Questions
Can I use this with models served by different platforms?
Yes. The connector architecture is platform-agnostic. Prometheus works with TFServing, Triton, Seldon, and BentoML. PostgreSQL works with any platform that logs predictions. You can also add custom connectors for MLflow, Weights and Biases, or any REST API.
How many models can this dashboard handle?
Streamlit handles dozens of models easily. For 100+ models, add pagination to the model selector and use aggregated views (e.g., show only models with drift alerts). The main bottleneck is chart rendering, not data processing.
Can I embed this in an existing web application?
Yes. Streamlit apps can be embedded via iframes. Use st.set_page_config(layout="wide") and remove the sidebar for embedded mode. Alternatively, export the Plotly charts as standalone HTML and embed those directly.
How do I handle real-time streaming data?
Streamlit is not designed for sub-second real-time updates. For real-time needs, use the auto-refresh at 10-30 second intervals, or combine Streamlit with a WebSocket layer. For true real-time dashboards, consider Grafana with Prometheus or a custom React/D3 frontend.
What is the difference between PSI and KS test?
PSI (Population Stability Index) measures overall distribution shift magnitude, good for business reporting. KS test (Kolmogorov-Smirnov) gives a statistical p-value, better for automated decision-making. Use both: PSI for severity and KS for statistical significance.
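The contrast can be seen on synthetic data. This is a minimal sketch with a generic PSI implementation (binned on the reference distribution, clipped to avoid log(0)); it is not necessarily identical to the calculate_psi in utils/stats:

```python
import numpy as np
from scipy import stats

def calculate_psi(reference, current, bins=10):
    """Population Stability Index between two samples, binned on the reference."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero / log(0) in empty bins
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(42)
ref = rng.normal(0, 1, 5000)       # reference window
same = rng.normal(0, 1, 5000)      # same distribution
shifted = rng.normal(0.5, 1, 5000) # mean shifted by 0.5 sigma

psi_same = calculate_psi(ref, same)        # small: distributions match
psi_shift = calculate_psi(ref, shifted)    # large: magnitude of the shift
p_shift = stats.ks_2samp(ref, shifted).pvalue  # tiny: shift is significant
```

PSI reports how big the shift is (useful for severity tiers in a dashboard), while the KS p-value reports how confident you can be that any shift exists at all.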
How much does Streamlit Cloud cost?
The free Community tier supports unlimited public apps with 1GB of memory. At the time of writing, Teams costs $250/month and adds private apps, SSO, and viewer authentication. For self-hosting, run Streamlit on any server with Python; it is free and open source.
Can I add custom themes and branding?
Yes. Use .streamlit/config.toml for theme colors and fonts. Inject custom CSS with st.markdown(unsafe_allow_html=True) for fine-grained control. You can also add a company logo with st.sidebar.image("logo.png").
Project Recap
Over these 7 lessons, you built a complete, production-ready ML monitoring dashboard:
Lesson 1: Project Setup
Architecture, Streamlit configuration, tech stack, project scaffolding, and mock data generator.
Lesson 2: Data Connectors
Prometheus, PostgreSQL, and S3 connectors with a unified data manager and automatic fallback.
Lesson 3: Model Performance
Accuracy trends, confusion matrix heatmaps, feature importance charts, and multi-metric comparison.
Lesson 4: Drift Monitoring
PSI and KS tests, distribution overlays, drift heatmaps, and per-feature deep dives.
Lesson 5: Cost Tracking
GPU utilization gauges, API cost trends, latency percentiles, and SLA compliance tracking.
Lesson 6: Interactive Features
Global filters, date ranges, model comparison, CSV/JSON export, and drill-down navigation.
Lesson 7: Enhancements
Auto-refresh, Streamlit Cloud deployment, authentication, performance tips, and alerting.
Lilly Tech Systems