Gradio Interface
gr.Interface() is the simplest way to build a Gradio app. Define a function, specify inputs and outputs, and Gradio generates the entire UI automatically.
Basic Interface
The gr.Interface class takes three core arguments: the function to wrap, the input components, and the output components.
```python
import gradio as gr

def classify_sentiment(text):
    # Your ML model logic here
    if "good" in text.lower():
        return {"Positive": 0.9, "Negative": 0.1}
    return {"Positive": 0.3, "Negative": 0.7}

demo = gr.Interface(
    fn=classify_sentiment,
    inputs=gr.Textbox(label="Enter text", placeholder="Type something..."),
    outputs=gr.Label(num_top_classes=2),
    title="Sentiment Classifier",
    description="Analyze the sentiment of your text.",
    examples=[["This product is great!"], ["Terrible experience."]],
)

demo.launch()
```
Input Components
Gradio provides 30+ built-in components for different data types:
| Component | Python Type | Use Case |
|---|---|---|
| `gr.Textbox` | `str` | Text input, prompts, queries |
| `gr.Number` | `float` | Numeric parameters |
| `gr.Slider` | `float` | Range selection (temperature, etc.) |
| `gr.Checkbox` | `bool` | Toggle options |
| `gr.Dropdown` | `str` | Select from options |
| `gr.Image` | `PIL.Image` / `numpy` | Image upload or webcam |
| `gr.Audio` | `tuple(sr, data)` | Audio upload or microphone |
| `gr.Video` | `str` (filepath) | Video upload |
| `gr.File` | `str` (filepath) | Any file upload |
| `gr.Dataframe` | `pandas.DataFrame` | Tabular data input |
Image Classification Example
```python
import gradio as gr
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

def classify_image(image):
    results = classifier(image)
    return {r["label"]: r["score"] for r in results}

demo = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="pil", label="Upload Image"),
    outputs=gr.Label(num_top_classes=5),
    title="Image Classifier",
)

demo.launch()
```
Multiple Inputs and Outputs
```python
def process(text, language, temperature):
    translated = translate(text, language)
    audio = text_to_speech(translated)
    return translated, audio

demo = gr.Interface(
    fn=process,
    inputs=[
        gr.Textbox(label="Input Text"),
        gr.Dropdown(["French", "Spanish", "German"], label="Language"),
        gr.Slider(0, 1, value=0.7, label="Temperature"),
    ],
    outputs=[
        gr.Textbox(label="Translation"),
        gr.Audio(label="Audio Output"),
    ],
)
```
Audio and Video
```python
# Speech-to-text demo
def transcribe(audio):
    sr, data = audio  # gr.Audio passes (sample_rate, numpy array)
    result = whisper_model.transcribe(data)
    return result["text"]

demo = gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(sources=["microphone", "upload"], label="Speak or Upload"),
    outputs=gr.Textbox(label="Transcription"),
)
```
Examples and Flagging
```python
demo = gr.Interface(
    fn=my_function,
    inputs="text",
    outputs="text",
    examples=[
        ["Hello, how are you?"],
        ["Explain quantum computing"],
        ["Write a Python function"],
    ],
    cache_examples=True,          # Pre-compute examples
    allow_flagging="manual",      # "never", "manual", "auto"
)
```
Pro tip: Use cache_examples=True to pre-compute example outputs. This makes the demo load faster and shows users what to expect.

What's Next?
Interface is great for quick demos, but for custom layouts, multi-step workflows, and advanced event handling, you need gr.Blocks(). That's what we cover next.
Lilly Tech Systems