Flax Linen (Legacy API)
A practical guide to Flax Linen (the legacy API) within the Flax (JAX neural networks) topic.
What This Lesson Covers
Flax Linen (Legacy API) is a key topic within Flax (JAX Neural Networks). In this lesson you will learn what it is, why it matters, the mechanics behind it, and the patterns experienced engineers use in production. By the end you will be able to apply Flax Linen in real systems with confidence.
This lesson belongs to the Deep Learning Frameworks category of the AI Frameworks track. The right framework choice compounds across every project — pick well at the start, you ship faster forever after; pick poorly, and you fight your tools every release.
Why It Matters
Master Flax, the most popular JAX neural-network library. Learn the NNX (new) and Linen (legacy) APIs, training loops, and porting PyTorch models to Flax.
The reason Flax Linen deserves dedicated attention is that the difference between productive use and constant friction usually comes down to a small number of design decisions made at the start. Two teams using the same framework can ship at very different speeds based on how well they execute on this technique. Understanding the underlying mechanics — not just memorizing the API — is what lets you adapt when the documented patterns do not fit your problem.
How It Works in Practice
Below is a worked example showing how to apply Flax Linen in real code. Read through it once, then experiment by changing the parameters and observing the effect.
import jax
import jax.numpy as jnp
from flax import linen as nn
import optax

class MLP(nn.Module):
    hidden: int
    out_dim: int

    @nn.compact
    def __call__(self, x):
        x = jax.nn.gelu(nn.Dense(self.hidden)(x))
        return nn.Dense(self.out_dim)(x)

# Linen separates module definition from parameter state:
# init builds the parameter pytree, apply runs the forward pass with it.
model = MLP(hidden=512, out_dim=10)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 784)))
tx = optax.adamw(3e-4)
opt_state = tx.init(params)

@jax.jit
def train_step(params, opt_state, x, y):
    def loss_fn(p):
        logits = model.apply(p, x)
        return optax.softmax_cross_entropy_with_integer_labels(logits, y).mean()
    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = tx.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss
Step-by-Step Walkthrough
- Set up your environment — Install the framework with the right extras (often [gpu], [all], or framework-specific). Pin versions; framework breakage between versions is a top source of debugging pain.
- Read the framework's idioms — Every framework has a "blessed path" and a "fight the framework" path. The first 90% is much easier on the blessed path. Learn the idioms before trying to be clever.
- Write a tiny end-to-end example first — Get the smallest possible thing working before scaling up. End-to-end at small scale catches integration issues that unit tests miss.
- Profile before you optimize — Built-in profilers (PyTorch profiler, JAX trace, MLflow autolog) cost almost nothing to enable and save hours of guessing.
- Iterate one variable at a time — When tuning, change one thing, measure, repeat. Five simultaneous changes leave you guessing which one mattered.
When To Use It (and When Not To)
Flax Linen (Legacy API) is the right tool when:
- The use case fits the framework's strengths (read the design docs to verify)
- You can commit to the framework's idioms rather than fighting them
- The team will live with the framework's release cadence and breakage
- The added power outweighs the added complexity over the project's lifetime
It is the wrong tool when:
- A simpler approach (or simpler framework) already meets your needs
- The use case is at odds with the framework's design
- The framework's release cadence will outpace your maintenance bandwidth
- You are still iterating on requirements — pick the framework after you know the shape of the problem
Production Checklist
- Are framework versions pinned with exact constraints in requirements?
- Are upgrade paths tested in staging before promoting to production?
- Is profiling and tracing enabled (and the data actually reviewed)?
- Do you have integration tests that exercise the framework, not just unit tests of your code?
- Is there a rollback path if a framework upgrade introduces regressions?
- Have you load-tested at 2-3x your projected peak to find the breaking point?
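For the version-pinning item, exact constraints mean a requirements file like the sketch below. The version numbers are illustrative placeholders, not recommendations — pin whatever versions you have actually tested together, including jaxlib, which must match jax.

```text
flax==0.8.5
jax==0.4.28
jaxlib==0.4.28
optax==0.2.2
```

Loose constraints such as `flax>=0.8` are where "works on my machine" upgrades slip into production unreviewed.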
Next Steps
The other lessons in Flax (JAX Neural Networks) build directly on this one. Once you are comfortable with Flax Linen, the natural next step is to combine it with the patterns in the surrounding lessons — that is where compound returns kick in. Framework skills are most useful as a system, not as isolated tricks.
Lilly Tech Systems