Exploring the GRIN Lens Design Space
https://www.thorlabs.com
GRIN Lenses have many more degrees of freedom than traditional lenses.
What sort of effects can we achieve in this space? What are the limitations?
What can GRIN Lenses do that traditional refractive lenses cannot? And vice versa?
Pretty ill-defined question, so instead...
We know some closed-form designs (Luneburg, Maxwell fisheye, etc.), but what about other objectives?
Assuming our desired inputs, what refractive field generates the desired output?
(We will ignore wavelength)
The cost objective can be anything, as long as it is a function of \(x\) and \(v\)
[Pipeline: Initial Conditions \(x_o, v_o\) → Refractive Field \(\eta\) → Eikonal Tracer → Cost Objective → Backprop → Gradient]
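To make the pipeline concrete, here is a minimal differentiable tracer sketch in PyTorch. The index profile \(n(x) = 1 + a\,e^{-|x|^2/w^2}\), the step count, and the toy cost are all illustrative assumptions, not the actual setup: the point is only that backprop through the integration yields a gradient with respect to an index-field parameter.

```python
import torch

# Symplectic Euler integration of the ray equation d/ds(n dx/ds) = ∇n,
# with momentum p = n * dx/ds, for a hypothetical Gaussian index bump.
def trace(x0, v0, a, w, ds=0.01, steps=400):
    x, p = x0.clone(), v0.clone()
    for _ in range(steps):
        r2 = (x * x).sum(-1, keepdim=True)
        bump = a * torch.exp(-r2 / w ** 2)
        grad_n = bump * (-2.0 * x / w ** 2)   # analytic ∇n
        p = p + ds * grad_n                   # dp/ds = ∇n
        x = x + ds * p / (1.0 + bump)         # dx/ds = p/n
    return x, p

a = torch.tensor(0.3, requires_grad=True)     # one index-field parameter
w = torch.tensor(0.5)
x0 = torch.tensor([[-1.5, 0.2]])
v0 = torch.tensor([[1.0, 0.0]])
xT, _ = trace(x0, v0, a, w)
loss = (xT[:, 1] ** 2).sum()                  # toy cost: land on the axis
loss.backward()                               # gradient w.r.t. a via backprop
```

With \(a = 0\) the medium is homogeneous and the rays travel in straight lines, which is a quick sanity check on the integrator.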
An example is an image formation model, where each ray carries constant irradiance
The goal is to have multiple images based on direction
[Figure: collimated sources illuminate the volume \(\eta\); sensors record the images. Ground truth vs. optimization at iterations 0, 20, and 300.]
PyTorch Implementation:
Every integration step of size \(\Delta s\) is a "layer" in the graph
Each time step in the simulation adds to the computation graph.
Halving the step size effectively doubles the computation graph!
Too large a step size, though, means the volume itself will be undersampled.
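One standard mitigation for the graph-memory blowup (not necessarily what was used here) is gradient checkpointing: each step's intermediate activations are recomputed on the backward pass instead of being stored. A sketch with toy stand-in dynamics:

```python
import torch
from torch.utils.checkpoint import checkpoint

def step(x, v, a):
    # toy dynamics standing in for one Δs integration step
    v = v + 0.01 * (-a * x)
    x = x + 0.01 * v
    return x, v

a = torch.tensor(1.0, requires_grad=True)
x = torch.tensor([1.0, 0.0])
v = torch.tensor([0.0, 1.0])
for _ in range(100):
    # checkpoint stores only the step inputs; activations inside
    # each step are recomputed during backward, not kept in the graph
    x, v = checkpoint(step, x, v, a, use_reentrant=False)
(x * x).sum().backward()   # a.grad still flows through all 100 steps
```

This trades roughly one extra forward evaluation per step for a much smaller live graph.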
The adjoint state method: a way to compute the derivative of interest via a set of auxiliary differential equations
Calculating the adjoint state \(\mu\) gives us the value needed for the derivative of interest
With this value, we can use a gradient-based optimizer like Adam, SGD, BFGS, etc.
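The optimizer loop itself is ordinary PyTorch. In this sketch, `params` is a hypothetical coarse grid parameterizing \(\eta\), and `trace_loss` is a stand-in for tracer-plus-cost; any function returning a differentiable scalar slots in the same way:

```python
import torch

params = torch.randn(16, 16, requires_grad=True)  # e.g. a coarse η grid

def trace_loss(p):
    # stand-in for "trace rays through η, evaluate the cost objective":
    # here, just pull the grid toward an arbitrary target
    target = torch.ones_like(p)
    return ((p - target) ** 2).mean()

opt = torch.optim.Adam([params], lr=0.05)
losses = []
for it in range(200):
    opt.zero_grad()
    loss = trace_loss(params)
    loss.backward()      # gradient from backprop (or the adjoint method)
    opt.step()
    losses.append(loss.item())
```

Swapping Adam for SGD or LBFGS only changes the `torch.optim` constructor.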
\(\mu\) can be thought of as the error vector at the boundary being parallel transported back through the volume
We still need the path for the adjoint state, since it is defined along the path.
Naively, this means that we have to save the whole path, which has the same problem as before.
The optical paths of the rays are symplectic, i.e. reversible: instead of storing the forward path, we can re-trace it backwards while integrating the adjoint state.
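Here is a self-contained sketch of that reversible-adjoint idea, again assuming the hypothetical Gaussian profile \(n(x) = 1 + a\,e^{-|x|^2/w^2}\). A leapfrog step of \(x'' = \nabla(n^2/2)\) is exactly invertible by negating the step size, so the backward pass reconstructs each previous state for free and transports \(\mu\) back one step at a time, never storing the path:

```python
import torch

torch.set_default_dtype(torch.float64)  # high precision for exact reversal

def F(x, a, w):
    # ∇(n²/2) for the hypothetical profile n(x) = 1 + a·exp(-|x|²/w²)
    bump = a * torch.exp(-(x * x).sum(-1, keepdim=True) / w ** 2)
    return (1.0 + bump) * bump * (-2.0 * x / w ** 2)

def leapfrog(x, p, a, w, h):
    # time-reversible step of x'' = ∇(n²/2): leapfrog(..., -h) is its exact inverse
    p = p + 0.5 * h * F(x, a, w)
    x = x + h * p
    p = p + 0.5 * h * F(x, a, w)
    return x, p

def adjoint_grad(x0, p0, a, w, h, steps, loss_fn):
    with torch.no_grad():                          # forward trace, no graph:
        x, p = x0.clone(), p0.clone()              # keep only the boundary state
        for _ in range(steps):
            x, p = leapfrog(x, p, a, w, h)
    xT, pT = x.requires_grad_(True), p.requires_grad_(True)
    mu_x, mu_p = torch.autograd.grad(loss_fn(xT, pT), (xT, pT))  # boundary adjoint μ
    grad_a = torch.zeros_like(a)
    x, p = xT.detach(), pT.detach()
    for _ in range(steps):
        with torch.no_grad():                      # reversibility recovers the
            x, p = leapfrog(x, p, a, w, -h)        # previous state for free
        xg = x.detach().requires_grad_(True)
        pg = p.detach().requires_grad_(True)
        xn, pn = leapfrog(xg, pg, a, w, h)         # re-do one step with a graph
        gx, gp, ga = torch.autograd.grad((xn, pn), (xg, pg, a),
                                         grad_outputs=(mu_x, mu_p))
        mu_x, mu_p, grad_a = gx, gp, grad_a + ga   # μ transported back one step
    return grad_a

a, w = torch.tensor(0.3, requires_grad=True), torch.tensor(0.5)
x0, p0 = torch.tensor([[-1.5, 0.2]]), torch.tensor([[1.0, 0.0]])
cost = lambda x, p: (x[:, 1] ** 2).sum() + 0.01 * (p[:, 1] ** 2).sum()
g = adjoint_grad(x0, p0, a, w, h=0.01, steps=300, loss_fn=cost)
```

Because each backward step is the exact vector-Jacobian product of the corresponding forward step, this should agree with full autodiff to numerical precision while keeping memory constant in the number of steps.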
[Figure: ground truth vs. autodiff vs. adjoint reconstructions, comparing the AD implementation against the adjoint implementation.]
AD can also achieve these results; it just takes much longer and uses more memory.
[Figure: adjoint tracer setup — collimated source, volume \(\eta\), near-field and far-field sensors; optimization results vs. ground truth for the near and far fields.]
Same near–far setup, but with a geometric cost objective instead
[Figure: desired image converted to a signed distance field; near-field and far-field results vs. ground truth, along with the optimized volume.]
The energy distribution isn't uniform; uniformity isn't a constraint in the optimization.
As long as the ray reaches the circle, the loss will go to zero.
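A hinged signed-distance loss has exactly that behavior. As a sketch, assuming a circular target region (function names hypothetical):

```python
import torch

def circle_sdf(pts, center, radius):
    # signed distance to a circle: negative inside, positive outside
    return (pts - center).norm(dim=-1) - radius

def geometric_loss(hits, center, radius):
    # hinge on the SDF: any ray landing inside the target contributes zero,
    # so nothing pushes the rays toward a uniform distribution within it
    return torch.relu(circle_sdf(hits, center, radius)).pow(2).mean()

center, radius = torch.zeros(2), 0.5
print(geometric_loss(torch.tensor([[0.1, 0.2]]), center, radius))  # inside → 0
```

A general desired image would replace `circle_sdf` with a precomputed SDF grid sampled at the ray hit points.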
The Luneburg lens can be defined by a geometric property rather than an index profile.
It works! We can match the actual Luneburg profile with just the geometric description.
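For reference, the closed-form profile the optimization should recover is the classical Luneburg index,

\[
\eta(r) = \sqrt{2 - \left(\tfrac{r}{R}\right)^{2}}, \qquad r \le R,
\]

whose defining geometric property is that every collimated beam is focused to a point on the opposite rim of the sphere.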
What sort of things would be cool to see?
What other experiments should I try?