
supporting pytorch/jax optimizer #12

Open
XLY43 opened this issue Nov 26, 2024 · 1 comment


XLY43 commented Nov 26, 2024

Hi, I'm looking to implement RL to optimize optics. I see there are currently a couple of optimizers implemented, but I'm unsure whether I can use PyTorch or JAX optimizer classes instead. If this needs more work, could you give me some pointers on where to make modifications? Also, a rough idea of how to use this code base for RL would be much appreciated. Thanks.

HarrisonKramer (Owner) commented Nov 27, 2024

Hi,

Thanks for opening this issue.

On using PyTorch or JAX

In theory, the package could be updated to use PyTorch or JAX optimizers. However, this would likely require significant changes to the structure of the optimization code. For instance, I expect the current NumPy-based computations would need to be adapted to work with torch/JAX tensors to ensure compatibility with automatic differentiation.
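
To give a rough idea of the structure such a port would enable, here is a minimal PyTorch sketch. Note that `merit_function` is a placeholder and not part of optiland; the real merit (e.g. RMS spot size) would only become differentiable once the underlying ray trace is rewritten with torch operations:

```python
import torch

# Stand-in for a differentiable merit function. In a real port, this would be
# optiland's ray trace / RMS spot size expressed with torch operations so that
# gradients can flow back to the lens variables.
def merit_function(curvatures, thicknesses):
    # Placeholder objective (NOT real optics) -- just something smooth.
    return (curvatures ** 2).sum() + ((thicknesses - 5.0) ** 2).sum()

# Lens variables as leaf tensors that autograd can differentiate.
curvatures = torch.tensor([0.02, -0.015, 0.01], requires_grad=True)
thicknesses = torch.tensor([4.0, 6.0, 5.5], requires_grad=True)

optimizer = torch.optim.Adam([curvatures, thicknesses], lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    loss = merit_function(curvatures, thicknesses)
    loss.backward()   # gradients via autograd instead of finite differences
    optimizer.step()

print(loss.item(), curvatures.detach(), thicknesses.detach())
```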

While this is possible, it’s non-trivial and would require a closer look at the code to assess the exact modifications needed. It might also be feasible to refactor only specific parts of the optimization framework, depending on the functionality you’re aiming to use. Another potential approach could involve making the backend configurable (e.g., allowing users to choose between numpy, torch, or jax).
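
As a very rough illustration of that last idea, a configurable backend might look something like the following. The names here are purely illustrative and not part of optiland today:

```python
import importlib

# Hypothetical backend switch: return an array module with a NumPy-like API.
_BACKENDS = {"numpy": "numpy", "torch": "torch"}

def get_backend(name: str = "numpy"):
    if name not in _BACKENDS:
        raise ValueError(f"Unknown backend: {name}")
    return importlib.import_module(_BACKENDS[name])

# The rest of the code would call `xp.*` instead of `np.*`, so the same
# computation runs on NumPy arrays or torch tensors.
xp = get_backend("numpy")
y = xp.sqrt(xp.asarray([1.0, 4.0, 9.0]))
```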

On using RL

There are many options here:

  1. Guiding the lens design process - use RL to assist in decision-making, such as selecting which variables to optimize, when to add/remove lenses, or when to trigger an optimization run. This approach effectively replaces the lens designer with a trained RL agent. See [1] below, which you may already be familiar with.
  2. Direct optimization - you could let an agent make changes directly to an instance of `optiland.optic.Optic`. Rewards could be based on improvements to a metric of your choice, e.g. RMS spot size (a rough sketch of this setup follows the list).
  3. Tolerancing or adaptive optics - you could use RL to identify and dynamically correct non-ideal or misaligned systems.
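
To make option 2 a bit more concrete, here is a minimal gym-style sketch. The `build_start_lens` and `rms_spot_size` helpers are placeholders, not optiland functions; in practice they would construct an `optiland.optic.Optic` and evaluate your chosen merit:

```python
import numpy as np

def build_start_lens():
    # Placeholder state: e.g. surface radii of a starting design.
    return np.array([50.0, -50.0, 100.0])

def rms_spot_size(radii):
    # Placeholder merit; in practice, trace rays through the Optic and
    # compute the RMS spot size at the image plane.
    return float(np.sum((radii - np.array([45.0, -60.0, 90.0])) ** 2))

class LensEnv:
    def __init__(self, step_size=1.0, max_steps=100):
        self.step_size, self.max_steps = step_size, max_steps

    def reset(self):
        self.radii = build_start_lens()
        self.prev_merit = rms_spot_size(self.radii)
        self.t = 0
        return self.radii.copy()

    def step(self, action):
        # action: vector of perturbations applied to the lens variables.
        self.radii += self.step_size * np.asarray(action)
        merit = rms_spot_size(self.radii)
        reward = self.prev_merit - merit   # reward = improvement in the merit
        self.prev_merit = merit
        self.t += 1
        done = self.t >= self.max_steps
        return self.radii.copy(), reward, done, {}

# Random-policy rollout, just to show the loop structure.
env = LensEnv()
obs = env.reset()
for _ in range(10):
    obs, reward, done, _ = env.step(np.random.uniform(-1, 1, size=obs.shape))
```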

I can elaborate on these if you want more info on what the code might look like. I plan to include a simple tutorial demonstrating option 1 in the learning guide in the coming months.

Happy to discuss further if you have any other questions.

Regards,
Kramer

[1] Tong Yang, Dewen Cheng, and Yongtian Wang, "Designing freeform imaging systems based on reinforcement learning," Opt. Express 28, 30309-30323 (2020)
