Generalized LoRA: applying LoRA to smaller models (in contrast to large models such as LLMs/LVMs) for near-real-time adaptation.
Rather than hand-selecting which layers to adapt, we may apply generalized LoRA to every layer of a conventional model with nntrainer, which makes the layer-selection process "unsupervised".
In this setting, G-LoRA can behave as a "MiRA": not low-rank but mid-rank, reducing the adapter's parameter count to roughly 1/5 ~ 1/10 of the base layer's, rather than the 1/1000 ~ 1/10000 typical of LoRA on large models. A sketch of what this could look like follows below.
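To make the mid-rank idea concrete, here is a minimal NumPy sketch (not nntrainer code; `MiRAAdapter`, `mid_rank`, and the `reduction` parameter are hypothetical names) that picks the adapter rank so the adapter holds about 1/5 ~ 1/10 of the base weight's parameters:

```python
import numpy as np

def mid_rank(d_out, d_in, reduction=8):
    """Pick a rank so the adapter holds ~1/reduction of the layer's params."""
    return max(1, (d_out * d_in) // (reduction * (d_out + d_in)))

class MiRAAdapter:
    """Additive mid-rank adapter: W_eff = W + (alpha / r) * B @ A."""
    def __init__(self, w, reduction=8, alpha=1.0):
        d_out, d_in = w.shape
        self.r = mid_rank(d_out, d_in, reduction)
        self.alpha = alpha
        self.w = w                                      # frozen base weight
        self.a = np.random.randn(self.r, d_in) * 0.01   # trainable
        self.b = np.zeros((d_out, self.r))              # trainable, zero-init
                                                        # so the initial delta is 0

    def effective_weight(self):
        return self.w + (self.alpha / self.r) * (self.b @ self.a)

# Example: a 256x512 dense weight with an ~1/8 adapter budget.
w = np.random.randn(256, 512)
adapter = MiRAAdapter(w, reduction=8)
ratio = adapter.r * (w.shape[0] + w.shape[1]) / w.size
print(f"rank={adapter.r}, adapter/base param ratio ~ {ratio:.3f}")
```

Note the ratio is controlled, not fixed: for a 256x512 layer the chosen rank is 21 and the adapter is ~12% of the base weight, squarely in the mid-rank regime described above.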
For convolution layers, we can view the whole layer as a 2-dimensional matrix (most frameworks already store the weights in a single contiguous memory buffer, i.e., a 1-dimensional vector, which maps trivially onto a 2-D matrix) and apply LoRA to it as a personalization adapter; see the sketch below.
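A sketch of that flattening, assuming a framework that keeps the 4-D kernel `(out_channels, in_channels, kh, kw)` in one contiguous buffer (the shapes and names here are illustrative, not nntrainer API):

```python
import numpy as np

# Hypothetical conv weight: (out_channels, in_channels, kh, kw),
# stored contiguously in one buffer by most frameworks.
conv_w = np.random.randn(64, 32, 3, 3).astype(np.float32)

# View the whole layer as a 2-D matrix: rows = output channels,
# columns = flattened (in_channels * kh * kw) receptive field.
out_ch = conv_w.shape[0]
w2d = conv_w.reshape(out_ch, -1)                 # (64, 288), a view, no copy

# Mid-rank personalization adapter on the 2-D view.
r = 16
a = np.random.randn(r, w2d.shape[1]).astype(np.float32) * 0.01
b = np.zeros((out_ch, r), dtype=np.float32)      # zero-init: delta starts at 0

adapted = w2d + b @ a                            # LoRA-style additive update
conv_w_adapted = adapted.reshape(conv_w.shape)   # back to 4-D for the conv op
```

Because the buffer is contiguous, the 2-D view and the reshape back are free; only the `B @ A` product costs anything at adaptation time.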
We also need to start thinking about how to package, version, integrate, and deploy such adapters for devices. This raises additional issues for on-device MLOps, too.