I'm finding that training a 1-expert dMoE (brown) has worse training loss than an otherwise equivalent dense model (green). Is there some reason why this difference is expected or can I expect them to be the same? Thanks!
The difference between the dense and MoE variants is that:
the dense model has only one MLP after the attention mechanism, and
the MoE model has N MLPs (experts) with a gating mechanism after the attention.
So, in general, each expert is smaller (in parameter count) than its dense MLP counterpart, which means it has less "capacity" to learn complex patterns.
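For concreteness, here is a minimal, hypothetical PyTorch sketch of the two blocks. It is not the MegaBlocks dMoE implementation; the class names, the top-1 gate, and the choice to split the dense FFN width across the experts are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class DenseMLP(nn.Module):
    """Dense block: one large MLP after attention (hidden -> ffn -> hidden)."""

    def __init__(self, hidden_size: int, ffn_size: int):
        super().__init__()
        self.fc1 = nn.Linear(hidden_size, ffn_size)
        self.fc2 = nn.Linear(ffn_size, hidden_size)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


class MoEMLP(nn.Module):
    """MoE block: N smaller MLPs (experts) plus a gate that routes each token."""

    def __init__(self, hidden_size: int, ffn_size: int, num_experts: int):
        super().__init__()
        # Assumption: each expert gets ffn_size // num_experts hidden units so
        # the total parameter budget roughly matches the dense MLP above.
        expert_ffn = max(1, ffn_size // num_experts)
        self.experts = nn.ModuleList(
            [DenseMLP(hidden_size, expert_ffn) for _ in range(num_experts)]
        )
        self.gate = nn.Linear(hidden_size, num_experts)

    def forward(self, x):                       # x: (tokens, hidden)
        scores = self.gate(x).softmax(dim=-1)   # (tokens, num_experts)
        top1 = scores.argmax(dim=-1)            # chosen expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                # Only the tokens routed to expert i are pushed through it.
                out[mask] = expert(x[mask]) * scores[mask, i].unsqueeze(-1)
        return out
```

In this sketch the expert width is simply the dense FFN width divided by the number of experts so that the parameter counts roughly match; real MoE implementations vary in how they size and gate the experts.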
Next, you might ask why we use MoE architectures at all. Mainly for efficiency and speed:
Only a subset of the experts is chosen per token, so not all weights are used at inference (+speed, +efficiency); see the routing sketch below.
MoE architectures allow the experts to run in parallel, which speeds up inference (+speed).
So by growing a model with an MoE architecture, you can keep roughly the same inference load as a smaller dense model while getting performance comparable to an equally large dense one.
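As a hedged illustration of the "subset of experts" point above, the routing step below evaluates only k of num_experts experts per token, so per-token compute scales with k rather than with the total number of experts. All names and sizes are illustrative, not taken from MegaBlocks.

```python
import torch

tokens, num_experts, k = 8, 4, 2

gate_logits = torch.randn(tokens, num_experts)        # output of a learned gate
topk_scores, topk_idx = gate_logits.topk(k, dim=-1)   # pick k experts per token
topk_weights = topk_scores.softmax(dim=-1)            # renormalize over those k

# Each token's output is a weighted sum of just its k chosen experts;
# the remaining num_experts - k experts are never run for that token.
print("experts for token 0:", topk_idx[0].tolist())
print("weights for token 0:", topk_weights[0].tolist())
```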