Thanks for sharing. I noticed that the SGLD implementation here yields much better results, i.e. a lower KL divergence, than the SGLD implementations from some earlier SGLD papers when I ran the same toy classification experiment. The latter still produced good classification accuracy, but their KL divergence was rather high and the probability distribution plot did not resemble the HMC reference.
I wonder where the chosen prior came from, as I have found it hard to get detailed information on setting a prior over network parameters in practice from previous SGLD publications (both the papers and their source code offer very few clues).
Also, what is the range of valid values for the parameter alpha in the prior formula?
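For context on what I mean by the prior and alpha: the repo's exact prior isn't quoted in this thread, but a common convention in SGLD code is a zero-mean Gaussian prior over the weights with precision alpha, in which case any alpha > 0 is valid (it is an inverse variance). Below is a minimal sketch of a single SGLD step under that assumed Gaussian-prior convention; the function name, argument names, and the choice of prior are my own illustration, not the repo's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_log_lik_minibatch, n_total, n_batch, alpha, step_size):
    """One SGLD update assuming a zero-mean Gaussian prior N(0, (1/alpha) I).

    grad_log_lik_minibatch: gradient of the log-likelihood summed over the
    minibatch; it is rescaled by n_total / n_batch to give an unbiased
    estimate of the full-data gradient. alpha > 0 is the prior precision.
    """
    # Gradient of the log-prior: d/dtheta of -(alpha/2) * ||theta||^2
    grad_log_prior = -alpha * theta
    # Stochastic estimate of the gradient of the log-posterior
    grad_log_post = grad_log_prior + (n_total / n_batch) * grad_log_lik_minibatch
    # Injected Gaussian noise with variance equal to the step size
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post + noise
```

Under this convention a larger alpha shrinks the weights harder toward zero (it plays the role of an L2 / weight-decay strength), which is why the choice of alpha can noticeably change the posterior and hence the KL divergence against the HMC reference.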