```python
# post-process offsets to get centers for gaussians
offsets = offsets * scaling_repeat[:,:3]
xyz = repeat_anchor + offsets
```
What does `scaling_repeat[:,3:]` stand for? I have found that it comes from `grid_scaling`, which is initialized by

```python
scales = torch.log(torch.sqrt(dist2))[...,None].repeat(1, 6)
```

Could you explain the role of each dimension of `scaling_repeat`, especially in the context of the slices `[:,3:]` and `[:,:3]`?
Thanks
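To illustrate the initialization quoted above, here is a small numpy stand-in (the repo computes `dist2` with a KNN routine on the anchor points; the toy points and the 3-nearest-neighbour choice here are assumptions for the sketch). The key point is that `log(sqrt(dist2))` is computed once per point and repeated across all 6 columns, so the offset-step half and the shape-scale half start out identical:

```python
import numpy as np

# toy anchor points; in the repo, dist2 is the mean squared distance from
# each point to its nearest neighbours (computed by a KNN kernel)
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # pairwise squared dists
d2[np.arange(len(pts)), np.arange(len(pts))] = np.inf     # exclude self-distance
dist2 = np.sort(d2, axis=1)[:, :3].mean(axis=1)           # mean of 3-NN squared dists

# log(sqrt(dist2)) stored in log space, repeated into all 6 dims:
# columns 0..2 (offset step) and 3..5 (base shape scale) begin identical
scales = np.log(np.sqrt(dist2))[..., None].repeat(6, axis=1)
print(scales.shape)  # (4, 6)
```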
The `[:,:3]` part controls the step size of the offsets. The `[:,3:]` part serves as the base scale for each neural Gaussian's shape, which means the cov MLP learns residual scales.
Therefore, the learnable scaling `l` takes charge of both the scales and the positions of the 10 neural Gaussians.
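To make the split concrete, here is a minimal numpy sketch (variable names mirror the repo's torch code; the sigmoid stand-in for the cov MLP output and k=10 offsets per anchor follow the thread above, the random values are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 4, 10                                  # N anchors, k neural Gaussians each
anchors = rng.normal(size=(N, 3))
scaling = np.exp(rng.normal(size=(N, 6)))     # stored in log space, exp-activated
offsets = rng.normal(size=(N, k, 3))          # learnable offsets per anchor

# repeat each anchor's scaling/position once per offset
scaling_repeat = np.repeat(scaling, k, axis=0)   # [N*k, 6]
repeat_anchor = np.repeat(anchors, k, axis=0)    # [N*k, 3]

# first 3 dims: step size mapping offsets to world-space centers
xyz = repeat_anchor + offsets.reshape(-1, 3) * scaling_repeat[:, :3]

# last 3 dims: base shape scale, modulated by the cov MLP's sigmoid output
mlp_scales = rng.normal(size=(N * k, 3))         # stand-in for scale_rot[:, :3]
gauss_scaling = scaling_repeat[:, 3:] * (1.0 / (1.0 + np.exp(-mlp_scales)))

print(xyz.shape, gauss_scaling.shape)  # (40, 3) (40, 3)
```

So a `[N,6]` scaling tensor yields, per anchor, one triple governing where its k Gaussians land and another triple governing how large they can be.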
If the shape of anchors is [N,3], then the shape of scaling is [N,6].
Is my understanding correct? Just want to make sure.
Regarding this code section:

```python
# post-process cov
scaling = scaling_repeat[:,3:] * torch.sigmoid(scale_rot[:,:3]) # * (1+torch.sigmoid(repeat_dist))
rot = pc.rotation_activation(scale_rot[:,3:7])
```