-
The challenge for Tutorial 6 was to make the model rotate independently from the camera. The solution given here: https://github.com/sotrh/learn-wgpu/blob/master/code/beginner/tutorial6-uniforms/src/challenge.rs#L104-L108 seems like it uses the camera projection to give a rotated view of the object. I'm new to 3D programming (which is why I'm following this tutorial), but the given solution does not seem like it would work if we had multiple models that we wanted to rotate separately. I found a different solution where I rotate the vertices on the CPU and then rewrite the vertex buffer:

```rust
impl State {
    // ...
    fn update(&mut self) {
        // ...
        self.model_rotation += cgmath::Deg(2.0);
        let rotation: cgmath::Matrix4<f32> = cgmath::Matrix4::from_angle_z(self.model_rotation);
        let vertices = VERTICES
            .iter()
            .map(|vert| {
                // w = 1.0 marks this as a point; w = 0.0 would make it a
                // direction, which ignores any translation in the matrix.
                let pos =
                    cgmath::Vector4::new(vert.position[0], vert.position[1], vert.position[2], 1.0);
                let new_pos = rotation * pos;
                Vertex {
                    position: [new_pos.x, new_pos.y, new_pos.z],
                    tex_coords: vert.tex_coords,
                }
            })
            .collect::<Vec<_>>();
        self.queue.write_buffer(
            &self.vertex_buffer,
            0,
            bytemuck::cast_slice(vertices.as_slice()),
        );
    }
    // ...
}
```

Is my solution a good one, or is rewriting the vertex buffer every frame a bad idea?
Replies: 1 comment, 1 reply
-
This solution requires reuploading all the vertices, which is an expensive operation. It works, but if you need to do this for hundreds of models, your code will get really slow. The intended solution involves creating a separate matrix known as a model matrix.
Basically each model gets its own matrix that you use to position it in the world. This gets passed into the vertex shader, where you multiply it with the camera's matrix to transform the vertices.
I should probably update challenge.rs to use this method. I think I just got a bit lazy.
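To make the model-matrix idea concrete, here is a minimal, dependency-free sketch. The helper names (`rotation_z`, `mat_mul`, `transform`) are made up for illustration and use plain arrays instead of cgmath; in the real code you would build the matrix with cgmath and upload it in a uniform (or instance) buffer, letting the vertex shader do the multiply, rather than touching the vertex data at all.

```rust
/// Multiply two row-major 4x4 matrices.
fn mat_mul(a: [[f32; 4]; 4], b: [[f32; 4]; 4]) -> [[f32; 4]; 4] {
    let mut out = [[0.0; 4]; 4];
    for i in 0..4 {
        for j in 0..4 {
            for k in 0..4 {
                out[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    out
}

/// Model matrix for a rotation about the Z axis, in degrees.
fn rotation_z(deg: f32) -> [[f32; 4]; 4] {
    let (s, c) = deg.to_radians().sin_cos();
    [
        [c, -s, 0.0, 0.0],
        [s, c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]
}

/// Apply a 4x4 matrix to a homogeneous vertex (w = 1.0 for positions).
fn transform(m: [[f32; 4]; 4], v: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for i in 0..4 {
        for k in 0..4 {
            out[i] += m[i][k] * v[k];
        }
    }
    out
}

fn main() {
    // Each model keeps its own angle; per frame you update only this small
    // matrix (one uniform write), never the vertex buffer.
    let model = rotation_z(90.0);
    // The GPU would effectively compute camera_matrix * model * position;
    // a 90-degree Z rotation sends (1, 0, 0) to (0, 1, 0).
    let p = transform(model, [1.0, 0.0, 0.0, 1.0]);
    println!("{:?}", p);

    // Composing with a camera matrix (identity here, as a stand-in) is just
    // another multiply, which is what the vertex shader does per vertex.
    let identity = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    let combined = mat_mul(identity, model);
    let p2 = transform(combined, [1.0, 0.0, 0.0, 1.0]);
    println!("{:?}", p2);
}
```

The key point is the cost model: rewriting vertices uploads O(vertex count) bytes per model per frame, while the model-matrix approach uploads a fixed 64 bytes per model and lets the GPU do the per-vertex math.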