-Specifying the attributes as we did now is quite verbose. We could use the `vertex_attr_array` macro provided by wgpu to clean things up a bit. With it our `VertexBufferLayout` becomes
+Specifying the attributes as we did now is quite verbose. We could use the `vertex_attr_array` macro provided by wgpu to clean things up a bit. With it, our `VertexBufferLayout` becomes
```rust
wgpu::VertexBufferLayout {
@@ -204,11 +204,11 @@ impl Vertex {
}
```
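For reference, a minimal sketch of what the macro-based version could look like; the `ATTRIBS` constant and the `desc()` wrapper shown here are one possible arrangement rather than the repository's exact code.
```rust
impl Vertex {
    // The macro expands to the same two attributes we wrote out by hand:
    // location 0 is the position, location 1 is the color, both vec3<f32>.
    const ATTRIBS: [wgpu::VertexAttribute; 2] =
        wgpu::vertex_attr_array![0 => Float32x3, 1 => Float32x3];

    fn desc() -> wgpu::VertexBufferLayout<'static> {
        wgpu::VertexBufferLayout {
            array_stride: std::mem::size_of::<Vertex>() as wgpu::BufferAddress,
            step_mode: wgpu::VertexStepMode::Vertex,
            attributes: &Self::ATTRIBS,
        }
    }
}
```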
-Regardless I feel it's good to show how the data gets mapped, so I'll forgo using this macro for now.
+Regardless, I feel it's good to show how the data gets mapped, so I'll forgo using this macro for now.
-Now we can use it when we create the `render_pipeline`.
+Now, we can use it when we create the `render_pipeline`.
```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
@@ -223,7 +223,7 @@ let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescrip
});
```
-One more thing: we need to actually set the vertex buffer in the render method otherwise our program will crash.
+One more thing: we need to actually set the vertex buffer in the render method. Otherwise, our program will crash.
```rust
// render()
@@ -266,7 +266,7 @@ impl State {
}
```
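The call in question is the render pass's `set_vertex_buffer`; a minimal sketch (slot 0, the whole buffer, assuming the `vertex_buffer` field on `State`):
```rust
// render(), inside the render pass, before drawing:
render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..));
```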
-Then use it in the draw call.
+Then, use it in the draw call.
```rust
// render
@@ -315,7 +315,7 @@ We technically don't *need* an index buffer, but they still are plenty useful. A
![A pentagon made of 3 triangles](./pentagon.png)
-It has a total of 5 vertices and 3 triangles. Now if we wanted to display something like this using just vertices we would need something like the following.
+It has a total of 5 vertices and 3 triangles. Now, if we wanted to display something like this using just vertices, we would need something like the following.
```rust
const VERTICES: &[Vertex] = &[
@@ -333,9 +333,9 @@ const VERTICES: &[Vertex] = &[
];
```
-You'll note though that some of the vertices are used more than once. C, and B get used twice, and E is repeated 3 times. Assuming that each float is 4 bytes, then that means of the 216 bytes we use for `VERTICES`, 96 of them are duplicate data. Wouldn't it be nice if we could list these vertices once? Well, we can! That's where an index buffer comes into play.
+You'll note, though, that some of the vertices are used more than once. C and B are used twice, and E is repeated three times. Assuming that each float is 4 bytes, that means 96 of the 216 bytes we use for `VERTICES` are duplicate data. Wouldn't it be nice if we could list these vertices once? Well, we can! That's where an index buffer comes into play.
-Basically, we store all the unique vertices in `VERTICES` and we create another buffer that stores indices to elements in `VERTICES` to create the triangles. Here's an example of that with our pentagon.
+Basically, we store all the unique vertices in `VERTICES`, and we create another buffer that stores indices to elements in `VERTICES` to create the triangles. Here's an example of that with our pentagon.
```rust
// lib.rs
@@ -354,7 +354,7 @@ const INDICES: &[u16] = &[
];
```
-Now with this setup, our `VERTICES` take up about 120 bytes and `INDICES` is just 18 bytes given that `u16` is 2 bytes wide. In this case, wgpu automatically adds 2 extra bytes of padding to make sure the buffer is aligned to 4 bytes, but it's still just 20 bytes. All together our pentagon is 140 bytes in total. That means we saved 76 bytes! It may not seem like much, but when dealing with tri counts in the hundreds of thousands, indexing saves a lot of memory.
+Now, with this setup, our `VERTICES` take up about 120 bytes and `INDICES` is just 18 bytes, given that `u16` is 2 bytes wide. In this case, wgpu automatically adds 2 extra bytes of padding to make sure the buffer is aligned to 4 bytes, but it's still just 20 bytes. Altogether, our pentagon is 140 bytes in total. That means we saved 76 bytes! It may not seem like much, but when dealing with tri counts in the hundreds of thousands, indexing saves a lot of memory.
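A quick sanity check of those numbers, assuming the `Vertex` struct from earlier in this tutorial (a `[f32; 3]` position plus a `[f32; 3]` color, 24 bytes per vertex):
```rust
use std::mem::size_of;

fn main() {
    let vertex_size = 2 * size_of::<[f32; 3]>(); // position + color = 24 bytes
    let unindexed = 9 * vertex_size;             // 9 vertices                = 216
    let indexed = 5 * vertex_size;               // 5 unique vertices         = 120
    let indices = 9 * size_of::<u16>();          // 9 indices * 2 bytes       = 18
    let padded = (indices + 3) & !3;             // padded to 4-byte multiple = 20
    println!("{unindexed} vs {} bytes", indexed + padded); // 216 vs 140
}
```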
There are a couple of things we need to change in order to use indexing. The first is we need to create a buffer to store the indices. In `State`'s `new()` method, create the `index_buffer` after you create the `vertex_buffer`. Also, change `num_vertices` to `num_indices` and set it equal to `INDICES.len()`.
@@ -377,7 +377,7 @@ let index_buffer = device.create_buffer_init(
let num_indices = INDICES.len() as u32;
```
-We don't need to implement `Pod` and `Zeroable` for our indices, because `bytemuck` has already implemented them for basic types such as `u16`. That means we can just add `index_buffer` and `num_indices` to the `State` struct.
+We don't need to implement `Pod` and `Zeroable` for our indices because `bytemuck` has already implemented them for basic types such as `u16`. That means we can just add `index_buffer` and `num_indices` to the `State` struct.
```rust
struct State {
@@ -421,9 +421,9 @@ render_pass.set_index_buffer(self.index_buffer.slice(..), wgpu::IndexFormat::Uin
render_pass.draw_indexed(0..self.num_indices, 0, 0..1); // 2.
```
-A couple things to note:
-1. The method name is `set_index_buffer` not `set_index_buffers`. You can only have one index buffer set at a time.
-2. When using an index buffer, you need to use `draw_indexed`. The `draw` method ignores the index buffer. Also make sure you use the number of indices (`num_indices`), not vertices as your model will either draw wrong, or the method will `panic` because there are not enough indices.
+A couple of things to note:
+1. The method name is `set_index_buffer`, not `set_index_buffers`. You can only have one index buffer set at a time.
+2. When using an index buffer, you need to use `draw_indexed`. The `draw` method ignores the index buffer. Also, make sure you use the number of indices (`num_indices`), not vertices, as your model will either draw wrong or the method will `panic` because there are not enough indices.
With all that you should have a garishly magenta pentagon in your window.
@@ -431,11 +431,11 @@ With all that you should have a garishly magenta pentagon in your window.
## Color Correction
-If you use a color picker on the magenta pentagon, you'll get a hex value of #BC00BC. If you convert this to RGB values you'll get (188, 0, 188). Dividing these values by 255 to get them into the [0, 1] range we get roughly (0.737254902, 0, 0.737254902). This is not the same as what we are using for our vertex colors, which is (0.5, 0.0, 0.5). The reason for this has to do with color spaces.
+If you use a color picker on the magenta pentagon, you'll get a hex value of #BC00BC. If you convert this to RGB values, you'll get (188, 0, 188). Dividing these values by 255 to get them into the [0, 1] range, we get roughly (0.737254902, 0, 0.737254902). This is not the same as what we are using for our vertex colors, which is (0.5, 0.0, 0.5). The reason for this has to do with color spaces.
-Most monitors use a color space known as sRGB. Our surface is (most likely depending on what is returned from `surface.get_preferred_format()`) using an sRGB texture format. The sRGB format stores colors according to their relative brightness instead of their actual brightness. The reason for this is that our eyes don't perceive light linearly. We notice more differences in darker colors than we do in lighter colors.
+Most monitors use a color space known as sRGB. Our surface is (most likely, depending on what is returned from `surface.get_preferred_format()`) using an sRGB texture format. The sRGB format stores colors according to their relative brightness instead of their actual brightness. The reason for this is that our eyes don't perceive light linearly. We notice more differences in darker colors than in lighter colors.
-You get the correct color using the following formula: `srgb_color = ((rgb_color / 255 + 0.055) / 1.055) ^ 2.4`. Doing this with an RGB value of (188, 0, 188) will give us (0.5028864580325687, 0.0, 0.5028864580325687). A little off from our (0.5, 0.0, 0.5). Instead of doing manual color conversion, you'll likely save a lot of time by using textures instead as they are stored as sRGB by default, so they don't suffer from the same color inaccuracies that vertex colors do. We'll cover textures in the next lesson.
+You get the correct color using the following formula: `srgb_color = ((rgb_color / 255 + 0.055) / 1.055) ^ 2.4`. Doing this with an RGB value of (188, 0, 188) will give us (0.5028864580325687, 0.0, 0.5028864580325687). A little off from our (0.5, 0.0, 0.5). Instead of doing a manual color conversion, you'll likely save a lot of time by using textures instead, as they are stored as sRGB by default, so they don't suffer from the same color inaccuracies that vertex colors do. We'll cover textures in the next lesson.
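For the curious, here's the formula above as a small, illustrative Rust helper (this is the tutorial's simplified conversion; the real sRGB transfer function also has a linear segment for very dark values):
```rust
fn srgb_channel_to_linear(c: u8) -> f64 {
    ((c as f64 / 255.0 + 0.055) / 1.055).powf(2.4)
}

fn main() {
    // 188 -> roughly 0.50289, close to the 0.5 we set in the vertex colors.
    println!("{}", srgb_channel_to_linear(188));
}
```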
## Challenge
Create a more complex shape than the one we made (aka. more than three triangles) using a vertex buffer and an index buffer. Toggle between the two with the space key.
diff --git a/docs/beginner/tutorial5-textures/README.md b/docs/beginner/tutorial5-textures/README.md
index 0a808c9da..1f2ccf2e8 100644
--- a/docs/beginner/tutorial5-textures/README.md
+++ b/docs/beginner/tutorial5-textures/README.md
@@ -2,7 +2,7 @@
Up to this point, we have been drawing super simple shapes. While we can make a game with just triangles, trying to draw highly detailed objects would massively limit what devices could even run our game. However, we can get around this problem with **textures**.
-Textures are images overlaid on a triangle mesh to make it seem more detailed. There are multiple types of textures such as normal maps, bump maps, specular maps, and diffuse maps. We're going to talk about diffuse maps, or more simply, the color texture.
+Textures are images overlaid on a triangle mesh to make it seem more detailed. There are multiple types of textures, such as normal maps, bump maps, specular maps, and diffuse maps. We're going to talk about diffuse maps or, more simply, the color texture.
## Loading an image from a file
@@ -19,15 +19,15 @@ default-features = false
features = ["png", "jpeg"]
```
-The jpeg decoder that `image` includes uses [rayon](https://docs.rs/rayon) to speed up the decoding with threads. WASM doesn't support threads currently so we need to disable this so that our code won't crash when we try to load a jpeg on the web.
+The jpeg decoder that `image` includes uses [rayon](https://docs.rs/rayon) to speed up the decoding with threads. WASM doesn't support threads currently, so we need to disable this so our code won't crash when we try to load a jpeg on the web.
-The old way of writing data to a texture was to copy the pixel data to a buffer and then copy it to the texture. Using `write_texture` is a bit more efficient as it uses one buffer less - I'll leave it here though in case you need it.
+The old way of writing data to a texture was to copy the pixel data to a buffer and then copy it to the texture. Using `write_texture` is a bit more efficient as it uses one less buffer - I'll leave the old method here, though, in case you need it.
```rust
let buffer = device.create_buffer_init(
@@ -173,11 +173,11 @@ The `address_mode_*` parameters determine what to do if the sampler gets a textu
The `mag_filter` and `min_filter` fields describe what to do when the sample footprint is smaller or larger than one texel. These two fields usually work when the mapping in the scene is far from or close to the camera.
-There are 2 options:
+There are two options:
* `Linear`: Select two texels in each dimension and return a linear interpolation between their values.
-* `Nearest`: Return the value of the texel nearest to the texture coordinates. This creates an image that's crisper from far away but pixelated up close. This can be desirable, however, if your textures are designed to be pixelated, like in pixel art games, or voxel games like Minecraft.
+* `Nearest`: Return the texel value nearest to the texture coordinates. This creates an image that's crisper from far away but pixelated up close. This can be desirable, however, if your textures are designed to be pixelated, like in pixel art games or voxel games like Minecraft.
-Mipmaps are a complex topic and will require their own section in the future. For now, we can say that `mipmap_filter` functions similar to `(mag/min)_filter` as it tells the sampler how to blend between mipmaps.
+Mipmaps are a complex topic and will require their own section in the future. For now, we can say that `mipmap_filter` functions similarly to `(mag/min)_filter`, as it tells the sampler how to blend between mipmaps.
I'm using some defaults for the other fields. If you want to see what they are, check [the wgpu docs](https://docs.rs/wgpu/latest/wgpu/struct.SamplerDescriptor.html).
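Roughly what such a sampler setup looks like; this mirrors the settings discussed above, though exact fields and defaults can differ between wgpu versions:
```rust
let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    address_mode_u: wgpu::AddressMode::ClampToEdge,
    address_mode_v: wgpu::AddressMode::ClampToEdge,
    address_mode_w: wgpu::AddressMode::ClampToEdge,
    mag_filter: wgpu::FilterMode::Linear,     // used when the texture is magnified
    min_filter: wgpu::FilterMode::Nearest,    // used when the texture is minified
    mipmap_filter: wgpu::FilterMode::Nearest, // how to blend between mipmap levels
    ..Default::default()
});
```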
@@ -214,7 +214,7 @@ let texture_bind_group_layout =
});
```
-Our `texture_bind_group_layout` has two entries: one for a sampled texture at binding 0, and one for a sampler at binding 1. Both of these bindings are visible only to the fragment shader as specified by `FRAGMENT`. The possible values for this field are any bitwise combination of `NONE`, `VERTEX`, `FRAGMENT`, or `COMPUTE`. Most of the time we'll only use `FRAGMENT` for textures and samplers, but it's good to know what else is available.
+Our `texture_bind_group_layout` has two entries: one for a sampled texture at binding 0 and one for a sampler at binding 1. Both of these bindings are visible only to the fragment shader as specified by `FRAGMENT`. The possible values for this field are any bitwise combination of `NONE`, `VERTEX`, `FRAGMENT`, or `COMPUTE`. Most of the time, we'll only use `FRAGMENT` for textures and samplers, but it's good to know what else is available.
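Spelled out, those two entries look roughly like this; the binding types shown follow recent wgpu releases, so treat it as a sketch rather than version-exact code.
```rust
wgpu::BindGroupLayoutEntry {
    binding: 0, // the sampled texture
    visibility: wgpu::ShaderStages::FRAGMENT,
    ty: wgpu::BindingType::Texture {
        multisampled: false,
        view_dimension: wgpu::TextureViewDimension::D2,
        sample_type: wgpu::TextureSampleType::Float { filterable: true },
    },
    count: None,
},
wgpu::BindGroupLayoutEntry {
    binding: 1, // the sampler
    visibility: wgpu::ShaderStages::FRAGMENT,
    ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
    count: None,
},
```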
With `texture_bind_group_layout`, we can now create our `BindGroup`:
@@ -237,7 +237,7 @@ let diffuse_bind_group = device.create_bind_group(
);
```
-Looking at this you might get a bit of déjà vu! That's because a `BindGroup` is a more specific declaration of the `BindGroupLayout`. The reason they're separate is that it allows us to swap out `BindGroup`s on the fly, so long as they all share the same `BindGroupLayout`. Each texture and sampler we create will need to be added to a `BindGroup`. For our purposes, we'll create a new bind group for each texture.
+Looking at this, you might get a bit of déjà vu! That's because a `BindGroup` is a more specific declaration of the `BindGroupLayout`. The reason they're separate is that it allows us to swap out `BindGroup`s on the fly, so long as they all share the same `BindGroupLayout`. Each texture and sampler we create will need to be added to a `BindGroup`. For our purposes, we'll create a new bind group for each texture.
Now that we have our `diffuse_bind_group`, let's add it to our `State` struct:
@@ -294,7 +294,7 @@ render_pass.draw_indexed(0..self.num_indices, 0, 0..1);
## PipelineLayout
-Remember the `PipelineLayout` we created back in [the pipeline section](learn-wgpu/beginner/tutorial3-pipeline#how-do-we-use-the-shaders)? Now we finally get to use it! The `PipelineLayout` contains a list of `BindGroupLayout`s that the pipeline can use. Modify `render_pipeline_layout` to use our `texture_bind_group_layout`.
+Remember the `PipelineLayout` we created back in [the pipeline section](learn-wgpu/beginner/tutorial3-pipeline#how-do-we-use-the-shaders)? Now, we finally get to use it! The `PipelineLayout` contains a list of `BindGroupLayout`s that the pipeline can use. Modify `render_pipeline_layout` to use our `texture_bind_group_layout`.
```rust
async fn new(...) {
@@ -313,7 +313,7 @@ async fn new(...) {
## A change to the VERTICES
There are a few things we need to change about our `Vertex` definition. Up to now, we've been using a `color` attribute to set the color of our mesh. Now that we're using a texture, we want to replace our `color` with `tex_coords`. These coordinates will then be passed to the `Sampler` to retrieve the appropriate color.
-Since our `tex_coords` are two dimensional, we'll change the field to take two floats instead of three.
+Since our `tex_coords` are two-dimensional, we'll change the field to take two floats instead of three.
First, we'll change the `Vertex` struct:
@@ -413,11 +413,11 @@ The variables `t_diffuse` and `s_diffuse` are what's known as uniforms. We'll go
## The results
-If we run our program now we should get the following result:
+If we run our program now, we should get the following result:
![an upside down tree on a pentagon](./upside-down.png)
-That's weird, our tree is upside down! This is because wgpu's world coordinates have the y-axis pointing up, while texture coordinates have the y-axis pointing down. In other words, (0, 0) in texture coordinates corresponds to the top-left of the image, while (1, 1) is the bottom right.
+That's weird. Our tree is upside down! This is because wgpu's world coordinates have the y-axis pointing up, while texture coordinates have the y-axis pointing down. In other words, (0, 0) in texture coordinates corresponds to the top-left of the image, while (1, 1) is the bottom right.
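One way to handle the flip is to invert the v (y) component of each texture coordinate when defining the vertices. A tiny, purely illustrative helper (not part of the tutorial's code):
```rust
/// Hypothetical helper: convert a coordinate measured from the bottom-left
/// (like our vertex positions) into a texture coordinate measured from the
/// top-left by flipping the v axis.
fn flip_v(uv: [f32; 2]) -> [f32; 2] {
    [uv[0], 1.0 - uv[1]]
}
```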
![happy-tree-uv-coords.png](./happy-tree-uv-coords.png)
@@ -541,11 +541,11 @@ impl Texture {
-Notice that we're using `to_rgba8()` instead of `as_rgba8()`. PNGs work fine with `as_rgba8()`, as they have an alpha channel. But, JPEGs don't have an alpha channel, and the code would panic if we try to call `as_rgba8()` on the JPEG texture image we are going to use. Instead, we can use `to_rgba8()` to handle such an image, which will generate a new image buffer with alpha channel even if the original image does not have one.
+Notice that we're using `to_rgba8()` instead of `as_rgba8()`. PNGs work fine with `as_rgba8()`, as they have an alpha channel. But JPEGs don't have an alpha channel, and the code would panic if we try to call `as_rgba8()` on the JPEG texture image we are going to use. Instead, we can use `to_rgba8()` to handle such an image, which will generate a new image buffer with an alpha channel even if the original image does not have one.
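In code, the difference looks roughly like this; `as_rgba8()` returns an `Option`, so unwrapping it on an RGB-only JPEG is what would panic (`diffuse_bytes` here stands for the raw file contents):
```rust
let diffuse_image = image::load_from_memory(diffuse_bytes)?;
// Always yields an RGBA8 buffer, adding an opaque alpha channel if the
// source image doesn't have one (e.g. a JPEG):
let diffuse_rgba = diffuse_image.to_rgba8();
// By contrast, this would be None for an image that isn't already RGBA8:
// let diffuse_rgba = diffuse_image.as_rgba8().unwrap();
```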
-We need to import `texture.rs` as a module, so somewhere at the top of `lib.rs` add the following.
+We need to import `texture.rs` as a module, so at the top of `lib.rs` add the following.
```rust
mod texture;
@@ -608,7 +608,7 @@ impl State {
Phew!
-With these changes in place, the code should be working the same as it was before, but we now have a much easier way to create textures.
+With these changes in place, the code should be working the same as before, but we now have a much easier way to create textures.
## Challenge
diff --git a/docs/beginner/tutorial6-uniforms/README.md b/docs/beginner/tutorial6-uniforms/README.md
index f7a63a41f..d1819497c 100644
--- a/docs/beginner/tutorial6-uniforms/README.md
+++ b/docs/beginner/tutorial6-uniforms/README.md
@@ -1,6 +1,6 @@
# Uniform buffers and a 3d camera
-While all of our previous work has seemed to be in 2d, we've actually been working in 3d the entire time! That's part of the reason why our `Vertex` structure has `position` be an array of 3 floats instead of just 2. We can't really see the 3d-ness of our scene, because we're viewing things head-on. We're going to change our point of view by creating a `Camera`.
+While all of our previous work has seemed to be in 2D, we've actually been working in 3D the entire time! That's part of the reason why our `Vertex` structure has `position` as an array of 3 floats instead of just 2. We can't really see the 3D-ness of our scene because we're viewing things head-on. We're going to change our point of view by creating a `Camera`.
## A perspective camera
@@ -12,7 +12,7 @@ This tutorial is more about learning to use wgpu and less about linear algebra,
cgmath = "0.18"
```
-Now that we have a math library, let's put it to use! Create a `Camera` struct above the `State` struct.
+Now that we have a math library, let's put it to use! Create a `Camera` struct above the `State` struct.
```rust
struct Camera {
@@ -41,7 +41,7 @@ impl Camera {
The `build_view_projection_matrix` is where the magic happens.
1. The `view` matrix moves the world to be at the position and rotation of the camera. It's essentially an inverse of whatever the transform matrix of the camera would be.
2. The `proj` matrix warps the scene to give the effect of depth. Without this, objects up close would be the same size as objects far away.
-3. The coordinate system in Wgpu is based on DirectX, and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates) the x axis and y axis are in the range of -1.0 to +1.0, and the z axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) is built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate system to WGPU's. We'll define it as follows.
+3. The coordinate system in Wgpu is based on DirectX and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates), the x-axis and y-axis are in the range of -1.0 to +1.0, and the z-axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) is built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate system to WGPU's. We'll define it as follows.
```rust
#[rustfmt::skip]
@@ -68,7 +68,7 @@ async fn new(window: Window) -> Self {
// let diffuse_bind_group ...
let camera = Camera {
- // position the camera one unit up and 2 units back
+ // position the camera 1 unit up and 2 units back
// +z is out of the screen
eye: (0.0, 1.0, 2.0).into(),
// have it look at the origin
@@ -93,7 +93,7 @@ Now that we have our camera, and it can make us a view projection matrix, we nee
## The uniform buffer
-Up to this point, we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data that is available to every invocation of a set of shaders. We've technically already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start let's create a struct to hold our uniform.
+Up to this point, we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data available to every invocation of a set of shaders. Technically, we've already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start, let's create a struct to hold our uniform.
```rust
// We need this for Rust to store our data correctly for the shaders
@@ -101,7 +101,7 @@ Up to this point, we've used `Buffer`s to store our vertex and index data, and e
// This is so we can store this in a buffer
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct CameraUniform {
- // We can't use cgmath with bytemuck directly so we'll have
+ // We can't use cgmath with bytemuck directly, so we'll have
// to convert the Matrix4 into a 4x4 f32 array
view_proj: [[f32; 4]; 4],
}
@@ -139,7 +139,7 @@ let camera_buffer = device.create_buffer_init(
## Uniform buffers and bind groups
-Cool, now that we have a uniform buffer, what do we do with it? The answer is we create a bind group for it. First, we have to create the bind group layout.
+Cool! Now that we have a uniform buffer, what do we do with it? The answer is we create a bind group for it. First, we have to create the bind group layout.
```rust
let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
@@ -164,12 +164,12 @@ Some things to note:
1. We set `visibility` to `ShaderStages::VERTEX` as we only really need camera information in the vertex shader, as
that's what we'll use to manipulate our vertices.
2. The `has_dynamic_offset` means that the location of the data in the buffer may change. This will be the case if you
- store multiple sets of data that vary in size in a single buffer. If you set this to true you'll have to supply the
+ store multiple data sets that vary in size in a single buffer. If you set this to true, you'll have to supply the
offsets later.
3. `min_binding_size` specifies the smallest size the buffer can be. You don't have to specify this, so we
- leave it `None`. If you want to know more you can check [the docs](https://docs.rs/wgpu/latest/wgpu/enum.BindingType.html#variant.Buffer.field.min_binding_size).
+ leave it `None`. If you want to know more, you can check [the docs](https://docs.rs/wgpu/latest/wgpu/enum.BindingType.html#variant.Buffer.field.min_binding_size).
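Put together, the entry those notes describe looks roughly like this (again following recent wgpu naming):
```rust
wgpu::BindGroupLayoutEntry {
    binding: 0,
    visibility: wgpu::ShaderStages::VERTEX, // 1.
    ty: wgpu::BindingType::Buffer {
        ty: wgpu::BufferBindingType::Uniform,
        has_dynamic_offset: false, // 2.
        min_binding_size: None,    // 3.
    },
    count: None,
},
```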
-Now we can create the actual bind group.
+Now, we can create the actual bind group.
```rust
let camera_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
@@ -273,7 +273,7 @@ fn vs_main(
## A controller for our camera
-If you run the code right now, you should get something that looks like this.
+If you run the code right now, you should get something like this.
![./static-tree.png](./static-tree.png)
@@ -340,7 +340,7 @@ impl CameraController {
let forward_norm = forward.normalize();
let forward_mag = forward.magnitude();
- // Prevents glitching when camera gets too close to the
+ // Prevents glitching when the camera gets too close to the
// center of the scene.
if self.is_forward_pressed && forward_mag > self.speed {
camera.eye += forward_norm * self.speed;
@@ -351,13 +351,13 @@ impl CameraController {
let right = forward_norm.cross(camera.up);
- // Redo radius calc in case the fowrard/backward is pressed.
+ // Redo radius calc in case the forward/backward is pressed.
let forward = camera.target - camera.eye;
let forward_mag = forward.magnitude();
if self.is_right_pressed {
- // Rescale the distance between the target and eye so
- // that it doesn't change. The eye therefore still
+ // Rescale the distance between the target and the eye so
+ // that it doesn't change. The eye, therefore, still
// lies on the circle made by the target and eye.
camera.eye = camera.target - (forward + right * self.speed).normalize() * forward_mag;
}
@@ -368,7 +368,7 @@ impl CameraController {
}
```
-This code is not perfect. The camera slowly moves back when you rotate it. It works for our purposes though. Feel free to improve it!
+This code is not perfect. The camera slowly moves back when you rotate it. It works for our purposes, though. Feel free to improve it!
We still need to plug this into our existing code to make it do anything. Add the controller to `State` and create it in `new()`.
@@ -405,8 +405,8 @@ fn input(&mut self, event: &WindowEvent) -> bool {
```
Up to this point, the camera controller isn't actually doing anything. The values in our uniform buffer need to be updated. There are a few main methods to do that.
-1. We can create a separate buffer and copy its contents to our `camera_buffer`. The new buffer is known as a staging buffer. This method is usually how it's done as it allows the contents of the main buffer (in this case `camera_buffer`) to only be accessible by the gpu. The gpu can do some speed optimizations which it couldn't if we could access the buffer via the cpu.
-2. We can call one of the mapping methods `map_read_async`, and `map_write_async` on the buffer itself. These allow us to access a buffer's contents directly but require us to deal with the `async` aspect of these methods this also requires our buffer to use the `BufferUsages::MAP_READ` and/or `BufferUsages::MAP_WRITE`. We won't talk about it here, but you check out [Wgpu without a window](../../showcase/windowless) tutorial if you want to know more.
+1. We can create a separate buffer and copy its contents to our `camera_buffer`. The new buffer is known as a staging buffer. This method is usually how it's done as it allows the contents of the main buffer (in this case, `camera_buffer`) to be accessible only by the GPU. The GPU can do some speed optimizations, which it couldn't if we could access the buffer via the CPU.
+2. We can call one of the mapping methods, `map_read_async` or `map_write_async`, on the buffer itself. These allow us to access a buffer's contents directly but require us to deal with the `async` aspect of these methods. This also requires our buffer to use `BufferUsages::MAP_READ` and/or `BufferUsages::MAP_WRITE`. We won't talk about it here, but check out the [Wgpu without a window](../../showcase/windowless) tutorial if you want to know more.
3. We can use `write_buffer` on `queue`.
We're going to use option number 3.
@@ -419,7 +419,7 @@ fn update(&mut self) {
}
```
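As a sketch, option 3 inside `update()` amounts to something like the following; names such as `camera_uniform` and `update_view_proj` follow the structs defined earlier in this tutorial.
```rust
// fn update(&mut self)
self.camera_controller.update_camera(&mut self.camera);
self.camera_uniform.update_view_proj(&self.camera);
self.queue.write_buffer(
    &self.camera_buffer,
    0,
    bytemuck::cast_slice(&[self.camera_uniform]),
);
```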
-That's all we need to do. If you run the code now you should see a pentagon with our tree texture that you can rotate around and zoom into with the wasd/arrow keys.
+That's all we need to do. If you run the code now, you should see a pentagon with our tree texture that you can rotate around and zoom into with the wasd/arrow keys.
## Challenge
diff --git a/docs/beginner/tutorial7-instancing/README.md b/docs/beginner/tutorial7-instancing/README.md
index f3ffc4618..8490f6012 100644
--- a/docs/beginner/tutorial7-instancing/README.md
+++ b/docs/beginner/tutorial7-instancing/README.md
@@ -17,15 +17,15 @@ pub fn draw_indexed(
)
```
-The `instances` parameter takes a `Range<u32>`. This parameter tells the GPU how many copies, or instances, of the model we want to draw. Currently, we are specifying `0..1`, which instructs the GPU to draw our model once, and then stop. If we used `0..5`, our code would draw 5 instances.
+The `instances` parameter takes a `Range<u32>`. This parameter tells the GPU how many copies, or instances, of the model we want to draw. Currently, we are specifying `0..1`, which instructs the GPU to draw our model once and then stop. If we used `0..5`, our code would draw five instances.
-The fact that `instances` is a `Range` may seem weird as using `1..2` for instances would still draw 1 instance of our object. Seems like it would be simpler to just use a `u32` right? The reason it's a range is that sometimes we don't want to draw **all** of our objects. Sometimes we want to draw a selection of them, because others are not in frame, or we are debugging and want to look at a particular set of instances.
+The fact that `instances` is a `Range` may seem weird, as using `1..2` for instances would still draw one instance of our object. It seems like it would be simpler just to use a `u32`, right? The reason it's a range is that sometimes we don't want to draw **all** of our objects. Sometimes, we want to draw a selection of them because others are not in the frame, or we are debugging and want to look at a particular set of instances.
-Ok, now we know how to draw multiple instances of an object, how do we tell wgpu what particular instance to draw? We are going to use something known as an instance buffer.
+Ok, now we know how to draw multiple instances of an object. How do we tell wgpu what particular instance to draw? We are going to use something known as an instance buffer.
## The Instance Buffer
-We'll create an instance buffer in a similar way to how we create a uniform buffer. First, we'll create a struct called `Instance`.
+We'll create an instance buffer similarly to how we create a uniform buffer. First, we'll create a struct called `Instance`.
```rust
// lib.rs
@@ -40,11 +40,11 @@ struct Instance {
-A `Quaternion` is a mathematical structure often used to represent rotation. The math behind them is beyond me (it involves imaginary numbers and 4D space) so I won't be covering them here. If you really want to dive into them [here's a Wolfram Alpha article](https://mathworld.wolfram.com/Quaternion.html).
+A `Quaternion` is a mathematical structure often used to represent rotation. The math behind them is beyond me (it involves imaginary numbers and 4D space), so I won't be covering them here. If you really want to dive into them, [here's a Wolfram Alpha article](https://mathworld.wolfram.com/Quaternion.html).
-Using these values directly in the shader would be a pain as quaternions don't have a WGSL analog. I don't feel like writing the math in the shader, so we'll convert the `Instance` data into a matrix and store it into a struct called `InstanceRaw`.
+Using these values directly in the shader would be a pain, as quaternions don't have a WGSL analog. I don't feel like writing the math in the shader, so we'll convert the `Instance` data into a matrix and store it in a struct called `InstanceRaw`.
```rust
// NEW!
@@ -70,7 +70,7 @@ impl Instance {
}
```
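One way to write that conversion, using `cgmath`'s matrix conversions (a sketch; the collapsed hunk above may differ in its details):
```rust
impl Instance {
    fn to_raw(&self) -> InstanceRaw {
        InstanceRaw {
            // translation * rotation, flattened into a [[f32; 4]; 4]
            model: (cgmath::Matrix4::from_translation(self.position)
                * cgmath::Matrix4::from(self.rotation))
            .into(),
        }
    }
}
```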
-Now we need to add 2 fields to `State`: `instances`, and `instance_buffer`.
+Now we need to add two fields to `State`: `instances` and `instance_buffer`.
```rust
struct State {
@@ -79,7 +79,7 @@ struct State {
}
```
-The `cgmath` crate uses traits to provide common mathematical methods across its structs such as `Vector3`, and these traits must be imported before these methods can be called. For convenience, the `prelude` module within the crate provides the most common of these extension crates when it is imported.
+The `cgmath` crate uses traits to provide common mathematical methods across its structs, such as `Vector3`, and these traits must be imported before those methods can be called. For convenience, the `prelude` module within the crate provides the most common of these extension traits when it is imported.
To import this prelude module, put this line near the top of `lib.rs`.
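The import itself is just the prelude glob:
```rust
// lib.rs
use cgmath::prelude::*;
```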
@@ -94,7 +94,7 @@ const NUM_INSTANCES_PER_ROW: u32 = 10;
const INSTANCE_DISPLACEMENT: cgmath::Vector3<f32> = cgmath::Vector3::new(NUM_INSTANCES_PER_ROW as f32 * 0.5, 0.0, NUM_INSTANCES_PER_ROW as f32 * 0.5);
```
-Now we can create the actual instances.
+Now, we can create the actual instances.
```rust
impl State {
@@ -106,7 +106,7 @@ impl State {
let rotation = if position.is_zero() {
// this is needed so an object at (0, 0, 0) won't get scaled to zero
- // as Quaternions can effect scale if they're not created correctly
+ // as Quaternions can affect scale if they're not created correctly
cgmath::Quaternion::from_axis_angle(cgmath::Vector3::unit_z(), cgmath::Deg(0.0))
} else {
cgmath::Quaternion::from_axis_angle(position.normalize(), cgmath::Deg(45.0))
@@ -152,8 +152,8 @@ impl InstanceRaw {
// for each vec4. We'll have to reassemble the mat4 in the shader.
wgpu::VertexAttribute {
offset: 0,
- // While our vertex shader only uses locations 0, and 1 now, in later tutorials we'll
- // be using 2, 3, and 4, for Vertex. We'll start at slot 5 not conflict with them later
+                // While our vertex shader only uses locations 0 and 1 now, in later tutorials, we'll
+                // be using 2, 3, and 4 for Vertex. We'll start at slot 5 to not conflict with them later
shader_location: 5,
format: wgpu::VertexFormat::Float32x4,
},
@@ -203,7 +203,7 @@ Self {
}
```
-The last change we need to make is in the `render()` method. We need to bind our `instance_buffer` and we need to change the range we're using in `draw_indexed()` to include the number of instances.
+The last change we need to make is in the `render()` method. We need to bind our `instance_buffer` and change the range we're using in `draw_indexed()` to include the number of instances.
```rust
render_pass.set_pipeline(&self.render_pipeline);
@@ -220,7 +220,7 @@ render_pass.draw_indexed(0..self.num_indices, 0, 0..self.instances.len() as _);
-Make sure if you add new instances to the `Vec`, that you recreate the `instance_buffer` and as well as `camera_bind_group`, otherwise your new instances won't show up correctly.
+Make sure that if you add new instances to the `Vec`, you recreate the `instance_buffer` as well as `camera_bind_group`. Otherwise, your new instances won't show up correctly.
diff --git a/docs/beginner/tutorial8-depth/README.md b/docs/beginner/tutorial8-depth/README.md
index 685b262e5..d189a80c5 100644
--- a/docs/beginner/tutorial8-depth/README.md
+++ b/docs/beginner/tutorial8-depth/README.md
@@ -1,24 +1,24 @@
# The Depth Buffer
-Let's take a closer look at the last example at an angle.
+Let's take a closer look at the last example from an angle.
![depth_problems.png](./depth_problems.png)
-Models that should be in the back are getting rendered ahead of ones that should be in the front. This is caused by the draw order. By default, pixel data from a new object will replace old pixel data.
+Models that should be in the back are getting rendered ahead of those in the front. This is caused by the draw order. By default, pixel data from a new object will replace old pixel data.
-There are two ways to solve this: sort the data from back to front, or use what's known as a depth buffer.
+There are two ways to solve this: sort the data from back to front or use what's known as a depth buffer.
## Sorting from back to front
-This is the go-to method for 2d rendering as it's pretty easy to know what's supposed to go in front of what. You can just use the z order. In 3d rendering, it gets a little trickier because the order of the objects changes based on the camera angle.
+This is the go-to method for 2D rendering as it's pretty easy to know what's supposed to go in front of what. You can just use the z-order. In 3D rendering, it gets a little trickier because the order of the objects changes based on the camera angle.
-A simple way of doing this is to sort all the objects by their distance to the camera's position. There are flaws with this method though as when a large object is behind a small object, parts of the large object that should be in front of the small object will be rendered behind it. We'll also run into issues with objects that overlap *themselves*.
+A simple way of doing this is to sort all the objects by their distance from the camera's position. There are flaws with this method, though, as when a large object is behind a small object, parts of the large object that should be in front of the small object will be rendered behind it. We'll also run into issues with objects that overlap *themselves*.
-If we want to do this properly we need to have pixel-level precision. That's where a *depth buffer* comes in.
+If we want to do this properly, we need to have pixel-level precision. That's where a *depth buffer* comes in.
## A pixels depth
-A depth buffer is a black and white texture that stores the z-coordinate of rendered pixels. Wgpu can use this when drawing new pixels to determine whether to replace the data or keep it. This technique is called depth testing. This will fix our draw order problem without needing us to sort our objects!
+A depth buffer is a black and white texture that stores the z-coordinate of rendered pixels. Wgpu can use this when drawing new pixels to determine whether to replace or keep the data. This technique is called depth testing. This will fix our draw order problem without needing us to sort our objects!
Let's make a function to create the depth texture in `texture.rs`.
@@ -66,11 +66,11 @@ impl Texture {
}
```
-1. We need the DEPTH_FORMAT for when we create the depth stage of the `render_pipeline` and for creating the depth texture itself.
-2. Our depth texture needs to be the same size as our screen if we want things to render correctly. We can use our `config` to make sure that our depth texture is the same size as our surface textures.
+1. We need the DEPTH_FORMAT for creating the depth stage of the `render_pipeline` and for creating the depth texture itself.
+2. Our depth texture needs to be the same size as our screen if we want things to render correctly. We can use our `config` to ensure our depth texture is the same size as our surface textures.
3. Since we are rendering to this texture, we need to add the `RENDER_ATTACHMENT` flag to it.
4. We technically don't *need* a sampler for a depth texture, but our `Texture` struct requires it, and we need one if we ever want to sample it.
-5. If we do decide to render our depth texture, we need to use `CompareFunction::LessEqual`. This is due to how the `sampler_comparison` and `textureSampleCompare()` interacts with the `texture()` function in GLSL.
+5. If we do decide to render our depth texture, we need to use `CompareFunction::LessEqual`. This is due to how the `sampler_comparison` and `textureSampleCompare()` interact with the `texture()` function in GLSL.
We create our `depth_texture` in `State::new()`.
@@ -113,7 +113,7 @@ pub enum CompareFunction {
}
```
-2. There's another type of buffer called a stencil buffer. It's common practice to store the stencil buffer and depth buffer in the same texture. These fields control values for stencil testing. Since we aren't using a stencil buffer, we'll use default values. We'll cover stencil buffers [later](../../todo).
+2. There's another type of buffer called a stencil buffer. It's common practice to store the stencil buffer and depth buffer in the same texture. These fields control values for stencil testing. We'll use default values since we aren't using a stencil buffer. We'll cover stencil buffers [later](../../todo).
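For context, the `depth_stencil` field those notes annotate looks roughly like this; the `DEPTH_FORMAT` path assumes it's an associated constant on the `Texture` struct in `texture.rs`.
```rust
depth_stencil: Some(wgpu::DepthStencilState {
    format: texture::Texture::DEPTH_FORMAT,
    depth_write_enabled: true,
    depth_compare: wgpu::CompareFunction::Less, // 1.
    stencil: wgpu::StencilState::default(),     // 2.
    bias: wgpu::DepthBiasState::default(),
}),
```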
Don't forget to store the `depth_texture` in `State`.
@@ -165,13 +165,13 @@ let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
});
```
-And that's all we have to do! No shader code needed! If you run the application, the depth issues will be fixed.
+And that's all we have to do! No shader code is needed! If you run the application, the depth issues will be fixed.
![forest_fixed.png](./forest_fixed.png)
## Challenge
-Since the depth buffer is a texture, we can sample it in the shader. Because it's a depth texture, we'll have to use the `sampler_comparison` uniform type and the `textureSampleCompare` function instead of `sampler`, and `sampler2D` respectively. Create a bind group for the depth texture (or reuse an existing one), and render it to the screen.
+Since the depth buffer is a texture, we can sample it in the shader. Because it's a depth texture, we'll have to use the `sampler_comparison` uniform type and the `textureSampleCompare` function instead of `sampler` and `sampler2D`, respectively. Create a bind group for the depth texture (or reuse an existing one), and render it to the screen.
diff --git a/docs/beginner/tutorial9-models/README.md b/docs/beginner/tutorial9-models/README.md
index e3b19d038..210353be2 100644
--- a/docs/beginner/tutorial9-models/README.md
+++ b/docs/beginner/tutorial9-models/README.md
@@ -1,8 +1,8 @@
# Model Loading
-Up to this point we've been creating our models manually. While this is an acceptable way to do this, it's really slow if we want to include complex models with lots of polygons. Because of this, we're going to modify our code to leverage the `.obj` model format so that we can create a model in software such as blender and display it in our code.
+Up to this point, we've been creating our models manually. While this is an acceptable way to do this, it's really slow if we want to include complex models with lots of polygons. Because of this, we're going to modify our code to leverage the `.obj` model format so that we can create a model in software such as Blender and display it in our code.
-Our `lib.rs` file is getting pretty cluttered, let's create a `model.rs` file that we can put our model loading code into.
+Our `lib.rs` file is getting pretty cluttered. Let's create a `model.rs` file into which we can put our model loading code.
```rust
// model.rs
@@ -25,7 +25,7 @@ impl Vertex for ModelVertex {
}
```
-You'll notice a couple of things here. In `lib.rs` we had `Vertex` as a struct, here we're using a trait. We could have multiple vertex types (model, UI, instance data, etc.). Making `Vertex` a trait will allow us to abstract out the `VertexBufferLayout` creation code to make creating `RenderPipeline`s simpler.
+You'll notice a couple of things here. In `lib.rs`, we had `Vertex` as a struct, but here we're using a trait. We could have multiple vertex types (model, UI, instance data, etc.). Making `Vertex` a trait will allow us to abstract out the `VertexBufferLayout` creation code to make creating `RenderPipeline`s simpler.
Another thing to mention is the `normal` field in `ModelVertex`. We won't use this until we talk about lighting, but will add it to the struct for now.
@@ -81,13 +81,13 @@ Since the `desc` method is implemented on the `Vertex` trait, the trait needs to
use model::Vertex;
```
-With all that in place, we need a model to render. If you have one already that's great, but I've supplied a [zip file](https://github.com/sotrh/learn-wgpu/blob/master/code/beginner/tutorial9-models/res/cube.zip) with the model and all of its textures. We're going to put this model in a new `res` folder next to the existing `src` folder.
+With all that in place, we need a model to render. If you have one already, that's great, but I've supplied a [zip file](https://github.com/sotrh/learn-wgpu/blob/master/code/beginner/tutorial9-models/res/cube.zip) with the model and all of its textures. We're going to put this model in a new `res` folder next to the existing `src` folder.
## Accessing files in the res folder
-When cargo builds and runs our program it sets what's known as the current working directory. This directory is usually the folder containing your project's root `Cargo.toml`. The path to our res folder may differ depending on the structure of the project. In the `res` folder for the example code for this section tutorial is at `code/beginner/tutorial9-models/res/`. When loading our model we could use this path, and just append `cube.obj`. This is fine, but if we change our project's structure, our code will break.
+When Cargo builds and runs our program, it sets what's known as the current working directory. This directory is usually the folder containing your project's root `Cargo.toml`. The path to our res folder may differ depending on the project's structure. The `res` folder for this section's example code is at `code/beginner/tutorial9-models/res/`. When loading our model, we could use this path and just append `cube.obj`. This is fine, but if we change our project's structure, our code will break.
-We're going to fix that by modifying our build script to copy our `res` folder to where cargo creates our executable, and we'll reference it from there. Create a file called `build.rs` and add the following:
+We're going to fix that by modifying our build script to copy our `res` folder to where Cargo creates our executable, and we'll reference it from there. Create a file called `build.rs` and add the following:
```rust
use anyhow::*;
@@ -96,7 +96,7 @@ use fs_extra::dir::CopyOptions;
use std::env;
fn main() -> Result<()> {
- // This tells cargo to rerun this script if something in /res/ changes.
+ // This tells Cargo to rerun this script if something in /res/ changes.
println!("cargo:rerun-if-changed=res/*");
let out_dir = env::var("OUT_DIR")?;
@@ -112,13 +112,13 @@ fn main() -> Result<()> {
-Make sure to put `build.rs` in the same folder as the `Cargo.toml`. If you don't, cargo won't run it when your crate builds.
+Make sure to put `build.rs` in the same folder as the `Cargo.toml`. If you don't, Cargo won't run it when your crate builds.
-The `OUT_DIR` is an environment variable that cargo uses to specify where our application will be built.
+The `OUT_DIR` is an environment variable that Cargo uses to specify where our application will be built.
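On desktop, that boils down to building paths relative to `OUT_DIR` at compile time. A hypothetical helper to illustrate (the tutorial's actual loaders inline this; `env!` works here because the crate has a build script):
```rust
use std::path::{Path, PathBuf};

// Hypothetical helper, not the tutorial's exact code.
fn res_path(file_name: &str) -> PathBuf {
    Path::new(env!("OUT_DIR")).join("res").join(file_name)
}
```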
@@ -133,7 +133,7 @@ glob = "0.3"
## Accessing files from WASM
-By design, you can't access files on a user's filesystem in Web Assembly. Instead, we'll serve those files up using a web serve, and then load those files into our code using an http request. In order to simplify this, let's create a file called `resources.rs` to handle this for us. We'll create two functions that will load text files and binary files respectively.
+By design, you can't access files on a user's filesystem in Web Assembly. Instead, we'll serve those files up using a web server and then load those files into our code using an HTTP request. In order to simplify this, let's create a file called `resources.rs` to handle this for us. We'll create two functions that load text and binary files, respectively.
```rust
use std::io::{BufReader, Cursor};
@@ -197,11 +197,11 @@ pub async fn load_binary(file_name: &str) -> anyhow::Result<Vec<u8>> {
-We're using `OUT_DIR` on desktop to get to our `res` folder.
+We're using `OUT_DIR` on desktop to access our `res` folder.
-I'm using [reqwest](https://docs.rs/reqwest) to handle loading the requests when using WASM. Add the following to the Cargo.toml:
+I'm using [reqwest](https://docs.rs/reqwest) to handle loading the requests when using WASM. Add the following to the `Cargo.toml`:
```toml
[target.'cfg(target_arch = "wasm32")'.dependencies]
@@ -238,7 +238,7 @@ tobj = { version = "3.2.1", features = [
]}
```
-Before we can load our model though, we need somewhere to put it.
+Before we can load our model, though, we need somewhere to put it.
```rust
// model.rs
@@ -248,7 +248,7 @@ pub struct Model {
}
```
-You'll notice that our `Model` struct has a `Vec` for the `meshes`, and for `materials`. This is important as our obj file can include multiple meshes and materials. We still need to create the `Mesh` and `Material` classes, so let's do that.
+You'll notice that our `Model` struct has a `Vec` for the `meshes` and `materials`. This is important as our obj file can include multiple meshes and materials. We still need to create the `Mesh` and `Material` classes, so let's do that.
```rust
pub struct Material {
@@ -266,7 +266,7 @@ pub struct Mesh {
}
```
-The `Material` is pretty simple, it's just the name and one texture. Our cube obj actually has 2 textures, but one is a normal map, and we'll get to those [later](../../intermediate/tutorial11-normals). The name is more for debugging purposes.
+The `Material` is pretty simple. It's just the name and one texture. Our cube obj actually has two textures, but one is a normal map, and we'll get to those [later](../../intermediate/tutorial11-normals). The name is more for debugging purposes.
Speaking of textures, we'll need to add a function to load a `Texture` in `resources.rs`.
@@ -282,9 +282,9 @@ pub async fn load_texture(
}
```
-The `load_texture` method will be useful when we load the textures for our models, as `include_bytes!` requires that we know the name of the file at compile time which we can't really guarantee with model textures.
+The `load_texture` method will be useful when we load the textures for our models, as `include_bytes!` requires that we know the name of the file at compile time, which we can't really guarantee with model textures.
-`Mesh` holds a vertex buffer, an index buffer, and the number of indices in the mesh. We're using an `usize` for the material. This `usize` will be used to index the `materials` list when it comes time to draw.
+`Mesh` holds a vertex buffer, an index buffer, and the number of indices in the mesh. We're using a `usize` for the material. This `usize` will index the `materials` list when it comes time to draw.
With all that out of the way, we can get to loading our model.
@@ -385,7 +385,7 @@ pub async fn load_model(
## Rendering a mesh
-Before we can draw the model, we need to be able to draw an individual mesh. Let's create a trait called `DrawModel`, and implement it for `RenderPass`.
+Before we can draw the model, we need to be able to draw an individual mesh. Let's create a trait called `DrawModel` and implement it for `RenderPass`.
```rust
// model.rs
@@ -417,9 +417,9 @@ where
}
```
-We could have put these methods in an `impl Model`, but I felt it made more sense to have the `RenderPass` do all the rendering, as that's kind of its job. This does mean we have to import `DrawModel` when we go to render though.
+We could have put these methods in an `impl Model`, but I felt it made more sense to have the `RenderPass` do all the rendering, as that's kind of its job. This does mean we have to import `DrawModel` when we go to render, though.
-When we removed `vertex_buffer`, etc. we also removed their render_pass setup.
+When we removed `vertex_buffer`, etc., we also removed their render_pass setup.
```rust
// lib.rs
@@ -432,7 +432,7 @@ use model::DrawModel;
render_pass.draw_mesh_instanced(&self.obj_model.meshes[0], 0..self.instances.len() as u32);
```
-Before that though we need to actually load the model and save it to `State`. Put the following in `State::new()`.
+Before that, though, we need to load the model and save it to `State`. Put the following in `State::new()`.
```rust
let obj_model =
@@ -441,7 +441,7 @@ let obj_model =
.unwrap();
```
-Our new model is a bit bigger than our previous one so we're gonna need to adjust the spacing on our instances a bit.
+Our new model is a bit bigger than our previous one, so we're gonna need to adjust the spacing on our instances a bit.
```rust
const SPACE_BETWEEN: f32 = 3.0;
@@ -477,7 +477,7 @@ If you look at the texture files for our obj, you'll see that they don't match u
but we're still getting our happy tree texture.
-The reason for this is quite simple. Though we've created our textures we haven't created a bind group to give to the `RenderPass`. We're still using our old `diffuse_bind_group`. If we want to change that we need to use the bind group from our materials - the `bind_group` member of the `Material` struct.
+The reason for this is quite simple. Though we've created our textures, we haven't created a bind group to give to the `RenderPass`. We're still using our old `diffuse_bind_group`. If we want to change that, we need to use the bind group from our materials - the `bind_group` member of the `Material` struct.
We're going to add a material parameter to `DrawModel`.
@@ -536,7 +536,7 @@ With all that in place, we should get the following.
## Rendering the entire model
-Right now we are specifying the mesh and the material directly. This is useful if we want to draw a mesh with a different material. We're also not rendering other parts of the model (if we had some). Let's create a method for `DrawModel` that will draw all the parts of the model with their respective materials.
+Right now, we are specifying the mesh and the material directly. This is useful if we want to draw a mesh with a different material. We're also not rendering other parts of the model (if we had some). Let's create a method for `DrawModel` that will draw all the parts of the model with their respective materials.
```rust
pub trait DrawModel<'a> {
diff --git a/docs/intermediate/tutorial10-lighting/README.md b/docs/intermediate/tutorial10-lighting/README.md
index 9a0a44584..ba63b95e5 100644
--- a/docs/intermediate/tutorial10-lighting/README.md
+++ b/docs/intermediate/tutorial10-lighting/README.md
@@ -1,10 +1,10 @@
# Working with Lights
-While we can tell that our scene is 3d because of our camera, it still feels very flat. That's because our model stays the same color regardless of how it's oriented. If we want to change that we need to add lighting to our scene.
+While we can tell our scene is 3D because of our camera, it still feels very flat. That's because our model stays the same color regardless of its orientation. If we want to change that, we need to add lighting to our scene.
-In the real world, a light source emits photons that bounce around until they enter our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around.
+In the real world, a light source emits photons that bounce around until they enter our eyes. The color we see is the light's original color minus whatever energy it lost while bouncing around.
-In the computer graphics world, modeling individual photons would be hilariously computationally expensive. A single 100 Watt light bulb emits about 3.27 x 10^20 photons *per second*. Just imagine that for the sun! To get around this, we're gonna use math to cheat.
+In the computer graphics world, modeling individual photons would be hilariously computationally expensive. A single 100 Watt light bulb emits about 3.27 x 10^20 photons *per second*. Just imagine that for the sun! To get around this, we're going to use math to cheat.
Let's discuss a few options.
@@ -14,9 +14,9 @@ This is an *advanced* topic, and we won't be covering it in depth here. It's the
## The Blinn-Phong Model
-Ray/path tracing is often too computationally expensive for most real-time applications (though that is starting to change), so a more efficient, if less accurate method based on the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_shading) is often used. It splits up the lighting calculation into three (3) parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the [Blinn-Phong model](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model), which cheats a bit at the specular calculation to speed things up.
+Ray/path tracing is often too computationally expensive for most real-time applications (though that is starting to change), so a more efficient, if less accurate method based on the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_shading) is often used. It splits up the lighting calculation into three parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the [Blinn-Phong model](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model), which cheats a bit at the specular calculation to speed things up.
-Before we can get into that though, we need to add a light to our scene.
+Before we can get into that, though, we need to add a light to our scene.
```rust
// lib.rs
@@ -37,14 +37,10 @@ Our `LightUniform` represents a colored point in space. We're just going to use
-The rule of thumb for alignment with WGSL structs is field alignments are
-always powers of 2. For example, a `vec3` may only have 3 float fields giving
-it a size of 12, the alignment will be bumped up to the next power of 2 being
-16. This means that you have to be more careful with how you layout your struct
- in Rust.
+The rule of thumb for alignment with WGSL structs is that field alignments are always powers of 2. For example, a `vec3` may only have three float fields, giving it a size of 12, but its alignment will be bumped up to the next power of 2, which is 16. This means that you have to be more careful with how you lay out your struct in Rust.
-Some developers choose the use `vec4`s instead of `vec3`s to avoid alignment
-issues. You can learn more about the alignment rules in the [wgsl spec](https://www.w3.org/TR/WGSL/#alignment-and-size)
+Some developers choose to use `vec4`s instead of `vec3`s to avoid alignment
+issues. You can learn more about the alignment rules in the [WGSL spec](https://www.w3.org/TR/WGSL/#alignment-and-size).
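
As a sketch of how that plays out on the Rust side (the struct and field names here are illustrative, not prescribed by wgpu), a uniform with two `vec3` fields needs explicit padding to reach the 16-byte alignment:

```rust
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct LightUniform {
    position: [f32; 3],
    // WGSL aligns the next field to 16 bytes, so pad the 12-byte vec3 explicitly
    _padding: u32,
    color: [f32; 3],
    _padding2: u32,
}
```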
@@ -97,7 +93,7 @@ let light_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
});
```
-Add those to `State`, and also update the `render_pipeline_layout`.
+Add those to `State` and also update the `render_pipeline_layout`.
```rust
let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
@@ -109,7 +105,7 @@ let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayout
});
```
-Let's also update the light's position in the `update()` method, so we can see what our objects look like from different angles.
+Let's also update the light's position in the `update()` method to see what our objects look like from different angles.
```rust
// Update the light
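// (sketch) rotate the light a little around the y axis each frame and upload the
// new value; the field and buffer names follow this tutorial's conventions
let old_position: cgmath::Vector3<_> = self.light_uniform.position.into();
self.light_uniform.position =
    (cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(1.0))
        * old_position)
        .into();
self.queue
    .write_buffer(&self.light_buffer, 0, bytemuck::cast_slice(&[self.light_uniform]));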
@@ -297,7 +293,7 @@ where
}
```
-With that done we can create another render pipeline for our light.
+With that done, we can create another render pipeline for our light.
```rust
// lib.rs
@@ -322,7 +318,7 @@ let light_render_pipeline = {
};
```
-I chose to create a separate layout for the `light_render_pipeline`, as it doesn't need all the resources that the regular `render_pipeline` needs (main just the textures).
+I chose to create a separate layout for the `light_render_pipeline`, as it doesn't need all the resources that the regular `render_pipeline` needs (mainly just the textures).
With that in place, we need to write the actual shaders.
@@ -371,7 +367,7 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4 {
}
```
-Now we could manually implement the draw code for the light in `render()`, but to keep with the pattern we developed, let's create a new trait called `DrawLight`.
+Now, we could manually implement the draw code for the light in `render()`, but to keep with the pattern we developed, let's create a new trait called `DrawLight`.
```rust
// model.rs
@@ -487,9 +483,9 @@ With all that, we'll end up with something like this.
## Ambient Lighting
-Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands for the light bouncing off other parts of the scene to light our objects.
+Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Modeling this interaction would be computationally expensive, so we will cheat. We define an ambient lighting value that stands in for the light bouncing off other parts of the scene to light our objects.
-The ambient part is based on the light color as well as the object color. We've already added our `light_bind_group`, so we just need to use it in our shader. In `shader.wgsl`, add the following below the texture uniforms.
+The ambient part is based on the light color and the object color. We've already added our `light_bind_group`, so we just need to use it in our shader. In `shader.wgsl`, add the following below the texture uniforms.
```wgsl
struct Light {
@@ -500,7 +496,7 @@ struct Light {
var light: Light;
```
-Then we need to update our main shader code to calculate and use the ambient color value.
+Then, we need to update our main shader code to calculate and use the ambient color value.
```wgsl
@fragment
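fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    // (sketch) a small, constant ambient term scaled by the light color, as
    // described above; it gets combined with the sampled object color below
    let ambient_strength = 0.1;
    let ambient_color = light.color * ambient_strength;
    // ...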
@@ -523,11 +519,11 @@ With that, we should get something like this.
## Diffuse Lighting
-Remember the normal vectors that were included with our model? We're finally going to use them. Normals represent the direction a surface is facing. By comparing the normal of a fragment with a vector pointing to a light source, we get a value of how light/dark that fragment should be. We compare the vector using the dot product to get the cosine of the angle between them.
+Remember the normal vectors that were included in our model? We're finally going to use them. Normals represent the direction a surface is facing. By comparing the normal of a fragment with a vector pointing to a light source, we get a value of how light/dark that fragment should be. We compare the vectors using the dot product to get the cosine of the angle between them.
![./normal_diagram.png](./normal_diagram.png)
-If the dot product of the normal and light vector is 1.0, that means that the current fragment is directly in line with the light source and will receive the light's full intensity. A value of 0.0 or lower means that the surface is perpendicular or facing away from the light, and therefore will be dark.
+If the dot product of the normal and light vector is 1.0, that means that the current fragment is directly in line with the light source and will receive the light's full intensity. A value of 0.0 or lower means that the surface is perpendicular or facing away from the light and, therefore, will be dark.
We're going to need to pull the normal vector into our `shader.wgsl`.
@@ -539,7 +535,7 @@ struct VertexInput {
};
```
-We're also going to want to pass that value, as well as the vertex's position to the fragment shader.
+We're also going to want to pass that value, as well as the vertex's position, to the fragment shader.
```wgsl
struct VertexOutput {
@@ -574,7 +570,7 @@ fn vs_main(
}
```
-With that, we can do the actual calculation. Below the `ambient_color` calculation, but above `result`, add the following.
+With that, we can do the actual calculation. Add the following below the `ambient_color` calculation but above the `result`.
```wgsl
let light_dir = normalize(light.position - in.world_position);
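// (sketch) the diffuse term described above: the more directly the surface
// faces the light, the brighter it gets
let diffuse_strength = max(dot(in.world_normal, light_dir), 0.0);
let diffuse_color = light.color * diffuse_strength;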
@@ -600,7 +596,7 @@ Remember when I said passing the vertex normal directly to the fragment shader w
```rust
const NUM_INSTANCES_PER_ROW: u32 = 1;
-// In the loop we create the instances in
+// In the loop, we create the instances in
let rotation = cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(180.0));
```
@@ -614,15 +610,15 @@ That should give us something that looks like this.
![./diffuse_wrong.png](./diffuse_wrong.png)
-This is clearly wrong as the light is illuminating the wrong side of the cube. This is because we aren't rotating our normals with our object, so no matter what direction the object faces, the normals will always face the same way.
+This is clearly wrong, as the light is illuminating the wrong side of the cube. This is because we aren't rotating our normals with our object, so no matter what direction the object faces, the normals will always face the same way.
![./normal_not_rotated.png](./normal_not_rotated.png)
-We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction and should be a unit vector throughout the calculation. We can get our normals in the right direction using what is called a normal matrix.
+We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data, though. A normal represents a direction and should be a unit vector throughout the calculation. We can get our normals in the right direction using what is called a normal matrix.
-We could compute the normal matrix in the vertex shader, but that would involve inverting the `model_matrix`, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that computing, the inverse of a matrix is actually really expensive, especially doing that computation for every vertex.
+We could compute the normal matrix in the vertex shader, but that would involve inverting the `model_matrix`, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that, computing the inverse of a matrix is actually really expensive, especially doing that computation for every vertex.
-Instead, we're going to add a `normal` matrix field to `InstanceRaw`. Instead of inverting the model matrix, we'll just be using the instance's rotation to create a `Matrix3`.
+Instead, we're going to add a `normal` matrix field to `InstanceRaw`. Instead of inverting the model matrix, we'll just use the instance's rotation to create a `Matrix3`.
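
As a sketch of that idea (assuming the cgmath types used elsewhere in this tutorial), `to_raw()` can build the normal matrix straight from the rotation quaternion, with no matrix inverse involved:

```rust
impl Instance {
    fn to_raw(&self) -> InstanceRaw {
        let model = cgmath::Matrix4::from_translation(self.position)
            * cgmath::Matrix4::from(self.rotation);
        InstanceRaw {
            model: model.into(),
            // the rotation alone is enough for the normal matrix
            normal: cgmath::Matrix3::from(self.rotation).into(),
        }
    }
}
```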
@@ -651,13 +647,13 @@ impl model::Vertex for InstanceRaw {
attributes: &[
wgpu::VertexAttribute {
offset: 0,
- // While our vertex shader only uses locations 0, and 1 now, in later tutorials we'll
- // be using 2, 3, and 4, for Vertex. We'll start at slot 5 not conflict with them later
+ // While our vertex shader only uses locations 0, and 1 now, in later tutorials, we'll
+ // be using 2, 3, and 4 for Vertex. We'll start at slot 5 to not conflict with them later
shader_location: 5,
format: wgpu::VertexFormat::Float32x4,
},
// A mat4 takes up 4 vertex slots as it is technically 4 vec4s. We need to define a slot
- // for each vec4. We don't have to do this in code though.
+ // for each vec4. We don't have to do this in code, though.
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 4]>() as wgpu::BufferAddress,
shader_location: 6,
@@ -716,7 +712,7 @@ impl Instance {
}
```
-Now we need to reconstruct the normal matrix in the vertex shader.
+Now, we need to reconstruct the normal matrix in the vertex shader.
```wgsl
struct InstanceInput {
@@ -766,9 +762,9 @@ fn vs_main(
-I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have included the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
+I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space, also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have included the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
-There are advantages to using view space. The main one is when you have massive worlds doing lighting and other calculations in model spacing can cause issues as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin meaning all calculations will be using smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
+There are advantages to using view space. The main one is that when you have massive worlds, doing lighting and other calculations in model space can cause issues, as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin, meaning all calculations will use smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
@@ -776,21 +772,21 @@ With that change, our lighting now looks correct.
![./diffuse_right.png](./diffuse_right.png)
-Bringing back our other objects, and adding the ambient lighting gives us this.
+Bringing back our other objects and adding the ambient lighting gives us this.
![./ambient_diffuse_lighting.png](./ambient_diffuse_lighting.png)
-If you can guarantee that your model matrix will always apply uniform scaling to your objects, you can get away with just using the model matrix. Github user @julhe pointed shared this code with me that does the trick:
+If you can guarantee that your model matrix will always apply uniform scaling to your objects, you can get away with just using the model matrix. GitHub user @julhe shared this code with me that does the trick:
```wgsl
out.world_normal = (model_matrix * vec4<f32>(model.normal, 0.0)).xyz;
```
-This works by exploiting the fact that by multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector though as normals need to be unit length for the calculations to work.
+This works by exploiting the fact that by multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector, though, as normals need to be unit length for the calculations to work.
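
As a minimal sketch, the same line with that normalization applied might look like this:

```wgsl
out.world_normal = normalize((model_matrix * vec4<f32>(model.normal, 0.0)).xyz);
```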
-The scaling factor *needs* to be uniform in order for this to work. If it's not the resulting normal will be skewed as you can see in the following image.
+The scaling factor *needs* to be uniform in order for this to work. If it's not, the resulting normal will be skewed, as you can see in the following image.
![./normal-scale-issue.png](./normal-scale-issue.png)
@@ -863,7 +859,7 @@ let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupL
});
```
-We're going to get the direction from the fragment's position to the camera, and use that with the normal to calculate the `reflect_dir`.
+We're going to get the direction from the fragment's position to the camera and use that with the normal to calculate the `reflect_dir`.
```wgsl
// shader.wgsl
@@ -872,7 +868,7 @@ let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let reflect_dir = reflect(-light_dir, in.world_normal);
```
-Then we use the dot product to calculate the `specular_strength` and use that to compute the `specular_color`.
+Then, we use the dot product to calculate the `specular_strength` and use that to compute the `specular_color`.
```wgsl
let specular_strength = pow(max(dot(view_dir, reflect_dir), 0.0), 32.0);
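// (sketch) scale the light color by that strength to get the specular contribution
let specular_color = specular_strength * light.color;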
@@ -889,13 +885,13 @@ With that, you should have something like this.
![./ambient_diffuse_specular_lighting.png](./ambient_diffuse_specular_lighting.png)
-If we just look at the `specular_color` on its own we get this.
+If we just look at the `specular_color` on its own, we get this.
![./specular_lighting.png](./specular_lighting.png)
## The half direction
-Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir`, and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
+Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir` and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
```wgsl
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
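// (sketch) the Blinn half vector: add the view and light directions, normalize,
// and use the result in place of reflect_dir in the specular term
let half_dir = normalize(view_dir + light_dir);
let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0);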
diff --git a/docs/intermediate/tutorial11-normals/README.md b/docs/intermediate/tutorial11-normals/README.md
index f53f4408f..a674e6119 100644
--- a/docs/intermediate/tutorial11-normals/README.md
+++ b/docs/intermediate/tutorial11-normals/README.md
@@ -1,14 +1,14 @@
# Normal Mapping
-With just lighting, our scene is already looking pretty good. Still, our models are still overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it be would hard to know where to add new polygons. This is where normal mapping comes in.
+With just lighting, our scene is already looking pretty good. Still, our models are overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it would be hard to know where to add new polygons. This is where normal mapping comes in.
-Remember in [the instancing tutorial](/beginner/tutorial7-instancing/#a-different-way-textures), we experimented with storing instance data in a texture? A normal map is doing just that with normal data! We'll use the normals in the normal map in our lighting calculation in addition to the vertex normal.
+Remember when we experimented with storing instance data in a texture in [the instancing tutorial](/beginner/tutorial7-instancing/#a-different-way-textures)? A normal map is doing just that with normal data! We'll use the normals in the normal map in our lighting calculation in addition to the vertex normal.
The brick texture I found came with a normal map. Let's take a look at it!
![./cube-normal.png](./cube-normal.png)
-The r, g, and b components of the texture correspond to the x, y, and z components or the normals. All the z values should be positive, that's why the normal map has a bluish tint.
+The r, g, and b components of the texture correspond to the x, y, and z components of the normals. All the z values should be positive. That's why the normal map has a bluish tint.
We'll need to modify our `Material` struct in `model.rs` to include a `normal_texture`.
@@ -49,7 +49,7 @@ let texture_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroup
});
```
-We'll need to actually load the normal map. We'll do this in the loop where we create the materials in the `load_model()` function in `resources.rs`.
+We'll need to load the normal map. We'll do this in the loop where we create the materials in the `load_model()` function in `resources.rs`.
```rust
// resources.rs
@@ -114,7 +114,7 @@ impl Material {
}
```
-Now we can use the texture in the fragment shader.
+Now, we can use the texture in the fragment shader.
```wgsl
// Fragment shader
@@ -164,31 +164,31 @@ Parts of the scene are dark when they should be lit up, and vice versa.
## Tangent Space to World Space
-I mentioned briefly in the [lighting tutorial](/intermediate/tutorial10-lighting/#the-normal-matrix), that we were doing our lighting calculation in "world space". This meant that the entire scene was oriented with respect to the *world's* coordinate system. When we pull the normal data from our normal texture, all the normals are in what's known as pointing roughly in the positive z direction. That means that our lighting calculation thinks all of the surfaces of our models are facing in roughly the same direction. This is referred to as `tangent space`.
+I mentioned briefly in the [lighting tutorial](/intermediate/tutorial10-lighting/#the-normal-matrix) that we were doing our lighting calculation in "world space". This meant that the entire scene was oriented with respect to the *world's* coordinate system. When we pull the normal data from our normal texture, all the normals point roughly in the positive z direction. That means that our lighting calculation thinks all of the surfaces of our models are facing in roughly the same direction. This is referred to as `tangent space`.
-If we remember the [lighting-tutorial](/intermediate/tutorial10-lighting/#), we used the vertex normal to indicate the direction of the surface. It turns out we can use that to transform our normals from `tangent space` into `world space`. In order to do that we need to draw from the depths of linear algebra.
+If we remember the [lighting-tutorial](/intermediate/tutorial10-lighting/#), we used the vertex normal to indicate the direction of the surface. It turns out we can use that to transform our normals from `tangent space` into `world space`. In order to do that, we need to draw from the depths of linear algebra.
-We can create a matrix that represents a coordinate system using 3 vectors that are perpendicular (or orthonormal) to each other. Basically, we define the x, y, and z axes of our coordinate system.
+We can create a matrix that represents a coordinate system using three vectors that are perpendicular (or orthonormal) to each other. Basically, we define the x, y, and z axes of our coordinate system.
```wgsl
let coordinate_system = mat3x3(
- vec3(1, 0, 0), // x axis (right)
- vec3(0, 1, 0), // y axis (up)
- vec3(0, 0, 1) // z axis (forward)
+ vec3(1, 0, 0), // x-axis (right)
+ vec3(0, 1, 0), // y-axis (up)
+ vec3(0, 0, 1) // z-axis (forward)
);
```
We're going to create a matrix that will represent the coordinate space relative to our vertex normals. We're then going to use that to transform our normal map data to be in world space.
-## The tangent, and the bitangent
+## The tangent and the bitangent
-We have one of the 3 vectors we need, the normal. What about the others? These are the tangent and bitangent vectors. A tangent represents any vector that is parallel with a surface (aka. doesn't intersect with it). The tangent is always perpendicular to the normal vector. The bitangent is a tangent vector that is perpendicular to the other tangent vector. Together the tangent, bitangent, and normal represent the x, y, and z axes respectively.
+We have one of the three vectors we need, the normal. What about the others? These are the tangent and bitangent vectors. A tangent represents any vector parallel with a surface (aka. doesn't intersect with it). The tangent is always perpendicular to the normal vector. The bitangent is a tangent vector that is perpendicular to the other tangent vector. Together, the tangent, bitangent, and normal represent the x, y, and z axes, respectively.
-Some model formats include the tanget and bitangent (sometimes called the binormal) in the vertex data, but OBJ does not. We'll have to calculate them manually. Luckily we can derive our tangent and bitangent from our existing vertex data. Take a look at the following diagram.
+Some model formats include the tangent and bitangent (sometimes called the binormal) in the vertex data, but OBJ does not. We'll have to calculate them manually. Luckily, we can derive our tangent and bitangent from our existing vertex data. Take a look at the following diagram.
![](./tangent_space.png)
-Basically, we can use the edges of our triangles, and our normal to calculate the tangent and bitangent. But first, we need to update our `ModelVertex` struct in `model.rs`.
+Basically, we can use the edges of our triangles and our normal to calculate the tangent and bitangent. But first, we need to update our `ModelVertex` struct in `model.rs`.
```rust
#[repr(C)]
@@ -232,7 +232,7 @@ impl Vertex for ModelVertex {
}
```
-Now we can calculate the new tangent and bitangent vectors. Update the mesh generation in `load_model()` in `resource.rs` to use the following code:
+Now, we can calculate the new tangent and bitangent vectors. Update the mesh generation in `load_model()` in `resource.rs` to use the following code:
```rust
let meshes = models
@@ -349,7 +349,7 @@ let meshes = models
## World Space to Tangent Space
-Since the normal map by default is in tangent space, we need to transform all the other variables used in that calculation to tangent space as well. We'll need to construct the tangent matrix in the vertex shader. First, we need our `VertexInput` to include the tangent and bitangents we calculated earlier.
+Since the normal map, by default, is in tangent space, we need to transform all the other variables used in that calculation to tangent space as well. We'll need to construct the tangent matrix in the vertex shader. First, we need our `VertexInput` to include the tangent and bitangents we calculated earlier.
```wgsl
struct VertexInput {
@@ -429,7 +429,7 @@ We get the following from this calculation.
## Srgb and normal textures
-We've been using `Rgba8UnormSrgb` for all our textures. The `Srgb` bit specifies that we will be using [standard red green blue color space](https://en.wikipedia.org/wiki/SRGB). This is also known as linear color space. Linear color space has less color density. Even so, it is often used for diffuse textures, as they are typically made in `Srgb` color space.
+We've been using `Rgba8UnormSrgb` for all our textures. The `Srgb` bit specifies that we will be using [standard RGB (red, green, blue) color space](https://en.wikipedia.org/wiki/SRGB). This is also known as linear color space. Linear color space has less color density. Even so, it is often used for diffuse textures, as they are typically made in `Srgb` color space.
Normal textures aren't made with `Srgb`. Using `Rgba8UnormSrgb` can change how the GPU samples the texture. This can make the resulting simulation [less accurate](https://medium.com/@bgolus/generating-perfect-normal-maps-for-unity-f929e673fc57#b86c). We can avoid these issues by using `Rgba8Unorm` when we create the texture. Let's add an `is_normal_map` method to our `Texture` struct.
@@ -589,7 +589,7 @@ impl State {
}
```
-Then to render with the `debug_material` I used the `draw_model_instanced_with_material()` that I created.
+Then, to render with the `debug_material`, I used the `draw_model_instanced_with_material()` that I created.
```rust
render_pass.set_pipeline(&self.render_pipeline);
@@ -606,7 +606,7 @@ That gives us something like this.
![](./debug_material.png)
-You can find the textures I use in the Github Repository.
+You can find the textures I use in the GitHub Repository.
diff --git a/docs/intermediate/tutorial12-camera/README.md b/docs/intermediate/tutorial12-camera/README.md
index 2d289243d..c6f43d809 100644
--- a/docs/intermediate/tutorial12-camera/README.md
+++ b/docs/intermediate/tutorial12-camera/README.md
@@ -1,6 +1,6 @@
# A Better Camera
-I've been putting this off for a while. Implementing a camera isn't specifically related to using WGPU properly, but it's been bugging me so let's do it.
+I've been putting this off for a while. Implementing a camera isn't specifically related to using WGPU properly, but it's been bugging me, so let's do it.
`lib.rs` is getting a little crowded, so let's create a `camera.rs` file to put our camera code. The first things we're going to put in it are some imports and our `OPENGL_TO_WGPU_MATRIX`.
@@ -80,7 +80,7 @@ impl Camera {
## The Projection
-I've decided to split the projection from the camera. The projection only really needs to change if the window resizes, so let's create a `Projection` struct.
+I've decided to split the projection from the camera. The projection only needs to change if the window resizes, so let's create a `Projection` struct.
```rust
pub struct Projection {
@@ -235,7 +235,7 @@ impl CameraController {
// If process_mouse isn't called every frame, these values
// will not get set to zero, and the camera will rotate
- // when moving in a non cardinal direction.
+ // when moving in a non-cardinal direction.
self.rotate_horizontal = 0.0;
self.rotate_vertical = 0.0;
@@ -251,7 +251,7 @@ impl CameraController {
## Cleaning up `lib.rs`
-First things first we need to delete `Camera` and `CameraController` as well as the extra `OPENGL_TO_WGPU_MATRIX` from `lib.rs`. Once you've done that import `camera.rs`.
+First things first, we need to delete `Camera` and `CameraController`, as well as the extra `OPENGL_TO_WGPU_MATRIX` from `lib.rs`. Once you've done that, import `camera.rs`.
```rust
mod model;
@@ -320,7 +320,7 @@ impl State {
}
```
-We need to change our `projection` in `resize` as well.
+We also need to change our `projection` in `resize`.
```rust
fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
@@ -332,7 +332,7 @@ fn resize(&mut self, new_size: winit::dpi::PhysicalSize) {
`input()` will need to be updated as well. Up to this point, we have been using `WindowEvent`s for our camera controls. While this works, it's not the best solution. The [winit docs](https://docs.rs/winit/0.24.0/winit/event/enum.WindowEvent.html?search=#variant.CursorMoved) inform us that the OS will often transform the data for the `CursorMoved` event to allow effects such as cursor acceleration.
-Now to fix this we could change the `input()` function to process `DeviceEvent` instead of `WindowEvent`, but keyboard and button presses don't get emitted as `DeviceEvent`s on MacOS and WASM. Instead, we'll just remove the `CursorMoved` check in `input()`, and a manual call to `camera_controller.process_mouse()` in the `run()` function.
+Now, to fix this, we could change the `input()` function to process `DeviceEvent` instead of `WindowEvent`, but keyboard and button presses don't get emitted as `DeviceEvent`s on MacOS and WASM. Instead, we'll just remove the `CursorMoved` check in `input()` and add a manual call to `camera_controller.process_mouse()` in the `run()` function.
```rust
// UPDATED!
@@ -412,7 +412,7 @@ fn main() {
}
```
-The `update` function requires a bit more explanation. The `update_camera` function on the `CameraController` has a parameter `dt: Duration` which is the delta time or time between frames. This is to help smooth out the camera movement so that it's not locked by the framerate. Currently, we aren't calculating `dt`, so I decided to pass it into `update` as a parameter.
+The `update` function requires a bit more explanation. The `update_camera` function on the `CameraController` has a parameter `dt: Duration`, which is the delta time or time between frames. This is to help smooth out the camera movement so that it's not locked by the framerate. Currently, we aren't calculating `dt`, so I decided to pass it into `update` as a parameter.
```rust
fn update(&mut self, dt: instant::Duration) {
@@ -424,7 +424,7 @@ fn update(&mut self, dt: instant::Duration) {
}
```
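
As a rough sketch of how `dt` might be produced (this is not necessarily the tutorial's exact event-loop code), you can track the time of the previous frame and subtract it from the current one before calling `update`:

```rust
// somewhere before the event loop starts
let mut last_render_time = instant::Instant::now();

// ...inside the redraw/update handling:
let now = instant::Instant::now();
let dt = now - last_render_time;
last_render_time = now;
state.update(dt);
```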
-While we're at it, let's use `dt` for the light's rotation as well.
+While we're at it, let's also use `dt` for the light's rotation.
```rust
self.light_uniform.position =
diff --git a/docs/intermediate/tutorial13-hdr/readme.md b/docs/intermediate/tutorial13-hdr/readme.md
index 09368c3c7..5a30d315d 100644
--- a/docs/intermediate/tutorial13-hdr/readme.md
+++ b/docs/intermediate/tutorial13-hdr/readme.md
@@ -1,48 +1,26 @@
# High Dynamic Range Rendering
-Up to this point we've been using the sRGB colorspace to render our scene.
-While this is fine it limits what we can do with our lighting. We are using
-`TextureFormat::Bgra8UnormSrgb` (on most systems) for our surface texture.
-This means that we have 8bits for each of the color and alpha channels. While
-the channels are stored as integers between 0 and 255 inclusively, they get
-converted to and from floating point values between 0.0 and 1.0. The TL:DR of
-this is that using 8bit textures we only get 256 possible values in each
-channel.
-
-The kicker with this is most of the precision gets used to represent darker
-values of the scene. This means that bright objects like a light bulb have
-the same value as exeedingly bright objects such as the sun. This inaccuracy
-makes realistic lighting difficult to do right. Because of this, we are going
-to switch our rendering system to use high dynamic range in order to give our
-scene more flexibility and enable use to leverage more advanced techniques
-such as Physically Based Rendering.
+Up to this point, we've been using the sRGB colorspace to render our scene. While this is fine, it limits what we can do with our lighting. We are using `TextureFormat::Bgra8UnormSrgb` (on most systems) for our surface texture. This means we have 8 bits for each of the red, green, blue, and alpha channels. While the channels are stored as integers between 0 and 255 inclusively, they get converted to and from floating point values between 0.0 and 1.0. The TL;DR of this is that using 8-bit textures, we only get 256 possible values in each channel.
+
+The kicker with this is most of the precision gets used to represent darker values of the scene. This means that bright objects like light bulbs have the same value as exceedingly bright objects like the sun. This inaccuracy makes realistic lighting difficult to do right. Because of this, we are going to switch our rendering system to use high dynamic range in order to give our scene more flexibility and enable us to leverage more advanced techniques such as Physically Based Rendering.
## What is High Dynamic Range?
-In laymans terms, a High Dynamic Range texture is a texture with more bits
-per pixel. In addition to this, HDR textures are stored as floating point values
-instead of integer values. This means that the texture can have brightness values
-greater than 1.0 meaning you can have a dynamic range of brighter objects.
+In layman's terms, a High Dynamic Range texture is a texture with more bits per pixel. In addition to this, HDR textures are stored as floating point values instead of integer values. This means that the texture can have brightness values greater than 1.0, meaning you can have a dynamic range of brighter objects.
## Switching to HDR
-As of writing, wgpu doesn't allow us to use a floating point format such as
-`TextureFormat::Rgba16Float` as the surface texture format (not all
-monitors support that anyways), so we will have to render our scene in
-an HDR format, then convert the values to a supported format such as
-`TextureFormat::Bgra8UnormSrgb` using a technique called tonemapping.
+As of writing, wgpu doesn't allow us to use a floating point format such as `TextureFormat::Rgba16Float` as the surface texture format (not all monitors support that anyway), so we will have to render our scene in an HDR format, then convert the values to a supported format, such as `TextureFormat::Bgra8UnormSrgb`, using a technique called tonemapping.
-There are some talks about implementing HDR surface texture support in
-wgpu. Here is a github issues if you want to contribute to that
-effort: https://github.com/gfx-rs/wgpu/issues/2920
+There are some talks about implementing HDR surface texture support in wgpu. Here is a GitHub issue if you want to contribute to that effort: https://github.com/gfx-rs/wgpu/issues/2920
-Before we do that though we need to switch to using an HDR texture for rendering.
+Before we do that, though, we need to switch to using an HDR texture for rendering.
-To start we'll create a file called `hdr.rs` and put the some code in it:
+To start, we'll create a file called `hdr.rs` and put some code in it:
```rust
use wgpu::Operations;
@@ -235,14 +213,9 @@ fn create_render_pipeline(
## Tonemapping
-The process of tonemapping is taking an HDR image and converting it to
-a Standard Dynamic Range (SDR) which is usually sRGB. The exact
-tonemapping curve you uses is ultimately up to your artistic needs, but
-for this tutorial we'll use a popular one know as the Academy Color
-Encoding System or ACES used throughout the game industry as well as the film industry.
+The process of tonemapping is taking an HDR image and converting it to a Standard Dynamic Range (SDR), which is usually sRGB. The exact tonemapping curve you use is ultimately up to your artistic needs, but for this tutorial, we'll use a popular one known as the Academy Color Encoding System (ACES), which is used throughout both the game and film industries.
-With that let's jump into the the shader. Create a file called `hdr.wgsl`
-and add the following code:
+With that, let's jump into the shader. Create a file called `hdr.wgsl` and add the following code:
```wgsl
// Maps HDR values to linear values
@@ -259,8 +232,8 @@ fn aces_tone_map(hdr: vec3) -> vec3 {
-0.07367, -0.00605, 1.07602,
);
let v = m1 * hdr;
- let a = v * (v + 0.0245786) - 0.000090537;
- let b = v * (0.983729 * v + 0.4329510) + 0.238081;
+ let a = v * (v + 0.0245786) - 0.000090537;
+ let b = v * (0.983729 * v + 0.4329510) + 0.238081;
return clamp(m2 * (a / b), vec3(0.0), vec3(1.0));
}
@@ -302,8 +275,7 @@ fn fs_main(vs: VertexOutput) -> @location(0) vec4 {
}
```
-With those in place we can start using our HDR texture in our core
-render pipeline. First we need to add the new `HdrPipeline` to `State`:
+With those in place, we can start using our HDR texture in our core render pipeline. First, we need to add the new `HdrPipeline` to `State`:
```rust
// lib.rs
@@ -334,8 +306,7 @@ impl State {
}
```
-Then when we resize the window, we need to call `resize()` on our
-`HdrPipeline`:
+Then, when we resize the window, we need to call `resize()` on our `HdrPipeline`:
```rust
fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
@@ -349,8 +320,7 @@ fn resize(&mut self, new_size: winit::dpi::PhysicalSize) {
}
```
-Next in `render()` we need to switch the `RenderPass` to use our HDR
-texture instead of the surface texture:
+Next, in `render()`, we need to switch the `RenderPass` to use our HDR texture instead of the surface texture:
```rust
// render()
@@ -375,8 +345,7 @@ let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
});
```
-Finally after we draw all the objects in the frame we can run our
-tonemapper with the surface texture as the output:
+Finally, after we draw all the objects in the frame, we can run our tonemapper with the surface texture as the output:
```rust
// NEW!
@@ -394,59 +363,35 @@ Here's what it looks like after implementing HDR:
## Loading HDR textures
-Now that we have an HDR render buffer, we can start leveraging
-HDR textures to their fullest. One of the main uses for HDR
-textures is to store lighting information in the form of an
-environment map.
+Now that we have an HDR render buffer, we can start leveraging HDR textures to their fullest. One of the primary uses for HDR textures is to store lighting information in the form of an environment map.
-This map can be used to light objects, display reflections and
-also to make a skybox. We're going to create a skybox using HDR
-texture, but first we need to talk about how environment maps are
-stored.
+This map can be used to light objects, display reflections and also to make a skybox. We're going to create a skybox using an HDR texture, but first, we need to talk about how environment maps are stored.
## Equirectangular textures
-An equirectangluar texture is a texture where a sphere is stretched
-across a rectangular surface using what's known as an equirectangular
-projection. This map of the Earth is an example of this projection.
+An equirectangular texture is a texture where a sphere is stretched across a rectangular surface using what's known as an equirectangular projection. This map of the Earth is an example of this projection.
![map of the earth](https://upload.wikimedia.org/wikipedia/commons/thumb/8/83/Equirectangular_projection_SW.jpg/1024px-Equirectangular_projection_SW.jpg)
-This projection maps the latitude values of the sphere to the
-horizontal coordinates of the texture. The longitude values get
-mapped to the vertical coordinates. This means that the vertical
-middle of the texture is the equator (0° longitude) of the sphere,
-the horizontal middle is the prime meridian (0° latitude) of the
-sphere, the left and right edges of the texture are the anti-meridian
-(+180°/-180° latitude) the top and bottom edges of the texture are
-the north pole (90° longitude) and south pole (-90° longitude)
-respectively.
+This projection maps the longitude values of the sphere to the horizontal coordinates of the texture. The latitude values get mapped to the vertical coordinates. This means that the vertical middle of the texture is the equator (0° latitude) of the sphere, the horizontal middle is the prime meridian (0° longitude), the left and right edges of the texture are the anti-meridian (+180°/-180° longitude), and the top and bottom edges of the texture are the north pole (90° latitude) and south pole (-90° latitude), respectively.
![equirectangular diagram](./equirectangular.svg)
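
As a small sketch of what that mapping means in shader terms (the names `dir`, `eq_uv`, and the use of `PI` here are illustrative, not the tutorial's exact code), converting a unit direction vector into equirectangular texture coordinates boils down to:

```wgsl
// azimuth (longitude) maps to u, polar angle (latitude) maps to v
let u = atan2(dir.z, dir.x) / (2.0 * PI) + 0.5;
let v = acos(dir.y) / PI;
let eq_uv = vec2<f32>(u, v);
```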
-This simple projection is easy to use, leading it to be one of the
-most popular projections for storing spherical textures. You can
-see the particular environment map we are going to use below.
+This simple projection is easy to use, making it one of the most popular projections for storing spherical textures. You can see the particular environment map we are going to use below.
![equirectangular skybox](./kloofendal_43d_clear_puresky.jpg)
## Cube Maps
-While we technically can use an equirectangular map directly as long
-as we do some math to figure out the correct coordinates, it is a lot
-more convenient to convert our environment map into a cube map.
+While we can technically use an equirectangular map directly, as long as we do some math to figure out the correct coordinates, it is a lot more convenient to convert our environment map into a cube map.
-A cube map is special kind of texture that has 6 layers. Each layer
-corresponds to a different face of an imaginary cube that is aligned
-to the X, Y and Z axes. The layers are stored in the following order:
-+X, -X, +Y, -Y, +Z, -Z.
+A cube map is a special kind of texture that has six layers. Each layer corresponds to a different face of an imaginary cube that is aligned to the X, Y and Z axes. The layers are stored in the following order: +X, -X, +Y, -Y, +Z, -Z.
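
As a hedged sketch of what that means in wgpu terms (the sizes, format, and usages here are illustrative), a cube map is created as a 2D texture with six array layers:

```rust
let texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("cube texture"),
    size: wgpu::Extent3d {
        width: 1080,
        height: 1080,
        // one layer per cube face
        depth_or_array_layers: 6,
    },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba32Float,
    usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::STORAGE_BINDING,
    view_formats: &[],
});
```

Sampling it as a cube is then a matter of creating a view with `wgpu::TextureViewDimension::Cube`.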
-To prepare to store the cube texture, we are going to create
-a new struct called `CubeTexture` in `texture.rs`.
+To prepare to store the cube texture, we are going to create a new struct called `CubeTexture` in `texture.rs`.
```rust
pub struct CubeTexture {
@@ -516,25 +461,13 @@ impl CubeTexture {
}
```
-With this we can now write the code to load the HDR into
-a cube texture.
+With this, we can now write the code to load the HDR into a cube texture.
## Compute shaders
-Up to this point we've been exclusively using render
-pipelines, but I felt this was a good time to introduce
-compute pipelines and by extension compute shaders. Compute
-pipelines are a lot easier to setup. All you need is to tell
-the pipeline what resources you want to use, what code you
-want to run, and how many threads you'd like the GPU to use
-when running your code. We're going to use a compute shader
-to give each pixel in our cube textue a color from the
-HDR image.
-
-Before we can use compute shaders, we need to enable them
-in wgpu. We can do that just need to change the line where
-we specify what features we want to use. In `lib.rs`, change
-the code where we request a device:
+Up to this point, we've been exclusively using render pipelines, but I felt this was a good time to introduce compute pipelines and, by extension, compute shaders. Compute pipelines are a lot easier to set up. All you need is to tell the pipeline what resources you want to use, what code you want to run, and how many threads you'd like the GPU to use when running your code. We're going to use a compute shader to give each pixel in our cube texture a color from the HDR image.
+
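To make that concrete, here is a minimal sketch of what creating a compute pipeline looks like (the layout, module, and entry point names are assumptions, not the tutorial's exact code):

```rust
let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
    label: Some("equirect_to_cubemap"),
    layout: Some(&pipeline_layout),
    module: &shader_module,
    entry_point: "compute_equirect_to_cubemap",
});
```
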
+Before we can use compute shaders, we need to enable them in wgpu. We can do that by changing the line where we specify what features we want to use. In `lib.rs`, change the code where we request a device:
```rust
let (device, queue) = adapter
@@ -554,16 +487,9 @@ let (device, queue) = adapter
-You may have noted that we have switched from
-`downlevel_webgl2_defaults()` to `downlevel_defaults()`.
-This means that we are dropping support for WebGL2. The
-reason for this is that WebGL2 doesn't support compute
-shaders. WebGPU was built with compute shaders in mind. As
-of writing the only browser that supports WebGPU is Chrome,
-and some experimental browsers such as Firefox Nightly.
+You may have noted that we have switched from `downlevel_webgl2_defaults()` to `downlevel_defaults()`. This means that we are dropping support for WebGL2. The reason for this is that WebGL2 doesn't support compute shaders. WebGPU was built with compute shaders in mind. As of writing, the only browsers that support WebGPU are Chrome and some experimental browsers such as Firefox Nightly.
-Consequently we are going to remove the webgl feature from
-`Cargo.toml`. This line in particular:
+Consequently, we are going to remove the WebGL feature from `Cargo.toml`. This line in particular:
```toml
wgpu = { version = "0.18", features = ["webgl"]}
@@ -571,9 +497,7 @@ wgpu = { version = "0.18", features = ["webgl"]}
-Now that we've told wgpu that we want to use compute
-shaders, let's create a struct in `resource.rs` that we'll
-use to load the HDR image into our cube map.
+Now that we've told wgpu that we want to use compute shaders, let's create a struct in `resource.rs` that we'll use to load the HDR image into our cube map.
```rust
pub struct HdrLoader {
@@ -696,10 +620,10 @@ impl HdrLoader {
let dst_view = dst.texture().create_view(&wgpu::TextureViewDescriptor {
label,
- // Normally you'd use `TextureViewDimension::Cube`
+ // Normally, you'd use `TextureViewDimension::Cube`
// for a cube texture, but we can't use that
// view dimension with a `STORAGE_BINDING`.
- // We need to access the cube texure layers
+ // We need to access the cube texture layers
// directly.
dimension: Some(wgpu::TextureViewDimension::D2Array),
..Default::default()
@@ -737,21 +661,13 @@ impl HdrLoader {
}
```
-The `dispatch_workgroups` call tells the gpu to run our
-code in batchs called workgroups. Each workgroup has a
-number of worker threads called invocations that run the
-code in parallel. Workgroups are organized as a 3d grid
-with the dimensions we pass to `dispatch_workgroups`.
+The `dispatch_workgroups` call tells the GPU to run our code in batches called workgroups. Each workgroup has a number of worker threads called invocations that run the code in parallel. Workgroups are organized as a 3d grid with the dimensions we pass to `dispatch_workgroups`.
-In this example we have a workgroup grid divided into 16x16
-chunks and storing the layer in z dimension.
+In this example, we have a workgroup grid divided into 16x16 chunks and storing the layer in the z dimension.
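
As a sketch of that dispatch (the variable names and the exact rounding are illustrative, not the tutorial's exact code), each 16x16 tile of a face gets one workgroup, and the z dimension selects the face:

```rust
// round up so partially covered 16x16 tiles still get a workgroup
let workgroups_per_side = (dst_size + 15) / 16;
pass.dispatch_workgroups(workgroups_per_side, workgroups_per_side, 6);
```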
## The compute shader
-Now let's write a compute shader that will convert
-our equirectangular texture to a cube texture. Create a file
-called `equirectangular.wgsl`. We're going to break it down
-chunk by chunk.
+Now, let's write a compute shader that will convert our equirectangular texture to a cube texture. Create a file called `equirectangular.wgsl`. We're going to break it down chunk by chunk.
```wgsl
const PI: f32 = 3.1415926535897932384626433832795;
@@ -765,10 +681,8 @@ struct Face {
Two things here:
-1. wgsl doesn't have a builtin for PI so we need to specify
- it ourselves.
-2. each face of the cube map has an orientation to it, so we
- need to store that.
+1. WGSL doesn't have a built-in for PI, so we need to specify it ourselves.
+2. Each face of the cube map has an orientation to it, so we need to store that.
```wgsl
@group(0)
var src: texture_2d<f32>;
var dst: texture_storage_2d_array<rgba32float, write>;
```
-Here we have the only two bindings we need. The equirectangular
-`src` texture and our `dst` cube texture. Some things to note:
-about `dst`:
+Here, we have the only two bindings we need: the equirectangular `src` texture and our `dst` cube texture. Some things to note about `dst`:
-1. While `dst` is a cube texture, it's stored as a array of
- 2d textures.
-2. The type of binding we're using here is a storage texture.
- An array storage texture to be precise. This is a unique
- binding only available to compute shaders. It allows us
- to directly write to the texture.
-3. When using a storage texture binding we need to specify the
- format of the texture. If you try to bind a texture with
- a different format, wgpu will panic.
+1. While `dst` is a cube texture, it's stored as an array of 2d textures.
+2. The type of binding we're using here is a storage texture. An array storage texture, to be precise. This is a unique binding only available to compute shaders. It allows us to write directly to the texture.
+3. When using a storage texture binding, we need to specify the format of the texture. If you try to bind a texture with a different format, wgpu will panic.
```wgsl
@compute
@@ -801,7 +707,7 @@ fn compute_equirect_to_cubemap(
@builtin(global_invocation_id)
    gid: vec3<u32>,
) {
- // If texture size is not divisible by 32 we
+ // If texture size is not divisible by 32, we
// need to make sure we don't try to write to
// pixels that don't exist.
if gid.x >= u32(textureDimensions(dst).x) {
@@ -867,23 +773,17 @@ fn compute_equirect_to_cubemap(
}
```
-While I commented some the previous code, there are some
-things I want to go over that wouldn't fit well in a
-comment.
+While I commented some of the previous code, there are some things I want to go over that wouldn't fit well in a comment.
-The `workgroup_size` decorator tells the dimensions of the
-workgroup's local grid of invocations. Because we are
-dispatching one workgroup for every pixel in the texture,
-we have each workgroup be a 16x16x1 grid. This means that each workgroup can have 256 threads to work with.
+The `workgroup_size` decorator specifies the dimensions of the workgroup's local grid of invocations. Because we dispatch one invocation for every pixel in the texture, we make each workgroup a 16x16x1 grid. This means that each workgroup has 256 threads to work with.
-For Webgpu each workgroup can only have a max of 256 threads (also
-called invocations).
+For WebGPU, each workgroup can only have a max of 256 threads (also called invocations).
-With this we can load the environment map in the `new()` function:
+With this, we can load the environment map in the `new()` function:
```rust
let hdr_loader = resources::HdrLoader::new(&device);
@@ -899,18 +799,9 @@ let sky_texture = hdr_loader.from_equirectangular_bytes(
## Skybox
-No that we have an environment map to render. Let's use
-it to make our skybox. There are different ways to render
-a skybox. A standard way is to render a cube and map the
-environment map on it. While that method works, it can
-have some artifacts in the corners and edges where the
-cubes faces meet.
+Now that we have an environment map to render, let's use it to make our skybox. There are different ways to render a skybox. A standard way is to render a cube and map the environment map on it. While that method works, it can have some artifacts in the corners and edges where the cube's faces meet.
-Instead we are going to render to the entire screen and
-compute the view direction from each pixel, and use that
-to sample the texture. First though we need to create a
-bindgroup for the environment map so that we can use it
-for rendering. Add the following to `new()`:
+Instead, we are going to render to the entire screen, compute the view direction from each pixel and use that to sample the texture. First, we need to create a bindgroup for the environment map so that we can use it for rendering. Add the following to `new()`:
```rust
let environment_layout =
@@ -952,8 +843,7 @@ let environment_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor
});
```
-Now that we have the bindgroup, we need a render pipeline
-to render the skybox.
+Now that we have the bindgroup, we need a render pipeline to render the skybox.
```rust
// NEW!
@@ -976,13 +866,9 @@ let sky_pipeline = {
};
```
-One thing to not here. We added the primitive format to
-`create_render_pipeline()`. Also we changed the depth compare
-function to `CompareFunction::LessEqual` (we'll discuss why when
-we go over the sky shader). Here's the changes to that:
+A couple of things to note here: we added the primitive format to `create_render_pipeline()`, and we changed the depth compare function to `CompareFunction::LessEqual` (we'll discuss why when we go over the sky shader). Here are the changes:
```rust
-
fn create_render_pipeline(
device: &wgpu::Device,
layout: &wgpu::PipelineLayout,
@@ -1012,8 +898,7 @@ fn create_render_pipeline(
}
```
-Don't forget to add the new bindgroup and pipeline to the
-to `State`.
+Don't forget to add the new bindgroup and pipeline to `State`.
```rust
struct State {
@@ -1094,19 +979,12 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4 {
Let's break this down:
1. We create a triangle twice the size of the screen.
-2. In the fragment shader we get the view direction from
- the clip position. We use the inverse projection
- matrix to get convert the clip coordinates to view
- direction. Then we use the inverse view matrix to
- get the direction into world space as that's what we
- need for to sample the sky box correctly.
+2. In the fragment shader, we get the view direction from the clip position. We use the inverse projection matrix to convert the clip coordinates to view direction. Then, we use the inverse view matrix to get the direction into world space, as that's what we need to sample the sky box correctly.
3. We then sample the sky texture with the view direction.
-In order for this to work we need to change our camera
-uniforms a bit. We need to add the inverse view matrix,
-and inverse projection matrix to `CameraUniform` struct.
+For this to work, we need to change our camera uniforms a bit. We need to add the inverse view matrix and inverse projection matrix to the `CameraUniform` struct.
```rust
#[repr(C)]
@@ -1144,9 +1022,7 @@ impl CameraUniform {
}
```
-Make sure to change the `Camera` definition in
-`shader.wgsl`, and `light.wgsl`. Just as a reminder
-it looks like this:
+Make sure to change the `Camera` definition in `shader.wgsl` and `light.wgsl`. Just as a reminder, it looks like this:
```wgsl
struct Camera {
@@ -1161,30 +1037,19 @@ var camera: Camera;
-You may have noticed that we removed the `OPENGL_TO_WGPU_MATRIX`. The reason for this is
-that it was messing with the projection of the
-skybox.
+You may have noticed that we removed the `OPENGL_TO_WGPU_MATRIX`. The reason for this is that it was messing with the projection of the skybox.
![projection error](./project-error.png)
-It wasn't technically needed, so I felt fine
-removing it.
+Technically, it wasn't needed, so I felt fine removing it.
## Reflections
-Now that we have a sky, we can mess around with
-using it for lighting. This won't be physically
-accurate (we'll look into that later). That being
-said, we have the environment map, we might as
-well use it.
+Now that we have a sky, we can mess around with using it for lighting. This won't be physically accurate (we'll look into that later). That being said, we have the environment map, so we might as well use it.
-In order to do that though we need to change our
-shader to do lighting in world space instead of
-tangent space because our environment map is in
-world space. Because there are a lot of changes
-I'll post the whole shader here:
+In order to do that, though, we need to change our shader to do lighting in world space instead of tangent space because our environment map is in world space. Because there are a lot of changes, I'll post the whole shader here:
```wgsl
// Vertex shader
@@ -1291,7 +1156,7 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4 {
// NEW!
// Adjust the tangent and bitangent using the Gramm-Schmidt process
- // This makes sure that they are perpedicular to each other and the
+ // This makes sure that they are perpendicular to each other and the
// normal of the surface.
let world_tangent = normalize(in.world_tangent - dot(in.world_tangent, in.world_normal) * in.world_normal);
let world_bitangent = cross(world_tangent, in.world_normal);
@@ -1328,16 +1193,7 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4 {
}
```
-A little note on the reflection math. The `view_dir`
-gives us the direction to the camera from the surface.
-The reflection math needs the direction from the
-camera to the surface so we negate `view_dir`. We
-then use `wgsl`'s builtin `reflect` function to
-reflect the inverted `view_dir` about the `world_normal`.
-This gives us a direction that we can use sample the
-environment map to get the color of the sky in that
-direction. Just looking at the reflection component
-gives us the following:
+A little note on the reflection math. The `view_dir` gives us the direction to the camera from the surface. The reflection math needs the direction from the camera to the surface, so we negate `view_dir`. We then use WGSL's built-in `reflect` function to reflect the inverted `view_dir` about the `world_normal`. This gives us a direction that we can use to sample the environment map and get the color of the sky in that direction. Just looking at the reflection component gives us the following:
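
If it helps, the formula behind `reflect` is simple enough to write out. This is just for illustration; the shader uses the builtin.

```rust
use cgmath::{InnerSpace, Vector3};

// What reflect(d, n) computes: mirror the incident direction `d` about the
// surface described by the unit normal `n`.
fn reflect(d: Vector3<f32>, n: Vector3<f32>) -> Vector3<f32> {
    d - n * (2.0 * d.dot(n))
}
```

Feeding it `-view_dir` and `world_normal` gives the direction we sample the environment map with.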
![just-reflections](./just-reflections.png)
@@ -1349,8 +1205,7 @@ Here's the finished scene:
-If your browser doesn't support WebGPU, this example
-won't work for you.
+If your browser doesn't support WebGPU, this example won't work for you.
diff --git a/docs/intermediate/wip-terrain/README.md b/docs/intermediate/wip-terrain/README.md
index ef1f36fff..512d3fa0b 100644
--- a/docs/intermediate/wip-terrain/README.md
+++ b/docs/intermediate/wip-terrain/README.md
@@ -1,22 +1,22 @@
# Procedural Terrain
-Up to this point we've been working in an empty void. This is great when you want to get your shading code just right, but most applications will want to fill the screen more interesting things. You could aproach this in a variety of ways. You could create a bunch of models in Blender and load them into the scene. This method works great if you have some decent artistic skills, and some patience. I'm lacking in both those departments, so let's write some code to make something that looks nice.
+Up to this point, we've been working in an empty void. This is great when you want to get your shading code just right, but most applications will want to fill the screen with more interesting things. You could approach this in a variety of ways. You could create a bunch of models in Blender and load them into the scene. This method works great if you have some decent artistic skills and some patience. I'm lacking in both those departments, so let's write some code to make something that looks nice.
-As the name of this article suggests we're going to create a terrain. Now the traditional method to create a terrain mesh is to use a pre-generated noise texture and sampling it to get the height values at each point in the mesh. This is a perfectly valid way to approach this, but I opted to generate the noise using a Compute Shader directly. Let's get started!
+As the name of this article suggests, we're going to create a terrain. Now, the traditional method to create a terrain mesh is to use a pre-generated noise texture and sample it to get the height values at each point in the mesh. This is a valid approach, but I opted to generate the noise using a compute shader directly. Let's get started!
## Compute Shaders
-A compute shader is simply a shader that allows you to leverage the GPU's parallel computing power for arbitrary tasks. You can use them for anything from creating a texture to running a neural network. I'll get more into how they work in a bit, but for now suffice to say that we're going to use them to create the vertex and index buffers for our terrain.
+A compute shader is simply a shader that allows you to leverage the GPU's parallel computing power for arbitrary tasks. You can use them for anything from creating a texture to running a neural network. I'll get more into how they work in a bit, but for now, suffice it to say that we're going to use them to create the vertex and index buffers for our terrain.
-As of writing, compute shaders are still experimental on the web. You can enable them on beta versions of browsers such as Chrome Canary and Firefox Nightly. Because of this I'll cover a method to use a fragment shader to compute the vertex and index buffers after we cover the compute shader method.
+As of writing, compute shaders are still experimental on the web. You can enable them on beta versions of browsers such as Chrome Canary and Firefox Nightly. Because of this, I'll cover a method to use a fragment shader to compute the vertex and index buffers after we cover the compute shader method.
## Noise Functions
-Lets start with the shader code for the compute shader. First we'll create the noise functions, then we'll create the compute shader's entry function. Create a new file called `terrain.wgsl`. Then add the following:
+Let's start with the shader code for the compute shader. First, we'll create the noise functions. Then, we'll create the compute shader's entry function. Create a new file called `terrain.wgsl`. Then add the following:
```wgsl
// ============================
@@ -53,13 +53,13 @@ fn snoise2(v: vec2) -> f32 {
```
-Some of my readers may recognize this as an implementation of Simplex noise (specifically OpenSimplex noise). I'll admit to not really understanding the math behind OpenSimplex noise. The basics of it are that it's similar to Perlin Noise, but instead of a square grid it's a hexagonal grid which removes some of the artifacts that generating the noise on a square grid gets you. Again I'm not an expert on this, so to summarize: `permute3()` takes a `vec3` and returns a pseudorandom `vec3`, `snoise2()` takes a `vec2` and returns a floating point number between [-1, 1]. If you want to learn more about noise functions, check out [this article from The Book of Shaders](https://thebookofshaders.com/11/). The code's in GLSL, but the concepts are the same.
+Some of my readers may recognize this as an implementation of Simplex noise (specifically OpenSimplex noise). I'll admit to not really understanding the math behind OpenSimplex noise. The basics are that it's similar to Perlin noise, but instead of a square grid, it uses a hexagonal grid, which removes some of the artifacts you get from generating the noise on a square grid. Again, I'm not an expert on this, so to summarize: `permute3()` takes a `vec3` and returns a pseudorandom `vec3`, and `snoise2()` takes a `vec2` and returns a floating-point number in the range [-1, 1]. If you want to learn more about noise functions, check out [this article from The Book of Shaders](https://thebookofshaders.com/11/). The code's in GLSL, but the concepts are the same.
-While we can use the output of `snoise` directly to generate the terrains height values. The result of this tends to be very smooth, which may be what you want, but doesn't look very organic as you can see below:
+While we can use the output of `snoise` directly to generate the terrain's height values, the result tends to be very smooth. That may be what you want, but it doesn't look very organic, as you can see below:
![smooth terrain](./figure_no-fbm.png)
-To make the terrain a bit rougher we're going to use a technique called [Fractal Brownian Motion](https://thebookofshaders.com/13/). This technique works by sampling the noise function multiple times cutting the strength in half each time while doubling the frequency of the noise. This means that the overall shape of the terrain will be fairly smooth, but it will have sharper details. You can see what that will look like below:
+To make the terrain a bit rougher, we're going to use a technique called [Fractal Brownian Motion](https://thebookofshaders.com/13/). This technique works by sampling the noise function multiple times, cutting the strength in half each time while doubling the frequency of the noise. This means that the overall shape of the terrain will be fairly smooth, but it will have sharper details. You can see what that will look like below:
![more organic terrain](./figure_fbm.png)
@@ -85,16 +85,16 @@ fn fbm(p: vec2) -> f32 {
}
```
-Let's go over some this a bit:
+Let's go over this a bit:
- The `NUM_OCTAVES` constant is the number of levels of noise you want. More octaves will add more texture to the terrain mesh, but you'll get diminishing returns at higher levels. I find that 5 is a good number.
-- We multiple `p` by `0.01` to "zoom in" on the noise function. This is because as our mesh will be 1x1 quads and the simplex noise function resembles white noise when stepping by one each time. You can see what that looks like to use `p` directly: ![spiky terrain](./figure_spiky.png)
+- We multiply `p` by `0.01` to "zoom in" on the noise function. This is because our mesh will be 1x1 quads, and the simplex noise function resembles white noise when stepping by one each time. You can see what it looks like to use `p` directly: ![spiky terrain](./figure_spiky.png)
- The `a` variable is the amplitude of the noise at the given noise level.
-- `shift` and `rot` are used to reduce artifacts in the generated noise. One such artiface is that at `0,0` the output of the `snoise` will always be the same regardless of how much you scale `p`.
+- `shift` and `rot` are used to reduce artifacts in the generated noise. One such artifact is that at `0,0`, the output of `snoise` will always be the same regardless of how much you scale `p`.
## Generating the mesh
-To generate the terrain mesh we're going to need to pass some information into the shader:
+To generate the terrain mesh, we're going to need to pass some information into the shader:
```wgsl
struct ChunkData {
@@ -121,11 +121,11 @@ struct IndexBuffer {
@group(0)@binding(2) var<storage, read_write> indices: IndexBuffer;
```
-Our shader will expect a `uniform` buffer that includes the size of the quad grid in `chunk_size`, the `chunk_corner` that our noise algorithm should start at, and `min_max_height` of the terrain.
+Our shader will expect a `uniform` buffer that includes the size of the quad grid in `chunk_size`, the `chunk_corner` that our noise algorithm should start at, and the `min_max_height` of the terrain.
The vertex and index buffers are passed in as `storage` buffers with `read_write` enabled. We'll create the actual buffers in Rust and bind them when we execute the compute shader.
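
As a preview of that Rust side, the setup might look something like the sketch below. The field types, sizes, and names are illustrative and have to match the WGSL definitions above; the actual code in the repo may differ.

```rust
use wgpu::util::DeviceExt;

// Illustrative Rust mirror of the ChunkData uniform.
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct ChunkData {
    chunk_size: [u32; 2],
    chunk_corner: [i32; 2],
    min_max_height: [f32; 2],
}

let chunk_data = ChunkData {
    chunk_size: [64, 64],
    chunk_corner: [0, 0],
    min_max_height: [-5.0, 5.0],
};
let chunk_data_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some("ChunkData"),
    contents: bytemuck::bytes_of(&chunk_data),
    usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});

// The compute shader writes into these, and we render from them afterwards,
// hence STORAGE plus VERTEX/INDEX usage. `vertex_buffer_size` and
// `index_buffer_size` are placeholders for sizes computed from the chunk dimensions.
let terrain_vertex_buffer = device.create_buffer(&wgpu::BufferDescriptor {
    label: Some("Terrain Vertex Buffer"),
    size: vertex_buffer_size,
    usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::VERTEX,
    mapped_at_creation: false,
});
let terrain_index_buffer = device.create_buffer(&wgpu::BufferDescriptor {
    label: Some("Terrain Index Buffer"),
    size: index_buffer_size,
    usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::INDEX,
    mapped_at_creation: false,
});
```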
-The next part of the shader will be the functions that generate a point on the mesh, and a vertex at that point:
+The next part of the shader will be the functions that generate a point on the mesh and a vertex at that point:
```wgsl
fn terrain_point(p: vec2<f32>) -> vec3<f32> {
@@ -155,17 +155,17 @@ fn terrain_vertex(p: vec2) -> Vertex {
The `terrain_point` function takes an XZ point on the terrain and returns a `vec3` with the `y` value between the min and max height values.
-`terrain_vertex` uses `terrain_point` to get it's position and also to compute of the normal of the surface by sampling 4 nearby points and uses them to compute the normal using some [cross products](https://www.khanacademy.org/math/multivariable-calculus/thinking-about-multivariable-function/x786f2022:vectors-and-matrices/a/cross-products-mvc).
+`terrain_vertex` uses `terrain_point` to get its position, and it also computes the normal of the surface by sampling four nearby points and combining them with [cross products](https://www.khanacademy.org/math/multivariable-calculus/thinking-about-multivariable-function/x786f2022:vectors-and-matrices/a/cross-products-mvc).
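
Here's the same idea on the CPU, in case the shader version is hard to follow. `height` stands in for the terrain height function, and `eps` is how far apart the sample points are; both are just for illustration.

```rust
use cgmath::{InnerSpace, Vector3};

// Sample four neighboring points, build two tangent vectors across them,
// and cross those to get the surface normal.
fn approx_normal(height: impl Fn(f32, f32) -> f32, x: f32, z: f32, eps: f32) -> Vector3<f32> {
    let right = Vector3::new(x + eps, height(x + eps, z), z)
        - Vector3::new(x - eps, height(x - eps, z), z);
    let forward = Vector3::new(x, height(x, z + eps), z + eps)
        - Vector3::new(x, height(x, z - eps), z - eps);
    // forward (+z-ish) crossed with right (+x-ish) points up (+y) for flat terrain.
    forward.cross(right).normalize()
}
```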
You'll notice that our `Vertex` struct doesn't include a texture coordinate. We could easily create texture coordinates by using the XZ coords of the vertices and having the texture sampler mirror the texture on the x and y axes, but heightmaps tend to have stretching when textured in this way.
-We'll cover a method called triplanar mapping to texture the terrain in a future tutorial. For now we'll just use a procedural texture that will create in the fragment shader we use to render the terrain.
+We'll cover a method called triplanar mapping to texture the terrain in a future tutorial. For now, we'll just use a procedural texture that will be created in the fragment shader we use to render the terrain.
-Now that we can get a vertex on the terrains surface we can fill our vertex and index buffers with actual data. We'll create a `gen_terrain()` function that will be the entry point for our compute shader:
+Now that we can get a vertex on the terrain surface, we can fill our vertex and index buffers with actual data. We'll create a `gen_terrain()` function that will be the entry point for our compute shader:
```wgsl
@compute @workgroup_size(64)
@@ -178,11 +178,11 @@ fn gen_terrain(
We specify that `gen_terrain` is a compute shader entry point by annotating it with `@compute`.
-The `workgroup_size()` is the number of workers that the GPU can allocate per `workgroup`. We specify the number of workers when we execute the compute shader. There are technically 3 parameters to this as work groups are a 3d grid, but if you don't specify them they default to 1. In other words `workgroup_size(64)` is equivalent to `workgroup_size(64, 1, 1)`.
+The `workgroup_size()` is the number of workers the GPU can allocate per `workgroup`. We specify the number of workgroups to launch when we execute the compute shader. There are technically three parameters to this, as workgroups form a 3d grid, but if you don't specify them, they default to 1. In other words, `workgroup_size(64)` is equivalent to `workgroup_size(64, 1, 1)`.
The `global_invocation_id` is a 3d index. This may seem weird, but you can think of the workgroups as being laid out in a 3d grid, where each workgroup has its own internal grid of workers. The `global_invocation_id` is the id of the current worker relative to all the other workers.
-Visually the workgroup grid would look something like this:
+Visually, the workgroup grid would look something like this:
![work group grid](./figure_work-groups.jpg)
@@ -203,7 +203,7 @@ for wgx in num_workgroups.x:
```
-If you want learn more about workgroups [check out the docs](https://www.w3.org/TR/WGSL/#compute-shader-workgroups).
+If you want to learn more about workgroups, [check out the docs](https://www.w3.org/TR/WGSL/#compute-shader-workgroups).
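
To connect this back to the Rust side, dispatching the shader might look something like the sketch below. The pipeline, bind group, and vertex count names are placeholders, and `dispatch_workgroups` assumes a reasonably recent wgpu.

```rust
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
    label: Some("Terrain Gen Encoder"),
});
{
    let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
        label: Some("gen_terrain"),
        ..Default::default()
    });
    pass.set_pipeline(&terrain_pipeline);
    pass.set_bind_group(0, &terrain_bind_group, &[]);
    // With @workgroup_size(64), each workgroup runs 64 invocations, so round
    // the amount of work up to a whole number of workgroups.
    let workgroups = (num_vertices + 63) / 64;
    pass.dispatch_workgroups(workgroups, 1, 1);
}
queue.submit(std::iter::once(encoder.finish()));
```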
@@ -212,6 +212,6 @@ If you want learn more about workgroups [check out the docs](https://www.w3.org/
TODO:
- Note changes to `create_render_pipeline`
- Mention `swizzle` feature for cgmath
-- Compare workgroups and workgroups sizes to nested for loops
- - Maybe make a diagram in blender?
+- Compare workgroups and workgroup sizes to nested for-loops
+ - Maybe make a diagram in Blender?
- Change to camera movement speed
\ No newline at end of file