
Question about rendering a scene #1912

Open
SYSUykLin opened this issue Nov 6, 2024 · 8 comments

Comments

@SYSUykLin

Hello,
I have a mesh containing a floor and walls. However, when a camera ray is parallel to the floor, the floor is not rendered. For example:

[image]
Under the yellow box there should be a green floor, but nothing is rendered.
I use plotly to visualize the mesh:
[image]
Could anyone help? Thanks.

Could it be a problem with the normal vectors? I have already set cull_backfaces=False in RasterizationSettings.

My code:

        camera = self.set_cameras(width=width, height=height, R=r.unsqueeze(0), T=t.unsqueeze(0), FOV=FOV)
        lights = AmbientLights(device=self.device)

        raster_settings = RasterizationSettings(
            image_size=(height, width),
            blur_radius=0.0,
            cull_backfaces=False,
            faces_per_pixel=100,
        )

        renderer = MeshRenderer(
            rasterizer=MeshRasterizer(
                cameras=camera,
                raster_settings=raster_settings
            ),
            shader=HardGouraudShader(self.device, cameras=camera, lights=lights)
        )

        image = renderer(scene_obj_mesh, znear=-10000, zfar=100000.0)
@bottler
Contributor

bottler commented Nov 6, 2024

Can you share more code? What is the camera object?

@SYSUykLin
Author

Thanks for your reply. PerspectiveCameras:

    def set_cameras(self, width, height, R, T, FOV=60):
        # compute the camera focal length from the FOV:
        # (width / 2) / x_focal = tan(FOV / 2)
        x_focal = (width / 2) / math.tan(math.radians(FOV / 2))
        y_focal = (height / 2) / math.tan(math.radians(FOV / 2))

        # PerspectiveCameras needs this parameter; I guess it is the center of the camera's projection on the screen
        principal_point = torch.tensor([height / 2, width / 2], dtype=torch.float32, device=self.device).unsqueeze(0)
        focal_length = torch.tensor([y_focal, x_focal], dtype=torch.float32, device=self.device).unsqueeze(0)
        image_size = torch.tensor([height, width], dtype=torch.float32, device=self.device).unsqueeze(0)

        cameras = PerspectiveCameras(
            device=self.device,
            R=R,
            T=T,
            principal_point=principal_point,
            focal_length=focal_length,
            in_ndc=False,
            image_size=image_size
        )

        return cameras
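The FOV-to-focal-length math in set_cameras is the standard pinhole relation; as a minimal standalone sketch (pure Python, no PyTorch3D required; the function name is made up for illustration):

```python
import math

def focal_from_fov(size_px, fov_deg):
    # Pinhole model: (size_px / 2) / focal = tan(fov / 2)
    return (size_px / 2) / math.tan(math.radians(fov_deg / 2))

# At a 90-degree FOV, tan(45 deg) == 1, so the focal length equals
# half the image size in pixels.
print(round(focal_from_fov(800, 90), 6))   # -> 400.0
# At 60 degrees (the default above), focal = 400 / tan(30 deg).
print(round(focal_from_fov(800, 60), 2))   # -> 692.82
```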

@SYSUykLin
Author

SYSUykLin commented Nov 6, 2024

  1. Additionally, when the camera is not parallel to the floor, the floor does render. For example:
    [image]
  2. I did not provide normals; this is how I load the mesh:
def load_mesh_pytorch3d(file_path):
    verts, faces_idx, aux = load_obj(file_path)
    faces = faces_idx.verts_idx
    vertices_color = load_mesh_vertices_color(file_path)
    vertices_color = vertices_color[None] / 255

    textures = TexturesVertex(verts_features=vertices_color)

    mesh = Meshes(verts=[verts], faces=[faces], textures=textures)
    return mesh

@bottler
Contributor

bottler commented Nov 6, 2024

Is the floor made of a few very large faces? Playing with cull_to_frustum might help. It might also be worth trying bin_size=0 in the RasterizationSettings.
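For reference, a sketch of how those suggestions would slot into the settings from the original snippet (bin_size and cull_to_frustum are fields of RasterizationSettings; height and width as in the code above; the values are just things to experiment with, not a known fix):

```python
raster_settings = RasterizationSettings(
    image_size=(height, width),
    blur_radius=0.0,
    cull_backfaces=False,
    faces_per_pixel=100,
    bin_size=0,            # fall back to the naive, non-binned rasterizer
    cull_to_frustum=True,  # clip faces against the view frustum
)
```

With the binned rasterizer, very large faces can overflow a bin's face budget (max_faces_per_bin) and get silently dropped; bin_size=0 avoids that code path.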

@SYSUykLin
Author

> Is the floor made of a few very large faces? Playing with cull_to_frustum might help. Also might be worth trying with bin_size=0 in the RasterizationSettings.

Yes, the floor is a square made of two big triangles.

@bottler
Contributor

bottler commented Nov 6, 2024

I don't think I can help further.

@SYSUykLin
Author

> I don't think I can help further.

Could you explain what 'play with cull_to_frustum' means? I noticed that this variable only takes True or False values. Thanks.

@SYSUykLin
Author

I solved this problem!
Sharing my experience:
1) If you use PerspectiveCameras with a custom rotation matrix and translation, the convention differs from Blender's. In Blender, R holds the three basis vectors of the camera coordinate frame and T is the location of the camera. In PyTorch3D, [R, T] is the world-to-camera transform, so the R you pass is $R^T$ and the T is $-R^T T$.
2) In PyTorch3D the matrix multiplies on the right, i.e. $C = X M$, so matrices are stored row-wise. If you use a custom rotation, you should feed $R$, not $R^T$.
3) If a face does not render, it may be too large. Subdivide the mesh as follows:

    mesh = Meshes(verts=[verts], faces=[faces], textures=textures)

    # Faces that are too large will not render; the mesh must be subdivided for the floor to show up.
    subdivide_time = 3
    for t in range(subdivide_time):
        subdivide_mesh = SubdivideMeshes(mesh)
        mesh, vertices_color = subdivide_mesh(mesh, feats=vertices_color)
        textures = TexturesVertex(verts_features=vertices_color)
        mesh.textures = textures

@bottler Thank you for the reminder!!
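The Blender-to-PyTorch3D pose conversion in point 1 can be sketched in plain Python (no libraries; c2w_to_w2c is a made-up helper name, with R_c2w the camera-to-world rotation and C the camera location):

```python
def transpose(M):
    # transpose of a 3x3 matrix given as nested lists
    return [[M[j][i] for j in range(3)] for i in range(3)]

def matvec(M, v):
    # 3x3 matrix times 3-vector
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def c2w_to_w2c(R_c2w, C):
    # Blender-style pose (orientation + camera location) to the
    # world-to-camera pair: R_w2c = R_c2w^T, T_w2c = -R_c2w^T C
    R_w2c = transpose(R_c2w)
    T_w2c = [-x for x in matvec(R_w2c, C)]
    return R_w2c, T_w2c

# Identity orientation with the camera at (0, 0, 5): the world
# origin ends up 5 units in front of the camera.
R, T = c2w_to_w2c([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, 5.0])
print(T)  # -> [-0.0, -0.0, -5.0]
```

Whether this R then needs a further transpose before being fed to PerspectiveCameras depends on PyTorch3D's row-vector convention ($C = X M$), as noted in point 2.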
