Nori Renderer

Kehan Xu (LinkedIn | Github | Personal Website | Email), Zijun Hui (LinkedIn | Github)

Tested on: Mac OS (Version 12.3), Linux (Ubuntu 20.04)

This is my own offline physically-based ray tracer. To achieve photorealism, the program simulates light transport in a modeled scene based on unbiased Monte Carlo integration of the rendering equation. My ray tracer supports a wide range of rendering-related techniques, including path tracing with MIS, volume rendering, spectral rendering and photon mapping. The goal of writing this renderer is to gain hands-on experience in implementing state-of-the-art algorithms that are adopted by production renderers.

My renderer is written in C++. It was first created for the computer graphics course at ETH Zurich and extended with more functionality afterwards.

Note: Due to a permission issue with the graphics course staff, the source code cannot be made public for now. If you are interested in discussing implementation details, please email me.

Table of Contents

Nori Framework

Feature Overview

Gallery

Install and Build Instructions

Third-Party Credits

TODOs

My renderer is built upon the awesome educational ray tracing framework Nori 2 by Wenzel Jakob and his team.

The framework is written in C++ and runs on Windows, Linux, and macOS. It comes with basic functionality that facilitates rendering-algorithm development and would otherwise be tedious to implement from scratch.

You can refer to the Nori website for more details.

At each surface intersection, the naive path tracing algorithm determines the bounced ray direction according to the material properties, i.e., the bidirectional scattering distribution function (BSDF). Sometimes a better strategy for forming valid light paths (paths that start from the camera and reach a light) is to use the known light locations in the scene and shoot a ray from the current intersection point towards a position on one of the light sources. An even better strategy, so-called multiple importance sampling (MIS), is to combine the two schemes; Veach proved that the balance heuristic gives near-optimal combination weights.

Path tracing with MIS (plus Russian roulette) is the widely used baseline for an efficient path tracer.
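
For illustration, here is a minimal sketch (not the renderer's actual code) of the balance heuristic and Veach's power heuristic: pdfA is the density of the strategy that produced the sample, and pdfB is the density with which the other strategy would have produced the same sample.

```cpp
// Balance heuristic: weight for the strategy with density pdfA when the other
// strategy would have generated the same sample with density pdfB.
inline float balanceHeuristic(float pdfA, float pdfB) {
    return pdfA / (pdfA + pdfB);
}

// Power heuristic (exponent 2), the variant Veach recommends in practice.
inline float powerHeuristic(float pdfA, float pdfB) {
    float a = pdfA * pdfA, b = pdfB * pdfB;
    return a / (a + b);
}
```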

BSDF Sampling | Multiple Importance Sampling

Both images are rendered with 512 spp; MIS shows much less noise than BSDF sampling.

  • Finite
    • Area Light
    • Point Light
    • Spotlight
  • Infinite
    • Directional Light
    • Environment Map Light (see next section)
Area Light | Point Light | Spotlight | Directional Light

The environment map forms a sphere around the whole scene and emits light. The program exposes user parameters to rotate the sphere about the X/Y/Z axes.

The environment map is sampled according to pixel brightness (i.e., the probability of choosing a pixel as the endpoint of a light path is proportional to its brightness). This approach is necessary because the sun is usually included in textures of natural environments, and the pixels it occupies are orders of magnitude brighter than the others. While all environment-map pixels emit light, most of them light objects dimly, acting as indirect illumination from the surrounding environment; the sun, on the other hand, forms a prominent light source in the scene (see the examples below). Given the extreme brightness of the sun, fireflies would fill the rendering if we sampled each pixel with equal probability.
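
A minimal sketch of this idea (hypothetical names; the sin(theta) weighting of the spherical parameterization is omitted for brevity): build a discrete CDF over per-pixel luminance and draw pixels proportionally to it.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Discrete distribution over environment-map pixels, proportional to luminance.
struct EnvmapDistribution {
    std::vector<float> luminance;   // one entry per pixel
    std::vector<float> cdf;         // running sum of the luminance values
    float total = 0.f;

    explicit EnvmapDistribution(std::vector<float> lum) : luminance(std::move(lum)) {
        cdf.resize(luminance.size());
        std::partial_sum(luminance.begin(), luminance.end(), cdf.begin());
        total = cdf.empty() ? 0.f : cdf.back();
    }

    // Map a uniform random number u in [0, 1) to a (pixel index, discrete PDF) pair.
    std::pair<std::size_t, float> sample(float u) const {
        auto it = std::lower_bound(cdf.begin(), cdf.end(), u * total);
        std::size_t idx = std::size_t(it - cdf.begin());
        return {idx, luminance[idx] / total};
    }
};
```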

Scene File

No Rotation | Rotated by 180 Degrees around the Y Axis

Four spheres (diffuse, microfacet, mirror, dielectric) lit by the same environment map under different rotations. Notice how the shadow boundary changes with the position of the sun.
Note: the fireflies in the images are caused by hard-to-sample specular light paths through the mirror and dielectric spheres, not by poor sampling of the environment map.

Bunny Under the Sun

The depth-of-field effect is achieved by replacing the pinhole camera model with a lens. Given the focal length and aperture size parameters, we simulate camera rays that pass through random points on the lens. The focal length determines how far objects must be from the camera to be in focus; the aperture size determines how blurry out-of-focus objects appear. If the aperture is set to 0, the image has no DOF effect.
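
A minimal thin-lens sketch (hypothetical interface, in camera space): jitter the ray origin on the lens and re-aim it at the point where the original pinhole ray crosses the plane of focus, so that objects at the focal distance stay sharp.

```cpp
#include <cmath>
#include <random>

struct Ray { float ox, oy, oz, dx, dy, dz; };

// px, py: direction of the original pinhole ray on the z = 1 camera plane.
Ray thinLensRay(float px, float py, float apertureRadius, float focalDistance,
                std::mt19937 &rng) {
    constexpr float kPi = 3.14159265358979f;
    std::uniform_real_distribution<float> uni(0.f, 1.f);

    // Uniformly sample a point on the circular lens (aperture).
    float r = apertureRadius * std::sqrt(uni(rng));
    float phi = 2.f * kPi * uni(rng);
    float lx = r * std::cos(phi), ly = r * std::sin(phi);

    // Point on the plane of focus hit by the original pinhole ray.
    float fx = px * focalDistance, fy = py * focalDistance, fz = focalDistance;

    // New ray: starts on the lens and passes through that focal point.
    float dx = fx - lx, dy = fy - ly, dz = fz;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    return {lx, ly, 0.f, dx / len, dy / len, dz / len};
}
```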

Currently, the aperture shape is a square or a circle. An interesting and straightforward extension would be to support more complex shapes, such as a star, by stochastically sampling a mask image; this would produce pleasing artistic effects :)

Scene File

F = 4.5 | F = 5.0 | F = 6.0

Varying focal length. Aperture = 0.15.

A = 0 | A = 0.05 | A = 0.15 | A = 0.3

Varying aperture. Focal length = 5.0.

Simple BSDFs such as diffuse, mirror, and dielectric represent only a small subset of the materials found in nature. The microfacet BRDF models a surface as a collection of microfacets, each of which perfectly reflects incident light, and its formula describes the statistical distribution of the facets. The microfacet BRDF is more or less physically based and can represent a broader range of materials, with a tunable roughness parameter.

The microfacet BRDF consists of three terms: the Fresnel term (F), the normal distribution function (D), and the shadowing-masking term (G). The F term describes the ratio between reflected and transmitted light using the Fresnel equations. The D term expresses the distribution of microfacets through the PDF of the facet normals. The G term accounts for the portion of light blocked by nearby microfacets, both on the way from the light to the surface and from the surface to the eye.
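
In standard notation (a reference formula, not a claim about this renderer's exact implementation), the three terms combine as

$$ f_r(\omega_i, \omega_o) = \frac{F(\omega_i \cdot \omega_h)\, D(\omega_h)\, G(\omega_i, \omega_o, \omega_h)}{4\,\cos\theta_i\,\cos\theta_o}, \qquad \omega_h = \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}. $$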

Beckmann and GGX are two different ways of modeling the D and G terms. According to this post and this post, GGX has a sharper peak and a longer tail than Beckmann; Beckmann is better suited to glossy materials, while GGX suits rough materials. Most production renderers use GGX microfacet BRDF / BSDF models nowadays.

Scene File

Microfacet BRDF with the Beckmann model. Spheres in the first row have varying roughness with interior index of refraction (IOR) = 1.5; spheres in the second row have varying interior IOR with roughness = 0.15.

Microfacet BRDF with the GGX model. All other parameters are the same as above.

When it comes to sampling the BRDF, a straightforward way to sample the Beckmann / GGX distribution is to first sample the half-vector (i.e., the normal of the microfacet), then mirror the incident direction about the half-vector to obtain the outgoing direction. Since this only importance-samples part of the BRDF terms, a better sampling method exists for the GGX model: visible normal sampling. All of these sampling methods are implemented in my renderer and compared under the same settings.
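
As a minimal sketch of the first (half-vector) strategy for the Beckmann case, assuming local shading-frame vectors and hypothetical types:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Sample a Beckmann-distributed half-vector h (from the D(h) cos(theta_h) PDF),
// then mirror the incident direction wi about h to get the outgoing direction.
Vec3 sampleBeckmannDirection(const Vec3 &wi, float alpha, std::mt19937 &rng) {
    constexpr float kPi = 3.14159265358979f;
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    float u1 = uni(rng), u2 = uni(rng);

    // Classic Beckmann sampling of the half-vector angles: tan^2(theta) = -alpha^2 ln(1 - u1).
    float tan2Theta = -alpha * alpha * std::log(1.f - u1);
    float cosTheta = 1.f / std::sqrt(1.f + tan2Theta);
    float sinTheta = std::sqrt(std::max(0.f, 1.f - cosTheta * cosTheta));
    float phi = 2.f * kPi * u2;
    Vec3 h{sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};

    // Reflect wi about h: wo = 2 (wi . h) h - wi.
    float d = wi.x * h.x + wi.y * h.y + wi.z * h.z;
    return {2.f * d * h.x - wi.x, 2.f * d * h.y - wi.y, 2.f * d * h.z - wi.z};
}
```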

Scene File

Beckmann | GGX | GGX + Visible Normal Sampling

Lucy (the statue) with a microfacet BRDF (roughness = 0.05), using different models and sampling methods. The second row is a zoomed-in version of a local patch in the first row.
Notice how the highlight on the chest differs between Beckmann and GGX. One can also observe slightly fewer fireflies for GGX with visible normal sampling turned on (best viewed at full resolution).

We have only discussed the microfacet BRDF so far, but to handle translucent materials we need a microfacet BSDF as well. The formula changes a bit for transmission, but the idea is generally the same; the extension from microfacet BRDF to BSDF is similar to going from a mirror to a dielectric material. With the microfacet BSDF implemented, we can now express glass with different levels of roughness.

Scene File

Microfacet BSDF with the Beckmann model. As before, the first row demonstrates varying roughness and the second row varying interior IOR. Notice that the reasonable roughness range for the microfacet BSDF differs from that of the BRDF.

Microfacet BSDF with the GGX model. All other parameters are the same as above.

Scene File

Roughness = 0.1 | Roughness = 0.5

Lucy with microfacet BRDF and Bunny with microfacet BSDF.
Both materials with roughness 0.1 / 0.5 on the left / right.

Homogeneous Participating Media

My renderer supports homogeneous participating media filling arbitrary mesh shapes. A separate BVH is constructed for volume-mesh intersection.

Scene File

HG (g = -0.5) | HG (g = 0) / Isotropic | HG (g = 0.5)

Our volume rendering model applies the Henyey-Greenstein phase function.
With different values of parameter g, it demonstrates varying forward (g > 0) or backward (g < 0) scattering characteristics.
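
For reference, a minimal sketch of the Henyey-Greenstein phase function and its standard inversion-method sampling (not necessarily this renderer's exact code; here cosTheta is measured from the forward propagation direction, so g > 0 favors forward scattering):

```cpp
#include <algorithm>
#include <cmath>

// Henyey-Greenstein phase function value for a given deflection cosine.
inline float hgPhase(float cosTheta, float g) {
    const float kInv4Pi = 0.07957747154594767f;   // 1 / (4 * pi)
    float denom = std::max(1.f + g * g - 2.f * g * cosTheta, 1e-7f);
    return kInv4Pi * (1.f - g * g) / (denom * std::sqrt(denom));
}

// Sample the deflection cosine by inverting the HG CDF; u is uniform in [0, 1).
inline float hgSampleCosTheta(float g, float u) {
    if (std::abs(g) < 1e-3f)
        return 1.f - 2.f * u;                     // isotropic limit
    float sq = (1.f - g * g) / (1.f - g + 2.f * g * u);
    return (1.f + g * g - sq * sq) / (2.f * g);
}
```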

Heterogeneous Participating Media

My renderer supports rendering heterogeneous participating media from an OpenVDB file. It requires the user to specify the bounding box to position the medium.

Scene File

HG (g = -0.5) | HG (g = 0) / Isotropic | HG (g = 0.5)
Transmittance Estimation

When light travels between two points inside a medium, part of it is scattered away; transmittance describes the portion that survives. This quantity can be evaluated analytically for a homogeneous medium, but has to be estimated in the heterogeneous case. In fact, the accuracy of transmittance estimation crucially affects the noise level. Just as sampling light paths matters for estimating the rendering-equation integral, choosing the points at which to query the medium is the focus of research on better transmittance estimation.
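
As an illustration of one classical estimator in this family (a sketch for context, not necessarily identical to any of the implementations here), ratio tracking samples tentative collisions against a constant majorant of the extinction coefficient and multiplies in the "null" fraction at each one:

```cpp
#include <cmath>
#include <functional>
#include <random>

// Estimate the transmittance along a ray segment [0, tMax], given an extinction
// lookup sigmaT(t) that is bounded above by the constant majorant sigmaMaj.
float ratioTrackingTr(const std::function<float(float)> &sigmaT,
                      float sigmaMaj, float tMax, std::mt19937 &rng) {
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    float Tr = 1.f, t = 0.f;
    while (true) {
        // Sample a free-flight distance against the majorant.
        t -= std::log(1.f - uni(rng)) / sigmaMaj;
        if (t >= tMax)
            break;
        // Multiply in the probability that this tentative collision was "null".
        Tr *= 1.f - sigmaT(t) / sigmaMaj;
    }
    return Tr;
}
```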

I implemented a number of transmittance sampling methods, ranging from simple, classical estimators to complex, state-of-the-art ones.

(I have actually conducted a whole project analyzing and comparing these transmittance estimation techniques, but cannot disclose more details due to a signed NDA.)

Noise performance of different trackers at the same SPP. In this experiment, the only light source is a point light and the maximum path length is limited to two bounces, so as to best demonstrate the effect of transmittance accuracy on image noise.
The state-of-the-art methods (power-series CMF, unbiased ray marching, and the debiasing method) exhibit similar noise levels and are generally better than the classical methods. However, the best-performing one, unbiased ray marching, involves much more complex sampling and computation and is therefore considerably more time-consuming than the other methods.

Emissive Participating Media

Scene File

Emissive volumes are supported in my renderer. We treat such volumes as blackbody emitters, transforming the per-voxel temperature values (from the OpenVDB file) into radiance. Since the computation also involves the wavelength of light, this functionality is only physically accurate in spectral rendering mode (see the next section). One can still render emissive volumes in RGB mode, with a slight color difference.
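
For reference, the blackbody mapping from temperature to spectral radiance is Planck's law; a standard-physics sketch (independent of this renderer's exact units and scaling) is:

```cpp
#include <cmath>

// Spectral radiance (W * sr^-1 * m^-3) of a blackbody at temperature T (Kelvin)
// and wavelength lambda (meters), by Planck's law.
double blackbodyRadiance(double lambdaMeters, double temperatureK) {
    const double h  = 6.62607015e-34;   // Planck constant (J s)
    const double c  = 2.99792458e8;     // speed of light (m / s)
    const double kB = 1.380649e-23;     // Boltzmann constant (J / K)
    double l5 = std::pow(lambdaMeters, 5.0);
    return (2.0 * h * c * c) /
           (l5 * (std::exp((h * c) / (lambdaMeters * kB * temperatureK)) - 1.0));
}
```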

RGB Mode | Spectral Mode (Correct)

Notice that the flame in RGB mode looks slightly paler.

A naive way to sample emissive participating media is simply to record emission along the light path. Intuitively, if the emission across the volume is roughly proportional to the medium density, this strategy works fine; however, if the radiance and density distributions differ markedly, this naive sampling method leads to heavy noise.

It is clear that some kind of importance sampling is necessary: a direct idea is to sample the light-path endpoint according to the radiance of the 3D volume grid. This is the improvement I implemented (see the comparison below). However, it is still suboptimal, because it ignores transmittance (as an indication of medium density). For example, we might sample an endpoint with high emission that is very far away; the transmittance between the current path vertex and the selected point can then be small, leading to a low path contribution. In other words, the optimal endpoint distribution varies with the current path vertex location. A better strategy, from Simon et al., constructs the endpoint sampling probabilities on the fly on a coarser grid; this is left for future work.

Naive Sampling | Straightforward Importance Sampling

The two emission sampling methods mentioned above.
In this example, the emissive part of the fire also has high density, so the radiance is already sampled well by light paths in the naive method. Importance sampling shows no noise advantage over it, while being more time-consuming due to the additional sampling step.

Traditional RGB mode renders the scene using red, green, and blue components. In the real world, the light we see is a combination of waves at different wavelengths across the whole visible spectrum (380-750 nm). More precisely, this combination is an integral over the continuous visible-light domain, while the RGB representation discretizes the quantity with a loss of information.

Spectral rendering models light transport with a true wavelength representation, estimating an additional integral over the light spectrum on top of the original rendering-equation integral. This double-integral expression is physically accurate but requires more computation to estimate. Designing efficient data structures and sampling methods for wavelength-based quantities is a challenge for modern renderers. Please refer to the PBRT V3 book for more technical details.

The spectral rendering implementation in this renderer mostly follows the code of PBRT V3 and PBRT V4. PBRT V3 represents spectral quantities as 60-dimensional vectors sampled evenly across the visible spectrum (i.e., the set of wavelengths to sample is pre-determined and static throughout the program), which is extremely memory- and computation-inefficient. PBRT V4 stores only object quantities as dense spectra, while each ray carries a randomly sampled 4-channel sparse spectrum; whenever a ray interacts with an object, it samples the corresponding dense quantity at its small wavelength set. This method achieves much better storage and computation usage.
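
A minimal sketch of the PBRT-V4-style split, using hypothetical types rather than PBRT's actual classes: scene quantities are stored densely over the visible range, while each ray carries only four sampled wavelengths and looks the dense data up on demand.

```cpp
#include <array>
#include <vector>

// Dense spectrum: one value per nanometer over the visible range 380-750 nm.
struct DenseSpectrum {
    std::vector<float> values;              // values[i] corresponds to (380 + i) nm
    float at(float lambda) const {
        int i = int(lambda) - 380;
        if (i < 0) i = 0;
        if (i >= int(values.size())) i = int(values.size()) - 1;
        return values[i];
    }
};

// Sparse spectrum carried by a ray: four wavelengths and their values.
struct SampledSpectrum {
    std::array<float, 4> lambda;            // wavelengths in nanometers
    std::array<float, 4> value;
};

// Evaluate a dense scene quantity at the wavelengths carried by the ray.
SampledSpectrum evaluate(const DenseSpectrum &s, const std::array<float, 4> &lambda) {
    SampledSpectrum out;
    out.lambda = lambda;
    for (int i = 0; i < 4; ++i)
        out.value[i] = s.at(lambda[i]);
    return out;
}
```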

My renderer supports convenient switching between RGB and spectral rendering modes through the CMake option -DNORI_SAMPLED_SPECTRUM=ON/OFF.

With spectral mode enabled, we can set a wavelength-dependent index of refraction (IOR) for dielectric materials and render the dispersion effect. When a ray intersects such a dielectric object, it should be split into multiple rays with different directions. To avoid exponential growth in the total number of rays in our path tracing algorithm, we keep only one ray at the intersection, and the channel-specific path PDFs are adjusted (one divided by 4, the other three set to 0) to keep the final image unbiased.
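
As an illustration (one common empirical model, not necessarily the one used in the scene files), Cauchy's two-term equation gives a wavelength-dependent IOR, and the per-channel PDF adjustment described above can be written as a small helper:

```cpp
#include <array>
#include <cstddef>

// Cauchy's two-term equation: eta(lambda) = A + B / lambda^2, lambda in micrometers.
// The coefficients below are purely illustrative.
inline float cauchyIor(float lambdaMicrometers, float A = 1.5f, float B = 0.004f) {
    return A + B / (lambdaMicrometers * lambdaMicrometers);
}

// Keep only the channel 'kept': divide its PDF by the number of channels and
// zero the others, so the estimator stays unbiased when tracing a single ray.
inline void keepSingleWavelength(std::array<float, 4> &pdf, std::size_t kept) {
    for (std::size_t i = 0; i < pdf.size(); ++i)
        pdf[i] = (i == kept) ? pdf[i] / 4.f : 0.f;
}
```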

Dispersion with Dielectric Material

Scene File

RGB | Spectral

Diamond rendered in RGB and Spectral rendering mode.

Dispersion with Rough Dielectric Material (Hero Wavelength Sampling)

Though we manage to render dispersion, the refracted rays now carry a valid radiance value for only one wavelength, which results in color noise. If you look carefully at the spectral image above, the noise is colorful. A solution to color noise in dispersion is hero wavelength sampling, which samples the new light direction using one wavelength (the hero wavelength) and computes the probabilities with which the other wavelengths would take this direction; those probabilities are then used to importance sample across channels. However, such an approach is not feasible for dielectric materials, whose output distribution is a delta function, because the probabilities for the other wavelengths are all 0. Luckily, the microfacet BSDF (rough dielectric) fits into the hero wavelength sampling framework and can be used to generate the dispersion effect.

Scene File

Diamond | Diamond + Sphere

Dispersion rendered with rough dielectric materials and hero wavelength sampling. Notice that in the left image the noise mostly reverts to grayscale.
Color noise still appears when certain light paths are viable for some but not all carried wavelengths, but this is limited compared with dielectric materials.

Photon mapping is a two-pass rendering algorithm. The program first emits photons from the lights into the scene, then shoots camera rays to gather photons and estimate incident radiance; photons and camera rays together form a "connection" from the camera to the lights. Unlike path tracing, the algorithm is biased: bias is introduced by the kernel density estimation used to turn photons into radiance. Still, photon mapping is consistent, meaning it approaches the correct result as the number of emitted photons increases.

Photon mapping is especially effective at generating "difficult-to-sample" light paths, such as caustics. Photons are reused across multiple camera rays, making the algorithm computationally efficient. On the other hand, the photons must be kept in memory throughout the second pass, so memory size limits the maximum number of photons. Another downside is that the bias shows up in many forms, such as darkened edges, blotchy flat areas, and over-blurring. Final gathering was proposed to remedy the blotchiness: at the point of density estimation, we trace several rays to push the estimate one bounce further and gather all of them. Separating caustics-related photons into an additional caustics photon map to generate sharper caustics is also a common approach, usually combined with final gathering. See the result comparisons below.
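
A minimal sketch of the density estimate mentioned above (BSDF weighting and color channels omitted for brevity): sum the flux of the photons gathered within radius r of the shading point and divide by the disc area.

```cpp
#include <vector>

// Kernel density estimate on a surface: photonFlux holds the (already
// BSDF-weighted) flux of the photons found within 'radius' of the shading point.
float estimateRadiance(const std::vector<float> &photonFlux, float radius) {
    const float kPi = 3.14159265358979f;
    float sum = 0.f;
    for (float flux : photonFlux)
        sum += flux;
    return sum / (kPi * radius * radius);   // divide by the disc area pi * r^2
}
```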

Scene file

Path Tracing | Naive Photon Mapping | Final Gathering with Caustics Map

Comparison of the three algorithms in two scenes. After improving photon mapping with final gathering and a caustics map, the blotchy artifacts disappear.
Path tracing: 512 spp; photon mapping: 1,000,000 photons; final gathering with caustics map: 100,000 photons, 5 randomly shot gather rays at each diffuse point.

Progressive Photon Mapping

Progressive photon mapping (PPM) is a multi-pass algorithm with a first ray-tracing pass followed by any number of photon-tracing passes. Analogous to storing photons in photon mapping, PPM stores the points where camera rays hit the scene as visible points. In each subsequent photon-tracing pass, we compute a radiance estimate from the photons emitted in that iteration. By progressively shrinking the density-estimation kernel and aggregating the gathered radiance over all iterations, both noise and bias decrease in the final image (i.e., the algorithm converges). Final gathering and a caustics photon map are not required for PPM to yield satisfying results, so the code stays clean and simple.
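
For reference, the standard per-visible-point update from the original PPM paper (the exact constants and bookkeeping here may differ): with $\alpha \in (0, 1)$, $N_i$ photons accumulated so far, $M_i$ photons gathered in the current pass, and $\tau_i$ the accumulated flux,

$$ N_{i+1} = N_i + \alpha M_i, \qquad r_{i+1}^2 = r_i^2\,\frac{N_i + \alpha M_i}{N_i + M_i}, \qquad \tau_{i+1} = \left(\tau_i + \Phi_i\right)\frac{r_{i+1}^2}{r_i^2}. $$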

PPM stores visible points instead of photons, so the memory issue still exists. To finally circumvent the problem, stochastic progressive photon mapping (SPPM) was proposed; the extension from PPM to SPPM should be straightforward and is left for future work.

Scene file

Iter = 5 | Iter = 20 | Iter = 50

Progressive photon mapping with a varying number of photon-tracing passes.
Each pixel has 16 visible points (i.e., 16 spp), and 10,000 photons are emitted in each iteration. Bias diminishes with increasing iterations.

Photon = 1,000 | Photon = 10,000 | Photon = 100,000

Progressive photon mapping with a varying number of photons emitted in each pass.
Each pixel has 16 visible points (i.e., 16 spp), and the program runs 20 iterations. Bias diminishes as more photons are emitted per pass.

Volumetric Photon Mapping

For photon mapping to support rendering volumes, we need to deposit photons on both surfaces and in volumes; therefore, one surface photon map and one volume photon map are constructed separately. When a photon traverses the scene, it is stored in one of the two maps depending on the current scattering type. The kernel density estimation for surfaces and volumes is similar, except that in the 3D volume we query photons from a surrounding sphere instead of a disc; this changes the denominator in the estimation formula from $\pi r^2$ to $\frac{4}{3}\pi r^3$. For simplicity, we use the naive photon mapping algorithm as the base, so no final gathering or caustics map is applied.

Scene file

Path Tracing | Volumetric Photon Mapping

The same volume rendered with path tracing vs. volumetric photon mapping. The total number of photons used is 10,000,000.

Environment Map

Photon mapping can render all supported light source types (please refer to the light source subsection). To achieve this, each light type must define how photons are emitted according to its own characteristics. This is relatively straightforward for finite light types, but involves some tricks and creativity for the infinite ones (directional and environment map lights). See the samplePhoton(...) function inside each light class for details.

Formula of energy carried by emitted photons.

Scene file

Path Tracing | Naive Photon Mapping | Photon Mapping with Caustics

Validating the correctness of the environment map light source in the photon mapping algorithm.

Combination with Spectral Rendering

When it comes to storing photons / visible points in spectral rendering mode, representing them with dense spectra would be a huge memory burden and is totally unrealistic. Instead, we pre-sample a set of wavelength quadruplets and build one photon map for each quadruplet, storing the photons that carry exactly that sparse wavelength set (i.e., we trace photons multiple times, once per set). During the ray-tracing pass, we trace one ray per photon map (each ray starts with the same origin and direction but carries the sparse wavelengths of its photon map). In other words, we rely on the randomly pre-sampled wavelength quadruplets to cover the spectrum evenly, as a substitute for an "idealistic" dense-spectrum photon map.

The extension from RGB to spectral mode for photon mapping involves quite a bit of mundane coding and not much technical insight. Currently, photon mapping, progressive photon mapping, and volumetric photon mapping all support spectral rendering mode.

(More scenes to be added soon)

War in Snow Globe

This is (a slightly modified version of) the piece submitted to the rendering competition of the ETHZ 2022 CG course, under the theme "out of place". It is a collaborative work between me and @ZijunH. The presentation video is on YouTube.

This image depicts an indoor scene centered on a snow globe with a war scene placed inside. The warm, bright light shining through the globe, together with the cozy indoor atmosphere, strongly contrasts with the bombed building and flames inside the glass sphere. It was created to symbolize destruction amid peace and to appeal to people to resist war.

The techniques showcased by the rendering, annotated.

The scene was assembled in Blender and exported as a Nori-style XML file through a plugin. Some meshes are self-modeled; others come from online resources. We provide a side-by-side comparison with the scene rendered by Blender Cycles; notice that the scene is slightly modified.

Left: Blender      Right: Mine

Follow the instructions on the Nori website.

Reference

Open-Source Renderer

Blog

Paper

Libraries

Assets

This list is a bit long, as I am really interested in, and ambitious about, implementing existing state-of-the-art algorithms to render different visual effects.

Let me quote the words of Prof. Lingqi Yan here: Computer Graphics is AWESOME!

Performance Analysis and Optimization

  • Render time is recorded in the .exr file; use it to compare algorithms

Bug Fix

  • Spectral Rendering + Rough Dielectric material -> crash after running for a few minutes

Extension on Existing Algorithms

New Functionalities

  • BSDF
    • Conductor
    • Disney BSDF
  • Subsurface scattering

(Nice to have)

  • MipMap for textures
  • Stratified sampling
  • Equiangular sampling of single scattering
  • Denoising

(Hopefully not too ambitious)

  • Render atmosphere
  • Many-light sampling
  • Bidirectional path tracing
  • Path guiding
  • Metropolis Light Transport
