International Journal for Numerical Methods in Engineering (IJNME)
Fan Feng, Shiying Xiong, Ziyue Liu, Zangyueyang Xian, Yuqing Zhou, Hiroki Kobayashi, Atsushi Kawamoto, Tsuyoshi Nomura, Bo Zhu
[paper][Wiley Online Library][video]

Cellular structures manifest outstanding mechanical properties in many biological systems. One key challenge in designing and optimizing these geometrically complicated structures lies in devising an effective geometric representation that characterizes the system's spatially varying cellular evolution driven by objective sensitivities. A conventional discrete cellular structure, e.g., a Voronoi diagram whose representation relies on discrete Voronoi cells and faces, lacks the differentiability needed to facilitate large-scale, gradient-based topology optimization. We propose a topology optimization algorithm based on a differentiable and generalized Voronoi representation that can evolve the cellular structure as a continuous field. The central piece of our method is a hybrid particle-grid representation that encodes the previously discrete Voronoi diagram into a continuous density field defined in Euclidean space. We further extend this differentiable representation to tackle anisotropic cells, free boundaries, and functionally graded cellular structures. Our differentiable Voronoi diagram enables the integration of an effective cellular representation into state-of-the-art topology optimization pipelines, defining a novel design space in which cellular structures can effectively explore design options that were impractical for previous approaches. We showcase the efficacy of our approach by optimizing cellular structures with up to thousands of anisotropic cells, including a femur bone and an Odonata wing.
IEEE Transactions on Visualization and Computer Graphics (TVCG)
Fan Feng, Jinyuan Liu, Shiying Xiong, Shuqi Yang, Yaorui Zhang, Bo Zhu
[paper][video]

We propose a new incompressible Navier–Stokes solver based on the impulse gauge transformation. The mathematical model of our approach draws from the impulse–velocity formulation of Navier–Stokes equations, which evolves the fluid impulse as an auxiliary variable of the system that can be projected to obtain the incompressible flow velocities at the end of each time step. We solve the impulse-form equations numerically on a Cartesian grid. At the heart of our simulation algorithm is a novel model to treat the impulse stretching and a harmonic boundary treatment to incorporate the surface tension effects accurately. We also build an impulse PIC/FLIP solver to support free-surface fluid simulation. Our impulse solver can naturally produce rich vortical flow details without artificial enhancements. We showcase this feature by using our solver to facilitate a wide range of fluid simulation tasks including smoke, liquid, and surface-tension flow. In addition, we discuss a convenient mechanism in our framework to control the scale and strength of the turbulent effects of fluid.
@ARTICLE{9707648,
author={Feng, Fan and Liu, Jinyuan and Xiong, Shiying and Yang, Shuqi and Zhang, Yaorui and Zhu, Bo},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Impulse Fluid Simulation},
year={2022},
volume={},
number={},
pages={1-1},
doi={10.1109/TVCG.2022.3149466}}
We implemented the rendering program used for the final picture step by step, in the following order:
(1) Participating media path tracer.
(2) Photon mapping.
(3) Volumetric photon mapping (beam radiance estimation).
(4) Volumetric photon mapping with final gathering (not used for rendering).
(5) A parallel version of (3).
Program (5) is the one we used for the final rendering.
(##) Skybox
Our y axis is Blender's z axis, our z axis is Blender's -y axis, and our x axis is Blender's -x axis. Blender uses a horizontal FOV when the aspect ratio is greater than one and a vertical FOV when it is smaller than one, so we changed the Python script for exporting the scene to convert the FOV accordingly.
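A minimal sketch of the two conversions, using illustrative helper names rather than the actual export script's:

```cpp
#include <cmath>

// Blender axes -> our axes, as described above:
// our x = -Blender x, our y = Blender z, our z = -Blender y.
struct Vec3 { double x, y, z; };
Vec3 from_blender(const Vec3 &b) { return { -b.x, b.z, -b.y }; }

// Blender stores a horizontal FOV when aspect = width/height > 1; if the
// camera expects a vertical FOV, convert it (angles in radians).
double hfov_to_vfov(double hfov, double aspect) {
    return 2.0 * std::atan(std::tan(hfov / 2.0) / aspect);
}
```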
(##) Photon Mapping
We first implemented surface photon mapping as `PathTracerPhoton` in `integrators/path_tracer_photon.cpp`. We designed the photon structure as a derived class of `Sphere`: since we initially chose a fixed-radius photon model, every photon is a sphere, which lets us build a bounding-box hierarchy of photons by reusing the existing code. In addition to the fields of `Sphere`, `Photon` stores two values: the photon power $\Phi_i$ and the incident direction $\omega_i$.
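A minimal sketch of this layout; `Sphere` here is a stand-in for the renderer's actual class, which has more members:

```cpp
// Toy vector/color types standing in for the renderer's Vec3f/Color3f.
struct Vec3f { float x, y, z; };
using Color3f = Vec3f;

// Minimal stand-in for the existing Sphere class.
struct Sphere {
    Vec3f center;
    float radius;
};

// Photon derives from Sphere so the existing bounding-box-hierarchy
// builder can bound photons by their (fixed) radius.
struct Photon : public Sphere {
    Color3f power;  // photon power Phi_i carried along the path
    Vec3f   wi;     // incident direction omega_i at the hit point
};
```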
For this first photon-mapping integrator, we implemented the photon sampling algorithm in `Scene` for simplicity. The sampling code consists of two functions. The first, `void emit_photons(int N, const float &radius);`, keeps emitting photons until enough valid photons have been sampled.
When a photon is sampled, it is traced recursively by the second function, `int recursive_photon(Sampler &sampler, const float &radius, const Ray3f &ray, const Vec3f &phi, const Vec3f &phi_orig, const int more_bounces);`.
In `recursive_photon()` we use the Russian roulette strategy to keep the photon power approximately constant. One technical issue is that in the slide equations the power $\Phi$ is a scalar, while in our code it is a `Color3f`. To deal with this, we use the method suggested by *Smal, Niklas, and Maksim Aizenshtein. "Real-time global illumination with photon mapping." Ray Tracing Gems. Apress, Berkeley, CA, 2019. 409-436*: use the largest of the three RGB channels as the power.
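A sketch of this Russian roulette step, with `std::mt19937` standing in for the project's `Sampler`:

```cpp
#include <algorithm>
#include <random>

struct Color3f { float r, g, b; };

// Survival probability is the largest RGB channel of the photon power
// (the Smal & Aizenshtein trick); a surviving photon's power is rescaled
// by 1/p so its expected value is unchanged.
bool roulette_survive(Color3f &phi, std::mt19937 &rng) {
    float p = std::min(1.0f, std::max({phi.r, phi.g, phi.b}));
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    if (dist(rng) >= p) return false;      // photon absorbed
    phi.r /= p; phi.g /= p; phi.b /= p;    // keep expected power constant
    return true;
}
```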
The photons are stored in a `BBH` hierarchy. For the rendering pass, we added a function to `BBH` and `BBHNode` that queries the photon radiance near a given point: `Color3f BBHNode::photon_radiance(const HitInfo &hit, const Vec3f &w, const float &r)`. It recursively visits (through the tree hierarchy) every photon whose sphere contains the point `hit.p` (every photon has a radius) and accumulates the radiance.
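The density estimate behind this query can be sketched as follows; a linear scan stands in for the `BBH` traversal, a scalar power stands in for `Color3f`, and a diffuse BRDF $\rho/\pi$ is assumed, so all identifiers here are illustrative:

```cpp
#include <vector>

struct Vec3f { float x, y, z; };

float dist2(const Vec3f &a, const Vec3f &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct Photon { Vec3f pos; float radius; float power; };

// Every photon whose sphere contains the shading point p contributes
// power / (pi r^2), weighted by the diffuse BRDF rho / pi.
float photon_radiance(const std::vector<Photon> &photons,
                      const Vec3f &p, float rho) {
    const float kPi = 3.14159265358979f;
    float L = 0.0f;
    for (const Photon &ph : photons)
        if (dist2(ph.pos, p) <= ph.radius * ph.radius)
            L += (rho / kPi) * ph.power / (kPi * ph.radius * ph.radius);
    return L;
}
```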
Then, the function `Li()` in the integrator `PathTracerPhoton` calls a recursive function `Color3f recursive_Li(const Scene &scene, Sampler &sampler, const Ray3f &ray, int more_bounces)`, which terminates at the first diffuse material the path lands on and queries the radiance near that point.
At this point, we got results with severe splotchy artifacts, like this:
While tuning parameters, we realized that the rendering result is highly sensitive to the fixed radius `r`, so we decided to switch to another strategy: k-nearest neighbors. That is, for some parameter $n$, we query the nearest $n$ photons of a point $x$ and set the radius `r_i` to the distance of the farthest photon among them. This requires a knn query method in `BBH`. The algorithm is based on a max-heap, i.e., a `std::priority_queue`: we push each candidate photon into the priority queue and pop whenever its size exceeds $n$. Here "candidate" means that if the distance from a bounding box to $x$ is larger than the maximum distance in the priority queue, we do not search the subtree rooted at that node.
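The heap-based selection can be sketched over a flat photon array; the bounding-box pruning of the real `BBH` version is omitted, and names are illustrative:

```cpp
#include <queue>
#include <utility>
#include <vector>

struct Vec3f { float x, y, z; };

float dist2(const Vec3f &a, const Vec3f &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Max-heap kNN: the heap holds the n nearest candidates seen so far, with
// the farthest on top. A candidate farther than the top is discarded,
// mirroring the subtree-skip test in the tree version.
std::vector<Vec3f> knn(const std::vector<Vec3f> &photons,
                       const Vec3f &x, std::size_t n) {
    using Entry = std::pair<float, std::size_t>;  // (squared distance, index)
    std::priority_queue<Entry> heap;              // max-heap on distance
    for (std::size_t i = 0; i < photons.size(); ++i) {
        float d2 = dist2(photons[i], x);
        if (heap.size() < n) heap.push({d2, i});
        else if (d2 < heap.top().first) { heap.pop(); heap.push({d2, i}); }
    }
    std::vector<Vec3f> result;
    while (!heap.empty()) { result.push_back(photons[heap.top().second]); heap.pop(); }
    return result;  // farthest-first, so result.front() sets the query radius
}
```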
However, we did not write a separate point-cloud-like photon structure, partly for simplicity and partly because we expected to assign radii to all photons later anyway for volumetric photon mapping. Instead, we simply set the photon radius to a very small number.
Finally we obtained a running photon mapping program. However, at this point we still had an issue: the resulting picture was darker than the reference, like this:
Later we found the reason: the `Material::eval()` function in our code already takes care of the cosine term. We did not take this into account and multiplied the term in again. After fixing this, the photon mapping generates correct pictures:
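The fix can be illustrated with a minimal Lambertian sketch; the names are illustrative, and in this toy `eval` the cosine is already folded in, mirroring our `Material::eval()`:

```cpp
// Toy Lambertian eval that, like Material::eval() in our code, already
// includes the cosine term: f_r * cos(theta) = (albedo / pi) * cos(theta).
float eval_with_cosine(float albedo, float cos_theta) {
    const float kPi = 3.14159265358979f;
    return albedo / kPi * cos_theta;
}

// Buggy estimator: multiplies the cosine a second time, darkening the image.
float contribution_buggy(float albedo, float cos_theta) {
    return eval_with_cosine(albedo, cos_theta) * cos_theta;
}

// Fixed estimator: eval() alone already accounts for the cosine.
float contribution_fixed(float albedo, float cos_theta) {
    return eval_with_cosine(albedo, cos_theta);
}
```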
To validate the correctness of the knn search, we replaced it with a naive algorithm: sort all photons and take the nearest $n$. The naive version generates exactly the same picture, which means our knn search is correct:
So we did not use this program for the final rendering.
(##) Parallelization
We use `OpenMP` to parallelize the program. We put all pixel indices into a vector and iterate over it with an `omp for`. The only special part is that we need a different sampler for every pixel, which is handled by the `seed(seedx, seedy)` function.
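The pixel loop can be sketched as follows; the hash-style `Sampler` here is a deterministic stand-in for the renderer's real sampler:

```cpp
#include <cstdint>
#include <vector>

// Toy per-pixel sampler: seed() derives an independent xorshift state from
// the pixel coordinates, so each pixel gets its own deterministic stream.
struct Sampler {
    std::uint64_t state = 1;
    void seed(std::uint32_t sx, std::uint32_t sy) {
        state = (std::uint64_t(sx) << 32) ^ sy ^ 0x9E3779B97F4A7C15ull;
    }
    float next() {
        state ^= state << 13; state ^= state >> 7; state ^= state << 17;
        return float(state >> 11) * (1.0f / 9007199254740992.0f);  // / 2^53
    }
};

// Each iteration is independent, so the loop parallelizes trivially once
// every pixel reseeds its own sampler.
std::vector<float> render(int width, int height) {
    std::vector<float> image(std::size_t(width) * height);
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < width * height; ++i) {
        Sampler sampler;
        sampler.seed(i % width, i / width);  // independent stream per pixel
        image[i] = sampler.next();           // stand-in for the Li() estimate
    }
    return image;
}
```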
We also parallelized the photon emission by tracing multiple photons in parallel and guarding `add_child()` with `omp critical`. The parallelized program scales well.
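A sketch of the guarded emission loop, with `trace_photon()` as an illustrative stand-in for the recursive tracing:

```cpp
#include <vector>

struct Photon { int id; };

// Stand-in for the recursive photon tracing; each call is independent work.
Photon trace_photon(int i) { return Photon{ i }; }

std::vector<Photon> emit_photons(int n) {
    std::vector<Photon> photons;
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        Photon p = trace_photon(i);  // runs concurrently across threads
        #pragma omp critical
        photons.push_back(p);        // shared-state update, serialized
    }
    return photons;
}
```

Since only the container insertion is serialized while the expensive tracing runs concurrently, the critical section costs little.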
This is a website that I created with four other teammates for the Market Place Competition held by Red Ventures. I initiated and formed the team, divided the tasks among teammates according to their specializations, and planned and enforced the timeline to finish the project within the one-week limit.




Partial code is available on my github project page.
Concepts involved in making the project: geometric primitives, scan conversion, clipping, transformations, compositing, texture sampling, gradients, antialiasing, filtering, parametric curves, and geometric stroking.
The project is implemented in C/C++.
Partial code is available on my github project page.
