Jekyll feed (generated 2025-10-07T20:59:39+00:00): https://sking8.github.io/feed.xml, by Fan Feng

Cellular Topology Optimization on Differentiable Voronoi Diagrams
2022-05-05T00:00:00+00:00 https://sking8.github.io/TopoVoronoi

International Journal for Numerical Methods in Engineering (IJNME)

Fan Feng, Shiying Xiong, Ziyue Liu, Zangyueyang Xian, Yuqing Zhou, Hiroki Kobayashi, Atsushi Kawamoto, Tsuyoshi Nomura, Bo Zhu
[paper][Wiley Online Library][video]

topo_voronoi

Abstract

Cellular structures manifest their outstanding mechanical properties in many biological systems. One key challenge for designing and optimizing these geometrically complicated structures lies in devising an effective geometric representation to characterize the system’s spatially varying cellular evolution driven by objective sensitivities. A conventional discrete cellular structure, e.g., a Voronoi diagram, whose representation relies on discrete Voronoi cells and faces, lacks the differentiability needed to facilitate large-scale, gradient-based topology optimization. We propose a topology optimization algorithm based on a differentiable and generalized Voronoi representation that can evolve the cellular structure as a continuous field. The central piece of our method is a hybrid particle-grid representation to encode the previously discrete Voronoi diagram into a continuous density field defined in a Euclidean space. Based on this differentiable representation, we further extend it to tackle anisotropic cells, free boundaries, and functionally-graded cellular structures. Our differentiable Voronoi diagram enables the integration of an effective cellular representation into state-of-the-art topology optimization pipelines, defining a novel design space for cellular structures and enabling effective exploration of design options that were impractical for previous approaches. We showcase the efficacy of our approach by optimizing cellular structures with up to thousands of anisotropic cells, including a femur bone and an Odonata wing.

Video / Results


Fan Feng
Impulse Fluid Simulation
2022-02-23T00:00:00+00:00 https://sking8.github.io/ImpulseFluid

IEEE Transactions on Visualization and Computer Graphics (TVCG)
Fan Feng, Jinyuan Liu, Shiying Xiong, Shuqi Yang, Yaorui Zhang, Bo Zhu
[paper][video]

impulse_fluid

Abstract

We propose a new incompressible Navier–Stokes solver based on the impulse gauge transformation. The mathematical model of our approach draws from the impulse–velocity formulation of Navier–Stokes equations, which evolves the fluid impulse as an auxiliary variable of the system that can be projected to obtain the incompressible flow velocities at the end of each time step. We solve the impulse-form equations numerically on a Cartesian grid. At the heart of our simulation algorithm is a novel model to treat the impulse stretching and a harmonic boundary treatment to incorporate the surface tension effects accurately. We also build an impulse PIC/FLIP solver to support free-surface fluid simulation. Our impulse solver can naturally produce rich vortical flow details without artificial enhancements. We showcase this feature by using our solver to facilitate a wide range of fluid simulation tasks including smoke, liquid, and surface-tension flow. In addition, we discuss a convenient mechanism in our framework to control the scale and strength of the turbulent effects of fluid.

Video / Results


Citation

@ARTICLE{9707648,
  author={Feng, Fan and Liu, Jinyuan and Xiong, Shiying and Yang, Shuqi and Zhang, Yaorui and Zhu, Bo},
  journal={IEEE Transactions on Visualization and Computer Graphics}, 
  title={Impulse Fluid Simulation}, 
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TVCG.2022.3149466}}
Fan Feng
Bioluminescence Scene Rendering
2021-11-18T00:00:00+00:00 https://sking8.github.io/RenderingCompetition

(#) Rendering Algorithm

Collaborated by Fan Feng and Mengdi Wang.

We implemented the rendering program used for the final picture step by step, in the following order: (1) a participating-media path tracer; (2) photon mapping; (3) volumetric photon mapping (beam radiance estimation); (4) volumetric photon mapping with final gathering (not used for rendering); (5) a parallel version of (3). Program (5) is the one used for the final rendering.

(##) Skybox

Our y axis is Blender's z axis, our z axis is Blender's -y axis, and our x axis is Blender's -x axis. Blender interprets the FOV as horizontal when the aspect ratio is greater than one and as vertical when it is smaller than one, so we changed the Python script for exporting the scene to convert the horizontal FOV to a vertical one in that case.
Blender Mine
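The FOV conversion above follows the standard relation tan(hfov/2) = aspect * tan(vfov/2). A minimal C++ sketch of that conversion (the helper name `hfov_to_vfov` is ours; the actual fix lives in the Python export script):

```cpp
#include <cmath>

// Hypothetical helper illustrating the FOV conversion: Blender reports a
// horizontal FOV when aspect >= 1, while our camera expects a vertical FOV.
// Relation used: tan(hfov/2) = aspect * tan(vfov/2). Angles in degrees.
double hfov_to_vfov(double hfov_deg, double aspect)
{
    const double pi = 3.14159265358979323846;
    double hfov = hfov_deg * pi / 180.0;
    double vfov = 2.0 * std::atan(std::tan(hfov / 2.0) / aspect);
    return vfov * 180.0 / pi;
}
```

For a square image (aspect 1) the two angles coincide; for a 16:9 frame, a 90-degree horizontal FOV maps to roughly a 58.7-degree vertical FOV.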
(##) Participating Media
Blender Mine
We also verified that with all sigma values set to zero no volume is visible, and that with absorption set to 1 the sphere appears black.

(##) Photon Mapping

We first implemented a surface photon mapper, `PathTracerPhoton`, in `integrators/path_tracer_photon.cpp`. We designed the photon structure as a derived class of `Sphere`: since we initially chose a fixed-radius photon model, every photon is a sphere, so we could easily build a bounding box hierarchy of photons by reusing the existing code. Beyond the fields of `Sphere`, a `Photon` stores two additional values: the photon power $\Phi_i$ and the incident direction $\omega_i$.

For this first photon mapping integrator we implemented the photon sampling algorithm in `Scene` for simplicity. The sampling code consists of two functions. First, `void emit_photons(int N, const float &radius);` keeps emitting photons until enough valid photons have been sampled. Each sampled photon is then traced recursively by the second function, `int recursive_photon(Sampler &sampler, const float &radius, const Ray3f &ray, const Vec3f &phi, const Vec3f &phi_orig, const int more_bounces);`

In `recursive_photon()` we use the Russian roulette strategy, trying to keep the photon power near constant. One technical issue: in the equations in the slides, the power $\Phi$ is a scalar, but in our code it is a `Color3f`. We handle this with the method suggested by *Smal, Niklas, and Maksim Aizenshtein. "Real-Time Global Illumination with Photon Mapping." Ray Tracing Gems. Apress, 2019. 409-436*: use the largest of the three RGB channels as the power.

The photons are stored in a `BBH` hierarchy. During rendering, a function in `BBH` and `BBHNode` queries the photon radiance near a given point: `Color3f BBHNode::photon_radiance(const HitInfo &hit, const Vec3f &w, const float &r)`.
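The Russian roulette step with the largest-channel rule can be sketched as follows (a minimal illustration, not the renderer's actual code; the `Color3f` struct and `russian_roulette` helper here are stand-ins):

```cpp
#include <algorithm>
#include <random>

// Stand-in for the renderer's RGB color type.
struct Color3f { float r, g, b; };

static float max_channel(const Color3f &c)
{
    return std::max(c.r, std::max(c.g, c.b));
}

// Russian roulette on a photon's RGB power, using the largest channel as
// the scalar power (Smal & Aizenshtein). Returns true if the photon
// survives; on survival the power is rescaled so its expectation is
// unchanged, keeping the traced power near constant.
bool russian_roulette(Color3f &phi, const Color3f &phi_orig, std::mt19937 &rng)
{
    // Survival probability: ratio of current to original power, clamped to 1.
    float p = std::min(1.0f, max_channel(phi) / max_channel(phi_orig));
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    if (u(rng) > p)
        return false;                        // photon absorbed
    phi.r /= p; phi.g /= p; phi.b /= p;      // compensate for the killed paths
    return true;
}
```

When the photon still carries its original power, p is 1 and it always survives unchanged; as the power decays through bounces, the survival probability drops proportionally.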
It recursively visits (in the tree hierarchy) every photon whose sphere contains the point `hit.p` — every photon has a radius — and accumulates the radiance. The `Li()` function of the integrator `PathTracerPhoton` then calls a recursive function `Color3f recursive_Li(const Scene &scene, Sampler &sampler, const Ray3f &ray, int more_bounces)`, which terminates at the first diffuse material the path lands on and queries the radiance near that point.

At this stage we got results with severe splotch artifacts. While tuning parameters we realized that the result is highly sensitive to the fixed radius `r`, so we switched to the other strategy: k-nearest neighbors. That is, for some parameter $n$, we query the nearest $n$ photons of a point $x$ and set the radius `r_i` to the distance of the farthest of them. This requires a knn query method in `BBH`. The algorithm is based on a max-heap, i.e., a `std::priority_queue`: we push every candidate photon into the queue and pop whenever its size exceeds $n$. "Candidate" here means that if the distance from a bounding box to $x$ is larger than the current maximum distance in the queue, we do not search the subtree rooted at that node. We did not write a separate point-cloud-like photon structure, both because we would later need radii on all photons for volumetric photon mapping anyway, and for simplicity; instead we just set the photon radius to a very small number.

With this we obtained a running photon mapping program, but the rendered image was still darker than the reference. We eventually found the cause: the `Material::eval()` function in our code already includes the cosine term, and we had multiplied it in a second time.
After fixing this, photon mapping generates correct pictures. To validate the correctness of the knn search, we replaced it with a naive algorithm — sort all photons and take the nearest $n$ — and the naive version generates exactly the same picture, confirming that our knn search is correct:
Tree search Naive search
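The max-heap knn query described above can be sketched like this (a simplified 1D illustration over a flat photon list; the real code walks the `BBH` and prunes subtrees whose bounding box is farther than the current heap maximum):

```cpp
#include <cmath>
#include <queue>
#include <vector>

// Keep the n nearest photons in a max-heap keyed on distance, popping
// whenever the heap grows past n. The top of the heap is then the
// distance of the farthest of the n nearest photons, which becomes the
// query radius r_i. Assumes at least one photon; positions are 1D here
// for brevity.
float knn_radius(const std::vector<float> &photon_pos, float x, std::size_t n)
{
    std::priority_queue<float> heap; // max-heap of distances
    for (float p : photon_pos) {
        heap.push(std::fabs(p - x));
        if (heap.size() > n)
            heap.pop();              // discard the farthest candidate
    }
    return heap.top();
}
```

For example, querying the 3 nearest photons of x = 0 among positions {0, 1, 2, 5, 10} yields a radius of 2.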
We first validate the photon mapping with the three sphere lights:
Reference Mine
Reference Mine
Reference Mine
All these results are rendered with 200 photons and 50 nearest photons in the estimation. We can see that we lose the highlight under the light source, because photon mapping is biased. However, comparing our three photon mapping pictures, we see a similar radiance distribution on the floor, which shows that our algorithm is consistent.
large medium small
We further validate our result with the Jensen box scene. 200 photons, 50 in estimation:
Mine Reference
100000 photons, 50 in estimation:
Mine Reference
500000 photons, 500 in estimation:
Mine Reference
The reference picture used more samples per pixel, so it looks smoother.

(##) Volumetric Photon Mapping

Next we move on to volumetric photon mapping: the integrator `PathTracerVPT` in `integrator/path_tracer_vpt.cpp`. We moved the photon sampling function into `PathTracerVPT` to avoid confusion. Instead of a single BBH of surface photons, we must additionally handle the volumetric photons. For the beam estimation, each volumetric photon needs its own radius: we first store all volumetric photons in a BBH, apply a knn search to each of them to decide its radius, and insert the photon with the newly computed radius into a second BBH. The beam radiance estimation is then a ray–BBH intersection query that sums the radiance of all intersected photons. We can compare the results of volumetric photon mapping against the volume path tracer:
Volume Photon Path Tracer
The red ball is filled with participating media. The volumetric photon mapping is done with 200 photons and 50 in estimation. It is worth pointing out that photon mapping does not handle the background well (the background is never sampled), so the far regions are darker than in the volume path tracer. However, the media is smoother than with the path tracer, and the radiance level is the same.
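The beam radiance estimation can be sketched as follows (a simplified illustration over a flat photon list rather than the `BBH`, with a flat disk kernel and without phase function or transmittance; the struct names are ours, not the renderer's):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct VolumePhoton { Vec3 p; float radius; float power; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// March a ray (origin o, unit direction d) against every volumetric
// photon and sum the contribution of each photon sphere the ray passes
// through, weighted by a flat disk kernel 1 / (pi * r^2).
float beam_estimate(const std::vector<VolumePhoton> &photons, Vec3 o, Vec3 d)
{
    const float pi = 3.14159265f;
    float L = 0.0f;
    for (const auto &ph : photons) {
        Vec3 oc = sub(ph.p, o);
        float t = dot(oc, d);            // closest approach along the ray
        if (t < 0.0f) continue;          // closest approach behind the origin
        float d2 = dot(oc, oc) - t * t;  // squared ray-to-photon distance
        if (d2 < ph.radius * ph.radius)
            L += ph.power / (pi * ph.radius * ph.radius);
    }
    return L;
}
```

The real query replaces the linear scan with the ray–BBH intersection described above, which skips whole subtrees the beam never touches.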
Volume Photon Path Tracer
This comparison shows the advantage of volumetric photon mapping. Both images were rendered in approximately the same time; the two algorithms give a similar radiance level, and the volumetric photon mapper produces the better-quality picture.

(##) Final Gathering

We then tried to implement final gathering in `FinalGatherVPT`, by slightly modifying the `recursive_Li()` function of the integrator. Because we expect ~10,000 light sources in our final scene, we can only use material sampling here (light sampling would cost too much time), but the result is not satisfactory: a lot of fireflies emerge from the material sampling. So we did not use this program for the final rendering.

(##) Parallelization

We use `OpenMP` to parallelize the program. We put all pixel indices into a vector and iterate over it with an `omp for`. The only special requirement is generating a different sampler for every pixel, which is done by the `seed(seedx, seedy)` function. We also parallelized the photon emission, tracing multiple photons in parallel and guarding `add_child()` with `omp critical`. The scalability of the parallelized program is good.
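The per-pixel sampler seeding can be sketched like this (a minimal illustration; `Sampler` here is a stand-in xorshift RNG, not the renderer's class, and the body of the loop stands in for the actual `Li()` call):

```cpp
#include <cstdint>
#include <vector>

// Stand-in sampler: each pixel seeds its own RNG deterministically, so
// threads never share sampler state and the render is reproducible.
struct Sampler {
    std::uint64_t state;
    void seed(std::uint64_t sx, std::uint64_t sy) {
        state = sx * 0x9E3779B97F4A7C15ull ^ sy ^ 0xD1B54A32D192ED03ull;
    }
    float next() { // xorshift step, mapped to [0, 1)
        state ^= state << 13; state ^= state >> 7; state ^= state << 17;
        return (state >> 11) * (1.0f / 9007199254740992.0f); // / 2^53
    }
};

void render(int width, int height, std::vector<float> &image)
{
    // Flatten all pixel indices into a vector, then iterate in parallel.
    std::vector<int> pixels(width * height);
    for (int i = 0; i < (int)pixels.size(); ++i) pixels[i] = i;

    #pragma omp parallel for schedule(dynamic)
    for (int k = 0; k < (int)pixels.size(); ++k) {
        int x = pixels[k] % width, y = pixels[k] / width;
        Sampler sampler;
        sampler.seed(x, y);                 // independent stream per pixel
        image[pixels[k]] = sampler.next();  // placeholder for Li(...)
    }
}
```

Because every pixel derives its stream purely from its own coordinates, the image is identical whether the loop runs serially or across threads.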
Fan Feng
AR Prostate Surgery Module
2019-08-12T02:53:32+00:00 https://sking8.github.io/ARProstateBiopsyModule

This is a project that I developed in the summer of my junior year.
I made the 3D animation of the patient and doctor interaction, along with realistic-looking models of the ultrasound machine and the patient's clothes. The work was then imported into Unity to add interaction elements such as a video controller and 3D sound.

Fan Feng
Pusheen Credit Card Application Website
2019-08-12T02:53:32+00:00 https://sking8.github.io/PusheenWebsite

Link to Website

This is a website that I created with four teammates for the Market Place Competition held by Red Ventures. I initiated and formed the team, divided the work among teammates according to their specializations, and planned and enforced the timeline so that we finished the project within the one-week limit.

main

sub

detail

apply

Partial code is available on my github project page.

Fan Feng
2D Design
2019-02-27T02:53:32+00:00 https://sking8.github.io/2Ddesign
Fan Feng

Drawings
2019-02-27T02:53:32+00:00 https://sking8.github.io/drawing
Fan Feng

Water Color
2019-02-27T02:53:32+00:00 https://sking8.github.io/waterColor
Fan Feng

2D Graphics Engine
2019-02-27T02:53:32+00:00 https://sking8.github.io/2DGraphicsEngine

This is the 2D graphics engine that I developed for a 2D graphics class. I enjoyed the class a lot: I was finally able to take a course in computer graphics after completing all the prerequisites, and it helped me build a strong foundation in 2D graphics. I learned the whole graphics pipeline, from inputting vertex data, through matrix transformations and edge clipping, to scan-line conversion. I did not use any graphics libraries; everything was built from scratch. I really appreciate the ability of code to produce beautiful images with high performance.

Concepts involved in making the project: geometric primitives, scan conversion, clipping, transformations, compositing, texture sampling, gradients, antialiasing, filtering, parametric curves, and geometric stroking.

The project is implemented in C/C++.

Partial code is available on my github project page.

(Image gallery: gradient, blend modes, lion, poly, quad, clock, rings, spock, quad)

Fan Feng
AR Laparoscopic Training System
2019-02-27T02:53:32+00:00 https://sking8.github.io/ARLaparoSurgeryTraining

This is a research project I produced together with students at our school and at a university in Arizona.
The purpose of the project is to train medical students in laparoscopic surgery within an AR system. I created precise 3D models of the laparoscopic instruments and generated several animation clips to train a neural network that tracks the medical instruments. I also tried an autoencoder-based method to track the pose of the prism.

Fan Feng