WebGPU is a new graphics and compute API designed by the “GPU for the Web” W3C community group. It aims to provide modern features such as “GPU compute” as well as lower overhead access to GPU hardware and better, more predictable performance. WebGPU should work with existing platform APIs such as Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group.
WebGPU is designed for the Web, used by JavaScript and WASM applications, and driven by the shared principles of Web APIs. However, it doesn't have to be only for the Web. Targeting WebGPU on native enables writing extremely portable and fairly performant graphics applications. The WebGPU API is beginner-friendly, meaning that it automates some aspects of low-level graphics APIs that have high complexity but a low return on investment. It still has the core pieces of the next-gen APIs, such as command buffers, render passes, pipeline states and layouts. Because the complexity is reduced, users can direct more focus towards writing efficient application code.
From the very beginning, Google had both native and in-browser use of their implementation, which is now called Dawn. Mozilla has a shared interest in allowing developers to target a common “WebGPU on native” target instead of a concrete “Dawn” or “wgpu-native”. This is achieved by a shared header and C-compatible libraries implementing it. However, this specification is still a moving target.
This repository contains a collection of open source C examples for WebGPU using Dawn, the open-source and cross-platform implementation of the work-in-progress WebGPU standard.
- Supported Platforms
- Get the Sources
- Building for native with Dawn
- Running the examples
- Project Layout
- Examples
- Dependencies
- Credits
- References
- Roadmap
- License
This repository contains submodules for external dependencies, so when doing a fresh clone you need to clone recursively:
$ git clone --recursive https://github.com/samdauwe/webgpu-native-examples.git
Existing repositories can be updated manually:
$ git submodule update --init

The examples are built on top of Dawn, an open-source and cross-platform implementation of the work-in-progress WebGPU standard.
Build the examples using the following commands:
$ cmake -B build && cmake --build build -j4

To build and run the examples inside a Docker container, follow the steps described below.
Build the Docker image:
$ bash ./build.sh -docker_build

Run the Docker container:
$ bash ./build.sh -docker_run

Finally, build the samples:
$ bash ./build.sh -webgpu_native_examples

The build step described in the previous section creates a subfolder "x64" in the build folder. This subfolder contains all libraries and assets needed to run the examples. A separate executable is created for each example.
$ ./hello_triangle

├─ 📂 assets/ # Assets (models, textures, shaders, etc.)
├─ 📂 doc/ # Documentation files
│ └─ 📁 images # WebGPU diagram, logo
├─ 📂 docker/ # Contains the Dockerfile for building Docker image
├─ 📂 external/ # Dependencies
│ ├─ 📁 cglm # Highly Optimized Graphics Math (glm) for C
│ ├─ 📁 dawn # WebGPU implementation
│ └─ 📁 ... # Other Dependencies (cgltf, cimgui, stb, etc.)
├─ 📂 screenshots/ # Contains screenshots for each functional example
├─ 📂 src/ # Helper functions and examples source code
│ ├─ 📁 core # Base functions (input, camera, logging, etc.)
│ ├─ 📁 examples # Examples source code, each example is located in a single file
│ ├─ 📁 platforms # Platform dependent functionality (input handling, window creation, etc.)
│ ├─ 📁 webgpu # WebGPU related helper functions (buffers & textures creation, etc.)
│ └─ 📄 main.c # Example launcher main source file
├─ 📄 .clang-format # Clang-format file for automatically formatting C code
├─ 📄 .gitmodules # Used Git submodules
├─ 📄 .gitignore # Ignore certain files in git repo
├─ 📄 build.sh # bash script to automate different aspects of the build process
├─ 📄 CMakeLists.txt # CMake build file
├─ 📄 LICENSE # Repository License (Apache-2.0 License)
└─ 📃 README.md # Read Me!

This example shows how to set up a swap chain and clear the screen. The screen clearing animation shows a fade-in and fade-out effect.
Illustrates the coordinate systems used in WebGPU. WebGPU’s coordinate systems match DirectX’s and Metal’s coordinate systems in a graphics pipeline. The Y-axis is up in normalized device coordinates (NDC): the point (-1.0, -1.0) is located at the bottom-left corner of NDC. This example has several options for changing relevant pipeline state, and displays meshes with WebGPU- or Vulkan-style coordinates.
Screenshots: Render, Depth, Texture.
Minimalistic render pipeline demonstrating how to render a full-screen colored quad.
This example shows how to render a static colored square in WebGPU using only vertex buffers.
Basic and verbose example for getting a colored triangle rendered to the screen using WebGPU. This is meant as a starting point for learning WebGPU from the ground up.
This example shows rendering a basic triangle.
This example shows some of the alignment requirements involved when updating and binding multiple slices of a uniform buffer.
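The key constraint is WebGPU's `minUniformBufferOffsetAlignment` limit, whose default is 256 bytes: every dynamic offset into a uniform buffer must be a multiple of it. A minimal sketch of the padding arithmetic (helper names are ours, not the example's):

```c
#include <stdint.h>

/* WebGPU's default minUniformBufferOffsetAlignment limit is 256 bytes,
 * so each dynamically-offset slice of a uniform buffer must start on a
 * 256-byte boundary, even when the data itself is smaller. */
#define UNIFORM_OFFSET_ALIGNMENT 256u

/* Round `size` up to the next multiple of `alignment` (a power of two). */
static uint32_t align_to(uint32_t size, uint32_t alignment)
{
  return (size + alignment - 1u) & ~(alignment - 1u);
}

/* Byte offset of the i-th slice in a shared uniform buffer. */
static uint32_t slice_offset(uint32_t i, uint32_t slice_size)
{
  return i * align_to(slice_size, UNIFORM_OFFSET_ALIGNMENT);
}
```

A 64-byte matrix therefore occupies a 256-byte stride, and slice `i` lives at byte offset `i * 256`.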
This example shows how to render points of various sizes using a quad and instancing. You can read more details here.
This example provides different camera implementations.
Dynamic uniform buffers are used for rendering multiple objects with multiple matrices stored in a single uniform buffer object. Individual matrices are dynamically addressed upon bind group binding time, minimizing the number of required bind groups.
This example shows how to render and sample from a cubemap texture.
This example shows how to bind and sample textures.
This example shows how to upload a 2D texture to the GPU and display it on a quad with Phong lighting.
This example demonstrates procedural 3D texture generation using fractal Perlin noise. A 128x128x128 R8 noise volume is created on the CPU and uploaded as a 3D texture. An animated depth value sweeps through the volume, revealing smoothly changing cross-sections rendered with Phong lighting on a quad.
This example shows how to render an equirectangular panorama consisting of a single rectangular image. The equirectangular input can be used for a 360 degrees viewing experience to achieve more realistic surroundings and convincing real-time effects.
This example demonstrates the use of blending in WebGPU. It shows how to configure different blend operations, source factors, and destination factors for both color and alpha channels. The example displays two overlapping images with various blend modes that can be selected through a GUI.
This example demonstrates order independent transparency using a per-pixel linked-list of translucent fragments.
This example shows the use of the reversed z technique for better utilization of depth buffer precision. The left column uses the regular method, while the right one uses the reversed z technique. Both use depth32float as their depth buffer format. A set of red and green planes are positioned very close to each other. Higher sets are placed further from the camera (and are scaled for better visibility). To use reversed z to render your scene, you need the depth clear value to be 0.0, the depth compare function to be greater, and to remap the depth range by multiplying an additional matrix into your projection matrix.
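The remap itself is tiny: with WebGPU's [0, 1] clip-space depth, reversed z replaces z with w − z before the perspective divide (the extra matrix's z row is (0, 0, −1, 1)). A sketch of the effect on the final depth value:

```c
/* Depth after the perspective divide when the projection matrix is
 * left-multiplied by the reversed-z remap matrix: clip z becomes
 * -z + w, so depth = 1 - z/w. The near plane (z = 0) maps to 1.0 and
 * the far plane (z = w) maps to 0.0, which is why the depth buffer is
 * cleared to 0.0 and the compare function set to "greater". */
static float reversed_depth(float clip_z, float clip_w)
{
  float remapped = -clip_z + clip_w;
  return remapped / clip_w;
}
```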
Visualizes what all the sampler parameters do. Shows a textured plane at various scales (rotated, head-on, in perspective, and in vanishing perspective). The bottom-right view shows the raw contents of the 4 mipmap levels of the test texture (16x16, 8x8, 4x4, and 2x2).
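For reference, the number of levels in a full mip chain is floor(log2(max(width, height))) + 1; the 16x16 test texture here only exposes the 16..2 levels, but a complete chain continues down to 1x1. A small sketch:

```c
#include <stdint.h>

/* Number of mip levels in a full chain for a 2D texture: halve the
 * larger dimension until it reaches 1, counting the base level. */
static uint32_t mip_level_count(uint32_t width, uint32_t height)
{
  uint32_t max_dim = width > height ? width : height;
  uint32_t levels  = 1u;
  while (max_dim > 1u) {
    max_dim >>= 1u;
    ++levels;
  }
  return levels;
}
```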
This example shows how to render a single indexed triangle model as mesh, wireframe, or wireframe with thick lines, without the need to generate additional buffers for line rendering.
Uses vertex pulling to let the vertex shader decide which vertices to load, which allows us to render indexed triangle meshes as wireframes or even thick-wireframes.
- A normal wireframe is obtained by drawing 3 lines (6 vertices) per triangle. The vertex shader then uses the index buffer to load the triangle vertices in the order in which we need them to draw lines.
- A thick wireframe is obtained by rendering each of the 3 lines of a triangle as a quad (comprising 2 triangles). For each triangle of the indexed model, we are drawing a total of 3 lines/quads = 6 triangles = 18 vertices. Each of these 18 vertices belongs to one of three lines, and each vertex shader invocation loads the start and end of the corresponding line. The line is then projected to screen space, and the orthogonal of the screen-space line direction is used to shift the vertices of each quad into the appropriate directions to obtain a thick line.
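The index arithmetic in the thick-wireframe vertex shader can be sketched on the CPU like this (struct and function names are ours, not the example's):

```c
#include <stdint.h>

/* For the thick-wireframe mode, each source triangle expands to
 * 3 quads (one per edge), i.e. 18 vertices. Given the auto-generated
 * vertex index, recover which triangle, which of its 3 edges, and the
 * index-buffer slots of that edge's two endpoints. */
typedef struct {
  uint32_t triangle;    /* source triangle index           */
  uint32_t edge;        /* edge 0..2 within that triangle  */
  uint32_t start_slot;  /* index-buffer slot of edge start */
  uint32_t end_slot;    /* index-buffer slot of edge end   */
} WireVertexInfo;

static WireVertexInfo wire_vertex_info(uint32_t vertex_index)
{
  WireVertexInfo info;
  info.triangle   = vertex_index / 18u;  /* 18 vertices per triangle */
  uint32_t local  = vertex_index % 18u;
  info.edge       = local / 6u;          /* 6 vertices per edge quad */
  info.start_slot = info.triangle * 3u + info.edge;
  info.end_slot   = info.triangle * 3u + (info.edge + 1u) % 3u;
  return info;
}
```

The shader then loads the edge endpoints via the index buffer and shifts the quad's vertices along the screen-space normal of the edge to give the line its thickness.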
This example demonstrates drawing a wireframe from triangles in two ways. Both use the vertex and index buffers as storage buffers and use `@builtin(vertex_index)` to index the vertex data. One method generates 6 vertices per triangle and uses line-list to draw lines. The other method draws triangles with a fragment shader that uses barycentric coordinates to draw edges, as detailed here.
WebGPU doesn't allow out-of-bounds viewport values: the viewport must be clamped to the screen size, so it can't be defined in a way that makes it extend off the screen. This example shows how to emulate rendering with an out-of-bounds viewport.
Demonstrates using the stencil buffer for masking. It draws the 6 faces of a rotating cube into the stencil buffer, each with a different stencil value. Then it draws different scenes of animated objects where the stencil value matches, creating a cube-shaped window into different worlds.
These samples show in detail how to implement different features of the glTF 2.0 3D transmission file format.
Demonstrates basic GLTF loading and mesh skinning, ported from webgl-skinning. Mesh data, per-vertex attributes, and skin inverseBindMatrices are taken from the JSON parsed from the binary output of the .glb file. Animations are generated programmatically, with animated joint matrices updated and passed to shaders per frame via uniform buffers.
A clustered forward shading renderer using WebGPU compute shaders for light culling, with PBR (Physically Based Rendering) materials loaded from the Sponza glTF scene. Uses a 32x18x48 cluster grid to efficiently assign up to 1024 lights to screen-space tiles, with configurable debug visualizations for depth, depth slices, cluster distances, and lights per cluster. Ported from this JavaScript implementation to native code.
This example shows how to achieve multisample anti-aliasing (MSAA) in WebGPU. The render pipeline is created with a sample count > 1. A new texture with a sample count > 1 is created and set as the color attachment instead of the swapchain. The swapchain is now specified as a resolve_target.
This example shows how to create a basic reflection pipeline.
This example shows how to sample from a depth texture to render shadows from a directional light source.
This example demonstrates primitive picking by computing a primitive ID from vertex_index (since primitive_id builtin requires experimental extensions). Each primitive's unique ID is rendered to a texture, which is then read at the current cursor/touch location to determine which primitive has been selected. That primitive is highlighted in yellow when rendering the next frame.
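The two building blocks are deriving the primitive index from `vertex_index` and round-tripping it through the picking texture's pixel format. A sketch, assuming a triangle-list topology and an RGBA8 target (the example's exact encoding may differ):

```c
#include <stdint.h>

/* Without the primitive_id builtin, a triangle-list draw can derive
 * the primitive index from the vertex index: 3 vertices per triangle. */
static uint32_t primitive_id_from_vertex(uint32_t vertex_index)
{
  return vertex_index / 3u;
}

/* Pack a 32-bit ID into the 4 bytes of an RGBA8 picking texel ... */
static void encode_id_rgba8(uint32_t id, uint8_t out[4])
{
  out[0] = (uint8_t)(id & 0xffu);
  out[1] = (uint8_t)((id >> 8) & 0xffu);
  out[2] = (uint8_t)((id >> 16) & 0xffu);
  out[3] = (uint8_t)((id >> 24) & 0xffu);
}

/* ... and recover it from the pixel read back at the cursor. */
static uint32_t decode_id_rgba8(const uint8_t px[4])
{
  return (uint32_t)px[0] | ((uint32_t)px[1] << 8) |
         ((uint32_t)px[2] << 16) | ((uint32_t)px[3] << 24);
}
```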
This example implements a bloom post-processing effect using a separable two-pass Gaussian blur. A glTF model's emissive (glow) parts are first rendered to an offscreen texture, then blurred vertically and horizontally via fullscreen passes with additive blending. The final scene composites the blurred glow on top of the Phong-lit model and a cubemap skybox. Includes ImGui controls for toggling bloom and adjusting blur intensity.
Implements a high dynamic range rendering pipeline with RGBA16Float offscreen textures, a Cook-Torrance specular BRDF with Schlick Fresnel approximation, and environment cubemap reflections. The scene is rendered to a multi-render-target G-buffer, tone-mapped with an adjustable exposure parameter, then optionally enhanced with a separable 25-tap Gaussian bloom filter using additive blending. Includes ImGui controls for switching between glTF model types, adjusting exposure, and toggling bloom and skybox display.
Uses the instancing feature for rendering (many) instances of the same mesh from a single vertex buffer with variable parameters.
This example demonstrates using Timestamp Queries to measure the duration of a render pass.
This example demonstrates using Occlusion Queries.
This example shows how to use render bundles. It renders a large number of meshes individually as a proxy for a more complex scene in order to demonstrate the reduction in time spent to issue render commands. (Typically a scene like this would make use of instancing to reduce draw overhead.)
Physically based rendering is a lighting technique that achieves a more realistic and dynamic look by applying approximations of bidirectional reflectance distribution functions based on measured real-world material parameters and environment lighting.
Adds image based lighting from an HDR environment cubemap to the PBR equation, using the surrounding environment as the light source. This adds an even more realistic look to the scene as the light contribution used by the materials is now controlled by the environment. Also shows how to generate the BRDF 2D-LUT and irradiance and filtered cube maps from the environment map.
Physically Based Rendering with Image Based Lighting using OBJ model loader.
A physically based glTF 2.0 model viewer with Image Based Lighting. Loads a glTF binary (.glb) model and an HDR environment map, generates IBL textures (irradiance, prefiltered specular, BRDF LUT) on the GPU, and renders the model with a metallic-roughness PBR workflow. Features orbit camera controls (tumble, pan, zoom), normal mapping, emissive, occlusion, alpha mask/blend modes with transparent mesh depth-sorting, and environment skybox rendering with PBR Neutral tone mapping.
These examples use a deferred shading setup.
This example shows how to do deferred rendering with WebGPU. In the first pass, geometry information is rendered to multiple targets in the G-buffers: in this sample, two G-buffers for normals and albedo, along with a depth texture. Lighting is then done in a second pass, reading per-fragment data from the G-buffers, so its cost is independent of scene complexity. World-space positions are reconstructed from the depth texture and the camera matrix. Light positions are updated in a compute shader, where further operations like tile/cluster culling could happen. The debug view shows the depth buffer on the left (flipped and scaled a bit to make it more visible), the normal G-buffer in the middle, and the albedo G-buffer on the right side of the screen.
This example renders the scene into three off-screen render targets (position, normal, albedo+specular) which are then sampled in a composition pass that evaluates 6 animated point lights per pixel using Blinn-Phong shading. Debug visualization modes allow inspecting individual G-Buffer channels.
This example extends deferred shading with shadow mapping from three animated spot lights. Shadow maps are stored in a layered depth texture array, with each layer rendered in a separate depth-only pass. The composition pass applies PCF-filtered shadow sampling combined with spot light attenuation. Debug visualization modes allow inspecting shadow maps and individual G-Buffer channels.
Adds multi sampling to a deferred renderer using manual resolve in the fragment shader. The G-Buffer pass renders to multisampled color attachments (position, normal, albedo) that are automatically resolved. The composition pass supports two modes: MSAA mode reads multisampled textures directly with per-sample shading, while the non-MSAA mode uses the resolved textures. Debug visualization modes allow inspecting individual G-Buffer channels and specular highlights.
A WebGPU port of the Animometer MotionMark benchmark.
A GPU compute particle simulation that mimics the flocking behavior of birds. A compute shader updates two ping-pong buffers which store particle data. The data is used to draw instanced particles.
Mass-spring-damper cloth simulation using compute shaders. A 60×60 grid of particles connected by structural and shear springs is simulated with Verlet integration. The cloth drapes over a sphere and can be affected by wind. Normals are recalculated on the GPU each frame.
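The integrator at the heart of the simulation can be sketched in one dimension; spring, shear, and wind force accumulation is omitted, and the names are ours:

```c
/* Position Verlet: x_{n+1} = 2*x_n - x_{n-1} + a*dt^2. Velocity is
 * implicit in the (x, x_prev) pair, which keeps the update cheap and
 * stable for stiff spring networks. */
typedef struct { double x, x_prev; } Particle1D;

static void verlet_step(Particle1D* p, double accel, double dt)
{
  double next = 2.0 * p->x - p->x_prev + accel * dt * dt;
  p->x_prev = p->x;
  p->x      = next;
}
```

Starting from rest, `x_prev` is seeded with x(−dt) = ½·a·dt² so the first step carries the correct implicit velocity.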
This example shows how to blur an image using a compute shader in WebGPU.
Uses a compute shader to apply different convolution kernels (and effects) on an input image in realtime.
Attraction based 2D GPU particle system using compute shaders. Particle data is stored in a shader storage buffer and updated on the GPU using attraction/repulsion forces. Particles are rendered as instanced billboard quads with additive blending and animated gradient coloring.
Particle system using compute shaders. Particle data is stored in a shader storage buffer, particle movement is implemented using easing functions.
This example demonstrates rendering of particles simulated with compute shaders.
A simple N-body simulation based particle system implemented using WebGPU.
N-body gravity simulation using compute shaders. Uses two compute passes (force calculation with shared memory tiling and integration) to simulate 24,576 particles attracted to six fixed points. Particles are rendered as instanced camera-facing billboard quads with a gradient color ramp.
Simple GPU ray tracer with shadows and reflections using a compute shader. No scene geometry is rendered in the graphics pass.
A classic Cornell box, using a lightmap generated using software ray-tracing.
WebGPU demo featuring realtime path tracing via WebGPU compute shaders.
This example uses multichannel signed distance fields (MSDF) to render text. MSDF fonts are more complex to implement than using Canvas 2D to generate text, but the resulting text looks smoother while using less memory than the Canvas 2D approach, especially at high zoom levels. They can be used to render larger amounts of text efficiently.
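Decoding an MSDF texel reduces to taking the median of the three channels and thresholding it at 0.5 (in a real shader the value is also scaled by the font's distance range, omitted here):

```c
/* The signed distance stored in an MSDF texel is the median of its
 * three channels; the glyph edge sits where that median crosses 0.5. */
static float median3(float r, float g, float b)
{
  float mx = r > g ? r : g;
  float mn = r < g ? r : g;
  return b > mx ? mx : (b < mn ? mn : b);
}

static int msdf_inside(float r, float g, float b)
{
  return median3(r, g, b) >= 0.5f;
}
```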
The font texture is generated using Don McCurdy's MSDF font generation tool, which is built on Viktor Chlumský's msdfgen library.
Generates and renders a complex user interface with multiple windows, controls and user interaction on top of a 3D scene. The UI is generated using Dear ImGui and updated each frame.
This example demonstrates multiple different methods that employ fragment shaders to achieve additional perceptual depth on the surface of a cube mesh. Demonstrated methods include normal mapping, parallax mapping, and steep parallax mapping.
This example shows how to use a post-processing effect to blend between two scenes. This example has been ported from this JavaScript implementation to native code.
WebGPU interpretation of glxgears. Procedurally generates and animates multiple gears.
This example shows how to upload video frames to WebGPU.
giraffe by Taryn Elliott. lake by Fabio Casati, CC BY 3.0
Minimal "Shadertoy launcher" using WebGPU, demonstrating how to load an example Shadertoy shader 'Seascape'.
WebGPU implementation of the Gerstner Waves algorithm. This example has been ported from this JavaScript implementation to native code.
This example shows how to render an infinite landscape for the camera to meander around in. The terrain consists of a tiled planar mesh that is displaced with a heightmap. More technical details can be found on this page and this one.
A WebGPU example demonstrating pseudorandom number generation on the GPU. A 32-bit PCG hash is used which is fast enough to be useful for real-time, while also being high-quality enough for almost any graphics use-case.
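The hash in question is commonly written as follows (one LCG step plus a permuting output function); the float-mapping helper is our addition:

```c
#include <stdint.h>

/* 32-bit PCG hash: a single LCG step followed by a permuted output
 * function. Cheap enough for per-pixel use in a shader; shown here
 * on the CPU. */
static uint32_t pcg_hash(uint32_t input)
{
  uint32_t state = input * 747796405u + 2891336453u;
  uint32_t word  = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
  return (word >> 22u) ^ word;
}

/* Map the hash to an approximately uniform float in [0, 1]. */
static float pcg_hash_to_float(uint32_t input)
{
  return (float)((double)pcg_hash(input) / 4294967296.0); /* 2^32 */
}
```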
This example shows how to implement Conway's Game of Life. First, a compute shader calculates how cells grow or die. Then a render pipeline draws the cells using instanced meshes.
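The update rule the compute shader evaluates per cell is a pure function of the current state and the live-neighbour count:

```c
/* Conway's rules: a live cell survives with 2 or 3 live neighbours,
 * a dead cell becomes live with exactly 3; everything else dies or
 * stays dead. */
static int life_next_state(int alive, int neighbours)
{
  if (alive) {
    return neighbours == 2 || neighbours == 3;
  }
  return neighbours == 3;
}
```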
A binary Conway's Game of Life. This example has been ported from this JavaScript implementation to native code.
A Conway's Game of Life with paletted blurring over time. This example has been ported from this JavaScript implementation to native code.
WebGPU demo featuring marching cubes and bloom post-processing via compute shaders, physically based shading, deferred rendering, gamma correction and shadow mapping. This example has been ported from this TypeScript implementation to native code. More implementation details can be found in this blog post.
Real-time metaball rendering using marching cubes on the CPU, with tri-planar texture mapping on the GPU. Animated blobs that merge and split are rendered with selectable textures (lava, slime, water) inside a point-lit dungeon environment loaded from glTF, with instanced light sprites and an interactive orbit camera. This example has been ported from this JavaScript implementation to native code.
WebGPU demo featuring an implementation of Jos Stam's "Real-Time Fluid Dynamics for Games" paper. This example has been ported from this JavaScript implementation to native code.
This example shows how to map a GPU buffer and use the function wgpuBufferGetMappedRange. This example is based on the vertex_buffer test case.
This example shows how to efficiently draw several procedurally generated meshes. The par_shapes library is used to generate parametric surfaces and other simple shapes.
This example shows how to render tile maps using WebGPU. The map is rendered using two textures. One is the tileset, the other is a texture representing the map itself. Each pixel encodes the x/y coords of the tile from the tileset to draw. The example code has been ported from this JavaScript implementation to native code. More implementation details can be found in this blog post.
This example demonstrates how to render a torus knot mesh with Blinn-Phong lighting model. A small sphere represents the orbiting light source position. The scene includes diffuse texturing, ambient lighting, and specular highlights using the Blinn-Phong BRDF with point light attenuation and gamma correction. Lighting parameters such as shininess, light flux, and ambient color can be adjusted interactively via the GUI.
A simple WebGPU implementation of the "Pristine Grid" technique described in this wonderful little blog post. The example code has been ported from this JavaScript implementation to native code.
This example shows a voxel-based terrain rendering technique using WebGPU compute shaders. The terrain is rendered using a height map and color map, similar to the classic Comanche game. The example code has been ported from this JavaScript implementation to native code.
This example demonstrates shadow mapping using a depth texture array. Multiple lights cast shadows on a scene with a plane and rotating cubes. The example code has been ported from this Rust implementation to native C99 code.
A real-time interactive water simulation using WebGPU. Simulates realistic water physics, reflections, refractions, and caustics in a tiled pool scene. Features include interactive ripples, a draggable floating sphere, dynamic lighting, and camera controls. Based on Evan Wallace's WebGL Water demo.
Aquarium is a complete port of the classic WebGL Aquarium to modern WebGPU, showcasing advanced rendering techniques and efficient GPU programming.
This example shows how to render volumes with WebGPU using a 3D texture. It demonstrates simple direct volume rendering for photometric content through ray marching in a fragment shader, where a full-screen triangle determines the color from ray start and step size values as set in the vertex shader. This implementation employs data from the BrainWeb Simulated Brain Database, with decompression streams, to save disk space and network traffic.
Just like all software, WebGPU Native Examples and Demos are built on the shoulders of incredible people! Here's a list of the libraries used.
- basisu: Single File Basis Universal Transcoder.
- cglm: Highly Optimized Graphics Math (glm) for C.
- cgltf: Single-file glTF 2.0 loader and writer written in C99.
- cimgui: C API for Dear ImGui.
- cJSON: Ultralightweight JSON parser in ANSI C.
- ktx: KTX (Khronos Texture) Library and Tools.
- rply: ANSI C Library for PLY file format input and output.
- sc: Portable, stand-alone C libraries and data structures (C99).
- stb: stb single-file public domain libraries for C/C++.
A huge thanks to the authors of the following repositories who demonstrated the use of the WebGPU API and how to create a minimal example framework:
- WebGPU - Chrome Platform Status
- Changelog for WebGPU in Chromium / Dawn 94
- Changelog for WebGPU in Chromium / Dawn 96
- Changelog for WebGPU in Chromium / Dawn 98
Open-source under Apache 2.0 license.