WebGPU first impressions

Shader-based gamma correction. Not that Coming Attractions (2010) needs any adjusting, but it is maybe a fitting clip to sample from, since just as with 35mm film and WebGL, a fair bit of setup is necessary for WebGPU playback.

On a planet filled with mad men, lunatics, and crazy rock 'n roll musicians, Safari is finally WebGL2 capable by default across devices. And while that may be rewarding news enough for web graphics programming, a separate GPU-level replacement API is brewing: WebGPU. Unlike with WebGL, all major vendors have been involved in the design from the beginning. A Chrome origin-trial feature today, WebGPU saw both Babylon.js and Three.js quick to add support, and Deno has shipped it since version 1.8.

Although theoretically most browsers already provide experimental implementations, I was unable to get the official examples running on Safari Technology Preview. It looks like the relevant flag has been renamed, or temporarily removed, or maybe my nonsense chipset is to blame? Firefox Nightly generally works as expected, but image and video to texture conversion proved problematic. That leaves Chrome Canary for now, which is still a little flaky, and having to restart often seems part of the experience.

Chrome going blank

Living on the edge. The WebGPU API is so new that there are no MDN pages for it yet, and Chrome will at times even go blank!

Because the API is unstable, I can check whether gpu is defined on the navigator interface to get started, but in practice further checks apply in order to reach a safe place:

// Not enough to guarantee smooth execution
if (navigator.gpu === undefined) {
  throw new Error("demo: WebGPU unsupported")
}

// Grab a drawing context renamed from `gpupresent`
const context = document.createElement("canvas").getContext("webgpu")

if (!(context instanceof GPUCanvasContext)) {
  throw new Error("demo: failed to obtain WebGPU context")
}

// Assuming top-level `await` on Chrome
const adapter = await navigator.gpu.requestAdapter()

if (!adapter) {
  throw new Error("demo: failed to obtain adapter")
}

And whereas with WebGL most methods are attached to the drawing context, WebGPU needs an instance of GPUDevice to assemble the program with:

const device = await adapter.requestDevice()

if (!device) {
  throw new Error("demo: failed to obtain device")
}

// Ready to configure the drawing context, but most methods
// belong to the device instance anyway
const format = context.getPreferredFormat(adapter)

context.configure({ device, format })
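Given the drawing context name only recently changed from gpupresent, a belt-and-braces helper could try both spellings. This is a sketch of my own (the helper name and the fallback are assumptions, for browsers lagging behind the rename):

```javascript
// Try the current context name first, then the legacy one
const getGPUContext = (canvas) =>
  canvas.getContext("webgpu") || canvas.getContext("gpupresent")
```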

WGSL is the GLSL-equivalent shading language. The syntax is Rust-like, but complicated, and no surprise people are humorously complaining about it. Anyway, going through the required steps is relatively straightforward, keeping in mind WebGPU is quite descriptive.

// Basic WGSL gamma adjuster
const shader = `
  struct VertexIn {
    [[location(0)]] position: vec3<f32>;
    [[location(1)]] uv: vec2<f32>;
  };

  // Common for both vertex and fragment stages and later
  // shader module entry points
  struct VertexOut {
    [[builtin(position)]] position: vec4<f32>;
    [[location(0)]] fragUV: vec2<f32>;
  };

  [[stage(vertex)]]
  fn vmain(input: VertexIn) -> VertexOut {
    return VertexOut(vec4<f32>(input.position, 1.0), input.uv);
  }

  // binding + group = GPUBindGroup!
  [[binding(0), group(0)]] var the_sampler: sampler;
  [[binding(1), group(0)]] var the_texture: texture_2d<f32>;

  [[stage(fragment)]]
  fn fmain(input: VertexOut) -> [[location(0)]] vec4<f32> {
    var A = vec4<f32>(1.0);
    var g = vec4<f32>(5.0 / 4.0);
    // So much context for a tiny slice of math
    var c = vec4<f32>(textureSample(the_texture, the_sampler, input.fragUV));

    // Leave things be past the vertical split
    if (input.position.x < 240.0) {
      return A * pow(c, g);
    }

    return c;
  }
`
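The tiny slice of math itself is easy to sanity check outside the shader. A per-channel sketch of the same curve (the adjust name is mine; the constants match the fragment stage above):

```javascript
// Per-channel version of the fragment stage math: out = A * c^g
const adjust = (c, A = 1.0, g = 5.0 / 4.0) => A * Math.pow(c, g)
```

With A = 1 and g = 5/4, a mid-gray value of 0.5 comes out around 0.42, i.e. slightly darker, while black and white stay fixed.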

OK, assuming a device has been successfully obtained and the WGSL compiles error free, putting the filter together involves: (a) vertex and UV data, (b) an image, a texture, and a texture sampler, (c) a rendering pipeline, (d) a command encoder, (e) a bind group, or collection of resources.

Thankfully, the mostly configuration-style boilerplate feels less cumbersome than when scripting for WebGL. First, load the pixel data out of an image blob:

// The @toji recommended image loading technique
const response = await fetch("image.png")

const blob = await response.blob()
const source = await createImageBitmap(blob)

Next, create a buffer holding vertex and UV data and declare a descriptor for it:

const vertices = new Float32Array([
  -1.0, -1.0, 0.0, 1.0,
   1.0, -1.0, 1.0, 1.0,
   1.0,  1.0, 1.0, 0.0,
  -1.0, -1.0, 0.0, 1.0,
   1.0,  1.0, 1.0, 0.0,
  -1.0,  1.0, 0.0, 0.0,
])

const vertexBuffer = device.createBuffer({
  mappedAtCreation: true,
  size: vertices.byteLength,
  // Aw, a bitmask made of CONSTANTS
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
})

// What? No assignment? Bizarre, whatever works! 🤷🏻
new Float32Array(vertexBuffer.getMappedRange()).set(vertices)

// Hand the buffer back to the GPU before use
vertexBuffer.unmap()

// A lot of describing going on...
const vertexBufferDescriptor = [
  {
    attributes: [
      {
        format: "float32x2",
        offset: 0,
        shaderLocation: 0,
      },
      {
        format: "float32x2",
        offset: 8,
        shaderLocation: 1,
      },
    ],
    arrayStride: 16,
    stepMode: "vertex",
  },
]

Then, add the texture and corresponding sampler:

const { height, width } = source
const textureSize = { depth: 1, height, width }
const texture = device.createTexture({
  dimension: "2d",
  format: "rgba8unorm",
  size: textureSize,
  // Fails to run without all of these!
  usage: GPUTextureUsage.TEXTURE_BINDING
    | GPUTextureUsage.COPY_DST
    | GPUTextureUsage.SAMPLED,
})

// Missing on FF
device.queue.copyExternalImageToTexture({ source }, { texture, mipLevel: 0 }, textureSize)
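With copyExternalImageToTexture missing on Firefox, one workaround is to pull the pixels through a 2D canvas and upload them with writeTexture instead. A hypothetical helper sketching that route (the function name is mine, and it assumes an rgba8unorm texture, so four bytes per pixel):

```javascript
// Fallback upload: read the bitmap back as raw RGBA bytes,
// then hand them to the queue directly
function writeBitmapToTexture(device, texture, bitmap) {
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height)
  const context = canvas.getContext("2d")

  context.drawImage(bitmap, 0, 0)

  const { data } = context.getImageData(0, 0, bitmap.width, bitmap.height)

  device.queue.writeTexture(
    { texture },
    data,
    { bytesPerRow: bitmap.width * 4, rowsPerImage: bitmap.height },
    { depth: 1, height: bitmap.height, width: bitmap.width }
  )
}
```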

const sampler = device.createSampler()
const pipelineDescriptor = {
  // WGSL code is shared between vertex and fragment shader modules
  vertex: {
    module: device.createShaderModule({ code: shader }),
    entryPoint: "vmain",
    buffers: vertexBufferDescriptor,
  },
  fragment: {
    module: device.createShaderModule({ code: shader }),
    entryPoint: "fmain",
    targets: [{ format }],
  },
  primitive: {
    topology: "triangle-list",
  },
}

And finally, encode the rendering pipeline and pass it on to the command encoder for processing:

const renderPipeline = device.createRenderPipeline(pipelineDescriptor)
const renderPassDescriptor = {
  colorAttachments: [
    {
      loadValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: "store",
      view: context.getCurrentTexture().createView(),
    },
  ],
}

const commandEncoder = device.createCommandEncoder()
const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor)

passEncoder.setVertexBuffer(0, vertexBuffer)

// The shader specified texture resources
const textureBindGroup = device.createBindGroup({
  layout: renderPipeline.getBindGroupLayout(0),
  entries: [
    {
      binding: 0,
      resource: sampler,
    },
    {
      binding: 1,
      resource: texture.createView(),
    },
  ],
})

passEncoder.setBindGroup(0, textureBindGroup)

// All set, phew! Live sketch

Overall, I like how expressions can be shared between fragment and vertex stages, but with WGSL being nearly as cryptic as GLSL, I only wish the documentation were easier to follow. It should be interesting to see what cool things people come up with once WebGPU is widely available in compute terms if nothing else. 🤓