Exploring slit-scan type visuals on canvas

Fooling about with panorama mode on the iPhone, I came across the stylish, glitch-worthy effects of twisting the camera slightly partway through capturing a relatively short snap. Reading up on strip photography and slit scanning gave me some insight into why and how that happens on the technical side.

assets/david-johansen.jpg

Twisted panorama: flaring closeup of David Johansen from The New York Dolls' 1973 Midnight Special appearance, purely camera, no post

Originally developed for producing panoramic pictures and for capturing the finish line at the track (the photo finish), though of course put to creative use far and wide since, the technique boils down to building an image incrementally out of fragments, or strips, sampled at successive intervals.

The USS Enterprise going into warp drive, the original Doctor Who opening titles, and Douglas Trumbull’s stargate sequence for 2001: A Space Odyssey are perhaps familiar examples of this kind of distortion applied on film.

wobble.js: on-the-fly, in-browser, slit-scan type video processing

The mechanics involved, although difficult to work out using analogue means as might have been the case back in the day, are fairly straightforward to emulate programmatically. In a nutshell,

  1. Store as many incoming frames as there are subdivisions in the would-be target,
  2. Out of each frame in store, extract the relevant part at the coordinates matching its index,
  3. Stitch those back together into a single figure,
  4. Repeat in sync with video playback.
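Sketching those four steps on plain strings first makes the mechanics easy to follow. The `scan()` helper below is a made-up toy, not part of the module:

```javascript
// Resolution, i.e. strips per output frame
const depth = 4

// Rolling buffer of recent frames
const store = []

// Frame i in store contributes the i-th slice of the output
function scan(frame) {
  store.push(frame)

  if (store.length > depth) {
    store.shift()
  }

  const size = Math.floor(frame.length / store.length)

  return store.map((f, i) => f.slice(i * size, (i + 1) * size)).join('')
}

const out = ['aaaa', 'bbbb', 'cccc', 'dddd'].map((f) => scan(f))
// → ['aaaa', 'aabb', 'abc', 'abcd']
```

Note how the oldest frame in store supplies the leftmost strip and the newest the rightmost, and how `Math.floor()` quietly drops any remainder when the frame length does not divide evenly, as in the third output.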

Given the handy set() and subarray() methods on the TypedArray prototype, processing video this way in JavaScript is rather painless. For example,

// Resolution or divide source into how many strips?
const depth = 100

// For accumulating consecutive video frames
const store = []

// Accepts and returns an `ImageData` like object, of which
// `data` of type `Uint8ClampedArray` is the only required property
function filter(input = { data: new Uint8ClampedArray(0) }) {
  // Copy/save input data, note how `push()` returns the new array length
  const clone = new Uint8ClampedArray(input.data)
  const storeSize = store.push(clone)

  // Keep the store from outgrowing the resolution
  if (storeSize > depth) {
    store.shift()
  }

  // Calculate each strip's span in buffer elements (four per pixel, RGBA)
  const stripSize = Math.floor(clone.length / store.length)

  store.forEach((frame, i) => {
    // Find strip onset and pull out data up to next index
    const stripFrom = i * stripSize
    const strip = frame.subarray(stripFrom, stripFrom + stripSize)

    // Edit in place
    input.data.set(strip, stripFrom)
  })

  return input
}
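As a quick sanity check of the two methods doing the heavy lifting above: `subarray()` hands back a view onto the same underlying buffer without copying, while `set()` copies values across.

```javascript
const frame = new Uint8ClampedArray([10, 20, 30, 40, 50, 60])

// A view, no copying: shares frame's underlying buffer
const strip = frame.subarray(2, 4)

// Copies the view's current values into place at the given offset
const output = new Uint8ClampedArray(6)
output.set(strip, 2)
// output → Uint8ClampedArray [0, 0, 30, 40, 0, 0]

// Mutating the source shows through the view...
frame[2] = 99
// strip[0] → 99

// ...but not in the copy made earlier, output[2] is still 30
```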

In context,

// Create a rendering context for hosting the end result
const canvas = document.createElement('canvas')
const target = canvas.getContext('2d')

// And another one for hosting raw input
const buffer = canvas.cloneNode().getContext('2d')

// Set up video source
const master = document.createElement('video')

// Using a webcam feed would work just as well
master.setAttribute('src', 'path/to/video.mp4')

// To be called repeatedly
function update() {
  // Copy incoming video frames off screen
  buffer.drawImage(master, 0, 0)

  const source = buffer.getImageData(0, 0, canvas.width, canvas.height)
  const result = filter(source)

  // Display results
  target.putImageData(result, 0, 0)

  // Repeat
  window.requestAnimationFrame(update)
}
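For the webcam option, a minimal sketch assuming a secure context where `getUserMedia()` is available. `startWebcam()` is a hypothetical helper, not part of the module:

```javascript
// Swap the file source for a live camera stream
function startWebcam(video) {
  return navigator.mediaDevices
    .getUserMedia({ video: true, audio: false })
    .then((stream) => {
      // Point the element at the camera instead of a file
      video.srcObject = stream

      return video.play()
    })
}
```

Something like `startWebcam(master).then(() => window.requestAnimationFrame(update))` could then stand in for playing the file directly.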

And to set things off,

document.addEventListener('click', () => {
  // In modern browsers, `play()` returns a promise
  const playing = master.play()

  if (playing !== undefined) {
    playing.then(() => {
      // Ready for processing
      window.requestAnimationFrame(update)
    }).catch(console.log)
  }
})

// Attach display target onto page
document.body.appendChild(canvas)

Module and demo code in full at thewhodidthis/wobble →
