This audio-controlled Hockney Delayer sketch works much like the Directional Delayer I posted earlier: a buffer holds a number of video frames, and the screen is made out of a grid of cells, each selecting its bit from a frame somewhere in that buffer. How far back? That is mapped to the audio amplitude. (I also added a bit of ‘jitter’, moving and slightly enlarging the cells according to the same audio amplitude.)
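
For the curious, here is a minimal Processing sketch of the idea, not the actual code from the repository: it assumes the Video and Sound libraries, hypothetical file names (clip.mov, audio.wav), made-up values for the buffer length, cell size and jitter amounts, and a guessed amplitude-to-delay mapping (each cell picks a random delay whose range grows with the audio level).

import processing.video.*;
import processing.sound.*;

final int BUFFER_SIZE = 60;   // number of past frames kept (assumption)
final int CELL_SIZE   = 40;   // grid cell size in pixels (assumption)

Movie mov;
SoundFile track;
Amplitude amp;

PImage[] buffer = new PImage[BUFFER_SIZE];
int head = 0;     // index where the next frame will be written
int stored = 0;   // how many frames have been stored so far

void setup() {
  size(640, 480);
  mov = new Movie(this, "clip.mov");        // hypothetical file names
  mov.loop();
  track = new SoundFile(this, "audio.wav");
  track.loop();
  amp = new Amplitude(this);
  amp.input(track);                         // follow the audio track's amplitude
}

void draw() {
  if (mov.available()) {
    mov.read();
    PImage f = mov.copy();                  // push a copy into the ring buffer
    f.resize(width, height);
    buffer[head] = f;
    head = (head + 1) % BUFFER_SIZE;
    stored = min(stored + 1, BUFFER_SIZE);
  }
  if (stored == 0) return;

  float level = amp.analyze();              // current audio amplitude, 0..1
  background(0);

  for (int y = 0; y < height; y += CELL_SIZE) {
    for (int x = 0; x < width; x += CELL_SIZE) {
      // how far back this cell looks: a random delay whose range
      // grows with the audio level (a guess at the exact mapping)
      int maxBack = int(map(level, 0, 1, 0, stored - 1));
      int back = int(random(maxBack + 1));
      int idx = (head - 1 - back + 2 * BUFFER_SIZE) % BUFFER_SIZE;
      PImage frame = buffer[idx];
      if (frame == null) continue;

      // 'jitter': shift and slightly enlarge the cell with the same amplitude
      float jitter = level * CELL_SIZE * 0.5;
      float dx = x + random(-jitter, jitter);
      float dy = y + random(-jitter, jitter);
      int grow = int(CELL_SIZE * (1 + 0.3 * level));

      copy(frame, x, y, CELL_SIZE, CELL_SIZE, int(dx), int(dy), grow, grow);
    }
  }
}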

My intention here was to translate something like David Hockney’s collages (see the earlier post) to video. I’m not sure it works. Here’s another example, splicing together different video files. Both share the same audio track, by the way, again hastily assembled from TB Arthur’s free sound library. You can check the code in my Processing 3.x Github repository.