I had a few Processing sketches lying around that I’d made and never took the trouble to document or record in any way. So here we go: this is a short video with minimal editing that showcases what I called an Audio-controlled Directional Delayer. You can check the code in my Processing 3.x GitHub repository.
What it does is render each frame as a set of rows or columns copied from frames in a 150-frame buffer (or more, if you want). How far back in that buffer each row or column is retrieved from is mapped to the audio input level. A high amplitude will sometimes also trigger a mode change (horizontal/vertical).
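The core of the effect is just a ring buffer of frames plus a mapping from audio level to a delay index. Here is a minimal sketch of that logic in plain Java (the actual sketch lives in the repository linked above; names like `delayFor` and `render`, the per-row level array, and the mode-flip threshold are my illustrative assumptions, not the original code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DirectionalDelayer {
    static final int MAX_FRAMES = 150;                 // history length, as in the post

    final Deque<int[][]> history = new ArrayDeque<>(); // newest frame first
    boolean horizontal = true;                         // current copy direction

    // Store the newest frame, dropping the oldest once the buffer is full.
    void push(int[][] frame) {
        history.addFirst(frame);
        if (history.size() > MAX_FRAMES) history.removeLast();
    }

    // Map an audio level in [0, 1] to a frame index: 0 = newest, size-1 = oldest.
    int delayFor(float level) {
        return Math.round(level * (history.size() - 1));
    }

    // A loud enough input flips between horizontal and vertical mode
    // (the threshold here is a guess).
    void maybeFlipMode(float level) {
        if (level > 0.9f) horizontal = !horizontal;
    }

    // Compose the output frame: each row is copied from a delayed frame,
    // with the delay driven by that row's audio level. Vertical mode would
    // do the same per column instead; it is omitted here for brevity.
    int[][] render(float[] rowLevels) {
        int[][] newest = history.peekFirst();
        int h = newest.length, w = newest[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            int[][] src = frameAt(delayFor(rowLevels[y % rowLevels.length]));
            System.arraycopy(src[y], 0, out[y], 0, w);
        }
        return out;
    }

    // Walk the deque to the frame `delay` steps back in time.
    int[][] frameAt(int delay) {
        int i = 0;
        for (int[][] f : history) if (i++ == delay) return f;
        throw new IllegalArgumentException("delay beyond stored history");
    }
}
```

In the real sketch the frames would be `PImage` pixel arrays grabbed from the camera each `draw()` call, but the buffer and index arithmetic are the same idea.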