Experiment

Super Volume – A Tactile Art

This is the second iteration of the Super Volume project. You can visit the project homepage here, where you will find links to all the outputs from the project so far. This page will focus on the video essay A Tactile Art.

In July 2023, whilst attending a videographic workshop at Bowdoin College in Maine, I ran a small participant experiment exploring the embodied process of watching, listening, and responding to depictions of volume manipulation in films. This experiment invited participants to first view a short clip from the film Berberian Sound Studio (Peter Strickland, 2012). The clip in question features Toby Jones playing Gilderoy, an English sound engineer imported to Italy to complete the sound editing and mix on a horror film. In the clip below, we see Gilderoy begin playback of an elaborate tape loop and subsequently push the volume up on three faders on his mixing desk. For the experiment video, I replaced the film's original soundtrack with more easily identifiable sounds, in the hope that these would be less distracting for the participants.

Using a small MIDI-capable controller featuring three volume faders, the participants were then asked to re-watch the clip and attempt to mirror Gilderoy's manipulation of the faders. Finally, the participants were asked to choose three sound effects from a pre-selected corpus, which they subsequently mixed in real time in any way they wished. Their performances with the MIDI controller were filmed, and the experiment was followed up with a short, semi-structured interview.

The video essay presented here is very much an in-the-moment response, completed within 48 hours of finishing the participant experiment. The work is influenced by Michel Serres' research concerning the senses (2008) and by a paper by Eliot Bates in which he argues "that in order to understand the production of affect, or perhaps the affect of production, we need to pay attention to bodies, to the senses, to the practices of audio engineering and musicianship" (2009). In this video essay, then, I chose to pay attention to these bodies, and very specifically to the hands. Watching the videos of my participants listening and responding to their own audio mixes, I was fascinated by how expressive their hands were and how affective they were in these moments. The elegant, performative quality of the hand movements drew me towards an initial poetic engagement with the footage over any attempt to intellectualise the feedback the participants provided (though this will feature in further outputs from the project).

The decision to use Penderecki's De natura sonoris no. 2 was almost entirely arbitrary, as I only stumbled across it when searching for the sound of an orchestra tuning up (which I initially thought might make a good background soundtrack for the work). As you can see in the final work, the music lent a guiding hand to the editing of the video, suggesting synchronisations with the performances and cuts in the edit, as well as introducing a sense of uncertainty about the relationship between the moving hands and the music: are they somehow acting upon each other?

Where Bates's research centres on the expert audio engineer, I am observing here the novice, who may be mixing sound (at least in this fashion) for the first time. My goal with this experiment, though, was not to learn anything new about audio engineering per se, but rather to see what I might learn about watching and listening to films. The experiment thus attempts to create an embodied intervention between the viewer and the film; as Laura Marks suggests, to find "a model of a viewer [listener] who participates in the production of the cinematic experience" (2000). My aim was to place my participants somewhere between the on-screen action of volume manipulation and the reciprocal manipulation which this might precipitate during the post-production sound mixing process. And in this in-between space they are both engineers and performers, novices and experts, viewers and listeners and, crucially, producers of something brand new.

References
Bates, E., 2009. Ron's right arm: Tactility, visualization, and the synesthesia of audio engineering. Journal on the Art of Record Production, (4).
Marks, L.U., 2000. The skin of the film: Intercultural cinema, embodiment, and the senses. Duke University Press.
Serres, M., 2008. The five senses: A philosophy of mingled bodies. Bloomsbury Publishing.

B-Film Creative Practice Colloquium

I was lucky enough to be able to attend this brilliant event co-organised by Jemma Saunders, Nina Jones and Ella Wright at the University of Birmingham. Aside from some really interesting (and provocative) discussions about the precarity of being a practice-based PGR, we were also tasked with an audio-visual exercise during the afternoon session. The prompt was a 60-second clip from Underworld's music video for Rez (chosen by Dr Richard Langley), and how we responded to it was entirely up to us. Below is my effort, followed by a few thoughts on the process (warning: the first video contains flashing images).

We had a few hours to tackle this but I’ll happily admit to wandering off almost immediately to have a look at the department’s Moviola (my first time seeing one in person).

When I did settle down to ‘work’ I ended up reflecting on the discussions we’d had that morning and I wanted my video to try and comment on those somehow. Questions about rigour, about written statements (this being one!), about the origins and precursors of what we (some of us) call videographic criticism, and also about the limits of what we might consider scholarship. So this is, in the end, a product of that thinking.

I've left the music track untouched and replaced the visuals with edits from Michael Snow's 1971 film La Région centrale. The footage is sped up about 3000%, with the colour inverted to better match the original visuals and feed into a '90s VJ aesthetic. But I also wanted to try and replicate a little of what Snow did with the soundtrack of his film, which is composed entirely of the computer tones being fed to the motion-control camera he used to shoot the footage. In my video I try to imagine how Underworld's 'computer tones' might have directed Snow's camera (and what chaos might have ensued!).
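For anyone curious to try a similar treatment outside an NLE, the same two picture transformations can be sketched with ffmpeg called from Python. This is only an approximation of my edit, not the actual project settings, and the filenames are hypothetical:

```python
import subprocess

# setpts=PTS/30 speeds the picture up 30x (roughly 3000%); 'negate'
# inverts the colour channels; '-an' drops the original audio so the
# Underworld track can sit underneath. Filenames are hypothetical.
subprocess.run([
    "ffmpeg", "-i", "la_region_centrale.mp4",
    "-vf", "setpts=PTS/30,negate",
    "-an", "region_fast_inverted.mp4",
], check=True)
```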

On the train home from the colloquium I realised there was another angle on this: taking the Underworld track, slowing it down, and putting it under Snow's film (see below). The music track is at 1% of its original speed, and I was enjoying it so much I ended up letting it run for 3 minutes. Thanks to all at B-Film for a great day (follow them on Twitter).

Super Volume – 56K Edition

A short video experiment inspired by my time at the working conference "Video Essay. Techniques and Methods", organised by the Video Essay Research Group at Lucerne University. This video strings together all 36 clips that currently exist in the Super Volume database into one drastically compressed video! You can read more about the Super Volume project here.

I was lucky enough to spend some time in Switzerland at a working conference just before Christmas 2022. The conference was something of a revelation to me in that we were encouraged to use the time to work on our projects, collaborate, walk, have a snooze. It really was that good. I took full advantage of the time to clarify the next part of the Super Volume project (tentatively entitled ‘Turn it Up?’) which will take the form of a participant experiment. But I was also inspired by the presentations of some of the other participants at the conference, in particular Guli Silberstein’s video Excerpt, and the embodied research being done by Alice Lenay.

The '56K' in the title of this video refers to the fact that (purely by coincidence) the data rate is now so drastically reduced (using ffmpeg) that it would have been possible to stream it using a late-'90s 56K modem. What I was interested in finding out was how far I could push the data compression before it became impossible to recognise the human movement within the clips. As it turns out, I reached the lowest limit that the software would allow (approx. 48 kbps for the video and between 10 and 20 kbps for the audio), and even at this drastically reduced bitrate I think most of the clips still feature recognisable human movement.
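For reference, squeezing bitrates this far down with ffmpeg only needs the -b:v and -b:a flags. A minimal sketch, with hypothetical filenames; the exact floor depends on the codec, which is why my audio landed somewhere between 10 and 20 kbps rather than at a precise figure:

```python
import subprocess

# -b:v and -b:a request target bitrates for video and audio; the
# encoder will refuse to go below its own minimum, whatever you ask.
subprocess.run([
    "ffmpeg", "-i", "super_volume_clips.mp4",
    "-b:v", "48k", "-b:a", "16k",
    "super_volume_56k.mp4",
], check=True)
```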

U.S. Robotics Sportster Message Plus

These 56K modems were the last analogue gasp of the internet before broadband arrived. And I feel like the Super Volume project is itself quite an analogue undertaking, embedded in films which tend to evoke a pastness. After all, the body movement demonstrated in these clips, the tangible connection with a physical volume knob or fader involving hand, wrist and even arm, is simply not replicated in our interaction with volume controls on phones, remote controls, etc. And I wonder: does that make modern volume manipulation a less enticing prospect for the insert shot?

First/Final Minutes

A new video essay experiment which owes much to the 'First and Final Frames' project created by Jacob T. Swinney. Check out the original 'First and Final Frames' video here. SPOILER ALERT – this video includes the last minute of a small selection of films (list at the bottom of the page).

I have been thinking for some time about how good the Videographic PechaKucha is as a deformative exercise (if you want to know more about the PechaKucha you can read this excellent piece by Jason Mittell, and watch a selection curated by The Video Essay Podcast here). The instructions for making one are reasonably straightforward:

Our videographic variant consisted of 10 video clips of precisely six seconds each, coupled with a continuous minute-long audio segment, all from the same film.

From ‘Scholarship in Sound & Image: A Pedagogical Essay’ by Christian Keathley and Jason Mittell

And yet the permutations arising from this brief description are many. I enjoy immensely that it deals with an uninterrupted minute of audio, and I have been mulling over how I might adapt or adjust the format in some experimental fashion, in keeping with the sound-led goals of this lab. First/Final Minutes is something of a response to that: I've paired a film's first minute of sound with its last minute of picture, or vice versa (basically whichever combination I found most interesting). I've selected the minutes based on when I felt meaningful sound or picture was starting or ending. Generally the selections avoid credits, opening or closing, but where the cuts happen is very much my personal take.
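Mechanically the pairing itself is simple. Here is a minimal sketch of one combination using the moviepy 1.x API, with a hypothetical filename and with the cut points hard-coded to the file's absolute first and last minutes, rather than chosen by eye and ear as in the finished piece:

```python
from moviepy.editor import VideoFileClip  # moviepy 1.x API

film = VideoFileClip("film.mp4")   # hypothetical filename
end = film.duration

picture = film.subclip(end - 60, end)   # last minute of picture
sound = film.subclip(0, 60).audio       # first minute of sound
picture.set_audio(sound).write_videofile("first_final.mp4")
```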

The exercise reminds me of (and perhaps is also partly inspired by) one of the first pieces of undergraduate film writing I did, about Peter Weir's 'Dead Poets Society'. As part of a rambling and somewhat aimless discussion of the film, I mentioned that Ethan Hawke's character Todd has to be encouraged to stand up at the start of the film, but chooses to stand on the table by the end. It's funny to me now that this is a perfect videographic moment, and yet at the time I was limited to one VHS player and one TV screen, so I never got to see the moments occur side by side.

First/Final Minutes features the following combinations of sound and picture.

Terminator 2 – First minute sound, last minute picture. Any sync in this is entirely accidental (and a little creepy).

The Worst Person in the World – First minute picture, last minute sound. The song's lyrics seemed to suddenly mean something else as I watched and listened.

Minority Report – First minute sound, last minute picture. Cruise’s expression shifts dramatically for me in this one.

All the President’s Men – First minute picture, last minute sound. The teletype and the helicopter.

Berberian Sound Studio – First minute sound, last minute picture & first minute picture, last minute sound. There is so much I love in this: the tape reels melding, the tape click which starts and stops the clips, the journey Toby Jones's character makes through the two minutes.

The Double – First minute sound, last minute picture. Again, sync is accidental and ominous.

The films included are those which happened to be on my editing computer at the time. I imagine many more interesting combinations await out there.

Super Volume Supercut

If you've arrived here from the Super Volume page, thank you for watching; I hope you enjoyed it (otherwise, you can watch it here). You'll find more information about the project in this post. An ongoing (and hopefully growing) list of the films and television shows included in the supercut can be found at the bottom of this page.

Super Volume is a randomly generated supercut collecting instances of volume manipulation from film and television. Each time the web page is refreshed (or the Stop/Randomise button is pressed) the existing database of clips is reshuffled and a new version of the supercut is loaded into the video player. There is no rendered or final version of this supercut, rather it is designed to be added to as and when new clips are found and loaded into the database. This version of Super Volume is accompanied by (but not set to) ‘Gut Feeling’ by Devo. A ‘mute’ option is included on the page so that viewers can listen to music of their own choosing (or no music at all) whilst they watch.

Super Volume is the first part of a larger project to explore the specific relationship between the on-screen action of volume manipulation and the reciprocal manipulation which this might precipitate during the post-production sound mixing process, namely the turning up (or turning down) of volume. Functionally, then, this supercut is as much about providing me (as a researcher) with easy access to the material I am working with as it is about creating a new videographic work. But Super Volume is also an exploration of the process of creating a videographic work, of defining and designing an appropriate presentational mode, where publishing on Vimeo (where my previous videographic works all reside) might prove to be a limiting factor.

“Whether one thinks about the supercut as a database or a collection of images and sounds, it implies a process of aggregating and sorting that has no beginning and no end and that could continue indefinitely as long as there were new additional sources.”

Allison De Fren, 2020

There is no theoretical limit to the number of clips which can be added to the Super Volume database, but the intent here is not to be exhaustive, rather to be incremental. This particular mode of presentation means that the supercut is a perpetual 'work in progress', with no effective beginning or end, and no set length. My goal is that the project will grow through future collaboration as I develop other strands of research alongside it. The generative nature of this presentation mode also serves to curtail my creative impact on the final video. The process of careful editing and synchronisation which is often so intrinsic to the videographic form, and the supercut in particular, is stripped away here. The task of editing is reduced to 'topping and tailing' clips and nothing more (and as you will see, for the most part these clips have been quite tightly edited to the action).

Whilst this does nothing to show off my editing skills, it does make it considerably easier for me to watch (and re-watch) the piece. I have very little creative stake in how the work comes together, so I am inured to the agonising process of second-guessing my editing decisions as I watch it back. Rather, I can adopt something of a dual position as both a cinephile, indulging my fetishistic impulse to collect these clips, and a dispassionate viewer, able to "maintain an objective distance" (De Fren, 2020) from this particular object of study, precisely because I have no control over the form it will take each time I hit play. Of course, the lack of any agency on my part in the editing of the piece does increase the potential for it to seem 'cold' (Bordwell in Kiss, 2013). What labour went into the sourcing of the clips and the coding of the video player is not replicated in the careful editing, sorting, and syncing of a final videographic work. It remains to be seen whether other viewers take anything away from the viewing experience, or feel the need/desire to explore the potentially endless random variations available to them.

I have already watched Super Volume a lot, at first to refine the code for the video player (more on the technical aspects of this below), and then to explore just what the random nature of the supercut might reveal to me. I intend to write more on this in the future, but initially I find it interesting to note how differently each hand approaches this seemingly simple task, disconnected as they are here from any narrative context (and body) which might clue us in to their motivations. Some are hesitant, questioning, whilst others are definite, casual, happy even? Most are white and male, which I could suggest is indicative of the nature of the films and television shows from which I have harvested these clips. But implicit in that suggestion is my own potential bias, directing me to specific sources in search of these clips. Either way, I need a much broader sample of clips before I can make any meaningful analysis, and I am hopeful that the open and ongoing curation of the project will help with this.

The Super Volume player is based on this code by Ben Moren. Ben was kind enough to help me out with a few tweaks to the code, despite it being quite a few years since he wrote it. Getting a video to play back in a modern web browser is a reasonably straightforward process, but finessing the functionality for Super Volume took quite a while (largely because I am not a coder). The Super Volume player is actually two video players stacked on top of each other (thus it takes a little longer to load than a normal web page). The transition between video clips is the two players swapping places, one moving to the 'front' (in Z space) whilst the other loads the next video. The supercut will always start with the same two video clips, so to avoid obvious repetition I have added two short, blank videos as clips 1 and 2. Ben's original player code already handled the random shuffling of the clips into an array for playback, but it needed a small tweak to stop it playing the clips more than once in what was essentially an endless loop. Now the video player will stop after the final clip in the database is played (though the music will continue). The Stop/Randomise button re-loads the web page, running the code again and resulting in a newly shuffled array of video clips, and a new version of the supercut.
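The actual player is JavaScript, but the playlist logic is simple enough to model in a few lines. Here is a sketch in Python, purely to illustrate the behaviour described above; the clip names are hypothetical and nothing here is Ben's code:

```python
import random

def build_playlist(clips):
    """Shuffle the clip database into a play-once playlist.
    The two blank lead-in videos always occupy slots 1 and 2,
    so the supercut never visibly opens on the same pair of clips."""
    shuffled = clips[:]        # copy, leaving the database order intact
    random.shuffle(shuffled)   # a new order on every page load
    return ["blank_1.mp4", "blank_2.mp4"] + shuffled

def play(playlist):
    """Model of the two-player swap: while one player sits at the
    'front' showing a clip, the other preloads the next clip behind
    it; playback stops after the final clip rather than looping."""
    for i, clip in enumerate(playlist):
        front = i % 2
        print(f"player {front} to front, playing {clip}")
```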

Special thanks to Alan O’Leary, Ariel Avissar, and Will DiGravio who provided invaluable usability testing and feedback on the first version of this project.

This is the first part of a larger project which I anticipate will run for at least 12 months; more on that will be published through this blog as it comes to fruition. If you have any thoughts, comments, suggestions, or questions about Super Volume please email me at cormac@deformativesoundlab.co.uk

Film – Number of Clips
Upstream Color (2013) – 1
Talk Radio (1988) – 1
Spider-Man: Into the Spider-Verse (2018) – 2
Back to the Future (1985) – 4
Bill & Ted's Excellent Adventure (1989) – 3
Ali G Indahouse (2002) – 1
Studio 666 (2022) – 1
Guardians of the Galaxy (2014) – 1
Caddyshack (1980) – 1
Berberian Sound Studio (2012) – 8
Airheads (1994) – 2
Bill & Ted Face the Music (2020) – 1
Bohemian Rhapsody (2018) – 1
Blow Out (1981) – 1
Deadwax (2018) – 1
Spiderhead (2022) – 1
The Gray Man (2022) – 1
The Last Word (2017) – 1
Things Behind the Sun (2001) – 1

References

de Fren, A., 2020. The Critical Supercut: A Scholarly Approach to a Fannish Practice. The Cine-Files, (15).
http://www.thecine-files.com/the-critical-supercut-a-scholarly-approach-to-a-fannish-practice/#_edn10

Kiss, M., 2013. Creativity Beyond Originality: György Pálfi’s Final Cut as Narrative Supercut. Senses of Cinema, (67).
https://www.sensesofcinema.com/2013/feature-articles/creativity-beyond-originality-gyorgy-palfis-final-cut-as-narrative-supercut/

Other reading/watching

Meneghelli, D., 2017. Just Another Kiss: Narrative and Database in Fan Vidding 2.0. Global Media Journal: Australian Edition, 11(1), pp.1-14.
https://www.hca.westernsydney.edu.au/gmjau/wp-content/uploads/2017/04/GMJAU-Just-Another-Kiss-Narrative-and-Database-in-Fan-Vidding-2.pdf

Tohline, M., 2021. A Supercut of Supercuts: Aesthetics, Histories, Databases. Open Screens, 4(1), p.8. DOI: http://doi.org/10.16995/os.45

From the Next Room

A new video experiment listening to a film playing in another room.

This was a trick we used to use in the recording studio: listening to a music track we were mixing from another space, essentially overhearing the music as it played in another room. It was an easy way to 'disconnect' from the process of working critically with the material, whilst at the same time engaging a different mode of listening, encouraged by the physical distance from the material and the manner in which that distance reshaped the sound.

I set out to re-create this experience for a film and I felt that this scene from The Shining worked quite well, given how rooms feature so prominently in the film. The image I have used for background is taken from For All Mankind (Season 2, Episode 10), one of my favourite things on TV right now.

I decided against recording this overheard conversation for real (though I have done soundtrack re-recording in the past) and instead chose to use an impulse response to simulate the space of the room. An impulse response is essentially a sample of the acoustic characteristics of a particular space. To capture a response, a burst of sound or a sweep tone is played into the space and re-recorded. This recording is then processed to leave only the sonic characteristics of the space. The resultant impulse response can then be applied to any sound to give the impression of it existing in the sampled space. For this video I used a response from the wonderful free library A Sonic Palimpsest, made available by the University of Kent. The image above shows the second-floor landing in the Commissioner's House at Chatham, where this particular impulse response was recorded. IR Dust is a software plugin which allows me to apply the response to the sound in my Resolve editing session.
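Plugins like IR Dust are doing convolution under the hood. If you want to experiment outside an editing session, the same effect can be sketched in a few lines of Python; the filenames here are hypothetical, and both files are assumed to be mono WAVs at the same sample rate:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("overheard_dialogue.wav")        # hypothetical files
ir, ir_sr = sf.read("commissioners_house_ir.wav")
assert sr == ir_sr, "resample one file so the sample rates match"

# Convolving the dry sound with the impulse response 'places' it in
# the sampled room; this is all a convolution reverb plugin is doing.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))   # normalise to avoid clipping
sf.write("dialogue_next_room.wav", wet, sr)
```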

I spent quite a bit of time experimenting with various impulse responses from this pack to tune the sonic spatialisation to my visual perception of the room (something which film sound professionals do all the time). Whilst I am happy with the ‘fit’ (marriage?) between sonic space and visual space, I’d be interested to hear how it sounded to you.

Noisy Jaws

This video (mis)uses the rather excellent RX suite of audio restoration tools created by iZotope.

The following description, taken from the iZotope website, just about covers it:

Deconstruct lets you adjust the independent levels of tone and noise in your audio. This module will analyze your audio selection and separate the signal into its tonal and noisy audio components. The individual gains of each component can then be cut or boosted. 

iZotope.com

For the soundtrack to this video I've used the aptly named Deconstruct module to cut (lower) the tonal elements in the soundtrack as much as the software would allow, whilst boosting the noisy elements by a small amount (too much boost would risk introducing digital distortion). I find the results fascinating: hearing where the software finds tonality (voices are affected, though Roy Scheider's less than the others) and where it finds noise (opening and pouring the wine). Robert Shaw's singing is reduced to a buzz, whilst the tender moment between the Brodys is lost in the cacophony of the dock.
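iZotope's separation algorithm is proprietary, but if you want to play with a loosely analogous tonal/noise split in open-source tools, librosa's harmonic/percussive separation is a reasonable stand-in. A minimal sketch, with a hypothetical filename and illustrative gain values (these are not RX settings):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("jaws_10_40_70.wav", sr=None)  # hypothetical file

# Median-filtering harmonic/percussive separation: 'harmonic' loosely
# corresponds to Deconstruct's tonal component, 'percussive' to its
# noisy component. The split is far blunter than RX's.
tonal, noisy = librosa.effects.hpss(y)

# Cut the tonal part hard and nudge the noise up; too much boost
# risks digital clipping, so stay conservative.
out = 0.05 * tonal + 1.2 * noisy
sf.write("noisy_jaws_sketch.wav", out, sr)
```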

To select the clips for this video I used Nicholas Rombes' 10/40/70 as my parameter, taking two-minute clips from the 10, 40, and 70 minute marks in the film's running time (see Jason Mittell's Deformin' in the Rain for more on this). In another piece (Videographic Criticism as a Digital Humanities Method) Jason refers to the film as viewed in the video editor as an "archive of sounds and moving images". For this video I've extended that archive a little by using the original mono mix of the Jaws soundtrack, sourced from one of the Laserdisc releases of the film, and synced up with a more recent Blu-ray image track. I suppose that in itself is a deformative practice of sorts…
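The 10/40/70 extraction itself is easy to script. A sketch using the moviepy 1.x API, with a hypothetical filename, pulling two-minute chunks rather than Rombes' single frames:

```python
from moviepy.editor import VideoFileClip  # moviepy 1.x API

film = VideoFileClip("jaws.mp4")   # hypothetical filename
for minutes in (10, 40, 70):
    start = minutes * 60
    # Two-minute chunk beginning at each 10/40/70 mark.
    film.subclip(start, start + 120).write_videofile(f"jaws_{minutes}.mp4")
```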

Internal Logic

This experiment uses Python code adapted from a number of sources (see below) to create a film 'trailer' based on the sound energy present in the film's soundtrack.

In her chapter on 'Digital Humanities' in The Craft of Criticism (Kackman & Kearney, 2018), Miriam Posner discusses the layers of a digital humanities project: source, processing, and presentation. With this experiment/video I'm stuck on the last one. I get the sense that there is a way to present this work that might expose the "possibilities of meaning" (Samuels & McGann, 1999) in a way which is more sympathetic to the internal logic of the deformance itself. A presentation mode (videographic or other?) which is self-contained, rather than relying on any accompanying explanation. So, more to be done with this one. Very much a WIP.

A brief note on the process

The code for this experiment is adapted from a number of Python scripts which were written to automatically create highlight reels from sports broadcasts (the first one I found used a cricket match as an example). Links to the various sources I’ve used are below.

Become a video analysis expert using python

Video and Audio Highlight Extraction Using Python

Creating Video Clips in Python

The idea is that the highlights in any sporting event will be accompanied by a rise in the 'energy' of the soundtrack (read as volume here, for simplicity) as the crowd and commentator get louder. The Python script analyses the soundtrack in small chunks, calculating the short-term energy for each. The result is a plot like this, which shows a calculation of the sound energy present in the centre channel of Dune's sound mix.

I'm not entirely clear what scale the X axis is using here for the energy (none of the blogs go into any sort of detail on this), but as the numbers increase, so does the sound energy. The Y axis is the number of sound chunks in the film's soundtrack with that energy (in this case I set the size of the chunks to 2 seconds). To create the 'trailer' I picked a threshold number based on the plot (the red circle) and the code extracted any chunks from the film that had a sound energy above this figure. Choosing the threshold is not an exact science, so I tried to pick a figure which gave me a manageable amount of video to work with. A higher threshold would mean less content; a lower threshold would result in more. Note – the video above features the film's full soundtrack, but the clip selections were made on energy calculations from the centre channel alone.
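The core of the analysis is only a few lines. A simplified sketch of the short-term energy calculation, with a hypothetical filename and an illustrative threshold (this is my reconstruction of the approach in the blogs above, not my actual script); short-term energy is typically the sum of squared sample amplitudes per chunk, which is why the absolute scale of the X axis depends on how the audio was normalised:

```python
import librosa
import numpy as np

# The centre channel exported as a mono WAV (hypothetical filename).
y, sr = librosa.load("dune_centre.wav", sr=None)

chunk = 2 * sr                      # 2-second chunks, as in the plot
n_chunks = len(y) // chunk

# Short-term energy: sum of squared sample amplitudes per chunk.
energy = np.array([np.sum(y[i * chunk:(i + 1) * chunk] ** 2)
                   for i in range(n_chunks)])

threshold = 200                     # illustrative; read off the plot
for i in np.where(energy > threshold)[0]:
    # Each index maps back to a timecode for extraction in the editor.
    print(f"keep chunk {i}: {2 * i}s to {2 * i + 2}s")
```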

I am not a coder!

I’m not going to share the code here. It took an age to get working (largely as a result of my coding ignorance) and I can’t guarantee that it will work for anyone else the way I’ve cobbled it together. If anyone is interested in trying it out though, please get in touch, I’m more than happy to run through it on a Zoom or something.

Kackman, M. and Kearney, M.C. eds., 2018. The craft of criticism: Critical media studies in practice. Routledge.

Samuels, L. and McGann, J., 1999. Deformance and interpretation. New Literary History, 30(1), pp.25-56.
https://www.jstor.org/stable/20057521

Dodge This

This experiment goes back to some of my earliest thinking about sound and the (still) image. This was only made possible thanks to a new version of the audio software PaulXStretch which turned up recently.

The frame scan from The Matrix that I'm using here was posted on Twitter some time ago (many thanks to John Allegretti for the scan). I couldn't resist comparing this full Super 35 frame with the 2.39:1 aspect ratio extraction used for the final film. (I also love seeing the waveform of the optical soundtrack on the left of the frame.)

The smaller frame is a screenshot from a YouTube clip; the larger is a screenshot from the Blu-ray release. The centre extraction is nice and clear, but it really highlights to me the significant amount of the frame which goes unused. (See this post for some excellent input from David Mullen ASC on Super 35 and centre extraction.) This frame was still on my mind when I spotted that a new version of PaulXStretch was available.

PaulXStretch is designed for radical transformation of sounds. It is NOT suitable for subtle time or pitch correction. Ambient music and sound design are probably the most suitable use cases. It can turn any audio into hours or days of ambient soundscape, in an amazingly smooth and beautiful way.

https://sonosaurus.com/paulxstretch/

This got me thinking again about the still image: those that I'd been looking at for years in magazines, books, posters. Those that I'd fetishised, collected, and archived. The images that meant so much to me and were central to my love of film, but that were also entirely silent. So, with the help of PaulXStretch, I have taken the opportunity to bring sound back to this particular still image. The soundtrack to this video is the software iterating on the 41.66667 milliseconds of audio that accompany this single frame from the film (at 24 frames per second, each frame lasts 1/24 of a second).
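PaulXStretch's spectral smearing is its own algorithm, so nothing below reproduces it. But a crude version of the same gesture (isolate one frame's worth of audio, then stretch it enormously) can be sketched with librosa's phase-vocoder time stretch; the filename is hypothetical:

```python
import librosa
import soundfile as sf

y, sr = librosa.load("matrix_reel_audio.wav", sr=None)  # hypothetical file
frame_audio = y[: sr // 24]      # one frame at 24 fps = ~41.667 ms of sound

# A phase-vocoder stretch is only a blunt stand-in for PaulXStretch's
# spectral smearing, but rate=0.01 shows the idea: a sliver of sound
# becomes a wash roughly 100x longer.
stretched = librosa.effects.time_stretch(frame_audio, rate=0.01)
sf.write("one_frame_stretched.wav", stretched, sr)
```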

But wait, there’s more…

When I first had this idea, about 12 months ago, I wanted to try and accomplish it with an actual film frame. I found a piece of software (AEO-Light) which could extract the optical soundtrack information from a film frame scan, and render it to audio. So I went and bought myself some strips of film.

These are quite easy to come by on eBay, but there was a fatal flaw in my plan (which I didn't realise until some time later). On a release print like the frames I have here, the physical layout of the projector, and specifically the projector gate, means that there is no space for the sound reader to exist in synchronous proximity to the frame as it is being projected. The optical sound reader actually lives below the projector gate, which means that the optical soundtrack is printed in advance of the picture by 21 frames, or 0.875 seconds at 24 fps (this is the SMPTE standard for the creation of release prints, though how that translates to the actual threading of a projector is a little more up in the air, according to this thread). So in this material sense, where the optical soundtrack is concerned, sound and image only come into synchronisation at the very instant of projection.

If you’ve made it this far and want to know more about projector architecture then I highly recommend this video (I’ve embedded it to start at the ‘tour’ of the film path). Enjoy.