Software

From the Next Room

A new video experiment listening to a film playing in another room.

This was a trick we used to use in the recording studio: listening to a music track we were mixing from another space, essentially overhearing the music as it played in another room. It was an easy way to ‘disconnect’ from the process of working critically with the material, whilst at the same time engaging a different mode of listening, encouraged by the physical distance from the material and the way that distance reshaped the sound.

I set out to re-create this experience for a film, and I felt that this scene from The Shining worked quite well, given how prominently rooms feature in the film. The image I have used for the background is taken from For All Mankind (Season 2, Episode 10), one of my favourite things on TV right now.

I decided against recording this overheard conversation for real (though I have done soundtrack re-recording in the past) but instead chose to use an Impulse Response to simulate the space of the room. An Impulse Response is essentially a sample of the acoustic characteristics of a particular space. To capture a response, a burst of sound or a sweep tone is played into the space and re-recorded. This recording is then processed to leave only the sonic characteristics of the space. The resultant Impulse Response can then be applied to any sound to give the impression of it existing in the sampled space. For this video I used a response from the wonderful free library A Sonic Palimpsest, made available by The University of Kent. The image above shows the 2nd floor landing in the Commissioner’s House at Chatham, where this particular impulse response was recorded. IR Dust is a software plugin which allows me to apply the response to the sound in my Resolve editing session.
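If you're curious what a plugin like IR Dust is doing under the hood, the heart of it is convolution: the dry sound is convolved with the impulse response, which ‘places’ it in the sampled space. Below is a minimal Python sketch of that idea, assuming mono WAV files for both the dry audio and the response; the file names are placeholders, and a real plugin adds wet/dry balance, pre-delay and so on.

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Load the dry soundtrack excerpt and the sampled room response.
# (Placeholder file names; both assumed mono and at the same sample rate.)
dry, sr = sf.read("shining_dialogue_mono.wav")
ir, ir_sr = sf.read("commissioners_house_landing.wav")
assert sr == ir_sr, "resample one file so the sample rates match"

# Convolving the dry signal with the impulse response places it in the room.
wet = fftconvolve(dry, ir)

# Normalise to avoid clipping, then write the result.
wet = wet / np.max(np.abs(wet))
sf.write("shining_from_the_next_room.wav", wet, sr)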

I spent quite a bit of time experimenting with various impulse responses from this pack to tune the sonic spatialisation to my visual perception of the room (something which film sound professionals do all the time). Whilst I am happy with the ‘fit’ (marriage?) between sonic space and visual space, I’d be interested to hear how it sounded to you.

Noisy Jaws

This video (mis)uses the rather excellent RX suite of audio restoration tools created by iZotope.

The following description, taken from the iZotope website, just about covers it:

Deconstruct lets you adjust the independent levels of tone and noise in your audio. This module will analyze your audio selection and separate the signal into its tonal and noisy audio components. The individual gains of each component can then be cut or boosted. 

iZotope.com

For the soundtrack to this video I’ve used the aptly named Deconstruct module to cut (lower) the tonal elements in the soundtrack as much as the software would allow, whilst boosting the noisy elements by a small amount (too much boost would risk introducing digital distortion). I find the results fascinating; hearing where the software finds tonality (voices are affected, though Roy Scheider’s less than the others) and where it finds noise (opening and pouring the wine). Robert Shaw’s singing is reduced to a buzz, whilst the tender moment between the Brodys is lost in the cacophony of the dock.
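iZotope don’t publish how Deconstruct does its separation, but if you want to play with the same idea, a loose stand-in is a harmonic/percussive split, treating the harmonic part as ‘tonal’ and the percussive part as ‘noisy’. A rough Python sketch follows; the file name and gain values are placeholders, and this is emphatically not iZotope’s algorithm.

import librosa
import soundfile as sf

# Load a mono excerpt (placeholder file name), keeping its native sample rate.
y, sr = librosa.load("jaws_clip_mono.wav", sr=None)

# Split into (roughly) tonal and noisy components via harmonic/percussive separation.
tonal, noisy = librosa.effects.hpss(y)

# Cut the tonal part hard and boost the noise a little;
# too much boost risks digital clipping, so keep it modest.
tonal_gain = 0.05   # roughly -26 dB
noisy_gain = 1.4    # roughly +3 dB

sf.write("noisy_jaws_clip.wav", tonal_gain * tonal + noisy_gain * noisy, sr)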

To select the clips for this video I used Nicholas Rombes’ 10/40/70 as my parameter, taking two-minute clips from the 10, 40, and 70 minute marks in the film’s running time (see Jason Mittell’s Deformin’ in the Rain for more on this). In another piece (Videographic Criticism as a Digital Humanities Method), Jason refers to the film as viewed in the video editor as an “archive of sounds and moving images”. For this video I’ve extended that archive a little by using the original mono mix of the Jaws soundtrack, sourced from one of the Laserdisc releases of the film and synced up with a more recent Blu-Ray image track. I suppose that in itself is a deformative practice of sorts…
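If you want to pull your own 10/40/70 clips, it only takes a few lines; here is one way I might do it with ffmpeg driven from Python (ffmpeg needs to be installed, and the file names are placeholders for your own copy of the film).

import subprocess

SOURCE = "jaws_source.mov"  # placeholder for your own source file

for minutes in (10, 40, 70):
    subprocess.run([
        "ffmpeg",
        "-ss", str(minutes * 60),     # seek to the 10/40/70 minute mark
        "-i", SOURCE,
        "-t", "120",                  # keep two minutes
        "-c", "copy",                 # no re-encode; cuts snap to keyframes
        f"jaws_{minutes}min_clip.mov",
    ], check=True)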

Dodge This

This experiment goes back to some of my earliest thinking about sound and the (still) image. It was only made possible thanks to a new version of the audio software PaulXStretch, which turned up recently.

The frame scan from The Matrix that I’m using here was posted on Twitter some time ago (many thanks to John Allegretti for the scan). I couldn’t resist comparing this full Super35 frame with the 2.39:1 aspect ratio that was extracted for the final film. (I also love seeing the waveform of the optical soundtrack on the left of the frame.)
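As a back-of-the-envelope check on how the two framings compare, assuming the extraction uses the full width of a nominal 4-perf Super 35 camera aperture of roughly 24.89 mm × 18.66 mm (the actual numbers for this scan may differ slightly):

# Rough comparison of the full Super 35 frame with a 2.39:1 centre extraction.
frame_w, frame_h = 24.89, 18.66      # mm, nominal camera aperture (about 1.33:1)
extract_h = frame_w / 2.39           # a 2.39:1 extraction across the full width

used = extract_h / frame_h
print(f"extraction height: {extract_h:.2f} mm")   # about 10.4 mm
print(f"frame used:        {used:.0%}")           # about 56%, so roughly 44% goes unused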

The smallest frame is a screenshot from a YouTube clip; the larger is a screenshot from the Blu-Ray release. The centre extraction is nice and clear, but it really highlights to me the significant amount of the frame which goes unused. (See this post for some excellent input from David Mullen ASC on Super 35 and centre extraction.) This frame was still on my mind when I spotted that a new version of PaulXStretch was available.

PaulXStretch is designed for radical transformation of sounds. It is NOT suitable for subtle time or pitch correction. Ambient music and sound design are probably the most suitable use cases. It can turn any audio into hours or days of ambient soundscape, in an amazingly smooth and beautiful way.

https://sonosaurus.com/paulxstretch/

This got me thinking again about the still image: those that I’d been looking at for years in magazines, books, posters. Those that I’d fetishised, collected, and archived. The images that meant so much to me and were central to my love of film, but that were also entirely silent. So, with the help of PaulXStretch, I have taken the opportunity to bring sound back to this particular still image. The soundtrack to this video is the software iterating on the 41.66667 milliseconds of audio that accompanies this single frame from the film (one twenty-fourth of a second, a single frame’s worth at 24 frames per second).
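PaulXStretch implements Paul Nasca’s stretching algorithm, which I won’t pretend to reproduce here, but a very crude sketch of the underlying idea, resynthesising the magnitude spectrum of that one-frame sliver over and over with randomised phases and overlap-adding the grains, gives a flavour of how 41 milliseconds can become minutes of drone. The file name and grain count below are placeholders.

import numpy as np
import soundfile as sf

FPS = 24
audio, sr = sf.read("matrix_frame_audio_mono.wav")   # placeholder file name, mono
frame_len = int(sr / FPS)                             # samples per film frame (~41.7 ms)
sliver = audio[:frame_len]

# Magnitude spectrum of the windowed sliver; the original phases are thrown away.
window = np.hanning(frame_len)
spectrum = np.abs(np.fft.rfft(sliver * window))

# Resynthesise many grains with random phases and overlap-add them at 50%.
GRAINS = 2000                                         # roughly 40 seconds of output
hop = frame_len // 2
out = np.zeros(GRAINS * hop + frame_len)
for i in range(GRAINS):
    phases = np.exp(2j * np.pi * np.random.rand(len(spectrum)))
    grain = np.fft.irfft(spectrum * phases, n=frame_len) * window
    out[i * hop : i * hop + frame_len] += grain

out /= np.max(np.abs(out))
sf.write("dodge_this_drone.wav", out, sr)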

But wait, there’s more…

When I first had this idea, about 12 months ago, I wanted to try to accomplish it with an actual film frame. I found a piece of software (AEO-Light) which could extract the optical soundtrack information from a film frame scan and render it to audio. So I went and bought myself some strips of film.

These are quite easy to come by on eBay, but there was a fatal flaw in my plan (which I didn’t realise until some time later). On a release print like the frames I have here, the physical layout of the projector, and specifically the projector gate, means that there is no space for the sound reader to exist in synchronous proximity to the frame as it is being projected. The optical sound reader actually lives below the projector gate, which means that the optical soundtrack is printed in advance of the picture by 21 frames (this is the SMPTE standard for the creation of release prints, though how that translates to the actual threading of a projector is a little more up in the air, according to this thread). So in this material sense, where the optical soundtrack is concerned, sound and image only come into synchronisation at the very instant of projection.
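A quick sanity check on what that 21-frame advance amounts to, assuming 24 frames per second and the nominal 19 mm frame pitch of 4-perf 35mm:

ADVANCE_FRAMES = 21
FPS = 24
FRAME_PITCH_MM = 19   # nominal 4-perf 35mm frame-to-frame distance

print(f"time offset: {ADVANCE_FRAMES / FPS:.3f} s")          # 0.875 seconds
print(f"film offset: {ADVANCE_FRAMES * FRAME_PITCH_MM} mm")  # 399 mm of film between gate and sound head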

If you’ve made it this far and want to know more about projector architecture then I highly recommend this video (I’ve embedded it to start at the ‘tour’ of the film path). Enjoy.

Singin’ Will Wall

These experiments are inspired by Hollis Frampton’s 1971 film Critical Mass and were made possible using the software HF Critical Mass, created by Barbara Lattanzi.

I think I only watched Critical Mass because it auto-played on YT after I’d finished (nostalgia), another Hollis Frampton film, also from 1971. When I tried to find out more about Frampton’s process making Critical Mass I came across Barbara Lattanzi’s site, and the HF Critical Mass software she created “…as an interface for improvising digital video playback.” These 3 videos were made with Version 2 of the software.

I originally thought I might pick one of the musical numbers from Singin’ in the Rain (the film seemed like an obvious choice given its centrality to deformative videographic practice!) for this first experiment, but as I scrubbed through the film I hit on this scene, which not only has its own ‘built-in’ loopability, but also appeals to my sonic self. The HF Critical Mass software gives you control over the length of the loop it will make, and the speed with which the loop will advance through the video (amongst many other controls), and I set these specifically for each video. In this case the loop length was defined by the door slam and the clapperboard, essentially bookending the loop. I’m not sure if this is the first time I noticed the sound engineer’s exaggerated movements, but the looping did highlight the synchronicity between Lina’s head turns and his sympathetic manipulation of the recording controls.
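For anyone curious about the mechanics, the basic loop-and-advance behaviour (as I understand it from using the software, and stripped of all of Lattanzi’s live controls) looks something like the sketch below. This is not her code; it uses the moviepy 1.x API, and the file name, loop length, advance and repeat count are placeholders for parameters that HF Critical Mass lets you improvise with in real time.

from moviepy.editor import VideoFileClip, concatenate_videoclips

source = VideoFileClip("singin_in_the_rain_scene.mp4")   # placeholder file name

LOOP_LEN = 1.8    # seconds: say, door slam to clapperboard
ADVANCE = 0.4     # how far the loop start nudges forward each time
REPEATS = 3       # how many times each loop plays before advancing

loops = []
start = 0.0
while start + LOOP_LEN <= source.duration:
    loops.extend([source.subclip(start, start + LOOP_LEN)] * REPEATS)
    start += ADVANCE

concatenate_videoclips(loops).write_videofile("critical_mass_style_loop.mp4")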

I wanted to see how this would work on some quick-fire dialogue, and I always share this scene with my research students, so it was an easy pick. Finding the loop length here was harder, and I’m a little surprised how consistent some of the rhythms in the delivery are, and how many lines actually get a complete delivery ‘in loop’ (should I be surprised? Or is a rhythmic line delivery, with consistent pauses, inherent to this kind of monologue?). The looping again highlights the movement within the scene, and the camera moves also get ‘revealed’ in the looping. My favourite moment is definitely the (totally accidental) one at the end, where ‘unoriginal’ becomes ‘original’.

This is a scene which I’ve always loved listening to but I think I’m listening to it differently with this video. The looping, in combination with the staggered progress of the video, seems to hold the sounds in my memory just long enough that I feel I can grasp them a little more clearly. Each loop seems to exist as its own short sonic motif, almost self-contained, even as it contributes to the whole.