When I discovered electronic music during the Centre Acanthes 2000/Ircam, my favorite topic was real-time sound processing in the frequency domain. Hans Tutschku taught the wonders of AudioSculpt in Avignon, before Benjamin Thigpen taught Max/MSP in Helsinki. Now, the Computer Music Journal has just published an article I wrote about spectral sound processing in real time and performance time (whereas real-time treatments are applied to a live sound stream, performance-time treatments are transformations of sound files generated during a performance). If you are interested in graphical sound synthesis, the phase vocoder, and sonograms (or spectrograms), I hope you will enjoy this tutorial. The great news is that you can download the article for free on the page of the Computer Music Journal, Volume 32, Issue 3.

Max/MSP/Jitter patches

You can readily apply the described techniques in the development environment Max/MSP/Jitter. For a hands-on approach, make sure you download t...
In this post, I'm giving insights into the creation of the Spectral Stretch Max for Live device. There are many ways to achieve audio time stretching without transposition. Some time-based methods build on Pierre Schaeffer's Phonogène. Another approach consists in processing the sound in the spectral domain, using a phase vocoder. In this case, the audio samples are converted to spectral data through a Fast Fourier Transform (FFT). Then, even if we focus on extreme time stretching, the details of the phase vocoder implementation have important consequences on the sound quality and the tool's flexibility for live use. Before introducing the Max for Live device Spectral Stretch, let's have a look at a selection of four possible algorithms:

- Paulstretch
- Max Live Phase Vocoder
- Interpolation between recorded spectra
- Stochastic re-synthesis from a recorded sonogram

Paulstretch

Paul's Extreme Sound Stretch, also known as Paulstretch, is an algorithm designed ...
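To make the phase-vocoder approach concrete outside of Max, here is a minimal NumPy sketch of FFT-based time stretching: analysis frames advance by a fixed hop, synthesis frames advance by that hop times the stretch factor, and per-bin phases are accumulated from the measured instantaneous frequencies so partials stay coherent. This is only an illustration of the general technique, not the implementation used in the Spectral Stretch device; all parameter values are assumptions.

```python
import numpy as np

def phase_vocoder_stretch(x, factor, n_fft=1024, hop=256):
    """Stretch x by `factor` (> 1 = longer) without transposition.

    Analysis advances by `hop` samples; synthesis advances by
    `hop * factor`. Phases are accumulated per bin so that each
    partial keeps its measured frequency in the stretched output.
    """
    win = np.hanning(n_fft)
    syn_hop = int(round(hop * factor))
    # Expected phase advance per analysis hop, for each FFT bin
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    n_frames = (len(x) - n_fft) // hop + 1
    out = np.zeros((n_frames - 1) * syn_hop + n_fft)
    phase = None
    prev_angles = None
    for i in range(n_frames):
        spec = np.fft.rfft(x[i * hop:i * hop + n_fft] * win)
        angles = np.angle(spec)
        if phase is None:
            phase = angles.copy()
        else:
            # Deviation from the expected advance, wrapped to [-pi, pi]
            dphi = angles - prev_angles - omega
            dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
            # Scale the true per-hop phase advance by the stretch factor
            phase += (omega + dphi) * (syn_hop / hop)
        prev_angles = angles
        frame = np.fft.irfft(np.abs(spec) * np.exp(1j * phase)) * win
        out[i * syn_hop:i * syn_hop + n_fft] += frame
    return out
```

With `factor = 2.0` the output is roughly twice as long at the same pitch. A production version would also normalize the overlap-added window energy and handle transients, which is exactly where implementations start to differ in sound quality.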
In April 2008, I was invited by composers Eric Chasalow and Maxwell Dulaney to give a two-day seminar on spectral sound processing techniques at the Brandeis University Music Department. A topic the music students particularly enjoyed was the frozen sound, the audio equivalent of the cinematic "freeze frame shot". I taught the nuts and bolts of the real-time stochastic spectral freeze technique (the stochastic component is aimed at breaking the ice with the audience). In this video, discover 5 variations on a Max/MSP/Jitter freeze tool:

Downloads

Note: the Max patches available here have been completely revamped since this article & video were initially published.

New Spectral Freeze Max MSP Jitter patches (link updated Nov. 2019)

A Tutorial on Spectral Sound Processing with Max/MSP and Jitter: Computer Music Journal, Fall 2008

Syllabus

The program for just two 3-hour workshops was quite ambitious!

April 14th

Overview of the topic: "Spectral processing...
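The core idea of the stochastic spectral freeze can be sketched outside of Max as well: capture the magnitude spectrum of a single analysis frame, then resynthesize it indefinitely with fresh random phases on every synthesis frame, so the frozen sound keeps shimmering instead of degenerating into a static buzz. The following NumPy sketch illustrates that principle only; it is not one of the workshop patches, and the function name and parameter values are assumptions.

```python
import numpy as np

def stochastic_freeze(x, freeze_pos, n_out_frames, n_fft=2048, hop=512, seed=0):
    """Freeze the spectrum captured at sample `freeze_pos` of x.

    The magnitude spectrum of one windowed frame is held constant;
    each synthesis frame gets independent random phases, which keeps
    the frozen texture alive rather than looping a fixed waveform.
    """
    rng = np.random.default_rng(seed)
    win = np.hanning(n_fft)
    # Capture the magnitudes of a single analysis frame
    mag = np.abs(np.fft.rfft(x[freeze_pos:freeze_pos + n_fft] * win))
    out = np.zeros((n_out_frames - 1) * hop + n_fft)
    for i in range(n_out_frames):
        # New random phases for every frame: the stochastic component
        phase = rng.uniform(-np.pi, np.pi, size=mag.shape)
        frame = np.fft.irfft(mag * np.exp(1j * phase)) * win
        out[i * hop:i * hop + n_fft] += frame
    return out
```

Because only the magnitudes are frozen, the spectral envelope (and thus the perceived pitch and color) is preserved, while the randomized phases decorrelate successive frames, which is what gives the freeze its characteristic gentle shimmer.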