How to Render Synchronous Audio and Video in Processing using Beads

Rendering video in Processing is easy: the MovieMaker class can write QuickTime video files straight from a Processing sketch. Unfortunately, Processing doesn't supply any tools for rendering audio alongside MovieMaker, so capturing the output of a multimedia program can be a real headache.

I’ve spent a lot of time working on this problem in the last few months. I tried screen-capture software, but even the professional screen capture apps aren’t suited to the task. They cause glitches in the audio, drop frames and slow down the sketch itself. I also tried rendering using external hardware. Unfortunately, the only affordable device for capturing VGA output averages a mediocre 10 frames per second, and the frame rate is unacceptably inconsistent.

So the solution had to come from code, and in the end it's pretty simple. Admittedly, it still slows down your sketch, but if you lower the resolution you can get acceptable, synchronized audio and video that can be combined in any video editor.


Synchronizing MovieMaker Based on the Audio Stream

The solution is to render video frames based on the position in the audio output stream. Simply monitor how many samples have been recorded, and add a video frame every time a frame's worth of samples has elapsed.

There are three basic code changes needed to get this working. First, calculate the number of audio samples that will elapse per frame of video. For this to work, the movie frame rate must be relatively low; 12 fps works well for me.


// target frame rate for the rendered movie
int MovieFrameRate = 12;
// number of audio samples that elapse during one video frame (assuming a 44.1 kHz sample rate)
float AudioSamplesPerFrame = 44100.0f / (float)MovieFrameRate;
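At a 44.1 kHz sample rate, a 12 fps movie works out to 44100 / 12 = 3675 audio samples per video frame.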

Then set up your audio recording objects as detailed in my free ebook: Sonifying Processing: The Beads Tutorial.


// 44.1 kHz, 16-bit, mono, signed, big-endian
AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
// a Sample to hold the recorded audio (one second to start with)
outputSample = new Sample(af, 44100);
// record the AudioContext's output into the Sample; INFINITE mode keeps recording until we stop it
rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE);
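For context, here is a minimal sketch of how these objects might be wired together in setup(), assuming the Processing Beads library (import beads.*). The signal chain, the addInput/addDependent wiring, the MovieMaker codec settings, and the file name "render.mov" are my assumptions rather than details from the original sketch.


import processing.video.*;
import beads.*;
import javax.sound.sampled.AudioFormat;

AudioContext ac;
Sample outputSample;
RecordToSample rts;
MovieMaker mm;
int MovieFrameCount = 0;
int MovieFrameRate = 12;
float AudioSamplesPerFrame = 44100.0f / (float)MovieFrameRate;

void setup()
{
  size(320, 240);

  ac = new AudioContext();

  // ... build your synthesis chain and connect it to ac.out here ...

  // record everything that reaches the audio output
  AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
  outputSample = new Sample(af, 44100);
  rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE);
  rts.addInput(ac.out);
  ac.out.addDependent(rts);

  // the codec and quality settings here are just an example
  mm = new MovieMaker(this, width, height, "render.mov",
                      MovieFrameRate, MovieMaker.JPEG, MovieMaker.HIGH);

  ac.start();
}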

Finally, call this subroutine from your draw() function, and make sure to finalize both the audio and the video when the program ends (see the sketch after the routine below).


// this routine adds video frames based on how much audio has been recorded
void SyncVideoAndAudio()
{
  // if we have enough audio to do so, then add a frame to the video
  if( rts.getNumFramesRecorded() > MovieFrameCount * AudioSamplesPerFrame )
  {
    // we may have fallen behind by more than one frame
    float AudioSamples = rts.getNumFramesRecorded() - (MovieFrameCount * AudioSamplesPerFrame);
    while( AudioSamples > AudioSamplesPerFrame )
    {
      mm.addFrame();
      MovieFrameCount++;
      AudioSamples -= AudioSamplesPerFrame;
    }
  }
}
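
And here is a sketch of how the draw loop and the shutdown step might look. The keyPressed() trigger, the output file name, and the call to Sample.write() are my assumptions; adapt them to however your sketch actually ends.


void draw()
{
  // ... draw your visuals here ...

  // add movie frames in step with the recorded audio
  SyncVideoAndAudio();
}

void keyPressed()
{
  // stop recording audio into the Sample
  rts.kill();

  try {
    // write the recorded audio next to the sketch;
    // combine it with the movie in a video editor afterwards
    outputSample.write(sketchPath("render_audio.wav"));
  } catch (Exception e) {
    e.printStackTrace();
  }

  // close the QuickTime file so it is playable
  mm.finish();
  exit();
}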

After your program completes, you just need to stitch the audio and video together in whatever video editor you have at your disposal.

Here’s an example sketch rendered using this method.



And here is the source code for that sketch: Video_Audio_Sync_Test_03

I hope this saves you some time and money!

