We closed this forum 18 June 2010. It has served us well since 2005 as the ALPHA forum did before it from 2002 to 2005. New discussions are ongoing at the new URL http://forum.processing.org. You'll need to sign up and get a new user account. We're sorry about that inconvenience, but we think it's better in the long run. The content on this forum will remain online.
Synchronizing generative video + audio (Read 29943 times)
Re: Synchronizing generative video + audio
Reply #15 - May 12th, 2007, 8:39am
This is very useful info and work. Thanks for it!
Re: Synchronizing generative video + audio
Reply #16 - Jun 15th, 2007, 8:12pm

I'm quite new to Processing and programming in general (I'm studying music tech), except that I've done a few basic DSP tasks like sampling and some MIDI work, all in C. I've been combing the forums for the last few days. It's a nice place; thanks to all.

I'm preparing an audiovisual piece for my final project (investigating relations between sound and image), and I'm trying to figure out audio/video sync in Processing these days. I figured out how the LiveInput example (Sonia) works, but I couldn't analyze a .wav file read from disk in the same way, or an incoming audio signal, say one coming from iTunes. This should be possible by using OSC to connect Processing to PD or Max/MSP, etc., though. How does Processing decide which audio (hardware/software) input/output to take the signal from?

Thanks a lot, I'll update my progress..


Re: Synchronizing generative video + audio
Reply #17 - Jun 15th, 2007, 8:24pm
If you're on Windows, go to the volume control properties and change which input you want.
If you pick line input, it listens to the input from your sound card; you can also choose to capture whatever you're currently playing back.

On a Mac, I don't know.
Re: Synchronizing generative video + audio
Reply #18 - Jun 16th, 2007, 12:54am

On the Mac,

sorry, I forgot to add:

I've tried Jack, which is a virtual audio driver (of sorts), but that didn't work either.
Re: Synchronizing generative video + audio
Reply #19 - Jun 16th, 2007, 4:19pm
I've also tried pretty hard to get the "record source" on a Mac to be the output from iTunes or, more generally, the system output. But it seems that Macs just can't self-monitor. One thing I can suggest: instead of playing the files from iTunes, put the songs you want to use in the data folder of your sketch, then load and play them with whatever audio library you want to use. Of course, that doesn't help if you want to do your analysis in PD or Max; I don't have a suggestion there because I don't use either of those programs.
Re: Synchronizing generative video + audio
Reply #20 - Jun 16th, 2007, 10:47pm


Using PD is plan B, but then I need to figure out the OSC connection between Processing and PD. I'll get help on that soon.

by the way,

  "Note - the current FFT math is done in Java and is very raw. Expect an optimized alternative soon." (from Sonia's LiveInput example)

Then, if I do the FFT in PD and send the data to Processing via OSC, would I have more precise control over the frequency ranges that I want the visuals to react to?
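Whichever environment does the FFT, the mapping between bin index and frequency is the same, so "precise control over frequency ranges" mostly comes down to choosing the FFT size. A small plain-Java sketch of that mapping (the method names here are hypothetical, not from Sonia or PD):

```java
public class FftBins {
    // Frequency (Hz) at the start of FFT bin i, for a transform of
    // fftSize points at the given sample rate.
    static double binFrequency(int i, double sampleRate, int fftSize) {
        return i * sampleRate / fftSize;
    }

    // Index of the bin nearest to frequency f (Hz).
    static int binForFrequency(double f, double sampleRate, int fftSize) {
        return (int) Math.round(f * fftSize / sampleRate);
    }

    public static void main(String[] args) {
        // 512-point FFT at 44100 Hz: each bin is about 86.13 Hz wide.
        System.out.println(binFrequency(1, 44100, 512));      // 86.1328125
        // A bass band up to 200 Hz spans only the first couple of bins.
        System.out.println(binForFrequency(200, 44100, 512)); // 2
    }
}
```

So with a 512-point FFT, everything below 200 Hz is squeezed into two or three bins; a larger FFT gives finer frequency resolution (at the cost of slower time response), regardless of whether PD or Processing computes it.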


Re: Synchronizing generative video + audio
Reply #21 - Jun 17th, 2007, 2:00am

Is it not possible to FFT-analyze a .wav file while Processing is playing it back, rather than using liveInput.getSpectrum?

Re: Synchronizing generative video + audio
Reply #22 - Jun 17th, 2007, 5:21pm

Check the connectLiveInput(..) function.

It lets you play a sample and still get its spectrum.
Re: Synchronizing generative video + audio
Reply #23 - Jun 17th, 2007, 7:44pm

thanks a lot, this should do it.

Re: Synchronizing generative video + audio
Reply #24 - Aug 23rd, 2007, 5:46am


I'm working on a generative video+audio project and I'm stuck at the point of exporting/saving the sequence as a .mov file. I tried recording the application with a DV cam, but the resolution was so low it was useless. Since my MacBook cannot sustain 30 fps, the regular MovieMaker object is useless, as Dave Bollinger says: the resulting .mov file is much faster and shorter than it should be, because a lot of frames get skipped while writing the .mov file, yet the file is still tagged as 30 fps, so the frames that did get written play back as if the movie had been rendered at a full 30 fps. Does that make sense?
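That "faster and shorter" symptom follows directly from the frame accounting; a quick back-of-the-envelope check with hypothetical numbers:

```java
public class PlaybackSpeed {
    // How long (in seconds) a recording plays for, when frames rendered at
    // actualFps are written into a movie file tagged as taggedFps.
    static double playbackSeconds(double wallClockSeconds,
                                  double actualFps, double taggedFps) {
        double framesRendered = wallClockSeconds * actualFps;
        return framesRendered / taggedFps;
    }

    public static void main(String[] args) {
        // 60 s of real time rendered at only 20 fps, but tagged 30 fps:
        // the movie plays back in 40 s, i.e. 1.5x too fast.
        System.out.println(playbackSeconds(60, 20, 30)); // 40.0
    }
}
```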

So I've been trying to adapt the code Dave Bollinger posted in this topic, because it calculates how many audio samples should correspond to each frame, eliminating the sync problem. However, I'm having some problems. First of all, I'm using the Minim library and wondering whether this code can actually be adapted to it. In the 'analyze' section of Mr. Bollinger's code there is something like this:

void analyze() {
  int pos = (int)(frameNumber * chn.sampleRate / framesPerSecond);
  if (pos >= chn.size) {
    if (outputType == OUTPUT_TYPE_MOVIE) {
      // ... (excerpt truncated in the original post)
    }
  }
  fft.getSpectrum(chn.samples, pos);
}

However, I don't know how to change fft.getSpectrum(chn.samples, pos) so that I can use this 'int pos' with Minim's fft.forward(chn.mix).

Also, when I tried the reverse, adapting my code to use ESS rather than Minim, I ran into a problem concerning the fft.spectrum[] values. Can someone tell me what kind of values (in what range) we receive from fft.spectrum[]? It's between -1 and 1 in Minim, but when I set the limits (using fft.limits()) to -1 and 1 in ESS, I don't get the same results. I have to multiply (scale up) these values by at least 1000 to get results closer to Minim's. However, this 1:1000 ratio isn't consistent for every parameter (like size, brightness, colour, etc.) either.
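Different FFT libraries use different scaling conventions, so a fixed multiplier tuned for one library rarely transfers to another. One library-agnostic workaround is to normalize each spectrum against a running maximum, so the visuals see 0..1 regardless of the library's units. A minimal sketch of that idea (plain Java; the class and method names are mine, not from Minim or ESS):

```java
public class SpectrumNormalizer {
    private double runningMax = 1e-9; // tiny floor avoids divide-by-zero on silence

    // Scale a raw spectrum (in whatever units the FFT library produces)
    // into the range 0..1, tracking the loudest value seen so far.
    double[] normalize(double[] spectrum) {
        for (double v : spectrum)
            if (v > runningMax) runningMax = v;
        double[] out = new double[spectrum.length];
        for (int i = 0; i < spectrum.length; i++)
            out[i] = spectrum[i] / runningMax;
        return out;
    }

    public static void main(String[] args) {
        SpectrumNormalizer n = new SpectrumNormalizer();
        double[] scaled = n.normalize(new double[] {0.001, 0.004, 0.002});
        System.out.println(scaled[1]); // 1.0: the loudest bin seen so far
    }
}
```

The trade-off is that early frames look loud until the true peak has been seen; for offline rendering you can do one pass to find the maximum first.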

Finally, I get this error when finishing the movie:

at quicktime.std.StdQTException.checkError(StdQTException.java:38)

This does not happen when I use Mr. Shiffman's MovieMaker example as-is. What do you think may be causing the problem?

Sorry for writing such a long list of problems, but I'm getting lost.

Any help would be appreciated.



Re: Synchronizing generative video + audio
Reply #25 - Aug 25th, 2007, 2:34am

Alright, I solved these problems, but I still lose a very small number of frames: about 2 seconds' worth in a 3 min 30 sec animation (so the animation is about 2 seconds shorter than it should be).

What do you think may be causing the problem? According to the code (from Dave Bollinger) that syncs audio and video (frameNumber * sampleRate / framesPerSecond), both media should match each other exactly.

Any suggestions?

Re: Synchronizing generative video + audio
Reply #26 - Aug 25th, 2007, 9:39pm
Just a thought, as I have no way of knowing for sure, but... are you sure that nothing in your process is using 29.97 fps instead of 30 fps? I only ask because the ratios 3:28/3:30 and 29.97/30 are awfully close; it seems a suspicious coincidence.
Re: Synchronizing generative video + audio
Reply #27 - Aug 26th, 2007, 11:00pm

I had actually considered that. What I have done, after writing the movie file, is to change the speed of the animation (using a video editor) to match the duration of the audio file. I have done this for only one of the animations so far, and it seems to work fine (guessing that the lost frames are distributed evenly throughout the sequence).

This led me to your thought: it is quite possible that something is using 29.97 fps rather than 30, but I have no idea how that could happen. I specify everything as 30 fps in the code. I have not specified the number of key frames per second, guessing that the default would be 1 key frame for every 30 frames. I am very close to the deadline now, but I can give it another shot if you think specifying the number of key frames would help. Otherwise, I'll depend on the cheat that I applied after writing the .mov file.

By the way, could you suggest any reason for an error like this:

quicktime.std.StdQTException[QTJava 6.1.5g],-5000=afpAccessDenied,QT.vers:7138000
at quicktime.std.StdQTException.checkError(StdQTException.java:38)

I have three animations and I'm using your code to make movies out of them. Two of them work fine, but I get this error with one of my sketches. It's quite strange, because the way I adapted your code is the same for every sketch. What could be different about this last sketch that makes it fail with the error above?

thanks a lot.


Re: Synchronizing generative video + audio
Reply #28 - Aug 28th, 2007, 5:49pm
sorry, can't help with the QT error.

Back to frame rates: what you could try is redefining framesPerSecond as a float (it really should have been one already) and doing your "fudge factor" correction there.

In other words, "play" the audio a tiny bit too slowly (by specifying a video frame rate a tiny bit too big), so more video frames get rendered. Then, if the movie is played back at the reciprocal of that fudge factor, everything should theoretically work itself out.

If my guess about the cause is correct (an NTSC video rate somewhere in the chain), then this should be the correct factor:

float fpsFudgeFactor = 30.0 / 29.97; // normally 1.0f
float framesPerSecond = 30.0 * fpsFudgeFactor;
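To make the effect of the fudge factor concrete, here is a small plain-Java sketch (the helper name and the clip length are mine, chosen to match the 3:30 animation discussed above, not code from the thread):

```java
public class FudgeFactor {
    // Extra frames rendered for a clip of clipSeconds when the nominal
    // frame rate is scaled up by the given fudge factor.
    static int extraFrames(double clipSeconds, double nominalFps, double fudge) {
        int plain  = (int) Math.ceil(clipSeconds * nominalFps);
        int fudged = (int) Math.ceil(clipSeconds * nominalFps * fudge);
        return fudged - plain;
    }

    public static void main(String[] args) {
        double fpsFudgeFactor = 30.0 / 29.97; // as in the suggestion above
        // For a 3:30 (210 s) clip, the fudge yields a few extra frames,
        // which pad out what an NTSC-rate leak would have eaten.
        System.out.println(extraFrames(210, 30.0, fpsFudgeFactor)); // 7
    }
}
```

Seven extra frames is far less than the ~2 seconds (about 60 frames) actually lost, which is worth keeping in mind: if the fudge doesn't close the gap, the cause may not be a simple 29.97 fps leak.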
Re: Synchronizing generative video + audio
Reply #29 - Aug 28th, 2007, 6:20pm

The QuickTime error disappeared when I copied the code to a new, clean sketch. Could it be a bug?

This makes sense; I'll give it a try as soon as I finish these pieces. The deadline is awfully close. Meanwhile I'll stick to the 'slowing down' technique, which is less time-consuming than writing all the movie files again.

The thing that concerns me is that there should be nothing in the code to produce this result; everything is specified as 30 fps. Maybe it has something to do with the codec I chose to write the movie file in. Not sure, though.

Thanks a lot for your time, Dave. I'll try to put my work on the web somewhere when I manage to make (and afford) a web page for myself.

