
Can't record a canvas that consists of objects that react to an audio file #28

Open
frizurd opened this issue Apr 28, 2020 · 8 comments

frizurd commented Apr 28, 2020

Hey there,
I really love the plugin and thank you very much for sharing it with us 🙏

I'm trying to record a local webpage that consists of multiple HTML canvases and an HTML audio element. The canvases animate in response to the audio file; I'm hoping to record the movement and merge the MP3 file with the video after the video has been created.

In the preparePage function I trigger the page to play the audio element, which triggers the canvases to animate; that all works fine. But the actual video is not realtime/aligned with the audio file: the video skips a lot of frames in between. It feels like it's only recording at 1 FPS.

Is there some way of making this awesome plugin work for my use case or am I misunderstanding something?

Thank you in advance.

@tungs tungs self-assigned this Apr 29, 2020
@tungs tungs added the enhancement New feature or request label Apr 29, 2020
tungs (Owner) commented Apr 29, 2020

Hi, thanks for filing this! I was wondering when I wrote the video handling code whether anyone would have this use case. Thanks for an actual real world case!

Currently the audio element isn't supported (though theoretically, it should be pretty easy to support by editing media-time-handler.js in timesnap).

You may have some luck just changing the audio element to a video element-- I believe audio files can work as video files, and there is some support for video elements. In a future version, I'll try to add audio support.

frizurd (Author) commented Apr 29, 2020

> Hi, thanks for filing this! I was wondering when I wrote the video handling code whether anyone would have this use case. Thanks for an actual real world case!
>
> Currently the audio element isn't supported (though theoretically, it should be pretty easy to support by editing media-time-handler.js in timesnap).

Thanks a lot for your time!

I tried this and it works. I adjusted the node name checks to match 'audio' instead of 'video'.
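For illustration, the kind of change being described might look something like this (a hypothetical sketch, not the actual code in timesnap's media-time-handler.js):

```javascript
// Hypothetical sketch of widening a media-node check from video-only to
// video + audio (illustrative; not copied from timesnap).
function isHandledMediaElement(node) {
  const name = node.nodeName.toLowerCase();
  return name === 'video' || name === 'audio'; // originally: name === 'video'
}
```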

I drew an MP3 player and it gets played and rendered correctly. But for some reason the audio visualization (connected via AudioContext/AnalyserNode) doesn't get shown. The animation is drawn on a canvas in a requestAnimationFrame loop, using data from the analyser. It works perfectly fine if I open the page in the browser. I'm trying to figure out what the cause can be; do you have any suggestions?

Once again, thanks a lot for your time!

> You may have some luck just changing the audio element to a video element-- I believe audio files can work as video files, and there is some support for video elements. In a future version, I'll try to add audio support.

I tried to do this first but it gives me the same problem.


Before I edited the media-time-handler.js file, when I was still using the audio element, timecut recorded both the audio visualization and the MP3 player correctly, even though the result was sped up or running at 1 FPS.

tungs (Owner) commented Apr 30, 2020

timecut and its underlying library timesnap work by implementing custom requestAnimationFrame functions that can be manually called on demand, essentially creating a virtual timeline. In your case, I suspect the audio is playing in real time, while the function being called in requestAnimationFrame is either chunking or missing data from that real time player.

I'm not very familiar with how AnalyserNodes work, but I suspect it'll be tricky to incorporate real time elements (from the AnalyserNode) with the virtual time elements (from timecut/timesnap). It might be possible to move everything to virtual time via a custom modification of AudioContext and/or AnalyserNode, but that would require some effort to look into. Do you have a sample project you can post here?
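The virtual-timeline idea described above can be sketched roughly like this (a simplified illustration, not timesnap's actual implementation): callbacks queued via requestAnimationFrame are never fired by the browser's real clock; instead, the capture loop advances virtual time and flushes them manually, one batch per captured frame.

```javascript
// Simplified sketch of a virtual timeline (not timesnap's actual code).
function createVirtualTimeline() {
  let virtualNow = 0;   // virtual time in milliseconds
  let callbacks = [];   // callbacks waiting for the "next frame"

  return {
    // Drop-in replacement for window.requestAnimationFrame
    requestAnimationFrame(callback) {
      callbacks.push(callback);
    },
    // Called once per captured frame: advance time, fire pending callbacks
    step(msPerFrame) {
      virtualNow += msPerFrame;
      const pending = callbacks;
      callbacks = []; // callbacks re-registered during the flush wait for the next step
      for (const cb of pending) cb(virtualNow);
    },
    now() {
      return virtualNow;
    },
  };
}
```

Animations driven by the overridden requestAnimationFrame then advance exactly one frame per screenshot, no matter how long each capture takes -- which is also why anything tied to real time, like a live AnalyserNode, falls out of sync with the captured frames.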

tungs (Owner) commented Apr 30, 2020

I should also add that videos modified via timesnap aren't really "played"; rather, the video is paused and then seeked to the appropriate time for each frame. This approach won't work for audio elements that need to be playing for AnalyserNodes to be able to receive data. It might be possible to manually collect and send the data in virtual time, but even if it is possible, it would take a significant amount of effort to implement.
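The pause-and-seek approach can be sketched as a per-frame seek helper (a simplified illustration with hypothetical names, not timecut's API):

```javascript
// Sketch of the pause-and-seek approach: the media element is never "played";
// for each frame, currentTime is set and we wait for the 'seeked' event before
// taking the screenshot. (Illustrative helper, not timecut's actual code.)
function seekTo(media, timeInSeconds) {
  return new Promise((resolve) => {
    media.addEventListener('seeked', function onSeeked() {
      media.removeEventListener('seeked', onSeeked);
      resolve();
    });
    media.currentTime = timeInSeconds;
  });
}

// Per-frame capture loop; takeScreenshot is a placeholder for the real capture.
async function captureFrames(media, fps, totalFrames, takeScreenshot) {
  media.pause();
  for (let frame = 0; frame < totalFrames; frame++) {
    await seekTo(media, frame / fps);
    await takeScreenshot(frame);
  }
}
```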

frizurd (Author) commented Apr 30, 2020

> timecut and its underlying library timesnap work by implementing custom requestAnimationFrame functions that can be manually called on demand, essentially creating a virtual timeline. In your case, I suspect the audio is playing in real time, while the function being called in requestAnimationFrame is either chunking or missing data from that real time player.
>
> I'm not very familiar with how AnalyserNodes work, but I suspect it'll be tricky to incorporate real time elements (from the AnalyserNode) with the virtual time elements (from timecut/timesnap). It might be possible to move everything to virtual time via a custom modification of AudioContext and/or AnalyserNode, but that would require some effort to look into. Do you have a sample project you can post here?

Yes, this is a very simple example of what I'm trying to record.

```js
let audio = new Audio();
audio.src = '/audio/track.mp3';
audio.controls = true;
audio.loop = true;
audio.autoplay = false;

// Establish all variables that the analyser will use
let canvas, ctx, source, context, analyser, fbc_array, bars, bar_x, bar_width, bar_height;

function initMp3Player() {
  document.getElementById('audio').appendChild(audio);

  window.AudioContext = window.AudioContext || window.webkitAudioContext;
  context = new AudioContext();

  analyser = context.createAnalyser();
  canvas = document.getElementById('visualizer');
  ctx = canvas.getContext('2d');

  // Route the audio element through the analyser to the speakers
  source = context.createMediaElementSource(audio);
  source.connect(analyser);
  analyser.connect(context.destination);
  frameLooper();
}

function frameLooper() {
  window.requestAnimationFrame(frameLooper);
  fbc_array = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(fbc_array);
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#00CCFF';
  bars = 100;
  for (let i = 0; i < bars; i++) {
    bar_x = i * 3;
    bar_width = 2;
    bar_height = -(fbc_array[i] / 2);
    ctx.fillRect(bar_x, canvas.height, bar_width, bar_height);
  }
}
```

frizurd (Author) commented May 3, 2020

It took me a minute but I found a way to do it.

1. Preprocess the audio by using an OfflineAudioContext.
2. Add an onseeking event listener to the audio element and draw on the canvas whenever it fires; control how many draws happen via the timecut fps option.
3. On every onseeking event, get the frequency data for the given time from the OfflineAudioContext.
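A rough sketch of this approach, assuming the track has already been decoded into an AudioBuffer (browser-only Web Audio APIs; function names are illustrative, not taken from the original project):

```javascript
// Pure helper: the virtual timestamps at which each frame samples the audio.
function frameTimes(durationSeconds, fps) {
  const times = [];
  for (let frame = 0; frame / fps < durationSeconds; frame++) {
    times.push(frame / fps);
  }
  return times;
}

// Sketch of preprocessing with an OfflineAudioContext: suspend rendering at
// each frame time, capture the analyser's spectrum, then resume. Runs faster
// than real time, so it fits timecut's virtual timeline. (Suspend times are
// quantized to render-quantum boundaries, so captures are approximate.)
async function precomputeFrequencyFrames(audioBuffer, fps) {
  const ctx = new OfflineAudioContext(
    audioBuffer.numberOfChannels, audioBuffer.length, audioBuffer.sampleRate);
  const analyser = ctx.createAnalyser();
  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(analyser);
  analyser.connect(ctx.destination);

  const frames = [];
  for (const t of frameTimes(audioBuffer.duration, fps)) {
    ctx.suspend(t).then(() => {
      const data = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(data);
      frames.push(data);
      ctx.resume();
    });
  }
  source.start(0);
  await ctx.startRendering();
  return frames; // frames[i] is the spectrum for frame i; look it up in onseeking
}
```

The onseeking handler then just maps `audio.currentTime` to a frame index (`Math.round(audio.currentTime * fps)`) and draws the precomputed spectrum, so no real-time playback is needed during capture.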

tungs (Owner) commented May 3, 2020

Awesome! Glad to hear that you got it working. If you eventually want to share the end result and the code, I'm interested in seeing it.

dweekly commented Nov 12, 2020

> It took me a minute but I found a way to do it.
>
> 1. Preprocess the audio by using an OfflineAudioContext.
> 2. Add an onseeking event listener to the audio element and draw on the canvas whenever it fires; control how many draws happen via the timecut fps option.
> 3. On every onseeking event, get the frequency data for the given time from the OfflineAudioContext.

I'd be interested to see, too, @frizurd!
