
[Blender Plugin] Add functionality for frame-by-frame 'baking' at some point once WavShaper is ready

DJLevel3 opened this issue 1 year ago · 6 comments

Context

I will try to implement this myself in the next few hours, but for context: I'm developing an audio plugin called WavShaper, and I want to add the ability to animate the shapes instead of only using static ones. My idea for the implementation is to keep using an audio file as the source, but instead of reading one cycle of the same shape over and over, read one frame of animation per cycle of audio (so 1 second at 10 Hz = 10 frames).
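
Here is a minimal arithmetic sketch (C++, illustrative names only, not existing osci-render code) of the frame-to-sample mapping I mean, assuming a 48 kHz sample rate and one animation frame per audio cycle:

// Illustrative constants: 48 kHz audio, 10 Hz frame rate -> 4800 samples per frame
constexpr int sampleRate    = 48000;
constexpr int frameRate     = 10;
constexpr int sampsPerFrame = sampleRate / frameRate; // 4800

// First sample index of animation frame `frame` in the baked audio file;
// frame `frame` then occupies samples [frameStart(frame), frameStart(frame + 1))
constexpr long long frameStart(int frame) {
    return static_cast<long long>(frame) * sampsPerFrame;
}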

Suggestion

So far I have been going through the audio from osci-render by hand and clipping out each frame, but in the future I want to be able to use osci-render's audio directly. This may be possible in a Lua plugin (I'm not sure), but what I'm imagining is a button, either in osci-render or in the Blender plugin, that steps through the animation frame by frame and saves 4800 samples for each frame to the same audio file. Blender definitely has some kind of animation-baking feature, which could almost certainly be used with osci-render.
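
As a rough sketch of the baking loop I have in mind (C++ to match the snippet below; renderFrameSamples() is a placeholder I made up, not an existing osci-render or Blender API):

#include <array>
#include <cstddef>
#include <vector>

constexpr int kSampsPerFrame = 4800; // 0.1 s at 48 kHz, matching WavShaper

// Placeholder: assumed to render exactly nSamples stereo samples of one animation frame
std::vector<std::array<double, 2>> renderFrameSamples(int frame, int nSamples);

std::vector<std::array<double, 2>> bakeAnimation(int nFrames) {
    std::vector<std::array<double, 2>> baked;
    baked.reserve(static_cast<std::size_t>(nFrames) * kSampsPerFrame);
    for (int frame = 0; frame < nFrames; frame++) {
        // Append one 4800-sample cycle per frame to a single contiguous buffer
        std::vector<std::array<double, 2>> cycle = renderFrameSamples(frame, kSampsPerFrame);
        baked.insert(baked.end(), cycle.begin(), cycle.end());
    }
    return baked; // then normalize (see the code below) and write to one audio file
}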


Some additional ideas to make it even more useful in other applications:

  1. Automatically maximize output volume (code below)
  2. Allow setting the frequency of frames
  3. Allow setting the duration or number of cycles recorded for each frame (ideas 2 and 3 are sketched just after this list)
  4. Automatically set some settings based on others?
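
Ideas 2 and 3 boil down to a little arithmetic on the frame layout (illustrative C++ again, all names made up):

// Samples in one cycle at a given frame frequency, e.g. 48000 / 10 Hz = 4800
constexpr int sampleRate = 48000;

constexpr int samplesPerCycle(double frameFrequencyHz) {
    return static_cast<int>(sampleRate / frameFrequencyHz + 0.5);
}

// Total samples recorded for one frame when repeating it for several cycles
constexpr int samplesPerFrame(double frameFrequencyHz, int cyclesPerFrame) {
    return cyclesPerFrame * samplesPerCycle(frameFrequencyHz);
}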

Code to maximize the output volume

Written in C++, but should translate really easily to Java. The math syntax is super similar.

#include <cmath> // needed for std::abs

// Assumed context: sampleArray[nFrames][sampsPerFrame][2] holds stereo double samples

// Normalization factor (maximum sample absolute value)
double norm = 0;

// Floating-Point Calculation and Recording Pass
for (int frame = 0; frame < nFrames; frame++) {
    for (int sample = 0; sample < sampsPerFrame; sample++) {

        // ----- Calculate the sample and store it in sampleArray[frame][sample][channel] -----

        // Left Channel
        if(std::abs(sampleArray[frame][sample][0]) > norm) {
            norm = std::abs(sampleArray[frame][sample][0]); // Math.abs() in Java
        }

        // Right Channel
        if(std::abs(sampleArray[frame][sample][1]) > norm) {
            norm = std::abs(sampleArray[frame][sample][1]); // Math.abs() in Java
        }
    }
}

// Reduce unnecessary floating-point divisions, they're way slower than multiplications
// (assumes norm > 0, i.e. the audio isn't completely silent)
norm = 1.0 / norm;

// Floating-Point Normalization Pass
for (int frame = 0; frame < nFrames; frame++) {
    for (int sample = 0; sample < sampsPerFrame; sample++) {
        sampleArray[frame][sample][0] *= norm;
        sampleArray[frame][sample][1] *= norm;
    }
}

// ----- Convert the samples to whatever format (int16, int24, float32, etc) is needed -----

// ----- Store the converted samples to the output file -----

DJLevel3 (Sep 09 '22, 16:09)

This might be possible with a Lua script; I'll have to look into it.

DJLevel3 (Sep 09 '22, 16:09)

This is a great idea! Thanks for the suggestion and code snippets. I don't think this would be very hard to do - it would just need the line data for each frame to be stored, which is easy.

Could you explain why you can't just record the audio live from Blender as it plays back the animation rather than changing each frame manually? Is the performance not good enough when you do this?

I'm thinking of a way of cycling through frames and then being able to configure different settings for each frame, like the frequency.

jameshball (Sep 09 '22, 19:09)

Sorry I'm late! I need exactly one frame per 4800-sample cycle because WavShaper constructs a shape out of the first 4800 samples of an audio file (0.1 s at a 48000 Hz sample rate). I want to add multiple frames by reading successive multiples of 4800 samples, so each frame needs to be exactly 4800 samples long and 4800 samples apart, with absolutely no variance. If there's any lag, things break.

This means I need sample-accurate control over when each frame starts and ends, and there is some lag when capturing frames live from Blender. I've had it tank down to 5 rendered frames per second on particularly bad models that were intended to run at 25 fps. Those models were absurdly complex; the worst lag I had was on the front grille of a car, which had hundreds of square holes.

Also, if storing the data per frame ends up being too memory-expensive (it probably won't), you can do two full rendering passes: one to calculate the normalization factor and one to apply it. That would take roughly twice as long, though.
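
For illustration, the streaming two-pass version could look roughly like this (C++ sketch; renderFrameSamples() and the sample writer are placeholders, not existing APIs, and the renderer is assumed to produce identical output on both passes):

#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Placeholder: assumed to re-render one frame as nSamples stereo samples
std::vector<std::array<double, 2>> renderFrameSamples(int frame, int nSamples);

void bakeTwoPass(int nFrames, int sampsPerFrame) {
    // Pass 1: render everything once, only to find the peak absolute sample value
    double peak = 0;
    for (int frame = 0; frame < nFrames; frame++) {
        for (const auto& s : renderFrameSamples(frame, sampsPerFrame)) {
            peak = std::max({peak, std::abs(s[0]), std::abs(s[1])});
        }
    }
    const double norm = peak > 0 ? 1.0 / peak : 1.0;

    // Pass 2: render again, scale, and stream straight to the output file
    for (int frame = 0; frame < nFrames; frame++) {
        for (auto s : renderFrameSamples(frame, sampsPerFrame)) {
            s[0] *= norm;
            s[1] *= norm;
            // writeSample(s); // placeholder for converting and writing to the output file
        }
    }
}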

DJLevel3 (Sep 10 '22, 04:09)

Anyway, I had family matters today, so I didn't have time to write any more code (it's 22:50 on Sep. 9 my time). Hopefully I will tomorrow.

DJLevel3 (Sep 10 '22, 04:09)

That makes sense, thanks for the clarification! Are you working on this with the Java version on another branch, or do you want me to work on it?

jameshball (Sep 10 '22, 07:09)

I'll try my hand in a separate branch and if things go bad I'll let you know.


DJLevel3 (Sep 10 '22, 12:09)

#235 closes this

jameshball (Apr 28 '24, 21:04)