What is/How do I apply the Haas effect, make binaural beats...and other weird dsp stuff


Anyone got any tips or info regarding the Haas effect, making binaural beats, and any other weird digital signal processing stuff…


Oh you’re back? :star_struck:

I had a thread on the last forum dedicated to making binaural beats that took me hours to make :sob:. What’s the Haas effect, though?


Honestly it’s like trying to quit smoking, but before I fuck off just wanna help rebuild the information database…

But anyway, I’m not sure, but I think the Haas effect is pre-delaying a signal before panning the copies to the left and right stereo channels…but I’m not sure…@parricide also mentioned this weird effect that involves amplitude modulation or something, I can’t remember… binaural beats I’m guessing is playing with the stereo field, but then there are phase cancellation issues as well as some other stuff…I’m not too sure, so I was wondering if anyone is more knowledgeable…I could Google it but there’s too much stuff to sift through…so…here I am…


The Haas Effect is where the brain can’t distinguish two sounds as separate if there’s less than ~40ms between them. That’s why early reflections/short pre-delay make things sound ‘bigger’ - we’re hearing separate sounds, but our brain strings them together.

It’s also a key element of stereo widening. Try duplicating a track, panning the two copies hard left and right, and throwing a 15-20ms delay on one. That’s the Haas Effect at work.
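In code terms, that widening trick is just a per-channel sample delay. Here’s a minimal sketch of the idea (class and method names are made up, this isn’t any particular SDK’s API):

```java
// Haas-style widener sketch: duplicate a mono signal into left/right
// and delay the right copy by ~15 ms worth of samples.
public class HaasWidener {

    // Returns {left, right}: right is the input delayed by delaySamples,
    // zero-padded at the front.
    public static double[][] widen(double[] mono, int delaySamples) {
        double[] left = mono.clone();
        double[] right = new double[mono.length];
        for (int i = delaySamples; i < mono.length; i++) {
            right[i] = mono[i - delaySamples];
        }
        return new double[][] { left, right };
    }

    public static void main(String[] args) {
        // 15 ms at 48 kHz, computed with integer math to stay exact
        int delay = 15 * 48000 / 1000; // 720 samples
        double[] mono = new double[48000];
        for (int i = 0; i < mono.length; i++) {
            mono[i] = Math.sin(2 * Math.PI * 440 * i / 48000.0);
        }
        double[][] lr = widen(mono, delay);
    }
}
```

Somewhere in the 10-40ms range the doubled copy stops reading as an echo and starts reading as width; below a few ms you get comb filtering instead.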


I wrote about this in a thread a while back and linked this article:

It was @gbsr who first mentioned the notion to me about delaying timings up to like 32ms and how your ear perceives the sound as one. Simple and great to keep in mind for drum timings or staggering a bass hit behind a kick, etc. Also helps to re-adjust quantization.

You basically use this idea every day in music if you’re timing drums. By delaying the snare just slightly behind a kick you give both their space, and they sound like one thing.

The Haas effect would be taking two identical signals, hard panning them left and right, and delaying just one of them by up to 40ms.

The plucky stuff that sounds like a card flicking through a bicycle tire in this is a form of the Haas effect:

I actually took one mono track, duplicated, staggered one and then did some fades in and out to have them come in and out separately, as an effect. I can’t remember if i pitched one over the other but I believe there are some volume differences.


A synthesizer like Sytrus is perfect for making binaural beats and isochronic tones, because it has six oscillators, both FM and RM matrices, and six main outputs with pan control, not including the filter outputs.

For binaural beats you can pick two oscillators and offset one by, say, 15 Hz; that way every note you play has the same difference. Then route one to the left channel and one to the right. Or just pick exact frequencies like 200 Hz and 215 Hz, but then it’s static at those frequencies and won’t track notes on the piano roll.
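The two-detuned-oscillators idea is easy to sketch in plain code too, outside of any synth. A minimal version (class and method names are made up here):

```java
// Binaural beat sketch: one sine per ear, offset by beatHz, so the
// perceived beat rate is the difference between the two frequencies
// (e.g. 200 Hz left, 215 Hz right -> 15 Hz beat).
public class BinauralBeat {

    // Returns {left, right} sample buffers.
    public static double[][] generate(double baseHz, double beatHz,
                                      double seconds, int sampleRate) {
        int n = (int) (seconds * sampleRate);
        double[] left = new double[n];
        double[] right = new double[n];
        for (int i = 0; i < n; i++) {
            double t = (double) i / sampleRate;
            left[i]  = Math.sin(2 * Math.PI * baseHz * t);
            right[i] = Math.sin(2 * Math.PI * (baseHz + beatHz) * t);
        }
        return new double[][] { left, right };
    }
}
```

The key point is that the beat only exists in your head: each channel on its own is a plain sine, so there’s nothing at 15 Hz to see on a spectrum analyzer of either side.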

Isochronic tones can be done at the same frequency, or in addition to the binaural beats; they’re made by pulsing the amplitude on and off at a given rate. So you can take two more oscillators, choose the Hz you want (maybe the same as the binaural beat frequency of 15 Hz), and RM-modulate the main (binaural) source.
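Stripped of the synth routing, an isochronic tone is just square-wave amplitude modulation on a carrier. A rough sketch (names made up, not a real SDK):

```java
// Isochronic tone sketch: a carrier whose amplitude is gated fully on
// and off at pulseHz (square-wave amplitude modulation).
public class IsochronicTone {

    public static double[] generate(double carrierHz, double pulseHz,
                                    double seconds, int sampleRate) {
        int n = (int) (seconds * sampleRate);
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double t = (double) i / sampleRate;
            // gate is 1.0 for the first half of each pulse period, else 0.0
            double gate = (t * pulseHz) % 1.0 < 0.5 ? 1.0 : 0.0;
            out[i] = gate * Math.sin(2 * Math.PI * carrierHz * t);
        }
        return out;
    }
}
```

In practice you’d want to ramp the gate over a few samples instead of hard-switching it, or the edges click; the hard gate here just keeps the example short.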

I bet FM8 would be good for this too although I haven’t used it.

If you use FL studio I can send you a Sytrus patch.



Anyone got experience with coding mixers & VCAs? I’m getting some weird values with mine when I use them in specific ways, and I’m going to try to make this pseudocode make some sense:


mixer dsp block():
    accumulator = 0
    for each input i:
        accumulator += input[i]
    output = accumulator


vca dsp block():
    output = input * vca_input

Basically what’s happening is that when I use them in tandem, I get leftover values when the VCA signal goes low, almost like one of them isn’t initializing back to 0 which makes no sense (they’re also immune to DC blockers, so it must be pretty bad). Any insight is appreciated, as I’m kind of new at even the most basic ass DSP like this.

Also, they work fine otherwise, and when used separately. But together, there’s just a leftover noise floor that is defying all logic for me. It also seems to jump from infrasonic to ultrasonic, which might explain why DC blockers aren’t doing shit.


What are you coding this in?


This thing here. It uses Java and lets you code up your own modules for VM


Have you tried using a debug print in order to find out what is happening in the program when you use the VCA?


There shouldn’t be any black magic there if you’re entirely in the box (things can get a little more wonky if you’re actually processing ADC signals) - VCA is, as you’ve succinctly written, just a scalar value applied to the waveform. A mixer just adds amplitudes. Pretty simple stuff programmatically as far as DSP goes.

I have no experience with Voltage Modular, does it allow for any sort of console output? When I troubleshoot stuff like this I like to see the actual numbers and what’s happening to them, because I’ve found that’s the easiest way to see where things go wrong. You might run the code itself through a Java interpreter with dummy values for your input, just to see what the code is doing.

I’d also test it with the simplest scenario, like a looping sine wave. You could even mix an offset so you null the signal. If you do that and get some console output going, I’d guess it’ll show you where the issue is. If I had to hazard a guess, it’d be either something in your control flow logic or something external to your code that you may have to work around.


I think it actually does have these, so that’s a good idea. The only real issue is that everything is compiling OK, so it might not pick up on anything strange, but it’s worth a try just on the off-chance that it can point something out in the process!

Yeah, thankfully for stuff like this, the ADC / DAC stuff is already configured or else I’d be completely lost (GUI jacks just have cool little methods like ‘getInput’, etc so you don’t have to worry about the details as much – blessing and a curse, of course). That’s actually a good idea to log the numbers, I think it takes an input as a double and spits out a double output, but from my other surface-level DSP journeys, I’m a little confused as to how they’re not using arrays for the DSP math since everything else seems to do that.

I think I can at least log the double values to GUI elements if nothing else, so even if it’s weird it should give me a genuine idea as to what’s going haywire. I definitely didn’t think about this for some reason!


Debug runs with the compiled code, so it will tell you if there are numerical errors.


Probably showing my ass here, but are we talking about an actual ADC or a virtual one? Are you processing external audio through your interface or are you using something generated in another VM module? If VM is doing some sort of trickery to emulate an analog environment, there could be some crud introduced from that layer.

They are using arrays (or a similar data structure) somewhere to store a shitload of doubles. It may be way down the stack, but I’ve only ever seen effective audio processing as a double buffer or ring buffer, and it has to be storing upcoming audio data somewhere. Every OS is different, but assuming Windows, the OS provides a couple of different options for native audio buffers depending on the API, so VM is likely using that and you don’t see it or have to deal with it. The doubles are just handing you the current thing in/out of the buffer without telling you what’s before and after it.

When I program, I find it helpful to remind myself that literally anything that happens on a computer is just data being transformed… data in > transform > data out, it’s all computers do. That data is always just memory (specifically cache) at the time of its transform.

The problem (and my guess is it’s the root of your larger problem) is that you’ve got miles of abstraction here - VM’s Java-based SDK on top of VM’s core code (which is likely C/C++) on top of an OS on top of a kernel. That means getting at what’s actually happening is hard, in that you don’t get to just look at the memory and see what’s happening to it, and anything in that stack can be affecting your data in a way that you can’t reason about, because from your mile-high view things at the bottom are opaque. Likewise, you’re calling functions in the SDK that then call the underlying code in VM itself, which then hooks into the OS and audio drivers - the SDK documents a ton of functions that do a thing, but they don’t actually do anything themselves; they just call underlying code you can’t see, which then hooks into other things, and you’ve got a virtual Plinko game where you’re dropping data through this mess of shit and hoping it comes out right at the bottom. /endrant

So what to do? Keep it simple. Start with sine waves since they have a predictable range. Use the most granular, simple functions you can. Log everything at every step. If you put in a 1 and expect a 2 out, confirm that’s happening. Slowly build up that function stack until you can prove and reason about what’s happening in between and see where it breaks.


You’re definitely right about this, and it’s weird that a lot of the things that make platforms like this easier to code on obscure the meaning behind it all (not that I necessarily want to do the binary-to-hex converting or any of that, but if I had a way to see inside it I might have a better idea). The reason there’s definitely a current double in the chamber (or, as far as the user can see, even if it’s a buffer) at any given time is that it handles audio the same way it handles CV (which is likely very true to life in the analog domain, since they’re technically interchangeable values).

I think you’re right about starting with the most basic components, too, so maybe even testing 0s and 1s in terms of CV will reveal more than what can be seen in the (normal) audio domain. This could explain why my logic modules seem to function without an issue unless I flip a boolean wrong or something, because it’s a little easier to debug highs and lows like that to get a feel for what’s going on.


Yeah, high level abstractions are great when they work; you can do a lot with a little code. I’d even argue they’re okay when they don’t work if you can see the underlying code, but troubleshooting a black box is a hair pulling experience.

To be fair, looking at an audio buffer isn’t much fun - 48k samples per second over a couple of seconds is a metric fuckton of data that’s hard to get any sort of picture of, so it makes sense to just poll every 100-200ms or something to figure out how it’s changing, or throw in asserts so it’ll break and alert if it goes over/under a value.

Sounds like you’re on the right track for figuring it out. I’ve certainly been there where I had to back way up to simple chunks of code to figure out why something was exploding. Just remember, troubleshooting is the real fun :stuck_out_tongue:


I think this one was a real ID10T error on my part, but you cracked it. Nice work!

So I’m guessing that while it’s operating on the double array (as you mentioned), you (the user) only have access to the current double at the array index (with no way of changing that, probably for safety reasons, although anything is possible with a delay buffer). From what I can gather, when I told the VCA to output the input value when the voltage is high, I didn’t set a condition for what to do when it was low, so it grabbed the last double or something and just held onto it (which might explain the infrasonic and ultrasonic frequencies firing at random). Setting that to absolute zero seems to be the fix, so it nullifies that last value.
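For anyone who finds this thread later, the bug as described boils down to something like this (a reconstruction with made-up names, not the actual module code):

```java
// Per-sample VCA sketch showing the stale-value bug described above:
// writing the output only when the control voltage is high means the
// last computed sample gets held (and repeated) whenever CV goes low.
public class Vca {
    private double lastOut = 0.0;

    // Buggy version: no else-branch, so lastOut is held when cv <= 0.
    public double processBuggy(double input, double cv) {
        if (cv > 0.0) {
            lastOut = input * cv;
        }
        return lastOut;
    }

    // Fixed version: explicitly output silence when the CV is low.
    public double processFixed(double input, double cv) {
        lastOut = (cv > 0.0) ? input * cv : 0.0;
        return lastOut;
    }
}
```

A held non-zero sample is effectively a DC step, and if the CV toggles fast it turns into a weird rectangular wave, which fits the “infrasonic to ultrasonic” junk that a DC blocker can’t keep up with.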

Damn, a little closer to understanding the beast. And the dopamine cycle continues :D. I guess that makes sense as to why my seemingly arbitrary mixer accumulator thing works just fine. It was just the VCA!


This might even get me a step closer to using some of the cool Java / Processing DSP libraries in my projects, although the trick is going to be grabbing their arrays one index at a time in the DSP domain and doing accurate in and out audio processing with them. At least I feel like this might be doable with enough time and dedication, even if I never really learn true DSP algorithms. Sometimes libraries and a few tweaks can make for a good time.


Glad you got your module fixed, that’s a big win! :smiley:

I think this is really illustrative of what a deep, dark hole both audio programming and language abstraction in general can be. Knowing how things work at the very bottom is helpful for both design and troubleshooting, but you could easily take a 6 week detour into learning the details and never get anything created (though I’d argue it’d benefit everything you do afterwards) - just how much you need to know to get a project done is a moving target.

Case in point, I should point out that when I said ‘double buffer’, I wasn’t talking about a buffer of doubles (which I realized is how it could be taken in context). A double buffer is actually two buffers (ie contiguous memory blocks, often an array of some kind) that the audio stream is loaded into alternately; the program bounces between them to get current data. That lets you efficiently load/manipulate upcoming data in the second buffer while playing from the current buffer. When the current buffer gets (close) to the end, the program starts reading from the second buffer that you just loaded and processed, the current buffer becomes the load/process-upcoming buffer, and the program swaps back and forth. You know that buffer size/latency in samples that you set in literally every audio program ever? That’s how big those buffers need to be so that you can get good data out before the swap and not run out of data to read (a buffer underrun).

Double buffering of one shade or another is the most common implementation for most real-time audio engines, and I’d bet Voltage is using it at the bottom of the stack. Do you need to know that? Probably not, unless you start having buffer underruns that the Voltage stack doesn’t handle or other weirdness, then there’s nothing you can do other than throw up your hands and walk away or start digging deeper than what the Java API exposes.
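The shape of the idea fits in a few lines, if it helps to see it. This is a toy, not what any real engine or Voltage itself does - real implementations hang this off the OS audio callback:

```java
// Toy double buffer: the reader plays from 'front' while the app
// fills 'back'; swap() exchanges the two when the reader reaches the
// end of 'front'.
public class DoubleBuffer {
    private double[] front;
    private double[] back;

    public DoubleBuffer(int size) {
        front = new double[size];
        back = new double[size];
    }

    public double[] front() { return front; } // being played right now
    public double[] back()  { return back; }  // being filled for later

    // Called when the reader hits the end of 'front'.
    public void swap() {
        double[] tmp = front;
        front = back;
        back = tmp;
    }
}
```

The buffer size is the tradeoff knob: bigger buffers give the fill side more headroom before the swap (fewer underruns), at the cost of latency.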

You might look into the WAV format, which is simple, well documented, platform independent and easy to read and write. It’s a fun and educational exercise to write a program from scratch to handle them (try writing a couple seconds of a sine wave into a WAV file, then read it back into an array and maybe do something with it). Once you get a simple framework for that (probably 100-150 loc), you can add some basic DSP in the mix. It’s not real-time, but it’d show you quick and dirty what the DSP code is doing without the overhead and potential confusion of that whole crazy software stack.
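To make that exercise concrete, here’s roughly what the core of it looks like for the simplest case - 16-bit mono PCM, hard-coding the standard 44-byte header layout. This is a sketch that only reads back files it wrote itself, not a general WAV parser:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal 16-bit mono PCM WAV writer/reader, from scratch.
public class WavSketch {

    public static void write(Path path, short[] samples, int sampleRate) {
        int dataLen = samples.length * 2;
        ByteBuffer b = ByteBuffer.allocate(44 + dataLen)
                                 .order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes()).putInt(36 + dataLen).put("WAVE".getBytes());
        b.put("fmt ".getBytes()).putInt(16)
         .putShort((short) 1)          // audio format: PCM
         .putShort((short) 1)          // channels: mono
         .putInt(sampleRate)
         .putInt(sampleRate * 2)       // byte rate = rate * block align
         .putShort((short) 2)          // block align: 2 bytes/frame
         .putShort((short) 16);        // bits per sample
        b.put("data".getBytes()).putInt(dataLen);
        for (short s : samples) b.putShort(s);
        try {
            Files.write(path, b.array());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Assumes the fixed header layout written above: data length at
    // byte 40, samples starting at byte 44.
    public static short[] read(Path path) {
        try {
            ByteBuffer b = ByteBuffer.wrap(Files.readAllBytes(path))
                                     .order(ByteOrder.LITTLE_ENDIAN);
            b.position(40);
            int dataLen = b.getInt();
            short[] samples = new short[dataLen / 2];
            for (int i = 0; i < samples.length; i++) samples[i] = b.getShort();
            return samples;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Write a couple of seconds of a sine into `write()`, pull it back with `read()`, and you’ve got a raw sample array to run DSP experiments on, no real-time stack in the way.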