The DSP Livecode / Prototyping Thread

Side note…

Anyone have any cool ideas for a digital Ouija Board? I spent all of last night designing the board and planchette and importing everything into Processing, but right now it only functions as “the real thing” – it’s obviously not weird enough.

My only thoughts so far are MIDI and OSC controllers (I’m not very imaginative), but if anyone wants to do something cool with it, I can toss up the assets and the source – or, most likely just use your suggestion instead.

(Although now that I think about it, I could probably go way further with some of those DSP libraries, but using them imaginatively and with some kind of cohesive purpose is the hard part)

Might just need to create an X/Y coordinate table and then fill in the blank. What would Jesus do?
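Something like this toy sketch, maybe (every dimension and the arc layout here are invented, and the real thing would live in Processing – this is just the coordinate-table idea in Python):

```python
import math

def letter_coords(width=800, height=400, radius=300):
    """Hypothetical layout: 26 letters on a Ouija-style arc, as a lookup
    table from letter to planchette X/Y position on the board."""
    cx, cy = width / 2, height * 0.9
    coords = {}
    for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
        angle = math.pi * (1 - i / 25)      # sweep the arc left to right
        coords[ch] = (cx + radius * math.cos(angle),
                      cy - radius * math.sin(angle))
    return coords

table = letter_coords()
# "Fill in the blank": walk the planchette through each entry to spell a word.
path = [table[c] for c in "HELLO"]
```

From there the MIDI/OSC input would just pick which entry to glide toward next.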

Got this little (VST) guy rolling if anyone wants a backdoor key. It should work on all of the main DAWs, and pumps out as little audio as possible to make clean IRs. It automates, too.

If anyone has cool suggestions, I’m drawing up a ‘synth hacker’ / jammer sort of plugin complete with latch gates, velocity automation, independent voice controls and other things that can allow you to use your synths in strange ways for fast patch designing. If there’s anything you’d like to see, I’d love to implement it and give you a free copy just for being awesome :slight_smile:

@KvlT Didn’t know where else to put this and you’re the dude around here who needs to see it:

From the guy who made Supercollider. It seems to be purpose-built for interactive sound programming, specifically over arrays.


Wow, this looks interesting. The syntax looks a lot like ChucK from what I can tell so far. Have you made anything with it yet?

I have not. I probably won’t. This is just the weirdo nerd corner of the site where we collect these things :grin:

I honestly don’t know what problem this solves for me. The best thing I can say given the wholly unfair cursory glance I gave it is that some of the underlying src is neat.

Top to bottom, here’s where I’m at:
Reaktor - for when you want to make a musical thing from scratch. It’s modular for actual nerds.

PureData/Max - for when Reaktor won’t do what you want but you still want some kind of visual representation (or you just want to start from first principles).

SonicPi/Supercollider - for when you don’t want/care about a GUI but want to tweak and twerk.

C (sometimes Cmajor if I’m lazy) - for when you absolutely, no holds barred want to fuck with a sound. Nothing beats directly injecting/abusing 44,100 individual values every second. Ring buffers ftw.
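Since ring buffers came up, here’s the idea in a bare-bones Python sketch (numbers illustrative; the real thing would be C running in the audio callback):

```python
class RingBuffer:
    """Fixed-size circular buffer: write samples in, read them back `delay`
    samples later - the core of most delay/feedback effects."""
    def __init__(self, size):
        self.buf = [0.0] * size
        self.pos = 0

    def process(self, sample, delay, feedback=0.5):
        read = (self.pos - delay) % len(self.buf)   # wrap around the buffer
        out = self.buf[read]                        # the sample from `delay` ago
        self.buf[self.pos] = sample + out * feedback
        self.pos = (self.pos + 1) % len(self.buf)
        return out

rb = RingBuffer(44100)                  # one second of mono at 44.1 kHz
# Feed an impulse: it echoes back every `delay` samples, halving each time.
out = [rb.process(x, delay=3) for x in [1.0, 0, 0, 0, 0, 0, 0]]
```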

Assuming I’m decent with those things, I have more tools than 99.999% of the musicians on the planet, and it still doesn’t make me an awesome artist. This shit is interesting to play with but much like one more free plugin, another coding environment isn’t going to be a magic bullet. I think I’m better off refining what I’ve got and trying to put my efforts into actually sitting down and making sounds instead of learning a new syntax that likely won’t turn anything on its head.


I definitely agree with this, but I have to admit that sometimes it’s great (for me, at least) to find more “hands-off” approaches or new ways of seeing things, even if 99% of my prototypes or ideas can easily be done inside of Reaktor, or even the DAW itself.

I’ll never get around to them all either, unfortunately, but all of the frontends for Supercollider have been so much fun so far that it’s taking all I have in me not to explore just a little deeper, even if it’s likely to lead to diminishing returns and stumbling around needlessly.

I definitely think Sonic Pi, Tidal Cycles and possibly even Foxdot at least allow music to continue to flow with as few snags as possible while still interfacing with the beast that is Supercollider, so I might’ve ended up finding my perfect depth somewhere in the middle as well, without really having to take a deeper plunge. But it’s still very tempting :smiley:


For sure, I definitely think there are benefits to alternate perspectives or approaching problems in a different light. What I’m not sure about is the benefit of a similar ‘low level’ audio language/engine versus the amount of time it takes to get familiar with it.

I think this all comes down to workflow. For me, messing with this stuff isn’t generally performative, it’s procedural - I throw a sound into the blender and if anything interesting comes out I cut it out and use it as a sample. I’ve started to think of it as ‘digital musique concrète’, as pretentious as that sounds. I use Ableton for composition, which has a ridiculously complete set of tools for musically dicking with sounds and rhythms, so it really doesn’t make a lot of sense to reinvent the wheel in other software until it’s something I can’t pull off in a DAW.

My “experiments” usually start with a dumb idea about what happens if you do X to Y. Then I try to fit my mental model of that into my bag of tools. Sometimes that question comes from the tools themselves, like hooking up random shit in Reaktor or PD, and I guess that’s where different frameworks may shine a light on things I haven’t thought about, but more often the ideas are more conceptual than musical, at least to start with (eigenvalues of sample sets as FM carriers, quaternions as simultaneous ‘rotations’ through different domains, and other such nonsense).

So what’s interesting to me from a sound design perspective are the broad X/Y questions and how they’re implemented. For a given framework (ie Supercollider or similar functional programming space), I’m not sure how couching the content in a syntactically different but ultimately similar framework would get me to a different problem space, again at the cost of actually learning that framework. Maybe I’m wrong that these things would expose a whole new set of things to think about, but I only have so much time in my ADD-filled day.

tl;dr - I’ve currently got enough hammers and enough nails; I need to get better at not hitting my thumb.


@KvlT putting this here because I think you might enjoy/get something out of it, and frankly you’re probably the only other person here that would give a shit (and I’m not even sure about that lol).

It’s a deconstruction of how to take random signals and interpolate them to make a random-value LFO. It’s Max-specific on the surface but it’s really easy to generalize once he breaks down what each part is doing (and it’s mostly implemented in JS anyway). He reimplements it later in a different way to contrast with the original, which was cool. Once he explains the basic building blocks and how they’re used, I think you could set this up in just about any audio/scripting scenario.

Mostly what I enjoyed was the clear, concise explanation of how to approach a problem and the implementation of a solution. My brain immediately started pingponging to other ideas and I found it a bit inspiring. Maybe you will too.


Thanks so much for sharing! I can’t wait to check this out :smiley:


If you get anything out of it, I’d say the whole channel is probably worth perusing as it’s in a similar vein. Most of his content is based around timing and rhythm, so there’s cool stuff like implementing and modulating swing from basic principles.

I’d be interested to hear if you think any of this Max stuff is easily applicable to other programs you work in.

Man, I’m about halfway through just watching and digesting it as best I can (so far it looks like makeshift perlin noise?) and I can’t believe how fluid everything is in Max. I wish I could afford an environment like that - I’ve been trying to get PDLua to just accept and transform a few arguments / atoms for a while now, but it seems like you literally need to reload the entire plugin for any iteration to take hold, and at times it even just loads weird or doesn’t actually use the built-in mini IDE because… probably FOSS problems, admittedly.

The only cheap option I’ve found for doing anything a little deeper and having free access to the DAC is Plug’n Script, but admittedly I haven’t delved deeply enough into C++ in order to make use of some of the good libraries that are floating around on Github. It uses AngelScript, though, which might be a good way to actually integrate into that world. There’s a lot holding me back on stuff like this, but I really want to get a little deeper into it, too, despite the roadblocks.

lol I swear that wasn’t a pitch for Max, more just cool ideas and algorithms.

For me there are two interesting things happening in there. First is using a phasor as a timekeeping device (so the period of the phasor corresponds to a bar of music or whatever). You can also visualize it as a ramp from 0->1 that repeats over and over, as demonstrated in the video. When it finishes there is exactly one moment where [current value - previous value] < 0 (ie previous=1 at the top of the ramp, then it drops to 0 to start over, so for one sample length the difference is -1). Part of the video is setting that up, checking for that moment and turning it into a pulse. That pulse is the trigger to get a new random number (though you can use it to trigger all sorts of things), so you get a new LFO value every time the cycle repeats. There are a lot of things to unpack in that and a lot of ways I’m thinking about utilizing it.
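The wrap-detection part is tiny once you see it. A toy Python sketch of the trick (toy sample rate, not 44.1 kHz - powers of two just keep the float math exact here):

```python
def phasor(freq, sr):
    """Naive phasor: a ramp from 0 -> 1 that wraps once per period."""
    phase = 0.0
    while True:
        yield phase
        phase += freq / sr
        if phase >= 1.0:
            phase -= 1.0                # the wrap: this is the "pulse" moment

sr = 128
ramp = phasor(freq=2, sr=sr)            # two wraps per "second"
prev = next(ramp)
pulses = 0
for _ in range(sr):                     # one "second" of samples
    curr = next(ramp)
    if curr - prev < 0:                 # phasor just wrapped...
        pulses += 1                     # ...so fire a trigger here
    prev = curr
```

Swap the `pulses += 1` for "grab a new random number" and that’s the first half of the video.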

The other neat thing is interpolating between those values smoothly, which obviously has a ton of use for all kinds of glides in everything from LFOs to S&H to synths. I think the video demonstrates the process and math pretty well.
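For the glide half, linear interpolation toward each new target is enough to make the point (parameters here are arbitrary, just a sketch of the technique rather than the video’s exact math):

```python
import random

def random_lfo(period, total, seed=0):
    """Random-value LFO: grab a new random target every `period` samples
    and linearly interpolate toward it, so the output glides between
    values instead of stepping."""
    rng = random.Random(seed)
    current, target = rng.random(), rng.random()
    out = []
    for n in range(total):
        t = (n % period) / period           # 0..1 position within the segment
        out.append(current + (target - current) * t)
        if n % period == period - 1:        # segment done: pick the next target
            current, target = target, rng.random()
    return out

vals = random_lfo(period=16, total=64)
# Smoothness check: no step is ever bigger than one segment's slope (1/period).
max_jump = max(abs(b - a) for a, b in zip(vals, vals[1:]))
```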

It’s really what’s kept me there, a nice balance of low level access and ease of use. gen~ is the real star of the show these days since it does per-sample processing instead of the usual 128-1024 sample blocks, which opens up things like filter design and leveraging history. And it supports codebox/JS so you can just type in your stuff instead of fiddling around with all the objects which makes more sense in my head some days. But the ease of use is really the thing, it feels more like a playground than real programming as you don’t have the overhead of framework and management but all the access and performance, and everything just works like you’d expect.

I’m pretty out of the loop on pd’s functionality; I don’t think it has anything akin to gen~, but I believe most of the core Max objects have analogs. I’m sure there’s plenty of fertile ground there even without Lua externals and the like.

I’m not familiar with Plug’n Script, but it looks like you can just go crazy with C/C++ without the headache of dealing with the fiddly bits. That’s pretty huge, because managing pointers and lifetimes and scope is really the hard part of C; functions and data types are dead simple. All of that is 10x harder if you’re having to do your own OS-level handles and buffers for audio, which Plug’nScript seems to cover. Sounds like Blue Cat might be a good place to dive in if you’re okay without the visual coding aspect.

The funny thing is, I’ve never really been too comfortable with the visual environments (which is kind of weird, since modular is purely visual), but a lot of the time when using Reaktor, even for simple delay effects and things like that (nothing too advanced just yet!), I wish I could just open up a code box and write some loops to get things moving faster and to be able to mix and match on the fly. The visual stuff gets so messy so fast that, unless the patch needs that level of visualization (multiplexers and demultiplexers / selectors can certainly benefit from this sort of thing, and probably a lot of others), I’d much rather write my own little blocks of logic as part of it in order to understand it better and condense the trivial aspects instead of having them all spilling out everywhere. I know you can use expressions and all sorts of great stuff, but the flexibility of both really makes me wish I had Max.

Max’s implementation of that is pretty perfect, from what it looks like. That’s what got me interested in getting PDLua up and running in the first place, because I figured at least I’d be able to section off pieces of logic / loops that way as I work through the unsightly / clunky parts I don’t like and reach some sort of compromise, but I think Cycling really has the best of both worlds there. Unfortunately, $400 or however much it is now is still pretty steep!

The documentation and examples for PDLua are a bit haphazard, but I think I just figured out what I was missing, so it looks like this might be the next best thing. Thankfully it runs great in select DAWs, and I think I might’ve been misunderstanding a few things (maybe I can pass some of these newbie mistakes on to the next person, since I seem to be the lord of making dumb mistakes along the way). This is actually looking a lot more solid than I had originally thought, and without you reviving this thread, I probably wouldn’t have even tried again. I was still in ragequit mode, last time I checked.

I did check out Max’s documentation when running the demo, though, and I can say for sure that the price tag is definitely funding the goods in that department. Sometimes the best (or at least perfectly-decent) tools are hindered by the worst documentation, but it looks like I finally have something to work with, which is a start!

Although, I will say that the lack of an alternative to gen~ / core will bite me in the ass later, but I’ll cross that bridge if / when I need to :smiley:. For now, at least I can push numbers around and encapsulate some ‘helper’ objects.

I feel this 100%. Visually setting up if and for control flow sucks and really adds to the spaghetti mess, and my brain immediately goes to code for that stuff. I know you can subpatch it or whatever to clean it up, but my brain just wants to type the loop like I’d do in any programming language because it’s clean and straightforward.

The docs are so very, very good. From the right-click quick help, all of which includes interactive(!!!) usage examples, to the easily accessible and very complete reference docs, it teaches anyone with basic reading comprehension how to use the program. The online tutorials are also amazing, and for this tiny niche of audio work there’s a relatively large online community of forums, youtubers and Discords.

Funny story, I remember mentioning the quality of the documentation to some Cycling guys at a conference years ago. They all looked at each other and laughed like it was an inside joke, then one of them said “nobody would use it if we didn’t”. They’re very cognizant of the fact that Max is this huge, powerful program that does all this stuff but is totally opaque to a novice and good documentation is how you attract and retain customers. There’s an irony that while Pd is free and Max is expensive, Max will actually teach you how to use Pd better than Pd itself if you can connect the dots between the two.

I’m of two minds about this. On the one hand, I think it might be the best deal in audio software, bar none, if your goal is building and tinkering with low level sound architecture. There’s nothing else out there as powerful, deep, complete and easy to use. If I had to, I might very well pick Max and Reaper (for recording/composition) as my only two audio programs, because at the end of the day I’ll likely run out of creativity before I bump my head on Max’s limitations.

I think it’s also important to point out for posterity that Cycling is an awesome not-evil company that sells a product you actually get to own and doesn’t have any bullshit shenanigans with their pricing or software. I wish that wasn’t worth mentioning, but in today’s software landscape it’s a huge selling point for me.

On the other hand, $400 is a lot of groceries, rent, gas, whatever, and if you don’t have $400 sitting around, it might as well be $40k. It’s a barrier to entry that you just can’t get around, and there are alternatives, so it’s not the only game in town - just probably the best one if you have the means.

Awesome that you (maybe) figured out PDLua. It seems like a missing link that enables a ton of extra functionality. I’d be interested to hear how it shakes out, as well as if you dive into Plug’nScript.


Putting this here so as not to clutter up ReWIR3D’s thread.

On the back of your ‘Road to 10k’ video, thinking specifically of the whole snip into SonicPi loop - I’m wondering what happens (and how best to make it happen) if you embed the snipping in the looping and just let it run. Hopefully I can explain this correctly:

  1. directory of samples. maybe they’re mangled already, maybe they’re raw, same setup you’re using now.
  2. run the SonicPi looper over the directory, but in the init read the total number of files in the directory
  3. write the output of the looper back into the directory, maybe with a similar length to the original slices?
  4. potentially delete a file once it’s read or move it to another dir?

The idea here is you start with a sound or bank of sounds, maybe even simple stuff like a vocal. As the loop runs you’re deleting the source material and replacing it with the output. Might take a bit of math to make sure it doesn’t run away (ie total number of samples stays fairly stable, if a growing directory is an issue). I’d also potentially be concerned with the cpu/disk overhead of embedding the real-time slicing and writing into the loop - maybe that’s computer crushing, maybe not.

This is mostly me being me playing “what if?”…Does anything interesting come out? If you walk away for 8 hours, does it devolve into single tones due to wave interference or does it get weirder and weirder? What happens if you shove some random effects in the processing? Where do you hit the limits of the software?

I’m not sure how I might approach this. I have some ideas I might try to tackle over the weekend, but I’d love to hear if you’ve got any thoughts or have tried something like this before.


This sounds like a lot of fun!

I’m not sure if Sonic Pi has all of that file access (although on a technical level, it absolutely should, but I guess it probably has a lot to do with its internal Supercollider bindings), but I would bet that if all else fails, this would definitely be achievable with Python libraries at the very least. I’d be really interested to hear what happens with this, too, and I could even see it being hilarious to add random audio files into the directory just to change the composition if it were to ever get too stale.

I can’t wait to see what you come up with. This sounds inspiring :fire:

I don’t have the time to dedicate to this right now and probably won’t until the weekend, but man, it’s been consuming my thoughts. At this point I don’t even give a shit how it sounds, now I just want to know how to implement it.

Preliminary research suggests that the overlap of easy-to-use/has good audio tools/has convenient file system access is almost zero. Basically it’s non-trivial because all the easy answers don’t robustly handle file system stuff. That makes it both frustrating and a high-quality problem in my book.

Yeah, the biggest issue with Sonic Pi is definitely the lack of file system access. Although, I think if you leveraged this with something like Python and a MIDI loopback device, you could maybe send a MIDI CC or regular note onto that bus to get Python to start removing files.

Sonic Pi also won’t algorithmically save audio streams (that I know of), and it requires manual user input, but something tells me that streaming directly into a decent Python library that handles audio well (that’s the trick though, innit) might be an option. Bonus points for also being able to use a MIDI signal from Sonic Pi to cap off the previous stream and start a new one, provided it can keep up.

I should also mention that Foxdot is a pretty good frontend for Supercollider, similar to Sonic Pi but as a Python library, although nothing really beats the streamlined / live-looping structure of Sonic Pi. I’d personally rather have Sonic Pi do as much of the sampling and audio handling as possible, and then send those additional file system commands to something else.

As busy as my weekend is looking, I always find time for shit like this, so let me know if there’s anything I can do to help get this thing rolling!

Yeah, Max has the same issue as SonicPi on both ends - for the most part it can’t read/write without a dialog. I might be able to leverage some JS in gen~ to do it but I haven’t tried anything like that before so it’d be some research and testing. I may start there just because it’d open some doors to other things if it works.

The simple way to do this would be to just bash out an exe in C. Read the dir, load everything into buffers, delete the working files, sum/normalize the playback to a ring buffer, write out wavs to the working dir every 5 sec or whatever, do that for however many iterations. That seems to be the outline. That’s maybe a couple hours of work, but there’s zero frills - only console user input, no playback, no variable buffer/play length, no DSP, definitely no VSTs. It would have the advantage of being entirely non-interactive and stupid fast, like I could set it for 1M iterations and sip my coffee while it chews through it. Seems like a decent test, but basically wasted time if it proves useful enough to want an interface and the rest of the bits and bobs, and I’d like to figure out how else I might approach this.

I know Supercollider can do it but I’ve never made much sense of their SC lang and my Haskell is terrible. Probably worthwhile to better learn one or both but not right now.

I like the Python idea, besides not really liking Python in general. It seems like it’d be pretty straightforward to implement and may be my next port of call if Max/JS doesn’t work out. I initially considered some kind of separate file watcher system running in tandem, even if it’s just on a timer. I don’t love a dual independent programs solution; sort of like the C idea, it seems like it’d work but isn’t what I’d want long term as it requires setup and monitoring every time - “oh you forgot to start the watcher? now your drive’s full lol”

What I’m maybe looking at is OSC. Max supports it out of the box which means it can talk to anything else that speaks the lingo - Live, Pd, etc, and any language where I can hoist an OSC framework. I don’t know that it’d be any more effective than MIDI for what we’re talking about but it’d give me a reason to dive into it as a control language which definitely has some good use cases.
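Part of the appeal is how simple the wire format is: null-padded strings plus big-endian args, easy to hoist in anything. A hand-rolled Python sketch of OSC 1.0 message framing (no bundles or blobs, and the address and filename are invented for illustration):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte multiple."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message supporting int, float and string args."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)     # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)     # 32-bit big-endian float
        else:
            tags += "s"
            payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# e.g. telling a hypothetical file-watcher which sample it may delete:
msg = osc_message("/looper/delete", "sample_042.wav", 3)
```

Any program that speaks OSC - Max, Pd, Live, Supercollider - would parse that datagram the same way.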

Anyway, always open to suggestions, and let me know if you have any lightbulb moments.

Hilariously I haven’t done a single thing with OSC yet, just because of the ease of use of MIDI. What a great way to get into it, though; being able to pass strings and other data might actually keep things more tidy and manageable.

I’m kind of curious due to my lack of overall experience with Pure Data, but I’m wondering if you can at least hush some of the dialog boxes when recording, saving, loading, etc. If that’s the case, it looks like reading and writing files can be automated, and you could even write an additional set of objects in C (or, for dummies like me, Lua). That could very well be the best place to start, since it gives you the most flexibility without having to write a whole lot from scratch.