I discovered in Max you can totally read in a directory and mess with what’s in it. There’s an object called polybuffer~ that does exactly what it says on the tin - makes an indexed array of buffers, and it even has a read directory built in. I have no idea how I missed it before. It somehow didn’t come up in any of the searches I did, but I’m sure that’s on me.
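For anyone who wants to try it, the basic shape of the patch is roughly this (the folder path and buffer name here are made up for illustration - check the polybuffer~ reference for the exact messages):

```
[readfolder /samples/drums(   <- message box: loads every audio file in that folder
 |
[polybuffer~ drums]           <- holds them as an indexed set of buffers

[play~ drums.3]               <- the buffers come out named drums.1, drums.2, ...
                                 so any buffer~-reading object can grab one by name
```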
This actually came out of me doing a deeper dive on what I think of as Max’s ‘plumbing’ - all the (admittedly boring) data handling and routing; things like arrays and dictionaries and the associated bits. There are probably two or three times as many objects dedicated to that as there are for actually doing audio/video. I know it’s there, I’ve used a lot of it before, but only in isolated cases where I needed it, so I figured a more holistic understanding might be helpful.
Honestly, I use Max in little snippets, more akin to a 50 line Python script than a big 20k LoC program - one patch for one thing, and if I need to mash stuff together I do it in the DAW and bring it back into Max. It’s let me avoid dealing with the boring bits, but it’s probably inefficient and I’m likely missing out on some cool functionality because of my lack of understanding.
A lot of this is motivated by watching interviews that Cycling74 does on their youtube channel (called ‘Office Hours’), 1-2hr things where they grab a Max user and pick their brain about how they use the program. It really runs the gamut from live performers to educators to producers, but it’s wild how crazy and varied the results are, both in what they make and how they approach things. I did notice that almost all of them are leveraging data handling/structures in interesting ways, so I figured it might be worth my time poking into.
@KvlT, I almost didn’t mention it because what you don’t know can’t hurt you, but Max is 50% off this week for Cyber-whateveryoucallitcapitalistchristmas, $199.
Wow, that’s one of the deepest price cuts I’ve seen in a long time, and makes it even more enticing. I’m putting that on my calendar for next year and hoping they run the same thing again
It’s amazing that they actually showcase some of the deeper concepts on their channel – apparently in the FOSS world (as I’m coming to find out), a lot of the really intuitive introductions to the deep stuff are found in those expensive books you ‘borrow’ from the ‘library’ (or maybe that’s just me), and then you backtrack toward the official documentation, where it finally becomes a little less blurry and a million times more usable.
One weird trick for the more open-source crowd is to find a frontend that caters to what you’re after and then check out their less-robust (but often more ‘human’) documentation on the target software. Somehow it’s those people who remember that we’re not all insiders, especially when we first get started, probably because they’re spending most of their time trying to get people to use said frontend rather than to create the most robust and technical documentation possible.
I’m sort of amazed at the customization of things like ChucK and Csound for starters, with all of the guides for adding custom ugens / opcodes at your leisure. All of this makes me wish I had ever learned C, but I guess even writing a small ‘helper’ class or module (actually in C) for something like this is the best place to start. Judging by what I’ve seen around, it seems like Max is the only paid piece of software with just enough of the API exposed so that devs can do whatever they want with it, and that mentality is one really nice default asset when it comes to the world of FOSS. But, fuck, I’m going to have to buy these books one day
I don’t know if anyone was following this little sidequest, but right when I rode into MIDI output territory (where the bug lives on the VST version), it started screaming all sorts of crazy shit at me that didn’t seem to make any sense. Score another point for open-source, probably, because this is the unfortunate outcome of a small team working on proprietary software, likely without a fair quantity of bug-squashers on duty. You also can’t go any further with shit if it’s not open
Yeah, all this software comes out of academia and the docs reflect it. I’ve teasingly referred to all this software as “tools for people that perform at museums”, it’s a whole different ecosystem than your usual “I played guitar in my bedroom until my friends put a shitty band together and now I’m touring Europe”. It’s an insular little bubble you might brush up against if you go looking for weird music releases, but it doesn’t feel like there’s a lot of overlap with whatever passes for popular music, and most of the complex systems are by academics, for academics.
I think it’s great and important that really smart people have ingested the docs and kicked out frontends, alternate documentation and videos about getting your head around these systems. There’s obviously still a pretty massive learning curve, but it sure beats slogging through what amounts to a college textbook just to get started.
For me the neat thing isn’t so much the Max-specific stuff, as they usually demo and walk through but rarely deep dive on patches, it’s listening to these people talk about their creative process and what they were trying to develop. Shit like “I was trying to morph human speech with birdsong as a carrier” is just weird and wild to me, and they sometimes cite papers or technical terms or starting points to go look at how they got there. I think what I find so interesting about it is that there’s like four billion hours of interviews with Jimmy Page about writing Stairway to Heaven or how he approaches a guitar, but very few deep, technical conversations with weirdos out on the edge of music, and this is some of that. I guess it’s more inspirational than directly instructive.
I remember you mentioning it, but not the details. Was this in Cardinal or FL or something else?
If anyone’s into DSP, specifically reverb, this is worth a watch. Starting around 19 minutes he goes through all the common algorithms and demos implementations in Pd, finishing with his own take on an FDN verb and what motivated some of the design choices. A really great overview without being too dry and technical.
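If you want something to poke at alongside the video, here’s a toy single-channel FDN in Python - the delay lengths, decay gain, and Hadamard feedback matrix are my own illustrative picks, not the ones from his implementation:

```python
import numpy as np

def fdn_reverb(x, decay=0.85):
    """Toy 4-line feedback delay network: delay lines cross-coupled
    through an orthogonal (energy-preserving) feedback matrix."""
    delays = np.array([1031, 1327, 1523, 1871])  # mutually prime lengths, in samples
    n = len(delays)
    # 4x4 Hadamard matrix scaled by 1/2 so it's orthogonal
    H = 0.5 * np.array([[1,  1,  1,  1],
                        [1, -1,  1, -1],
                        [1,  1, -1, -1],
                        [1, -1, -1,  1]])
    bufs = [np.zeros(d) for d in delays]   # one circular buffer per delay line
    idx = np.zeros(n, dtype=int)
    out = np.zeros(len(x))
    for t in range(len(x)):
        taps = np.array([bufs[i][idx[i]] for i in range(n)])  # read delay outputs
        out[t] = taps.sum()
        fb = decay * (H @ taps)            # mix lines through the feedback matrix
        for i in range(n):
            bufs[i][idx[i]] = x[t] + fb[i] # write input + feedback back in
            idx[i] = (idx[i] + 1) % delays[i]
    return out

# Feed in an impulse and you get a dense, decaying tail (i.e., an impulse response)
ir = fdn_reverb(np.r_[1.0, np.zeros(44099)])
```

Because the Hadamard matrix is orthogonal, stability comes entirely from `decay < 1`; the video gets into the extra per-line filtering and tuning choices that make a real FDN sound good rather than metallic.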
The Voltage one. The main plugin has a broken ‘MIDI out’ and it seems to also throw some really funky error messages back when you poke it from the API. It could be something about Java (I don’t know why they insist on still using that as their scripting language) that I just don’t want to keep messing with and getting to the bottom of, but it lines up with what they eventually told me I was right about all along: it’s broken and they still haven’t fixed it in years.
I’m hoping this is a sort of ‘new trend’; if someone like Ableton / Cycling '74 is willing to do things like that and people actually watch it, it could inspire more technical conversations with less of the dry / lecture format that outsiders seem to despise and more of the ‘here, let me show you’ sort of workflow. Max is obviously a great platform for that, too.
I’ve noticed that too when checking out a lot of tutorials on Max; they go much deeper than most and somehow don’t lose the view count that every other piece of software does. Having 30K+ views on shaders and transformation matrices is something that doesn’t often happen, especially inside of a specialized environment, but the people who use Max seem to be really eager to learn more, or to at least ponder what more they can do with it.
Jeez, that seems like a huge glaring issue for an audio plugin. Sucks that it was ever a problem, but bonkers that they don’t address basic functionality issues.
Because for over 20 years that’s what programming classes in college used, so it became a lingua franca for a couple generations of programmers whether it was the right tool for the job or not. It’s garbage collected, Object Oriented, and at one point what you’d use in the corporate programming world and thus had a bunch of money around it, so it became a darling of the academic community and the people that went through those programs. I think it’s mostly been replaced with Python nowadays, but the Java generational abuse persists.
For sure, and I think they do a pretty good job of straddling that line. I guess it’s something about assumed knowledge of your audience and a ‘safe space’ to nerd out about technical stuff. I just watched the Office Hours with Josh Eustis from Telefon Tel Aviv and it was a hoot - cool stories about writing and performing that weren’t always specifically about Max but never spared the technical details, so it was both fun and informative. It’s got 2.2k views so I don’t know if that’s a huge win, but it’s something.
It’s easy for the listening public to forget or ignore the fact that their favorite artists are huge nerds about what they do. They don’t talk details in interviews because the general public’s eyes glaze over, but in my (limited) experience those people will happily talk shop if they’re given a venue to do so, and I think that’s what Office Hours is trying to achieve.
I love this! I hadn’t heard of this series before, this is exactly what i need in my life right now!
One time Josh and I spent like an hour geeking out about metric modulation right after he had played one of the dopest sets I’ve ever seen in my life, at an abandoned-warehouse after-hours rave in Houston winter at 36°F (no indoor heating). I didn’t know till after the set that, an hour beforehand, the project file he’d prepped for it had shit the bed, so he winged/improv’d the whole thing.
anyway, my purpose in telling that story is that i’ve been looking for a much more detailed deep dive into a producer’s mind and workflow than the usual general-public interview gives you. There’s another good podcast that does the same idea, getting into the nerdy shit, where Four Tet shows his whole production process in Live, and honestly that shit like… makes me grow as a musician a little bit, seeing how all you other dudes do shit. i’ll have to find that interview later…
edit: found it. i feel silly for not remembering the podcast’s name (Tape Notes :P)
I played with it a bit after Benn Jordan’s Aphex Twin video. It’s neat, really reminds me of Reaktor Blocks, FL’s Patcher or something like Cardinal/VCV for effects. Using the GUI you will pretty immediately get cool, weird sounds just by hooking things up and jiggling sliders.
Despite what the video says, it’s not doing anything particularly special - all its juicy bits are available in more modern/updated environments and even as VST effects nowadays. What’s nice about it is it has a bunch of premade algorithms (like Blocks) that you can just hook up without messing with or understanding the underlying code, more like a VST than a programming environment. And the algorithms are pretty well done and varied, way more interesting stuff than you’d get from a standard plugin and the ability to chain things makes for some wild shit.
The real downside for me is that it’s not real-time or DAW-hosted. I guess that’s a symptom of it being old as hell and not really used much. There’s some friction to using it, having to process, listen, repeat, then import into a DAW for more messing. It also means you can’t chain external effects into it. And no live coding, which is a bummer if you’re into that.
The other potential downside is it doesn’t actually expose any of the inner workings that I can see. You’re stuck with what they give you, which is mostly good but inherently limited if you want to push the limits or implement your own DSP.
I get it, if it’s 1995 and you see this you’d think aliens just landed and you’ve found the key to eternal audio bliss. But in the cold light of 2025, it’s a cool thing that’s maybe trumped by more modern offerings. OTOH, like you say it’s free and it does a ton of shit, so if you can get over it not being real-time/in the DAW, it’s probably a better option than like 90% of the plugin suites out there.
I’d love to hear if anyone’s actively using it and how they’re getting along with it.
It doesn’t get enough attention for what it is, but Patcher has seriously become what a lot of these environments aspire to be, including Bespoke. Just add Reaktor or PlugData and you’re in for the time of your life, and can skip the eurorack-style shit if you want to (although, add Cardinal, VM or CV3 and you’ve got instant crack in a box).
When @wayne finally checks out FL Studio, I hope he streams it
I’m seeing that the current v8 was released in 2023, before that was 2014. Still a long time without development, and certainly not constant development like a lot of environments have.
I agree about Patcher, that was the first thing it reminded me of. On the flip side, one thing this has going for it that almost no other suite has is that it’s offline - I’m pretty sure you can, without exaggeration, throw 10,000 flangers on a wav and it’ll happily chew through it without complaint. Might take a while, but when you remove the real-time requirement you basically remove any CPU/memory constraints if the program is architected correctly.
This is really cool, but man, that price tag lol. As he mentioned in the video it could come down since it’s early days.
I guess the question is what else this would be used for. It feels like analog synths were sort of a relic and byproduct of larger-scale applications: ADCs/DACs are used in all sorts of things, logic gates and resistors and all the other bits and bobs are general purpose, but the days of large production runs of audio-specific ICs mostly died when consumer electronics (TVs, radios, VCRs, etc.) moved away from using them, which obviously made those ICs pricey, and the synths that used them have mostly gone away. I’m just wondering where the demand, and thus price pressure, is for something like this that would force it down to a cost where it’d make sense to create niche musical equipment with it. Maybe there are some scientific or industrial applications…
But the potential for a digitally programmable fully analog synth is sort of mind blowing. I’m not even sure how it’d work but there’s certainly some possibilities there if they can get the price right.
I agree, I think there are probably some industrial/scientific apparatus uses for this, where you could replace a 70-year-old analog logic board full of components that haven’t been made in 50 years without having to retrain your workforce and retool an entire plant. Or specific uses in radio/antenna work with analog signal processing/filtering, or as a design test bench in that kind of field.
And maybe as a repair part for some vintage synths where the sky-high prices (think CS-80s trading at 50k) justify these kinds of part costs. The CS-80 in particular has some ICs in it that are known to go bad and have not been produced in 50 years, and getting replacements is impossible - even Yamaha doesn’t have spares anymore. And when those chips go they take the whole instrument with them. So replace those ICs with this and we have a few more working CS-80s in the world.
But for this to be used in a reasonably priced consumer product? My guess is we’re at least 10-20 years away.
How long was it before FPGAs started to be used in stuff? It feels like that happened in music/video games around 2015, and they’d been around since the 80s. My only hope is that prices on these drop a little faster because of all the compute production capacity being put in place for AI right now. Hopefully when the AI bubble pops that leads to dropping prices on a lot of electronics, including niche stuff like this.