I was basically just erasing whatever was out there, refactoring the chain (putting whatever in between those and the dac via connection methods) and then hitting bang again to set the whole chain up again. This might lead to some kind of resource issue, so I’m guessing tasks like this are probably a lot better suited to purely textual languages (no matter how scary something like SC looks). Maybe I should just VAC ChucK in and start throwing Max’s FX at it for a similar effect.
I’m on the latest version, with a fresh download, and since it’s just a script, I could easily make another version of it (you’d think I would have saved it by now, since I keep having to redo it). I just tried to replicate the error (recording the whole process) and seriously couldn’t get it to crash until I tried an absurd number (like 500) of array pushes and object creations each, which is obviously well above what anyone would ever need. So it seems the current safeguards are working, provided I don’t push it past 100 or do anything too stupid with it. Most of this would be fun even with just a handful of chains.
(I should also see if I can save those as snippets or whatever they call them in the demo mode)
Ultimately it would be cool to auto-populate different chains with semi-random effects, so provided this continues to work, those safeguards might’ve actually made the difference. I could imagine even reverbs being really fun and less-clicky to create this way with those handy connect methods. I’m pretty sure I gave myself an RSI doing that in Reaktor over the summer.
Gotcha. I was mostly confirming you weren’t trying to recreate everything while it was running. Really weird that it would cause a crash; I’d be interested to see what the crash logs say happened. Also weird that you couldn’t easily recreate it.
I think the prevailing wisdom from both Cycling and the community is to build everything you can in Max and only drop down to gen and/or codebox when needed. You of course skipped over “let’s build a simple sequencer” straight to the kooky stuff (I’d expect no less lol), so you’re outside the bounds of how I usually make stuff. I’ve used codebox for some little snippets I couldn’t easily make from objects, but I’ve never used it to instantiate stuff (which is brilliant and I’m going to leverage going forward). Off the beaten path for sure.
I made some progress on the sample slicer. I’ve got more to do on it but it’s pretty serviceable as is, at least I’m getting some cool sounds out of it. Here’s the basic version if anyone wants to play with it
Here’s a couple of nice little snippets. Nothing crazy but sort of shows the potential. Source material was an audio rip of The Terminator, so it had an hour and a half to sample from.
EDIT: Apparently Discourse doesn’t want to embed a Google Drive link. Any idea an easy way to embed audio from a cloud file share? I’ll have to look at it in the morning.
Damn, this is insane already. Dropping random audio snippets into this in real time is awesome.
I’m so glad I can open patches like these, because it seems like I got it wrong - copying my own shit (courtesy of demo mode) won’t go to the clipboard, so I guess I was just copying and pasting your code each time while thinking it was mine being copied / reusable.
I guess if I end up having to script everything out just so I can save my progress in demo mode, that’s still pretty fantastic for $0. They let me copy and paste from codeboxes, at least.
Totally off-topic, but I think I’m getting somewhere with ChucK.
I didn’t really like the way people were modulating things inside of the examples (or the way you’re “supposed to” do it), so I found a really quick workaround to bypass their stupid timing system: just force everything to a really small buffer window and while-loop it so it plays like an oscillator.
50 => int amt; // How many oscillators you want
SinOsc sins[amt]; // Construct sine oscillators
Chorus chors[200]; // Change your amount of choruses
SinOsc LFO(5); // LFO Frequency goes here inside the constructor
SinOsc corruptor(99); // Just another LFO with a cryptic name
Noise noise; // Noise for S&H later
GVerb verb; // A reverb
Echo ech; // Echo effect
float SNH; // Sample and hold value
float SNHTimer; // Sample and hold timer
float SNHPeriod; // Sample period
// SNH setup
0 => SNHTimer;
300 => SNHPeriod;
// Reverb tweakin'
5::second => verb.revtime;
3 => verb.dry;
// PATCH CABLES
// ---------------------------------------------------
sins => verb => dac;
LFO => blackhole;
corruptor => blackhole;
noise => blackhole;
// ---------------------------------------------------
for (0 => int i; i < 200; i++) {
    i * 0.00005 => chors[i].modDepth;
}
while (true) { // Main latcher
    SNHTimer + 1 => SNHTimer;
    if (SNHTimer % SNHPeriod == 0) {
        noise.last() => SNH;
    }
    for (0 => int i; i < amt; i++) {
        440 + (corruptor.last() * 100) + (i * 0.22) + LFO.last() + (SNH * 500) => sins[i].freq;
        0.015 => sins[i].gain;
    }
    0.0001::second => now; // This hotwires ChucK to basically run in oscillator mode
}
With that little pesky thing out of the way, you can just use literally anything as a modulation source, like a modular synth. I made 2 LFOs out of oscillators (by sending them to blackhole, a non-auditory dac) and even managed to make a makeshift S&H. Querying for a value is instantaneous this way, and feels a lot more natural than whatever the hell other people are doing with it.
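Boiled down, the whole trick is something like this (a minimal sketch of the same idea, not the full patch above):

```chuck
// Stripped-down version of the blackhole-LFO trick
SinOsc osc => dac;         // audible oscillator
SinOsc lfo => blackhole;   // silent oscillator, still computes samples
2 => lfo.freq;
0.1 => osc.gain;
while (true) {
    // poll the silent LFO and use it as a modulation source
    440 + (lfo.last() * 50) => osc.freq;
    1::samp => now;        // advance time one sample per loop pass
}
```

Anything chucked to blackhole keeps ticking without making sound, so .last() always has a fresh value to grab.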
If anyone wants to play with it, just check out the patch cable section and you can route anything through the dac – there’s a colossal array of choruses, an echo effect, and plenty more you can plug in if you check out the API. Or if you just want to do something weird, change a number; I have shit scaled all over the place, so just start changing things and you’re bound to get something cool out of it. Pretty fucking cool, using it this way at least.
TL;DR: I definitely found a new toy. Might even end up helping a little with iterating delay lines and stuff like that
Ahh, that’s a bummer but I guess it was too good to be true. At least you can still use it in a limited fashion. Just never close it and hope it doesn’t crash!
This is 100% on-topic. I love this as a demo since it’s not some trivial example and I actually have to parse what’s going on. Currently picking through the code vs the ChucK ref docs and learning a lot. Extra cool that it can run in a browser (though I should probably just install the thing).
If I’m getting it right, the top is all straightforward declarations for UGens and variables. The “ChucK Operator” => seems to be a patch cable? So 300 => sins[1].freq is analogous to wiring a number to the freq inlet of an oscillator in Max. And it seems => is generic enough that you can use it the same way you’d use wiring in Max – if the arguments of the first and second objects are things that go together, it just wires them up. So all your ‘patch cables’ are linking things together and then sending them somewhere (like the non-audio blackhole DAC, etc.)?
Then you set an increasingly large mod depth for each Chorus and start the main loop: progress the SNH timer, resetting it when it hits the Period, then loop through the oscillators starting at 440, modified by the corruptor, then i*0.22 (what is this, just trial and error?), the LFO and the SNH. Am I on the right track as to what’s happening/your process?
I’m curious as to how the ChucK timing works and what you didn’t like about it. From the bit I read it sounded like it was the real selling point of the system, not having to internally manage timing/the whole “Strongly-Timed” thing.
Also curious what this looks like in SC since a lot of it looks to be super close. Maybe when my plate clears a bit I’ll try the same thing as the SonicPi > Max and try to do this in SC to compare/contrast. I’m curious how much of the heavy lifting ChucK is handling.
Mostly unrelated, ChucK suffers from the same ‘internet problem’ as Max: too-common names that makes searching hard. I constantly have to do “max msp [thing to search]” and it still picks up some guy named Max lol. “chuck audio” seems to help but it’s still not great…
Yeah, a lot of the scaling gets kind of weird when you modulate like this (I’m still trying to figure out what the ‘traditional’ methodology is supposed to be (aside from the envelope generators that are triggered via a ‘key_on’ method, that I also don’t really like all that much so far)), so I just amplify stuff until I can hear it. Sending things to the blackhole seems to make them modulate, so you can use them in flexible ways like this. I did (later) stumble upon an infinite-looping example like mine, so maybe this is a perfectly valid way to use the system as well, but I also plan on delving further into the standard library, various ugens and all of that because aside from the weird declarations, the language is super simple.
From what I can tell so far, most people are using it in a more traditional / MIDI / song-generating sort of fashion, so I think that’s why the appeal is so strong there. It might be the case that my way just kind of works for unhinged sound design, but at least knowing it’s flexible enough for both and extremely high-level makes it kind of a win for where I’m at right about now.
Also, I think experimenting with delay lines is going to be a hell of a lot of fun this way, since you can just pipe an entire array of them into the chain like it’s nothing. I bet there’s more power and flexibility in creating separate dac chains, hard-panning them (if that’s possible) and mixing them together. It’s very hands-off in that way, but it seems like a gentle introduction to prototyping.
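Something like this is what I’m imagining for the delay array (untested sketch, numbers pulled out of thin air):

```chuck
// Sketch: fan one source out into an array of delay lines
8 => int n;
Noise src => Gain in => dac;             // dry path
Delay lines[n];
for (0 => int i; i < n; i++) {
    in => lines[i] => dac;               // parallel wet paths
    1::second => lines[i].max;           // reserve the delay buffer first
    (50 + i * 37)::ms => lines[i].delay; // stagger the delay times
    0.2 => lines[i].gain;
}
0.05 => in.gain;
2::second => now;
```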
Hell yeah, this would be awesome to see. I did take a peek at the Supercollider book and I’m getting a bit curious about that, too. I tend to try and absorb what I can from the manuals and guides for a long while before finally having the confidence to knuckle down, but SC is definitely on my radar. I can’t wait to get there, either.
Looks like recording is just a ugen, too. This is pretty cool:
// ___________.__ ___. .__
// \__ ___/| |__ ____ \_ |__ | | ____ ____ ______
// | | | | \_/ __ \ | __ \| | / _ \ / _ \\____ \
// | | | Y \ ___/ | \_\ \ |_( <_> | <_> ) |_> >
// |____| |___| /\___ > |___ /____/\____/ \____/| __/
// \/ \/ \/ |__|
150 => int amt; // Generic standard amount of ugens
WvOut rec; // .WAV recorder
SinOsc sins[amt]; // Construct sines
SinOsc LFOs[amt]; // Construct LFOs
GVerb verb; // Construct reverb
2000000 => float length; // Optional specified buffer length (change while loop)
Math.random() => int rand; // Random value for filename
1 => verb.dry; // Reverb shit
1 => verb.roomsize; // Reverb shit
1::second => verb.revtime; // Reverb shit
"Chuck Render " + rand => rec.wavFilename; // Uncomment this for WAV recording
sins => verb => rec => dac; // Standard DAC chain
LFOs => blackhole; // Send LFOs to the nether
while (true) { // The usual hotwiring of time
    100 => float amp; // Static / mono LFO amplifier for frequency
    for (0 => int i; i < amt; i++) { // Real-time sine / LFO modifiers
        440 + (i * 12) + (LFOs[i].last() * amp) => sins[i].freq; // Frequency shit
        (amt * 0.00005) => sins[i].gain; // Gain shit
        (i + 1) * 0.05 => LFOs[i].freq; // LFO frequency modulation shit
    }
    length - 1 => length; // Optional length modifier
    0.0001::second => now; // Small window for infinite looping
}
Even though it just sounds like a toilet flushing, it’s kind of cool to be able to generate that many modulation sources with almost no effort. The reverb seems to be missing a proper dry / wet knob (the ‘dry’ setting seems to only go from 0-1, in relation to a static ‘1’ on the wet side) so I’m guessing this is where having a second chain might come in handy. Pretty cool, though.
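For the dry/wet thing, a second parallel chain could look something like this (just a sketch, levels untuned):

```chuck
// Makeshift dry/wet knob via two parallel chains into the dac
SinOsc osc => Gain dry => dac;        // dry path
osc => GVerb verb => Gain wet => dac; // wet path
0 => verb.dry;                        // keep the wet path fully wet
0.7 => dry.gain;
0.3 => wet.gain;                      // balance these two for the mix
5::second => now;
```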
It’s nice that they have a few inbuilt libraries, too. Standard stuff and math, which could absolutely become a gateway toward more lower-level waveform / oscillator generation.
But, for the sake of insanity, I’m firing up WSL and creating a virtual environment to see how well I can integrate something like this with Pedalboard for the ultimate easy-mode sandbox. The fact that ChucK (and even Supercollider) has a CLI makes this way too tempting not to try.
What’s interesting is that SC has the concept of audio rate and control rate for every UGen, where anything designated control rate (kr) is like CV and not audible. The UGen docs have a great example: { Blip.ar(Blip.kr(4, 5, 500, 60), 59, 0.1) }.play;
So that’s a Blip UGen (Band Limited ImPulse) at audio rate (the .ar) so it makes noise, and the frequency param is another Blip at control rate (.kr) with some int parameters so it modulates the freq. That’s such an insanely useful convention that I can’t believe everyone doesn’t adopt it. I get sending stuff to blackhole/null, that seems like a reasonable solution as long as you can still modulate with it, but it’d be cool to just pass it something that says “I’m functioning as a modulator”.
Yeah, I think you’re probably right. I guess it applies to Max and Pd and SC and everything else. Some of those allow enough flexibility to get crazy quick, but at the end of the day most people are making some kind of rhythm-based music instead of just mangling audio so I guess it makes sense to have that as a cornerstone. Still, I want the option to go nuts with the timing if I choose.
This is the real deal. I love Max, and I can’t deny the power, but as cool as the visual thing is, there’s so much speed and expression through pure code. The flexibility of systems like ChucK and SC is mind boggling and just a totally different way of creating.
I dunno, man. Chuck Faust looks like he knows his way around a Buchla
I don’t know if this is of interest or use to anyone, but I just got done refactoring some very old Max patches for Lorenz and Rossler attractors using gen instead of some ugly kludges from the olden days. They seem pretty performant though there’s probably some more cleanup I could do.
Currently looking at using the numeric output to drive some of the sample slicer parameters – scaling is weird depending on the attractor settings, so I’m trying to tame that; still not sure if it’ll be useful. They do make cool sounds if you dump them straight to the DAC and play around with the numbers.
Whoa, the syntax looks so oddly similar to Tidal Cycles that I can’t help but wonder if it’s either working off the back of Supercollider or just modeled to kind of replicate that workflow. I’m also kind of skimming around; he probably said something about it and I just missed it on first glance.
I’m really surprised at how many environments people have piped into Max so far; I’m guessing building externals for it must be pretty straightforward, which probably makes sense. And only makes me want a real version of it more
Looked at this a bit more, it’s apparently using a project called Max_Worldbuilding_Package that adds connectivity for some VR environments and a generalized websocket into Max/M4L. Gibberwocky then hooks into that websocket directly from a browser to pass JS into Max and parse it real-time. So all the under-the-hood processing is in Max, it’s just using the browser to update the Max scene.
Worth pointing out that the whole thing is really old – Worldbuilding is 9 years old, Gibberwocky looks to be 7-8. Gibberwocky is from Graham Wakefield, who is the guy that initially wrote gen~ for Max, so I’d guess the system is solid as he definitely knows what he’s doing, but my guess is it was a quick “oh, I bet I could do this” thing and not a serious long-term project. For all I know there are now built-in ways to get the same functionality.
Dude, no smarts needed. Just time and effort and curiosity. 99% of the stuff we talk about here is free - you just download/install/run like anything else, follow whatever tutorials they give you to figure out how it works, then just start trying shit and seeing what you like. When you get stuck you muscle through or go ask questions or read docs or put it down for a bit. Ain’t no magic here.
I want to be 100% clear about this for you or anyone else looking at this shit and saying “that sure seems cool, I wish I could do that” - you totally can. You don’t need a CS degree. You don’t need to be good at math. If you can use a DAW to make music, you already know what’s happening here, it’s just typed out instead of visual. God forbid you know how modular works, it’s basically the same thing, hooking up little units and setting values.
SonicPi is the place to start. It was literally made to teach kids this stuff. It’s got great tutorials and the internet and youtube are filled with info on it. And we’ll be happy to answer any questions. Just dive in and see what happens. Hell, do it on stream for extra internet cool points.
I couldn’t have said this better myself, and I’m hoping that this encourages others to get started!
You definitely don’t need to read the whole thread for that! In fact, you could easily just boot up Sonic Pi (great recommendation, by the way) and just run some of their examples for starters. If you come across something that makes absolutely no sense (likely functions / live loops if you’ve never played with code before), post here and we’ll try to demystify it.
You don’t even need to know how to code in order to get started; you might end up wanting to learn the basics of OOP (object-oriented) if you get serious about making your own scripts, but there’s actually a hell of a lot you can do without all of that, even.
It’s kind of like Reaktor; some people forget about the crazy library and instruments it comes with. Making your own tools is great, but exploring the ones already made is perfectly valid as well
I get that vibe with a lot of these little ‘bridge’ utilities. I’m about to try compiling FaucK (Chuck Faust’s favorite hybrid) to maybe give myself more ugens to play with, but I’ve come to expect that not everything is quite as flexible as it looks on the surface if it’s an outsider project.
Wow, it’s already proving to be quite the process. Might have to run it in WSL just to avoid some old version of Visual Studio that you can’t just up and download anymore. The things we do for fun.
Hmm, ChucK’s timing scheme might actually be a really cool asset after all.
Apparently, rather than creating ‘live_loop’ blocks and using multithreading like Sonic Pi (which I’ve had fall out of time when you load it down), ChucK uses concurrency and gives the user the ability to trigger and communicate between ‘shreds’, which are kind of like riffs or live loops inside the system that can pass messages to one another.
So rather than having your multithreaded loops running all the time no matter what, you can have them sort of micromanage one another, not to mention the part where you can take full advantage of OOP and all sorts of shit for better generative workflows. This might be a hell of a lot tighter for livecoding, in addition to being a really cool way to generate sounds
I also like the fact that you can boot up entire, separate (saved) scripts this way. This thing goes way beyond Sonic Pi’s capabilities when it comes to file management, rendering files out, etc
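A toy example of what the shred messaging looks like, if anyone’s curious (just a sketch, names made up):

```chuck
// Two shreds communicating through an Event
Event tick;

fun void listener() {
    while (true) {
        tick => now;                // sleep until another shred signals us
        <<< "listener woke at", now / second, "sec" >>>;
    }
}

spork ~ listener();                 // launch the child shred
repeat (4) {
    250::ms => now;
    tick.signal();                  // wake one waiting shred
}
100::ms => now;                     // let the listener catch the last signal
```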
Well that was my rabbit hole for the day, trying to figure out how all these systems are handling timing and scheduling. My results? It’s a complex mess.
Apparently when these systems’ docs talk about threads, none of them are talking about CPU threads. It’s an overlay of a clock system internal to the runtime. ChucK and SC for sure (and probably anything based on SC, like SonicPi) both have the concept of threading to encapsulate functions or events happening at certain points under the internal timing. I found several references to SonicPi’s “thread death” message and it seems to mean “we waited so long for that thing to fix its hair and put on makeup that we’re leaving, fuck 'em” – it just quits because you asked thread x to do 10x the work of the other threads. Common solutions seem to be splitting the offending one up further so the whole system stays in balance, or refactoring your loops into smaller chunks.
I know Max and Pd both run all audio timing on the main program thread and offload GUI and network and MIDI to others like any modern program would. I don’t think there’s any idea of separating out discrete processes, just a world clock for the patch. SC’s scsynth server is single threaded, though there’s an option for a version called supernova that is node-graph based parallelism. I’m still reading through the white paper for it but it seems interesting. Haven’t gotten to the part about how they handle buffer writes yet which is my real question.
I’m still trying to figure out exactly what ChucK’s “strongly timed” thing means in practice. It’s obviously a play on the programming term “strongly typed”, but timing is a different beast from types. From what you found it sounds like a graph-based set of marshalling nodes, which is a pretty common pattern in modern software, but I don’t know how it plays out in practice. I guess it means that things execute on time or not at all? Does it throw the schedule out the window and drop output if it doesn’t clock in on time? Does it just fail? I know of another system that’s “strongly timed” - reality. Shit happens when it happens or it didn’t happen, but I don’t know how to square that with audio execution and what it means. I think maybe that’s the root question here - how does the system handle things when the timing fails.
Man, this stuff is complicated. My hat’s off to anyone building these systems.
Edit: The SC/Supernova white paper if you want to read it. It’s interesting.
From what I can tell (which isn’t much at all), ChucK claims to be sample-accurate, and I don’t know if Sonic Pi actually guarantees any of that (although I’m sure Csound, Supercollider and others are more in line with ChucK’s timing scheme). Of course, this raises the question of whether the k-rate stuff abides by this (probably not, since that would be resource overkill), but I’m sure the conversion back to regular sample rates is also typically on tap.
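You can actually see the sample-accuracy in the timing model itself – time only moves when you chuck a duration to now, so something like this should be deterministic regardless of CPU load (quick sketch):

```chuck
// Time in ChucK advances only when you say so
now => time start;
100::samp => now;                                     // advance exactly 100 samples
<<< "elapsed:", (now - start) / samp, "samples" >>>;  // always 100
```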
One surface-level thing that I’ve been doing wrong with Sonic Pi is apparently not using their sync and cue system, which might’ve been why my live loops have been so inclined to fuck up, but I’d also love to learn a bit more about what’s actually going on. I sometimes wonder if the multithreading (and the excess of it) gets in the way sometimes with Sonic Pi, but I also want to experiment a little more to see if I can actually get things to run smoother over there as well. It’s very possible that I was missing out on their own version of a ‘shred’ system and things were kind of doomed to go off the rails without those little communicator functions in there; MIDI doesn’t have to be sample-rate in order to not go completely haywire, so I really wonder what’s at play there.
My use-case is kind of odd, though; I’m never trying to emulate anything and usually just want to find abstract combinations, so this only really bothers me when livecoding or something. I get this feeling ChucK is going to fare better when it comes to shoving MIDI signals out without getting tangled up within itself¹, and hopefully that means I can take over multiple virtual busses, use polyphony and all of that stuff while it keeps its “strongly-timed” nature in check.
[1] I’m also hoping it can receive MIDI clock, which is apparently a neglected portion of Sonic Pi thus far
Damn, I think all of this dicking around has paid off a little; I finally seem to understand a little bit more of the development and debugging process thanks to prototyping ideas in like 50 different strung-together environments. It seems like the messy way paves the way for the clean version, and sometimes you really do want to bottle something up into a neat(er) package.
Last year (or maybe two years ago?) when I last tried to make a MIDI-generating module, I was completely lost with Java’s ridiculous libraries and constructs, sticking the MIDI operations in the DSP-ready / sample-rate sections (ouch) and not being mindful about callbacks and such. Now I’m basically just down to parsing this thing correctly so I can load it up with raw bytes on the fly. I should have listened to @Artificer about breakpoints and debugging back when I was struggling so hard, but I guess it was mostly a stubborn mental block.
Honestly I can’t believe how much debugging shit is here after poking around. I’m probably going to be able to fix a bug of theirs that’s been chilling for a long time (they confirmed I was right and never touched it), unless it’s some part of the API that’s got an issue.