
I use knobs every day in my audio tools (with my track pad) and they're perfectly fine as long as they have three features:

1. Drag up/down to change the value.
2. A modifier key to slow the drag for finer resolution.
3. The ability to double-click the knob and type in a precise value when I know exactly what I want.
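
Roughly the logic I mean, as a minimal Python sketch (the function name and step sizes here are hypothetical, just to illustrate features 1 and 2):

    # map vertical drag distance to a normalized 0..1 knob value;
    # holding a modifier key shrinks the step for fine adjustment
    def drag_to_value(value, delta_px, fine=False):
        step = 0.002 if fine else 0.02
        return min(1.0, max(0.0, value - delta_px * step))

    v = drag_to_value(0.5, -10)           # coarse drag up 10 px -> 0.7
    v = drag_to_value(v, -10, fine=True)  # fine drag up 10 px -> 0.72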

The problem with knobs in a GUI is when designers stick with them even though a faster option exists, such as an opportunity to combine three knobs into a single control.

For example, the EQ on any SSL channel strip is a nightmare because they slavishly stick with a skeuomorphic design of the original hardware. The hardware required mixers to use two hands to adjust gain and frequency at the same time, and then dial in the Q on a third knob. Very tedious when you have a mouse.

When this is done right, you get something like FabFilter's Pro-Q, a graphical EQ. The gain and frequency controls are instead an X/Y slider that you can easily drag across a representation of the frequency spectrum. In addition, you can use a modifier key to narrow/widen your Q. All with a single click and drag of your band.


> For example, the EQ on any SSL channel strip is a nightmare because they slavishly stick with a skeuomorphic design of the original hardware.

True, though I would put this very much in the "feature, not a bug" bucket. These tools are for people who have worked with the original hardware and want a very faithful emulation, including the look and feel. In the digital world, with a modern PC, there's not much purpose for a channel strip plugin in the first place, so the only people using one are doing so with intention.

It's a bit like saying that manual transmission cars could be controlled more easily if they were automatic transmission; it's completely true, but if you're buying a manual you want that experience.

Pro-Q is a great example of a digital-first tool (the automatic transmission equivalent), with lots of great visual feedback and a lot of thought put into a mouse+kb workflow. All of Fabfilter's stuff is like this actually, though sometimes to its detriment; the Fabfilter automation and LFO system feels very different from basically every other plugin. It's actually a more efficient workflow when you get used to it, but due to how different it is from everything else most people I talk to dislike it unless they've really bought into the Fabfilter suite.

Which kind of goes back to the original point: VSTs use knobs because it's what people are used to, and using something different might be a negative even if it's better!


I agree that the SSL channel strip GUI is deliberate because users want something that operates like the hardware. However, I would love the option to grab the freq knob and have it work like an x/y slider for freq/gain.

Sure it mismatches the GUI, but it gives users the option when they don't want to do a click/drag for freq, then gain, then freq, then gain, then Q. You know?

That tediousness is what keeps me from using the SSL channel strip altogether.

Re: channel strip plugins: The advantage to using them in DAWs is speed and economy. Having everything in one window (à la the Scheps Omni Channel) saves me a lot of clicks vs. when I have multiple plugins in different slots.

I do absolutely everything in the box with a laptop keyboard and track pad. My primary motive is being quick and precise, and the less plugin window management I have to do the better. The channel strip keeps the tools compact and my movements minimal.


Would love to see this, as someone who has been heavily using Live since 2006 and is finally getting into proper coding in middle age. Having a way to augment Live in a text-based coding format would be greatly welcome.

While I'm not holding my breath, Ableton the company are transitioning into a steward-ownership model in which the stewards will have decision rights over the company. So I have hope that it will continue to grow in ways that are less affected by market considerations and that are a little more opinionated and specialized. Not to mention that Ableton own Cycling 74 (creators of Max/MSP).

So it's not out of the realm of possibility.


You can use JavaScript with Max. It's a bit unwieldy in its handling of multi-JS-file projects, but it can be done.

Not everything in Max is exposed to your code, but you really can do a lot from the JS side of things.


I had no idea! And I'm learning JavaScript, so that's a nice coincidence.

I was deep into Max/MSP around 2010 and made a personal vow to leave it alone. The potential to reinvent the wheel and build tools instead of completing records was too much.

Now I'm in a more mature place, so I could see myself diving back into it eventually.


While I wouldn't call this scummy I do agree with your sentiment. It is technically stealing and those copyrights should be respected.

Full disclosure, I am a career musician AND have been known to pirate material. That said, I think this is a valuable archive to build. There are a lot of recordings that will not endure without some kind of archiving. So while it's not a perfect solution, I do think it has an important role to play in preservation for future generations.

Perhaps it's best to have a light barrier to entry. Something like "Yes, you can listen to these records, but it should be in the spirit of requesting the material for review, and not just as a no-pay alternative to listening on Spotify." Give it just enough friction that people would rather pay the $12/month to use a streaming service.

Also, it's not like streaming services are a lucrative source of income for most artists. I expect the small amount of revenue lost to listeners of Anna's Archive is just (fractions of) a penny in the bucket of any income that a serious artist would stand to make.


> It is technically stealing

It is technically not. Stealing means you have a thing, I steal it, now I have the thing and you do not. You can't steal a copyright (aside from something like breaking into your stuff and stealing the proof that you hold the copyright), and when a song is downloaded the original copyright holder still has a copy.

Calling piracy theft was MPAA/RIAA propaganda. Now people say that piracy is theft without ever even questioning it, so it was quite successful.


> Stealing means you have a thing, I steal it, now I have the thing and you do not.

that seems like an overly narrow definition… what about identity theft, or IP theft?

https://www.justice.gov/usao-ndca/pr/superseding-indictment-...


See my other comment. Identity theft is the bank being defrauded and passing the problem onto you. They are the victim, not you, and it is their money that's gone, not yours.

IP theft is more like espionage and possibly lost hypothetical revenue. Again, it isn’t larceny, burglary, etc. You still have the knowledge, it’s just that so does the perpetrator.

Moreover, discussions of IP get into whether it even makes sense to be able to patent algorithms, which are at their core just mathematics. So before you can talk about stealing the quadratic formula, you need to prove that the quadratic formula is something that can be property.


Mitchell & Webb's take on "identity theft" is worth a listen.

https://www.youtube.com/watch?v=CS9ptA3Ya9E


You may not be stealing the actual content, more so "making a copy", but in doing that you're taking away money the artist would have earned if you bought their album or streamed it on Spotify (admittedly that's a very small amount for the artist, but that's another thing).

And if I stole something physical you had for sale, you wouldn’t make the money, so the end result is effectively the same.


The "if you bought their album" is the non-trivial part of that sentence. A pirate is not necessarily going to fork over $20 for an album if they couldn't pirate. Chances are they will simply not buy the album. In either case the artist doesn't get their $1.20 (6% to the artist, the rest to the studio and distributors). So the result is really not the same: the artist and the pirate can both have the album in different ways, and in both cases the artist doesn't get their $1.20, unlike with a physical good, which cannot be cloned.

What this is really exposing is that most art is not worth the same. A Taylor Swift album is not worth the same on the open market as a Joe Exotic album. Pricing both at, say, $20 is artificial. Realistically most music has near zero actual value, which is why, if you are a B-tier or lower artist, you won't make much compared to an A-tier artist on platforms like Spotify or YouTube, which pay per listen/watch.


Can you post your social security number and other personal info here then? You will still have it afterwards!

Oh also, I don't see why I should ever pay for trains or movie tickets if there are seats available. I can just walk in! The event will happen anyway. It's not stealing.

Everyone should just download all art, music and literature for free. Musicians, artists and writers can all make money some other way while I enjoy the works of their efforts.


https://www.sciencelearn.org.nz/images/straw-man-arguments

What the music/movie industry was claiming in court was not theft. There is no statute that identifies piracy as theft. They were claiming copyright violation and wanted to collect damages for lost revenue.

You are bringing up "identity theft", which is also not theft. If you post your PII here and I use it to open a credit card in your name and then spend a bunch of money using that card on goods and services, you are not the victim. What I do in that case is defraud the bank. They are the ones who are the actual victim, and in an ideal world they would be the ones working with the authorities to get their money back.

Of course they would rather not do that so they invented a crime called identity theft and convinced everyone that it is ok for them to make you the victim. They make your life hell since they can’t find the actual criminal while you spend thousands of dollars trying to prove that you don’t owe thousands of dollars. But in reality you were not any part of the fraud. It is on the bank to secure their system enough to prevent this. But they have big time lawyer money and you don’t so here you are.


I've watched a lot of live coding tools out of interest for the last few years, and as much as I'd like to adopt them in my music making it's not clear to me what they can add to my production repertoire compared to the existing tools (DAWs, hardware instruments, playing by hand, etc).

The coding aspect is novel, I'll admit, and something an audience may find interesting, but I've yet to hear any examples of live-coded music (or even coded music) that I'd actually want to listen to. They almost always take the form of some bog-standard house music or techno, which I don't find that enjoyable.

Additionally, the technique is fun for demonstrating how sound synthesis works (like in the OP article), but anything more complex or nuanced is never explored or attempted. Sequencing a nuanced instrumental part (or multiple) requires a lot of moment-to-moment detail, dynamics, and variation. Something that is tedious to sequence and simply doesn't play to this format's strengths.

So again, I want to integrate this skill into my music production tool set, but aside from the novelty of coding live, it doesn't appear well-suited to making interesting music in real time. And for offline sequencing there are better, more sophisticated tools, like DAWs or trackers.


Every generation of musicians for the past 8 decades has had the same thoughts. What live coding tools for synthesis offer you is an understanding of the nature of generational technology.

Consider this: there are teenagers today, out there somewhere, learning to code music. Remember when synthesisers were young and cool and there was an explosion of different engines and implementations?

This is happening for the kids, again.

Try to use this new technology to replicate the modern, and then the old sound, and then discover new sounds. Like we synth nerds have been doing for decades.


Music coding technology has been around for a long time - think of tools like csound and pd and Max/MSP. They're great for coding synthesizers. Nobody uses them to do songs. Even Strudel has tools for basic GUI components, because once you get past the novelty of 'this line of code is modulating the filter wowow', typing in numeric values for frequency or note duration is the least efficient way to interact with the machine.

Pro developers who really care about the sound variously write in C/C++ or use cross compilers for pd or Max. High quality oscillators, filters, reverb etc are hard work, although you can certainly get very good results with basic ones given today's fast processors.

Live coding is better for conditionals like 'every time [note] is played increment [counter], when [counter] > 15 reset [counter] to 0 and trigger [something else]'. But people who are focused on the result rather than the live coding performance tend to either make their own custom tooling (Autechre) or programmable Eurorack modules that integrate into a larger setup, eg https://www.perfectcircuit.com/signal/the-programmable-euror...
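
Spelled out, that conditional is just a few lines in any language. A Python sketch (play() and trigger_fill() are hypothetical stand-ins for whatever the host environment provides):

    counter = 0

    def play(note):
        print("note", note)

    def trigger_fill():
        print("fill!")

    def on_note(note):
        # increment on every note; every 16th note fires the extra event
        global counter
        play(note)
        counter += 1
        if counter > 15:
            counter = 0
            trigger_fill()

    for n in [60] * 20:
        on_note(n)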

It's not that you can't get great musical results via coding, of course you can. But coding as performance is a celebration of the repl, not of the music per se.


It was said of synthesizers in the early days: “it's not ‘real’ music”, and it's going to be said of every new music technology tool forever, because whenever someone invents a new way of making music, there will be detractors. This is a natural phenomenon of the subject and always will be.

I like your idea of celebrating the repl, it's right up there with performance menu diving as a statement for how orthogonal things can get .. I've never enjoyed fishing for parameters, so having editor chops applied musically is .. refreshing .. in some strange ways.

Sure wish hardware manufacturers would be motivated to not just throw a linked list and a couple of buttons at the parameter issue ..


> It was said of synthesizers in the early days: “it's not ‘real’ music”

No it wasn't. Jean-Michel Jarre sold 80 million albums and is one of the most famous musicians of the 20th century.


JMJ happened in the middle of the synth era, not the beginning of it, and his rise to fame definitely heralded a new acceptance of synthesisers as instruments, it's true. But there were dark days in the beginning, when synths were not considered cool, one bit, and were regarded as not real instruments because they were artificially attempting to recreate other instruments .. in the early days.

(Disclaimer: I've been in the MI business for decades, I've seen some things..)


Look into the JUCE framework for building your own tools. I was using Max/MSP for a while, but would always think to myself "This would be so much easier to accomplish in pure code". So, I started building some bespoke VSTs.

There's a learning curve for sure, but it's not too bad once you learn the basics of how audio and MIDI are handled + general JUCE application structure.

Two tips:

Don't bother with the Projucer; use the CMake example to get going. Especially if you don't use Xcode or Visual Studio.

If you're on a Mac, you might need to self-sign the VST. I don't remember the exact process, but it's something I had to do once I got an M4 Mac.


I haven't really found anything yet that Gemini can't do in python for this.
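
To be concrete, offline rendering in plain Python is only a few lines. A minimal sketch using numpy and the standard library's wave module (non-realtime, which is all I need):

    import numpy as np
    import wave

    SR = 44100
    t = np.arange(SR * 2) / SR  # two seconds
    # simple FM voice: 220 Hz carrier, a slow sine modulating its phase
    sig = np.sin(2 * np.pi * 220 * t + 3 * np.sin(2 * np.pi * 2 * t))
    sig *= np.exp(-t)           # decay envelope
    pcm = (sig * 0.8 * 32767).astype(np.int16)

    with wave.open("tone.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)       # 16-bit
        f.setframerate(SR)
        f.writeframes(pcm.tobytes())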

LLMs have absolutely killed any interest I used to have in the max/pd/reaktor wiring-up-boxes UI.

I have really gone further, though, and thought: why do I even care about VST or a DAW or anything like this? Why not break completely free of everything?

I take inspiration from Trevor Wishart and the Composers Desktop Project for this. Wishart's music could only really be made with his own tools.

It is easy to sound original when using a tool no one else has.


> I haven't really found anything yet that Gemini can't do in python for this.

Python for audio apps? First I've heard of this. Is it a "Python acts as a thin wrapper over C" or something?

> I have really gone further though and thought why do I even care about VST or a DAW or anything like this?

Been there. I started making music on a Windows 95 PC, built up a studio over the years (including some DIY hardware), and eventually was using Logic as a glorified multi-track recorder + effects rack. These days, I've kind of gone back to my roots, and I'm doing a lot of sample chopping. The only difference is: I'm using my own sounds as source material.


Hear, hear. Dragging cables around with a mouse was a hard sell in the first place, but now it’s pretty much inconceivable.

I’ve been using LLMs to help build out audio-related projects that I didn’t think I’d get a chance to pursue until I retired.

Under the hood, are they crap? Maybe. Probably, even. But they function well enough to make my own weird music with, and they’re available for use now - not twenty years from now when I retire.

The era of custom software on tap is here. As someone who is primarily interested in making unique stuff, it’s a great time to be alive.


AudioKit for iOS/Mac is also interesting and easy to work with.


For a great example of some (non-live) coded music, I would recommend The Haywire Frontier by Nathan Ho [0]. The whole album was sequenced and synthesized entirely in SuperCollider with no samples, external hardware, or third-party plugins. It's really interesting and a crazy achievement, definitely worth a listen.

For live coding, Switch Angel is definitely someone I would actually go to see live, check out this video of hers [1].

[0] https://nathanho.bandcamp.com/album/haywire-frontier [1] https://youtu.be/iu5rnQkfO6M


This Nathan Ho album is a good example. Thanks for sharing.


> I've watched a lot of live coding tools out of interest for the last few years, and as much as I'd like to adopt them in my music making it's not clear to me what they can add to my production repertoire compared to the existing tools (DAWs, hardware instruments, playing by hand, etc).

Aside from the novelty factor (due to very different UI/UX) and the idea that you can use generative code to make music (which became an even more interesting factor in the age of LLMs), I agree.

And even the generative code part I mentioned is a novelty factor as well, and isn't really practical for someone whose end goal is actually making music (as opposed to experimenting with tech, or with how far one can get with a music-as-code UI/UX).


Give it some time.

I feel like the newer(ish) tools such as Strudel, and also this here Loopmaster, have a much better toolset for producing stuff that actually sounds great (vs. just purely the novelty of "look im coding beats"). Like, Strudel comes with an extensive bank of genuinely good samples (vs. relying on synthesis out of some sense of purity), and also comes with lots of very decent sounding effects and filters and the likes.

Combine that with the ability to do generative stuff in a way that Ableton, FL Studio or Renoise are never going to give you, and I won't be surprised if people invent some cool new genres with these tools eventually.

Basically, your comment reads a bit like saying demoscene makes no sense because you can make any video better with Blender and DaVinci Resolve. And this obviously isn't true given the sheer overload of spectacularly great demos out there whose unique esthetic was easy to obtain because they're code, not video. (find "cdak" by Quite for an on-the-nose example).

I'm going to be surprised if this new wave of music coding tools will not result in some madly weird new electronic music genres.

Obviously there's plenty of stuff these tools are terrible for (like your example of nuanced instrument parts), but don't dismiss the kinds of things they're going to turn out to be amazing at.


What I like in comparison to sample-based software like Tidal and Strudel is exactly the ability to create very novel and interesting sounds using synthesis, something which I prefer. Genres like Techno rely a lot on timbre novelty.


Strudel has synthesis too. I don't immediately see how Strudel and Loopmaster's capabilities couldn't eventually converge.


Renoise just added support for a programmatic sequence generator called pattrns.

https://github.com/renoise/pattrns


This is actually cool. They have a web playground[0] as well.

[0]: https://pattrns.renoise.com/


Nice!


100% agree.

I think this format of composition is going to encourage a highly repetitive structure to your music. Good programming languages constrain and prevent the construction of bad programs. Applying that to music is effectively going to give you quantization of every dimension of composition.

I'm sure it's possible to break out of that, but you are fighting an uphill battle.


Quite the opposite, actually. Certain live coding languages give you the tools to create extremely complex patterns in a very controlled manner, in ways you simply wouldn't be able to do via any other method. The most popular artist exploring these ideas is Kindohm, who is sort of an ambassador figure for the TidalCycles language. Having used TidalCycles myself, the language lends itself particularly well to this kind of stuff, as opposed to more traditional song/track structures. And yet it also constrains and prevents the construction of bad programs in a very strict manner via its type system and compiler.

It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.


> Quite the opposite actually. certain live coding languages give you the tools to create extremely complex patterns

I think I must not be expressing myself well. These tools seem to be optimized for parametric pattern manipulation. You essentially declare patterns, apply transformations to them, and then play them back in loops. The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.

Again, I'm not trying to critique the styles of music that lend themselves well to these tools.

> And yet it also constrains and prevents the construction of bad programs in a very strict manner via its type system and compiler.

Looking at the examples in their documentation, all I see are examples like:

    d1 $ sound "[[bd [bd bd bd bd]] bd sn:5] [bd sn:3]"
So it definitely isn't leveraging GHC's typechecker for your compositions. Is the TidalCycles runtime doing some kind of runtime typechecking on whatever it parses from these strings?

> It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.

I think Pandoc or Shellcheck would win on this metric.


> So it definitely isn't leveraging GHC's typechecker for your compositions. Is the TidalCycles runtime doing some kind of runtime typechecking on whatever it parses from these strings?

the runtime is GHC (well GHCi actually). tidal's type system (and thus GHC's typechecker) ensures that only computationally valid pattern transformations can be composed together. if you're interested in the type system here's a good overview from a programmer's perspective https://www.imn.htwk-leipzig.de/~waldmann/etc/untutorial/tc/...

these strings are a special case, they're formatted in "mini-notation" which is parsed into composed functions at runtime. a very expressive kind of syntactic sugar you could say. while they're the most immediately obvious feature of Tidal (and have since been adapted in numerous other livecoding languages), mini-notation is really just the tip of the iceberg.

>The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.

but that applies to virtually all music, from bach to coltrane to the beatles! my point is that despite what the average livecoder might stream/perform online, live coding languages are certainly not restricted to or even particularly geared towards repetitive dance music - it just happens that that's a common denominator of the kind of demographic who's interested in livecoding music in the first place.

i'd argue that (assuming sufficient knowledge of the underlying theory) composing a fugue in the style of bach is much easier in tidal than in a DAW or other music software. on the more experimental end, a composition in which no measure ever repeats fully is trivial to realize in tidalcycles - it takes only a handful of lines of code to build up a stochastic composition based on markov chains, perlin noise and conditional pattern transformations. via the latter you can actually sculpt these generative processes into something that sounds intentional and follows some inner logic rather than just being random.
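
to illustrate outside of tidal: a first-order markov melody really is a handful of lines in any language. a python sketch with a made-up transition table over scale degrees:

    import random

    # made-up transition table over scale degrees
    chain = {0: [0, 2, 4], 2: [0, 4, 5], 4: [2, 5, 7], 5: [4, 7], 7: [0, 4]}
    degree, melody = 0, []
    for _ in range(16):
        melody.append(degree)
        degree = random.choice(chain[degree])
    print(melody)  # e.g. [0, 2, 4, 5, 7, 0, ...]

in tidal you get this kind of thing (and far more) as composable pattern transformations rather than hand-rolled loops.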

the text-based interface makes it much easier to use than anything GUI-based. it's all just pure functions that you can compose together, you could almost say that Tidal is like a musical equivalent of shell programs and pipes. equally useful and expressive both for a 10 year old and a CS professor.

>I think Pandoc or Shellcheck would win on this metric.

touché!


> i'd argue that ... composing a fugue in the style of bach is much easier in tidal than in a DAW or other music software. on the more experimental end, a composition in which no measure ever repeats fully is trivial to realize in tidalcycles - it takes only a handful of lines of code to build up a stochastic composition based on markov chains, perlin noise and conditional pattern transformations. via the latter you can actually sculpt these generative processes into something that sounds intentional and follows some inner logic rather than just being random.

I agree that it's easier to build a composition in a coding environment that uses stochastic models, markov chains, noise, conditions, etc. But I don't think that actually makes for compelling music. It can render a rough facsimile of the structure, but the result is uncanny. The magic is still in the tiny choices and long arc of the composition. Leaving it to randomness is not sufficient.

Bach's style of composition _is_ broadly algorithmic. So much so that his style is taught in conservatories as the foundational rules of Western multi-voice writing, but it's still not a perfect machine. Taste and judgment have to be exercised at key moments in the composition on a micro level. You can intellectually understand florid counterpoint on a rules-based level, but you still have to listen to what's being written to decide if it's musically compelling or if it needs to be revised.

The proof is in the pudding. If coded music were that good, we would be able to list composers who work in this manner. We might even have charting music. But we don't, and the best work is still being done with instruments in hand, or written on a staff, or sequenced in a DAW.

I want this paradigm to work - and perhaps it can - but I've yet to hear work that lives up to the promise.


Quantization in the DAW is pretty easy to do as well. So I am not sure that would be unique to music / live coding sessions.


It is unique because everything is quantized. I've never used these tools, but I am assuming you could give it some level of randomness; however, as someone who has performed and recorded, a non-quantized performance is not random. So sure, it's super easy to quantize in your DAW, but there it is a tool to be applied when needed, not something that is on all the time by default.


yes exactly, and when I say "quantization of every dimension of composition" I mean an application of quantization to every aspect of composition, not just pitch and rhythm.


Quantization and repetition are what some genres depend on. It won't be the right instrument for a Rock ballad, but for a Techno track you need this kind of "everything being quantized". That said, in loopmaster you can add swing and noise to the note offsets to humanize a sequence, a lot is left to the imagination and ability of the creator.
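
For a sense of what that looks like, a minimal Python sketch of swing-plus-noise humanization (parameter names are hypothetical; loopmaster's actual implementation differs):

    import random

    def humanize(times, swing=0.08, jitter=0.01):
        # push every off-beat slightly late, then add random noise
        out = []
        for i, t in enumerate(times):
            s = swing if i % 2 else 0.0
            out.append(t + s + random.uniform(-jitter, jitter))
        return out

    print(humanize([i * 0.25 for i in range(8)]))  # straight 16ths, loosened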


No one in this thread is saying quantization is never appropriate.


Some of us enjoy highly repetitive music, at least some of the time.

"Computer games don't affect kids. If Pac Man affected us as kids, we'd all be running around in darkened rooms, munching pills and listening to repetitive music." -- Marcus Brigstocke (probably?)

Also, related but not - YouTube's algorithm gave me this the other day - showing how to reconstruct the beat of Blue Monday by New Order:

https://www.youtube.com/watch?v=msZCv0_rBO4


I'm not saying anything negative about repetitive music. I'm saying that tools like live coding are going to constrain the kind of music you can produce reasonably.


I mean, sure, art has constraints.

My sister likes to work with [checks notes carefully to avoid the wrong words] old textiles. This of course constrains the kind of art she can make. That's the whole point.

I see live coding the same way as the harp, or a loop sampler: an instrument, one of an enormous variety of tools which you might find suits you or not. As performance I actually enjoy live coding far more than most ways to make music. (Although I thought Amon Tobin's ISAM Live was amazing, that's because of the visuals.)


I saw Amon Tobin about 15 years back. Still one of my favourite shows ever - and I see a _lot_ of shows.

And yeah, your music tools/instruments constrain you. There are only so many music genres you can reasonably play or compose on an acoustic guitar. Or an oboe. Or modular synths. I suspect it's _possible_ to compose and play alt-rock or country music using live coding instead of a guitar - but why would you?


Bleep country feels like it's maybe an idea that could go somewhere.


> but why would you?

For the end result .. which we've yet to hear.

Before it landed, a country(?) banjo(?) cover of Eminem's rap classic "Lose Yourself" was a "but why would you".

And then Kasey Chambers owned it.


I have to say that I still feel the same way about a country cover of "Lose Yourself" after listening to it. I've never had great love for country, but then I'm not a huge rap fan either. To me, that cover/version is only good for the same reason as the original: lyrically it still works, even if I don't believe Kasey whereas I did believe Marshall, that this was (or at least seemed) their only way out. But nothing new was brought to the table, IMHO.

Johnny Cash's "Hurt" is an example where the performance was transformative. Reznor's "Hurt" is a song by a 20-something addict feeling sorry for himself. However Cash is a man who knows he actually doesn't have much time left†, and so almost identical lyrics ("Crown of shit" is changed) feel very different.

† Cash died about a year after his recording was published.


The point of the comment wasn't to persuade you to like a particular cover.

I'm only aware of it myself because of an unusual number of vocal coaches being overly enthusiastic about it. "Country" is an odd label for it given the transition midway.

The thrust of the comment was to remind the GP not to limit their expectations about what others might do. You yourself highlighted Cash's cover as something you deem of value; it's another example of an unexpected product.

Live coding may or may not progress in any particular direction or genre; I'd prefer not to make any predictions myself, and to leave open the possibility of being pleasantly surprised.


I've suspected that, too, while looking at DAWs and how people make music with them. It seems a bit boring to me.

To kinda get away from that, or even just to experiment, I was interested in the possibility of writing music with code and either injecting randomness in places or at least varying the intervals between beats/etc. using some other functions (which would again just be layering in patterns, but in a more subtle way).


Procedural generation can be useful for finding new musical ideas. It's also essential in specific genres like ambient and experimental music, where the whole point is to break out of the traditional structures of rhythm and melody. Imagine using cellular automata or physics simulations to trigger notes, key changes, etc. Turing completeness means there are no limits on what you can generate. Some DAWs and VSTs give you a Turing complete environment, e.g. Bitwig's grid or Max/MSP. But for someone with a programming background those kinds of visual editors are less intuitive and less productive than writing code.
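
For instance, a minimal Python sketch of a cellular automaton as a step sequencer (the rule and the mapping to hits are arbitrary choices):

    # elementary CA (rule 110) as a 32-step sequencer: each generation is
    # one bar, and live cells mark where a hit would be triggered
    RULE = 110
    row = [0] * 15 + [1] + [0] * 16
    for bar in range(4):
        print("".join("x" if c else "." for c in row))
        row = [(RULE >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % 32])) & 1
               for i in range(32)]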

Of course, often creativity comes from limitations. I would agree that it's usually not desirable to go full procedural generation, especially when you want to wrangle something into the structure of a song. I think the best approach is a hybrid one, where procedural generation is used to generate certain ideas and sounds, and then those are brought into a more traditional DAW-like environment.


I've actually tried all of the approaches that you've mentioned over the years, and - for my needs - they're not that compelling at the end of the day.

Sure it might be cool to use cellular automata to generate rhythms, or pick notes from a diatonic scale, or modulate signals, but without a rhyme or reason or _very_ tight constraints the music - more often than not - ends up feeling unfocused and meandering.

These methods may be able to generate a bar or two of compelling material, but it's hard to write long musical "sentences" or "paragraphs" that have an arc and intention to them. Or where the individual voices are complementing and supporting one another as they drive towards a common effect.

A great deal of compelling music comes from riding the tightrope between repetition and surprising deviations from that scheme. This quality is (for now) very hard to formalize with rules or algorithms. It's a largely intuitive process and is a big part of being a compelling writer.

I think the most effective music comes from the composer having a clear idea of where they are going musically and then using the tools to supplement that vision. Not allowing them to generate and steer for you.

-----

As an aside, I watch a lot of Youtube tutorials in which electronic music producers create elaborate modulation sources or Max patches that generate rhythms and melodies for them. A recurring theme in many of these videos is an approach of "let's throw everything at the wall, generate a lot of unfocused material, and then winnow it down and edit it into something cool!" This feels fundamentally backwards to me. I understand why it's exciting and cool when you're starting out, but I think the best music still comes from having a strong grasp of the musical fundamentals, a big imagination, and the technical ability to render it with your tools and instruments.

----

To your final point, I think the best example of this hybrid generative approach you're describing are Autechre. They're really out on the cutting edge and carving their own path. Their music is probably quite alienating because it largely forsakes melody and harmony. Instead it's all rhythm and timbre. I think they're a positive example of what generative music could be. They're controlling parameters on the macro level. They're not dictating every note. Instead they appear to be wrangling and modulating probabilities in a very active way. It's exciting stuff.


I don't think any of that is an argument against the use of procedural generation, it's just an argument for the tasteful use of it. Partly it also depends on what works in your own workflow. I find that it's an essential component in the creative process of a lot of the artists I admire. Autechre is a great example. I think a lot of the pioneers of early IDM like Autechre and Aphex Twin have found ways to incorporate randomness at the micro level, while maintaining control at the macro level over the shape and direction of the composition. I don't see this as competing with traditional composition methods, it's just leveraging code-based tools to give the artist more control over which elements are random and which ones they control.


When you learn to use it, you can throw a lot of intention into it, knowing the output even before you hit play. Yes, you can go the other way and "subtract" your way out of chaos, but you can also intentionally piece together the components and produce an output you imagined beforehand. The missing pieces here for this format, my instinct tells me, are layers of abstraction or additional UI elements that will help in composing a final piece: code for the fundamental components, plus something else that hasn't been invented yet, or that no one has thought of gluing together.


> ...forsakes melody and harmony. Instead it's all rhythm and timbre

Harmony and timbre are basically the same thing. You can feel this if you play a long drone note and twiddle the filter cutoff and resonance.


I see it as a neat way for nerds to nerd out about nerd stuff in an experiential way. Like, this is not going to headline a big time rave or festival or anything, but in a community of people who like math or programming or science, sure, why not introduce this kind of performance as another little celebration of their hobby?

Years ago I went to a sci-fi convention for the first time, because I had moved to a new town and didn't know anyone, and I like sci-fi. I realized when I was there that despite me growing up reading Hugo and Nebula award winners, despite watching pretty much every sci-fi show on TV, despite being a full-time computer nerd, the folks who go to sci-fi conventions are a whole nother subculture again. They have their own heroes, their own in-jokes, their own jargon... and even their own form of music! It's made by people in the community for the community and it misses the point to judge it by "objective" standards from the outside, because it's not about trying to make "interesting music" or write the best song of all time. The music made in that context is not being made as an end in itself, or even as the focus of the event, it's just a mechanism to enable a quirky subculture to hang out and bond in a way that's fun for them. I see this kind of live coded music as fulfilling a similar role in a different subculture. Maybe it's not for you, but that's fine.


Fair point, and that's the challenge in both the software's abilities and the creator's skills.

If you see it as yet another instrument you have to master, then you can go pretty far. I'm finding myself exploring rhythms and sounds in ways I could never do so fast in a DAW, but at the same time I do find a lot of factors limiting, especially sequencing.

So far I haven't gotten beyond a good sounding loop, hence the name "loopmaster", and maybe that's the limit, which is why I made a 2 deck "dual" mode in the editor, so that it can be played as a DJ set where you don't really need that much progression.

That said, it's quite fun to play with it and experiment with sounds, and whenever you make something you enjoy, you can export a certain length and use it as a track in your mix.

My goal is certainly to be able to create full length tracks with nuances and variations as you say, just not entirely sure how to integrate this into the format right now.

Feedback[0] is appreciated!

[0]: https://loopmaster.featurebase.app/


My wife's (and many other music-lovers') test for whether something counts as "real music" is whether they can perform it live (and sound as good as the recording). Music which is programmed doesn't count, as there's a lot of nuance that a skilled musician with an actual instrument can put into a performance in a split-second as they play.


If it moves and connects with you then it's real music.

It's fine to have a preference for live musicianship, but the 'real music' argument has been leveled against every new musical technology (remember the furore around Dylan going electric?). It dismisses contemporary creativity based on a traditionalist bias that elevates one form of execution above all others. There's also a huge amount of skill in producing good electronic music. It's always hard to make good music no matter the means.


It's a spectrum and people are free to draw the line wherever they want.

If you dial the dial high enough you can say that amplifiers aren't "real music" because you are no longer hearing the "real instruments", but "a machine that is distorting the sound". If that's your line, then only listening to classical music in a concert hall would count as "real music".

You could dial it up even higher. Using a musical instrument at all is not "real music" any more, because the human voice can have more nuance than any instrument. Then going to a church to listen to Gregorian chants would be the only "real music".

I personally think that Daft Punk rocks, and for a lot of artists I very much prefer listening to their studio recording rather than listening to them in a concert. (Surrounded by ... people. Ugh.)


The nuance of a skilled player in the moment is a beautiful thing to behold, but saying programmed music isn't "real music" is like saying that film acting isn't real acting, but theater acting is.

It's like saying a novel isn't real speaking, but a speech is.

Like animating an image isn't real, but recording a video is.

If that's your preference, then that's alright. But it's a silly distinction to make.


Here's a whole opera from a Star Trek episode I coded in SuperCollider - you can indeed code things other than EDM... (it's a screen grab - being synthesized in real time)

https://vimeo.com/944533415?fl=ip&fe=ec


The great advantage over DAWs etc. is that you can name things and slowly build your own bespoke tools... For this work, all timing was done in reference to the words rather than beats and bars - I can re-flow the whole piece by tapping through the syllables on my space key. Something that would be totally impossible in a traditional platform!


In loopmaster you can define functions and abstract slowly and build your tools as well. Not yet with callbacks but it's in the works to do more complex SuperCollider-style stuff.


Sign up please and teach us.


I've seen a couple of TikToks with someone doing live coding with this same tool and it was really cool to watch because they really knew it well, but like you said it was bog-standard house/techno.


I find trackers to be in the same category you put live coding into, probably DAWs as well, but many people do some amazing things with all three. In the more academic computer music world there is a fair amount of diversity in live coding where it is generally combined with algorithmic/generative techniques to rapidly create complexity and nuance. SuperCollider seems to have the most interesting output here, for me at least; I have seen little that really grabs me but they do show the capabilities of the process and I find that quite interesting. Improvisation and jamming is just not my thing, so live coding falls a bit short for me.


It's not gonna add anything to your repertoire. It will appear so after some time, but it's really just an approach for people who have bad hand-eye coordination, trouble holding a rhythm (or a hard time acquiring those skills), or trouble tinkering with DAWs, which have a weirdly annoying first-hour learning curve.


I'm chuckling at the comments for the video:

"If you add two pounds of sugar to literally one ton of concrete it will ruin the concrete and make it unable to set properly which is good to know if you wanna resist something being built..."


Yes yes yes. You rarely need to DO anything other than listen. Just be a good listener. Maybe identify handles in what they're saying and then occasionally ask them about them:

How did that make you feel? Wait, you did what? Why did you do that? What do you enjoy about that?


>You rarely need to DO anything other than listen

There is a particular amount of risk here: this does set you up to interact with attention vacuums, people who will talk constantly without a break or any desire to listen to what you have to say. Over any amount of time (weeks/months), a person you can have a real two-way conversation with is needed.


In my experience such people are rare. You can always steer the conversation, give your input, tell them (nicely) to stop talking, or - if they're unwilling to take the hint - walk away.


I indirectly know one or two people who match this lonely home gambling archetype (friends of a friend) and every day I feel gratitude that I am (for the most part) socially well-adjusted and supported.

The handful of men like this whom I've spoken to I find to be pretty self-aware and somewhat self-loathing. I often find that what they need is someone who can listen to them, give them some tactful encouragement, and occasionally help them find a strategy that will help them overcome the troubles they're facing. Be that therapy or a Debtors Anonymous meeting.

It's a challenging world that is often overwhelming to manage by yourself. It's easy to feel dis-empowered when you don't have a solid social circle that you can lean on, or who can help to re-direct your bad tendencies. Helping folks find their own social group can be immensely helpful for people who are trying to course correct.


It would be more convincing if you explained what it actually is. Rather than what it is not.


There are books and Google and Wikipedia.

Like people who refer to meditation without explaining the whole process involved in one of the traditions, because there is a wealth of information available, I would much prefer to answer specific questions on the practice instead of copying and pasting from Wikipedia, which I am doing now:

"The technique involves repetitions of a set of visualisations accompanied by vocal suggestions that induce a state of relaxation and is based on passive concentration of bodily perceptions like heaviness and warmth of limbs, which are facilitated by self-suggestions.Autogenic training is used to alleviate many stress-induced psychosomatic disorders"

There are six formulas: heaviness, warmth, the heart beating regularly and strongly, calm breath, a warm solar plexus, and a cool forehead.

There's no vocal suggestion (the Wikipedia article is wrong in that regard); the formulas are repeated silently. It's a much more effective practice than the hocus-pocus that meditation in the Eastern tradition often is, especially the bastardized variety adopted in the West, and there are plenty of books and papers available on the results of scientific studies that measure the effects of AT on soma and psyche.


Everyone wants to be happy, but nobody wants to be happy with what they have.


Doesn't everyone want to be happy with what they have? Why would you not want that? Like, ideally we'd all be happy with nothing, right?


> Doesn't everyone want to be happy with what they have?

No, most people think getting more (or getting something else) will make them happy.

> Why would you not want that. Like, ideally we'd all be happy with nothing, right?

Because it's hard to become wise, and that's not what society teaches.


In the US in the 1920s, the idea of making people discontent in order to stimulate buying gained popularity. This is still used today. The culture is directed at making people unsatisfied. It's hard to go against the grain of society.


That wasn’t a new idea. It’s not even restricted to humans.

Competing for mates is one of the basic mechanisms in evolution, seen in many animals. Instead of fighting the tribal leader or whomever to display fitness, humans came up with a less violent solution, which manifests itself in the ability to buy things.


I think this is a very American ideal (that has been exported with much success).


So maybe the right question to ask is: how to be content under a capitalist regime if you are smart.


The Affordable Care Act wasn't a complete solution - and I don't get the feeling universal health care was necessarily achievable - but it is the reason that I have health care and mental health services today. So I consider it to be a meaningful - if incremental - improvement. I imagine there are quite a few people aside from myself who are happy to have it.

