Musical My Buddy

02 Sep 2015

A Musical My Buddy

Having long been a musician, and having performed deep within the bowels of Bushwick, I’ve had a desire to build a program that would mimic my playing style and attempt to play with me or against me. This would be the entire set. It would literally be “My Buddy,” a sort of man-with-machine improvisation, where I would respond to what would essentially be the machine mimicking me. The music created would be a sort of tit for tat. I would challenge myself to play, essentially, with myself, making something different by challenging my own assumptions about myself, or really understanding myself, to push me toward playing something different and unexpected. The following is a back-of-the-envelope consideration of what it would take to make that whole ecosystem work.

Considerations:

  • The program would have access to a large body of my playing: sessions, practice, or songs > Most music today is created, or at least produced, with some audio software, be that Ableton, Logic, or Pro Tools. Not to discredit traditionalists who are hardware-only types, but most music at this point in history has some digital genesis. That digital origin would let you capture the music on the wire by simply routing all MIDI data into one of these software DAWs, where, with SysEx or some other system, you could also describe the program (patch) that was used. (See the capture sketch after this list.)
  • The program would be able to have a machine-level description of what I was playing, note for note so to speak > Right now this is not quite possible; MIDI is limited in that it doesn’t do justice to describing the synth or music-making device: its tonal properties, its waveform, its form of synthesis, etc. If we imagine that all synths had Csound descriptions or virtual models, we could incorporate that information into generic mathematical models of the waveform generated, which, together with the timing, would make it possible for the machine to gain a more accurate understanding of what actually happened. (A rough note-record sketch follows the list.)
  • The program would need to be insanely fast at applying or building a model that describes me > The program would probably need to be trained in some sort of offline capacity on the stimuli being fed in, so that a realtime performance would work. (See the toy model after this list.)
  • The program would also have to take feedback, either audience-attention feedback or something else > As a sort of tuning feature for how well the program was doing, we might include feedback based on some machine recognition of audience participation.
  • Finally, the program would have to be reasonably accurate at recreating my vibe, or my perceived vibe > For this we could have an offline training sequence that let the user mark how well the model produced music that sounded like them.
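
To make the first consideration concrete, here is a minimal sketch of capturing a practice session as a corpus. It assumes the `mido` library and a DAW-exposed port name like `'Ableton Out'`; the port name, tempo, and output file name are all placeholders for whatever your setup actually provides.

```python
# Minimal sketch: log every incoming MIDI message from the DAW into a file.
# 'Ableton Out' and 'session.mid' are placeholder names for this example.
import time
import mido

def record_session(port_name='Ableton Out', ticks_per_beat=480, bpm=120):
    mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
    track = mido.MidiTrack()
    mid.tracks.append(track)

    tempo = mido.bpm2tempo(bpm)
    last = time.time()
    with mido.open_input(port_name) as port:
        try:
            for msg in port:                 # blocks until a message arrives
                if msg.is_realtime:          # clock/start/stop can't go in a MIDI file
                    continue
                now = time.time()
                delta = mido.second2tick(now - last, ticks_per_beat, tempo)
                last = now
                track.append(msg.copy(time=int(delta)))
        except KeyboardInterrupt:
            pass                             # stop recording with Ctrl-C
    mid.save('session.mid')
    return mid
```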
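For the machine-level description, the record below is a rough guess at what a richer-than-MIDI note event might carry. Every field beyond pitch, velocity, and timing is invented for illustration, standing in for whatever a Csound description or virtual model of the synth would actually supply.

```python
# Hypothetical note record: MIDI-style data plus a description of the sound source.
from dataclasses import dataclass, field

@dataclass
class NoteEvent:
    pitch: int                  # MIDI note number, as today
    velocity: int
    onset: float                # seconds from session start
    duration: float
    synth_model: str = ''       # e.g. path to a Csound .orc describing the patch (assumed)
    partials: list = field(default_factory=list)   # (freq, amp) pairs of the waveform (assumed)
```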
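And for the offline-trained, realtime-fast model, here is a toy stand-in: a first-order Markov chain over pitches, trained once on the recorded corpus and cheap enough to sample on stage. The real model would need to be far richer; this only shows the shape of the train-offline, respond-instantly loop.

```python
# Toy stand-in for the style model: train offline on pitch sequences,
# then generate short response phrases instantly during performance.
import random
from collections import defaultdict

def train(sequences):
    """Count pitch-to-pitch transitions across all recorded phrases."""
    transitions = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def respond(transitions, seed_pitch, length=8):
    """Sample a short phrase in 'my' style, fast enough for realtime use."""
    phrase, current = [seed_pitch], seed_pitch
    for _ in range(length - 1):
        nexts = transitions.get(current) or {current: 1}   # unseen pitch: stay put
        pitches = list(nexts)
        weights = [nexts[p] for p in pitches]
        current = random.choices(pitches, weights=weights)[0]
        phrase.append(current)
    return phrase

# e.g. train once offline, then call respond() on stage:
# model = train(corpus_of_pitch_lists)
# reply = respond(model, seed_pitch=60)   # 60 = middle C
```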

Extra Thoughts

  • One of the main blockers to incorporating machines into our process of music creation in a non-trivial way is describing the music that we’re playing more accurately. I think the old barriers of MIDI will eventually fall, where the barrier I speak of is its limited way of describing what’s happening musically. If we had Csound files we would already have a much better, machine-readable way of describing the music that we’re creating or performing. In a future world, perhaps your favorite instrumental artists would be downloadable as a style or a vibe; the more they played, varied, explored, and made music, the more varied the machine-generated music would get. You could then potentially expand this to do things like building algorithms that make mashups: take two distinct styles and merge them into a third style that does not resemble the two originals. Example: take Madonna’s style and Kiss’s style and use genetic algorithms to evolve them into The Cranberries. (A toy version of that crossover is sketched below.)
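
A toy version of that genetic-algorithm mashup, assuming each style has already been boiled down to a vector of numeric features (note density, pitch spread, swing, and so on). The features, the fitness function, and that whole reduction are assumptions for illustration, not a real pipeline.

```python
# Toy genetic algorithm: cross two "style vectors" and evolve hybrids
# toward whatever the (user-supplied) fitness function rewards.
import random

def crossover(style_a, style_b):
    """Mix two style vectors gene-by-gene."""
    return [random.choice(pair) for pair in zip(style_a, style_b)]

def mutate(style, rate=0.1, scale=0.05):
    """Nudge a few features so hybrids drift away from both parents."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in style]

def evolve(style_a, style_b, fitness, generations=100, pop_size=50):
    """Evolve a population of hybrid styles; returns the fittest one."""
    population = [mutate(crossover(style_a, style_b), rate=1.0)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```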