[Videos: Coord 1 · Coord 2 · Coord 3]
Coord - generative music improvisation
Music creation involves, at some level, the generation of note patterns. The notes themselves form a musical surface, while the patterns form musical structure in the mind. For the listener, surface is input and structure is output; for the composer or improviser, structure is input and surface is output. If a computer could trace the emergence of structure from a given surface, and the creation of a surface from given structures, then it could enable music creation by handling musical complexity while allowing a human to focus on musical impulse, drive, preference, and feeling.
Two meanings of “generative music” can be used jointly to make this link between human musical intuition and machine-based analysis/synthesis, where the goal is to enable people (perhaps without musical training) to steer music by ear in real time. In the first meaning, a generative grammar captures how a listener intuitively senses structure in music; this has been described by Lerdahl and Jackendoff in A Generative Theory of Tonal Music, building on Chomsky’s approach to language. In the second, generative music and art sometimes use systems such as cellular automata or L-systems to create complex surfaces from simple rules; this approach has been described and used by artists and musicians such as Philip Galanter and Brian Eno. Harnessing the two generative approaches together would allow a computer to make the round trip from selected musical surfaces to the manipulation of intuitive musical structures, and back to newly generated musical surfaces. That is, the computer could act as a music theory co-processor that models certain (mostly subconscious) mental tasks of experienced musicians and couples them to human navigation in real time. This intersection of generative approaches is difficult to pin down, however, because a generative grammar usually does not produce musical structures that are specific enough to be used as generative rules for recreating musical surfaces.
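As a toy illustration of the second meaning (this is not part of Coord; the rule number, grid width, and drum-pattern reading are arbitrary choices for the example), the sketch below runs an elementary cellular automaton, Rule 90 seeded with a single cell, and reads each generation as a 16-step onset pattern: a very simple rule growing a surface of varied, self-similar rhythms.

```python
# Illustrative sketch only: an elementary cellular automaton (Rule 90)
# producing 16-step onset patterns from a one-cell seed.
RULE = 90  # update rule, chosen arbitrarily for this example


def step(cells, rule=RULE):
    """Apply one elementary-CA update with wrap-around neighborhoods."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode (left, center, right) as a 3-bit number and look up the rule bit.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out


def ca_rhythms(width=16, generations=8):
    """Yield successive generations as binary onset patterns (1 = note, 0 = rest)."""
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell
    for _ in range(generations):
        yield cells
        cells = step(cells)


if __name__ == "__main__":
    for pattern in ca_rhythms():
        print("".join("x" if c else "." for c in pattern))
```

The point of the toy is only that complexity emerges from a trivial rule; it says nothing about how a listener would hear structure in the result, which is exactly the gap the first, grammar-based meaning addresses.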
But low-level rhythms are a domain where such an intersection can work. Here, because the hierarchical, recursive nature of a generative grammar is bound to the strict metrical structure that underlies rhythmic anticipation and repetition, an attractor is formed in which a self-similar set of rhythmic building blocks maps a key psychological aspect of musical coherence: expectation. This map provides a bidirectional path between musical surface and musical structure that can be generated on the fly, much as it is in the actions and mind of an experienced composer or improviser. The overarching goal is to keep the human tightly in the creative loop, navigating selected musical influences by ear in order to create new but recognizably related rhythms and melodies. This stands in contrast to some recent data-mining and machine-learning approaches to music that require off-line training and engage the user at the level of a menu or questionnaire (essentially replacing the composer and casting the user in the role of a shopper) rather than from within a stream of notes that is actively steered in real time.

Papers describing the relevant music/number theory and the algorithmic implementation can be found at http://coord.fm/papers. The videos here show the algorithm at work. In each video, the melody and bass line morph between three different inputs based on the position of the pointer; the user thereby improvises new variations by navigating between selected musical inputs, shaping note patterns by ear and action during playback.
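To make the pointer-driven morphing concrete, here is a rough stand-in, not the Coord algorithm (which is described in the linked papers): three hypothetical 16-step rhythm inputs are blended by weighting each one according to the pointer's distance from three anchor positions, and the weighted vote per step is thresholded into a new onset pattern. The input patterns, anchor coordinates, and threshold are all assumptions made for the example.

```python
# Illustrative stand-in, not the Coord algorithm: morph between three
# 16-step rhythm inputs according to a 2D pointer position.
import math

# Hypothetical example inputs: 1 = onset, 0 = rest.
INPUTS = [
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],  # four-on-the-floor
    [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # clave-like pattern
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # straight eighths
]
ANCHORS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]  # assumed screen positions of the three inputs


def weights(pointer):
    """Inverse-distance weights: the closer the pointer to an anchor, the more that input counts."""
    raw = [1.0 / (math.dist(pointer, a) + 1e-6) for a in ANCHORS]
    total = sum(raw)
    return [w / total for w in raw]


def morph(pointer, threshold=0.5):
    """Blend the three inputs and keep a step as an onset when its weighted vote is strong enough."""
    w = weights(pointer)
    return [
        1 if sum(w[k] * INPUTS[k][i] for k in range(3)) >= threshold else 0
        for i in range(len(INPUTS[0]))
    ]


if __name__ == "__main__":
    for pointer in [(0.1, 0.1), (0.5, 0.4), (0.9, 0.1)]:
        print(pointer, "".join("x" if s else "." for s in morph(pointer)))
```

The actual algorithm works on the self-similar rhythmic building blocks described above rather than on a raw onset grid, so that morphed results respect the underlying metrical structure; the papers at http://coord.fm/papers give the details.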