Jazzy Beach Critters


New: try the Interactive Web Version of Jazzy Beach Critters! It works best on the new Microsoft Edge (the Chromium-based version) on Windows 10. Chrome on Mac and Windows will also work, but the audio may not load immediately: you may initially hear a pause followed by a pile of notes that sorts itself out after a few seconds (just be patient if this happens). Older versions of Microsoft Edge and Android/iOS devices have known compatibility issues.

Video demo:

Jazzy Beach Critters is a proof-of-concept demonstration of functional models for real-time music generation applied to a game scene. There are two kinds of sound in this demo: sound effects triggered by critter/user actions and generated music. Each critter represents a musical part (solo, harmony, or bass) and emits notes corresponding to the pitches it plays. The music produced by the group changes as the critters’ moods change, but it does so while preserving harmonic and metrical coherence. The user can interact by petting a critter (making it happier), poking a critter (making it angrier), dropping food for critters to eat, and clicking on the beach ball to change the overall style. Happy critters play in key with each other, while angry critters become chromatic/atonal.
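To make the mood-to-harmony idea concrete, here is a tiny Haskell sketch of how a critter's mood could steer pitch selection, with happy critters confined to a key and angry critters allowed the full chromatic set. The names (Mood, choosePitch) and the hard-coded C-major pitch classes are assumptions made for this illustration; they are not the project's actual code.

import System.Random (randomRIO)

-- Illustrative type only; the real system's representations differ.
data Mood = Happy | Angry deriving (Show, Eq)

-- Pitch classes of a C major scale vs. the full chromatic set.
inKey, chromatic :: [Int]
inKey     = [0, 2, 4, 5, 7, 9, 11]
chromatic = [0 .. 11]

-- Happy critters stay in key; angry critters drift chromatic/atonal.
choosePitch :: Mood -> IO Int
choosePitch mood = do
  let pool = case mood of
        Happy -> inKey
        Angry -> chromatic
  i <- randomRIO (0, length pool - 1)
  pure (pool !! i)

main :: IO ()
main = do
  happyNote <- choosePitch Happy
  angryNote <- choosePitch Angry
  putStrLn ("Happy critter plays pitch class " ++ show happyNote)
  putStrLn ("Angry critter plays pitch class " ++ show angryNote)

In the demo itself, the choice of pitch is of course tied to the shared harmonic and metrical state of the whole group rather than made independently per critter as in this toy.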

Jazzy Beach Critters was a collaboration with Christopher N. Burrows, who designed the game scene with Unity and C#. The generative algorithms are implemented in Haskell. Music is generated at the level of individual notes, which trigger single-note samples within the Unity framework. All of the music generated is stochastic, and all sound synthesis takes place within the game in real time. Unlike a number of other procedural soundtrack implementations for games, there are no pre-recorded or pre-composed musical passages in this scene. The communication between the game environment and the music generation algorithms is also bidirectional: game events influence the music, and the music can influence game events (as evidenced by the critters producing notes in sync with the music). A small illustrative sketch of this two-way exchange appears after the figure below.

Figure: General flow of information over time in Jazzy Beach Critters.
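The following minimal Haskell sketch illustrates the two message directions and one update step of the flow shown above. All names (GameEvent, NoteEvent, MusicState, step) are invented for this example, and the transport, harmonic state, and metrical logic of the real system are reduced to a toy per-critter "happiness" counter.

type CritterId = Int

-- Game -> generator: user and critter actions that steer the music.
data GameEvent
  = Petted CritterId     -- critter becomes happier
  | Poked  CritterId     -- critter becomes angrier
  | BallClicked          -- change the overall style
  deriving (Show)

-- Generator -> game: single note events; the game triggers a one-shot
-- sample and animates the corresponding critter in sync with the note.
data NoteEvent = NoteEvent
  { neCritter :: CritterId
  , nePitch   :: Int       -- MIDI-style pitch number
  , neBeat    :: Int       -- metrical position
  } deriving (Show)

-- Toy state: each critter's happiness. The real system tracks much
-- richer harmonic and metrical information.
type MusicState = [(CritterId, Int)]

applyEvent :: GameEvent -> MusicState -> MusicState
applyEvent (Petted c)  = map (\(i, h) -> if i == c then (i, h + 1) else (i, h))
applyEvent (Poked  c)  = map (\(i, h) -> if i == c then (i, h - 1) else (i, h))
applyEvent BallClicked = id   -- style changes omitted in this sketch

-- One tick of the loop: absorb game events, then emit one note per critter.
step :: Int -> [GameEvent] -> MusicState -> (MusicState, [NoteEvent])
step beat events st = (st', notes)
  where
    st'   = foldr applyEvent st events
    notes = [ NoteEvent c (60 + h) beat | (c, h) <- st' ]

main :: IO ()
main = do
  let st0 = [(1, 0), (2, 0)]
      (st1, notes) = step 0 [Petted 1, Poked 2] st0
  print st1
  mapM_ print notes

In the actual demo, the Unity side turns each note event into a single-note sample trigger and a critter animation, while user actions in the scene travel back to the Haskell generator as events like those above.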

Generative Models for Music

The models for generative music are derived from my existing work on generative jazz and other interactive music. The same models have been used to produce a number of stand-alone compositions and to build interactive systems for live performance. Because these models already power real-time interactive systems in which the machine responds to a human musician’s melodies, something like Jazzy Beach Critters could be extended to let the user interact more deeply with the soundtrack. For more information on these generative models for music, see the following: