Update 09-Nov-2017: the GitHub version of this now includes the other features listed as to-dos here, along with a working writeNRT (it just required a Windows-compatible version). More complex and infinite music values should behave just fine, and there is also support for different ways of handling the “R” part of ADSR envelopes.
Tom Murphy recently gave a talk at the FARM workshop in Oxford demonstrating a Haskell library called Vivid, which does real-time audio synthesis using SuperCollider as a back-end (this video of another talk is also worth a look). After many years of digging in my heels about trying yet another real-time audio method for Euterpea*, Vivid represents a real, feasible option. Give this a try:
{-# LANGUAGE DataKinds, ExtendedDefaultRules #-}
import Vivid
import Euterpea

m :: Music Pitch
m = c 4 wn :=: (e 4 qn :+: f 4 qn :+: g 4 hn)

-- A simple "wobble" synth taking one argument, the MIDI note number.
theSound = sd (0 :: I "note") $ do
    wobble <- sinOsc (freq_ 5) ? KR ~* 10 ~+ 10
    s <- 0.1 ~* sinOsc (freq_ $ midiCPS (V :: V "note") ~+ wobble)
    out 0 [s,s]

playMusic :: (ToMusic1 a) => Music a -> IO ()
playMusic = playMEvs . perform

-- Fork one thread per event; each waits for its onset, starts a synth,
-- holds it for the event's duration, and then frees it.
playMEvs :: [MEvent] -> IO ()
playMEvs [] = return ()
playMEvs (me:mevs) = do
    fork $ do
        wait (fromRational (eTime me))
        s0 <- synth theSound (fromIntegral (ePitch me) :: I "note")
        wait (fromRational (eDur me))
        free s0
    playMEvs mevs

main = playMusic m
To make this work, in addition to having Euterpea, you will need to…
- Install Vivid with “cabal install vivid” and also install SuperCollider.
- Start SuperCollider and boot the server before trying to drive it from GHCi.
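With the server booted, a quick one-note test is a good sanity check before trying main. This is just a sketch and not part of the original demo; it reuses theSound from the code above, and 60 is simply middle C:

-- One-note smoke test (a sketch, not part of the demo above). Assumes the
-- demo file is loaded in GHCi and SuperCollider's server is running.
testNote :: IO ()
testNote = do
    s0 <- synth theSound (60 :: I "note")   -- start the wobble synth on middle C
    wait 1                                  -- let it sound for about a second
    free s0                                 -- stop and free the node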
The main thing missing from this little experiment is a table-style lookup to match instrument names to synth definitions, much like the way custom Players work in HSoM (a rough sketch of one way to do this appears after the list below). Of course, that’s not the only issue, and there are some other, potentially uglier things lurking in the tall grass beyond that:
- Trying to play infinite music values with this approach kills everything. The thread handling needs to be smarter, spawning new threads only where notes overlap instead of forking every event up front; as written, it only works for fairly small, finite values. The sketch below folds in one way to do this.
- There probably needs to be some sort of datatype wrapping instruments so that the R part of ADSR can be handled properly, which Euterpea’s built-in offline audio method can’t do (it has no notion of a normal release after note-off).
- The doScheduledIn and writeNRT parts of the Vivid demo gave me type errors; I’m not sure why yet and am still looking into it.
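To make the instrument lookup concrete, here is one possible shape for it, combined with an overlap-aware version of the player loop. This is only a sketch under some assumptions: every synth definition takes a single “note” argument like theSound, and the MEvent list from perform is sorted by onset time. The names SynthTable, lookupSynth, playMEvs', and playMusic' are mine for illustration, not the eventual VividEuterpea API.

-- Sketch only, assuming the pragma, imports, and theSound from the demo above.
type SynthTable = [(InstrumentName, SynthDef '["note"])]

defaultTable :: SynthTable
defaultTable = [(AcousticGrandPiano, theSound)]   -- hypothetical pairing

lookupSynth :: SynthTable -> InstrumentName -> SynthDef '["note"]
lookupSynth tab i = maybe theSound id (lookup i tab)   -- fall back to theSound

-- Overlap-aware player: the main loop sleeps for the gap between successive
-- onsets and forks only one worker per sounding note, so lazily produced
-- (even infinite) event lists can stream instead of being forked all at once.
playMEvs' :: SynthTable -> Rational -> [MEvent] -> IO ()
playMEvs' _   _ []        = return ()
playMEvs' tab t (me:mevs) = do
    wait (fromRational (eTime me - t))               -- delta to this onset
    fork $ do
        s0 <- synth (lookupSynth tab (eInst me))
                    (fromIntegral (ePitch me) :: I "note")
        wait (fromRational (eDur me))
        free s0
    playMEvs' tab (eTime me) mevs

playMusic' :: (ToMusic1 a) => SynthTable -> Music a -> IO ()
playMusic' tab = playMEvs' tab 0 . perform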
Nevertheless, I am optimistic about real-time audio things for once. Development will be taking place here:
https://github.com/donya/VividEuterpea
* For those interested in why I said “no” to real-time audio and Euterpea for so long, the reason is that many cycles were wasted trying to get Euterpea’s existing audio system kicked into real time in a pure Haskell sort of way. Nothing worked, and even some fairly simple examples took longer to write to WAV than they would have taken to play back. Each foray into that domain was just another infuriating waste of time with the same outcome, including a number of attempts to ditch arrows and to limit the available syntax (Euterpea’s current system allows arbitrary expressions). Haskell can manage graphical update rates in the range of 30-60 Hz just fine, but beyond that simply seems to be asking too much of it. Modern audio sampling rates are 44,100 Hz, and the timing must be strict in a way that graphical applications typically don’t have to worry about. It’s a tough problem, and the best solutions I’ve seen in the Haskell domain, like Vivid, do not use Haskell all the way down to the level of piping samples out to the audio hardware.