Researchers Ge Wang and Perry Cook have developed ChucK, a new audio programming language for real-time synthesis, composition, and performance. ChucK presents a new time-based, concurrent programming model that supports multiple simultaneous, dynamic control rates, along with the ability to add, remove, and modify code on the fly, while the program is running, without stopping or restarting it. It offers composers, researchers, and performers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs and real-time interactive control.

The paper "ChucK: A Concurrent, On-the-fly, Audio Programming Language" won the International Computer Music Association's best paper award in 2003.

ChucK is quite new, and needs a bit more work, but it has some real advantages over other synthesis languages. The way it treats time as a first-class entity is quite slick. They are working hard to make sure that ChucK can be used during live performances.
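To give a flavor of the time-as-first-class-entity idea, here's a minimal sketch in ChucK's own syntax (based on the examples in the ChucK distribution): the `=>` "chuck" operator connects unit generators and assigns values, and assigning a duration to the special variable `now` is what advances time and lets audio flow.

```
// connect a sine oscillator to the audio output
SinOsc s => dac;

// set its frequency to 440 Hz
440 => s.freq;

// advance time by one second -- sound is synthesized
// for exactly as long as "now" is moved forward
1::second => now;
```

Because time only advances when the program explicitly moves `now`, timing in ChucK is sample-accurate by construction rather than tied to a fixed control rate.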

Comments:

I think usability is going to be a big problem. Has anyone thought of a graphical front-end? Having to worry about syntax in real-time performance could be a big inhibitor to use. There have been many such attempts at audio programming languages, but I've never been very impressed with them. (I started with Music IV). It's not a very musical way of thinking, but it may be OK for the audio technician or experimenter. Composers might have a hard time with it. But I should be fair and give it a try. I've downloaded the MacOSX version and tried a few examples. They sound pretty bad. But I should read the docs first. BTW, I will be at the Computer History Museum event on Dec 14, and hope to put a report on my blog. Glad to see someone else at Sun is interested in these things. ->Richard Friedman, [email protected]

Posted by Richard Friedman on November 21, 2004 at 07:36 PM EST #

Richard:

The ChucK folks are working on a graphical environment intended to be used during live performances. More info is here:

http://audicle.cs.princeton.edu/

They describe the Audicle as "a concurrent smart editor, compiler, virtual machine, and debugger, all running in the same address space, sharing data, and working together at runtime."

One focus of their research is this notion of on-the-fly programming. The idea is to give a programmer/performer the proper tools to program on the fly, reducing the typing, syntax, and debugging burden that is typical in program development. I have a feeling that with a properly designed system, the cognitive load, as well as the physical/dexterity requirements for performance, would be similar to any other musical instrument.
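For the curious, on-the-fly programming is already exposed through ChucK's command-line interface: a running virtual machine accepts new code modules ("shreds") while audio continues. A sketch of a session, using the commands documented in the ChucK distribution (`foo.ck` and `bar.ck` are hypothetical source files):

```
# start the ChucK virtual machine, listening for commands
chuck --loop &

# add a shred to the running VM -- it starts playing immediately
chuck + foo.ck

# replace shred 1 with new code, without stopping the VM
chuck = 1 bar.ck

# remove shred 1
chuck - 1
```

This add/replace/remove cycle is the raw material the Audicle is meant to wrap in a performance-friendly graphical environment.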

As for suitability for composing ... I'm no composer, so I defer to your expertise, but I do think that there are some whose mental model of music matches that of a computer program more closely than any other model. For those folks, a musical programming language may be the best composing tool.

Posted by Paul on November 22, 2004 at 07:13 AM EST #

Post a Comment:
Comments are closed for this entry.

This blog copyright 2010 by plamere