There's an interesting article in MIT Tech Review: Audio Software for the Moody Listener, about a batch of research systems that allow content-based exploration of music collections. The article mistakenly focuses on mood as a primary factor for these systems, when in fact most of them seem not to use mood at all. The article highlights a few systems, including AudioRadar [PDF Description], developed by the Media Informatics group at the University of Munich, and Playola, developed at Columbia. I particularly enjoy the Music-space browser in the Playola system.

Stephen Downie delivers the obligatory Stairway reference: "You as a human will recognize 'Stairway to Heaven' played on a banjo, as opposed to the original version played at the Led Zeppelin concert, but these systems really can't get it." (Note that Stephen is in the process of organizing the MIREX 2006 Cover Song contest, where systems will be challenged to do just that ... to find the banjo version of 'Stairway to Heaven'.)
Comments:

Somewhat unrelated, but I thought you might be interested that the W3C is forming an incubator group "to investigate a language to represent the emotional states of users and the emotional states simulated by user interfaces". Could be helpful for systems where mood is a factor...

Posted by Evan on July 19, 2006 at 05:42 PM EDT #

"You as a human will recognize 'Stairway to Heaven' played on a banjo, as opposed to the original version played at the Led Zeppelin concert, but these systems really can't get it."

My question is: why would they want to? What would that achieve for the user? Is there a major need to identify covers?

A banjo version of Stairway to Heaven is almost entirely unrelated musically to Led Zeppelin.

Posted by Ian Wilson on July 19, 2006 at 09:26 PM EDT #

Ian: If you can get good at identifying literal covers (Stairway to Heaven on a banjo), then you will also get good at identifying interpretive covers. These are songs that are not necessarily "perfect" translations, but songs that are influenced by, or otherwise harmonically or melodically similar to, another song. For example, there was a lawsuit a decade ago over whether Andrew Lloyd Webber had stolen a tune from a Baltimore songwriter named Ray Repp. Whether or not this is true, it would be very interesting for me to be able to pick one song as my query and then get a similarity-based ranked list of all other songs that _could_ be covers of my query.

You could use this cover-finder for plagiarism detection, but IMO that is a bit boring. More interesting would be to find hints of historical influence. It is very common in music to "creatively quote" other musicians, especially ones that you like. In this manner, you can actually build up a bit of inferential logic about who was listening to whom, historically. And I'm not alone in thinking this is interesting: a good number of bands have recently started to put out "under the influence of" compilations, basically CDs filled with songs that influenced their own music. Suppose now that you can identify songs that are likely "influences" (covers) of each other, and then use that to infer who was listening to what. That could be an extremely interesting method for discovering new music.

Posted by Jeremy P on July 21, 2006 at 09:02 PM EDT #
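[Editor's aside: Jeremy's "similarity-based ranked list" idea boils down to nearest-neighbor search over some harmonic or melodic feature space. Here is a minimal sketch of that ranking step; the feature vectors and song names are invented for illustration, and a real cover-detection system would use richer features (e.g. chroma sequences) and a more robust distance.]

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_candidates(query, catalog):
    """Return (title, similarity) pairs for every song, most similar first."""
    scored = [(title, cosine_similarity(query, vec))
              for title, vec in catalog.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical feature vectors -- not real data.
catalog = {
    "song_a": [0.90, 0.10, 0.30],
    "song_b": [0.10, 0.80, 0.20],
    "song_c": [0.85, 0.15, 0.35],
}
query = [0.88, 0.12, 0.30]
print(rank_candidates(query, catalog))  # song_a ranks first, song_b last
```

The same ranked list, thresholded, could feed the influence-graph inference Jeremy describes: an edge from A to B whenever B scores above some similarity cutoff against A.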

Hey, is there any Sun research project related to music similarity recognition? I've done some basic work myself comparing synthesized human sound against melodic contours, which could potentially be indexed and stored as metadata for real-time "song recognition" searches based on human-synthesized melodic contour and rhythm. Unfortunately, I'm not really aware of the different research initiatives related to music similarity, so I was wondering if you could guide me a little bit.

Posted by Nahum on July 22, 2006 at 06:17 PM EDT #
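[Editor's aside: the melodic-contour representation Nahum mentions is commonly realized as a Parsons code, which reduces each note to up/down/repeat relative to the previous note, yielding a compact string that can be indexed and searched. A quick sketch, assuming MIDI note numbers as input (not taken from any particular system):]

```python
def parsons_code(pitches):
    """Reduce a melody (list of MIDI note numbers) to its Parsons-code
    contour string: '*' for the first note, then 'u' (up), 'd' (down),
    or 'r' (repeat) for each subsequent note."""
    code = "*"
    for prev, cur in zip(pitches, pitches[1:]):
        if cur > prev:
            code += "u"
        elif cur < prev:
            code += "d"
        else:
            code += "r"
    return code

# Opening of "Twinkle Twinkle Little Star" as MIDI pitches.
print(parsons_code([60, 60, 67, 67, 69, 69, 67]))  # -> *rururd
```

Because the contour string discards absolute pitch, the same code matches a melody regardless of key or of who (or what) is singing it, which is what makes it useful as searchable metadata.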


This blog copyright 2010 by plamere