Most music recommendations are based on collaborative filtering algorithms. Sites like last.fm, goombah, qloud and iLike use the 'wisdom of the crowds' to find related artists or songs. These techniques yield pretty good recommendations, but they could be better. For instance, in a collaborative filtering system, the opinion of a highly-engaged music listener, the 'super-fan' if you will, counts the same as that of the casual or indifferent listener. A CF system can't tell the difference between a Pitchfork music critic listening to the latest Deerhoof album over and over so they can write the review, and a jogger who listens to the latest Deerhoof album over and over because it is good music for running. A music recommender should recognize that not all opinions are created equal, that some people really do know better than others.
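To make this concrete, here is a minimal sketch of the kind of weighting I have in mind. The listeners, ratings, and engagement weights are all invented for illustration; a real system would learn the weights from listening behavior rather than hard-code them.

```python
# Toy data: listener -> {artist: rating}. All names and numbers are made up.
ratings = {
    "critic": {"Deerhoof": 5, "Wilco": 4, "Low": 5},
    "jogger": {"Deerhoof": 5, "Wilco": 2},
    "casual": {"Wilco": 3, "Low": 2},
}

# Engagement weight: the 'super-fan' counts for more than the casual listener.
engagement = {"critic": 3.0, "jogger": 1.0, "casual": 0.5}

def predict(artist):
    """Engagement-weighted average rating for an artist."""
    num = den = 0.0
    for user, user_ratings in ratings.items():
        if artist in user_ratings:
            w = engagement[user]
            num += w * user_ratings[artist]
            den += w
    return num / den if den else None

print(predict("Deerhoof"))  # -> 5.0 (both raters agree)
print(predict("Low"))       # pulled toward the critic's 5, away from the casual 2
```

In a plain CF system every listener would get weight 1.0; here the critic's rating of Low dominates the casual listener's, which is the whole point of the tastemaker idea.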

One attempt at focusing on tastemakers is the site When Killed Like Roosters. WKLR is a music recommendation site that, unlike a traditional social recommender, gives extra weight to the opinions of the highly-engaged music listener. WKLR mines the taste data of music bloggers, those who care enough about music to write about it, and provides charts of the most highly rated music based upon these tastemaker ratings. WKLR seems to have gone quiet over the last few months, so this is not an idea that is taking the world by storm. Still, I think we will be seeing more CF systems that try to find the tastemakers and the trend setters and give the opinions of these individuals more weight.

Comments:

Interesting stuff, Paul. Have you seen Peter Gabriel's latest pet project, which is also encouraging tastemakers? http://www.we7.com/howitworks/tastemaker.html

Posted by David Jennings on May 01, 2007 at 02:13 PM EDT #

That's interesting, Paul. Certainly some reputation aspects should be taken into account in recommender systems, if only to eliminate bogus ratings by spammers. But I wonder whether giving tastemakers more weight will necessarily improve the recommendations. I would think that, for the most accurate predictions, you would want recommendations to come from people with similar tastes, not necessarily from people who do a lot of reviewing. You know, there is a fairly good testbed now available for working on this kind of idea. You could take the Netflix contest data and determine if weighting certain types of raters more highly resulted in substantial improvements in predictive accuracy.

Posted by Greg Linden on May 01, 2007 at 05:20 PM EDT #

What would happen if you just relied on artist-band relationships, like common performances on discs, bands, songs, etc.? There was a Google SoC assignment where MusicBrainz, in order to populate its artist-artist relation table, wanted some queries against its database that did just that. http://musictrails.com.ar uses the MusicBrainz database to compute an Erdos number for artists, creating a path of as many degrees as you want through common performances on discs, bands, songs, cover songs, etc.

Posted by Nick on May 02, 2007 at 02:01 AM EDT #

Nick, I did something very similar with the MusicBrainz data, called 'six degrees of black sabbath'. It was pretty fun to use. I should release it sometime. The musictrails site looks fun. - Paul
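The core of this kind of tool is just breadth-first search over an artist graph. Here's a toy sketch; the links below are a tiny hand-made example, not real MusicBrainz data, and an edge stands for any shared disc, band membership, or cover.

```python
from collections import deque

# Toy artist graph (hand-made, illustrative only): an edge means the two
# artists share a recording, a band, or a cover somewhere in the database.
links = {
    "Black Sabbath": ["Ozzy Osbourne", "Dio"],
    "Ozzy Osbourne": ["Black Sabbath"],
    "Dio":           ["Black Sabbath", "Rainbow"],
    "Rainbow":       ["Dio", "Deep Purple"],
    "Deep Purple":   ["Rainbow"],
}

def sabbath_path(start, goal="Black Sabbath"):
    """Breadth-first search for the shortest chain of collaborations."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection in the graph

print(sabbath_path("Deep Purple"))
# ['Deep Purple', 'Rainbow', 'Dio', 'Black Sabbath']
```

BFS guarantees the first path found is a shortest one, which is exactly the 'degrees of separation' number sites like musictrails report.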

Posted by Paul on May 02, 2007 at 07:46 AM EDT #

Greg: I agree with you in general that the best recommendations come from people with similar tastes. However, given the results of some of the latest studies around popularity feedback (I'm thinking of Duncan Watts's study at Columbia in particular), we see that social recommenders are particularly vulnerable to 'sheep effects', where early opinions get amplified by the susceptible masses. There are a large number of indifferent music consumers who will just listen to whatever everyone else is listening to. I'm sure you could find this same pattern in books and movies as well - there are some people who just read whatever is on the NYT best-seller list. To me, it just doesn't make much sense to treat the tastes of these indifferents (Steve calls them 'sheeple') with the same weight as those of the highly engaged. Indeed, the answer lies in some of the large datasets like the Netflix Prize data or the last.fm data. - Paul
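Here is roughly the shape of the experiment Greg suggests, on completely made-up numbers. The training votes, engagement weights, and the held-out 'truth' are all invented (and deliberately rigged so the engaged raters happen to be right); only the comparison of error metrics is the point.

```python
import math

# Toy held-out test. Per item: a list of (rater_engagement, rating) pairs.
# All values invented for illustration.
train = {
    "album_a": [(3.0, 5), (1.0, 4), (0.5, 1)],
    "album_b": [(3.0, 2), (0.5, 5), (0.5, 5)],
}
# Held-out 'true' ratings; rigged so the engaged raters were closer.
truth = {"album_a": 5, "album_b": 2}

def rmse(weighted):
    """Root-mean-squared error of weighted vs. plain average predictions."""
    errs = []
    for item, votes in train.items():
        if weighted:
            pred = sum(w * r for w, r in votes) / sum(w for w, _ in votes)
        else:
            pred = sum(r for _, r in votes) / len(votes)
        errs.append((pred - truth[item]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

print(rmse(weighted=False))  # plain average does worse on this rigged data
print(rmse(weighted=True))
```

On real data like the Netflix Prize set you would fit the weights on a training split and see whether the weighted RMSE actually beats the unweighted one, rather than assuming it.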

Posted by Paul on May 02, 2007 at 07:57 AM EDT #

Post a Comment:
Comments are closed for this entry.

This blog copyright 2010 by plamere