Wednesday Oct 18, 2006

Jeremy asked how to post to a blog from Writely.

It is pretty easy. First, go to the Writely site and log in with your Gmail address, then type in your post. When you are ready to publish, select the 'Publish' tab. The first time you publish to your blog, you'll need to configure your blog settings like so:

Existing Blog Service:

[x] My own server / custom

API : MetaWeblog API

Then just enter your User Name and Password.

The only problem I had was that it seemed to take forever for the post to actually appear in my blog. I think the publish time is set incorrectly.

(It looks as though it adds about six hours to the publishing time, so you can either hand-edit the publish time or just live with a six-hour wait.)

Greetings from Writely

I saw over at the Java Posse that Writely has switched to using Java across the board, so I figured I would try it out. I noticed that it is possible to configure Writely to publish to my blog, so I'm giving that a try too.

I've been working on a web 2.0 mashup. I've released it to a small audience to get an idea of how stable it is and how well it scales. One thing I've learned in deploying and monitoring this app is that, unlike a web 1.0 page, where you could easily track and report page views, it is much harder to measure how popular a web 2.0 app is. In the web 1.0 world, there was the simple page load. A user would visit your page and you would bump your page counter. Nearly every web page displayed its number of page hits. This was a nice, simple metric.

Now, however, in the web 2.0 world, things are not so cut and dried. With a web 2.0 app, a user may visit a site and stay for a long time, interacting with it while the browser makes asynchronous background calls to the server to update the content. The question is how to count this. Technically there's only one page load, when the user arrives at the site, but the user may spend all day there, interacting with it. Some sites are particularly sticky - web sites such as Writely may have users that use the site for many hours continuously. Clearly there's got to be a web 2.0 version of the page count. Others have raised this question, but without offering a clear answer.

So let me humbly propose a web 2.0 metric:  user-minutes.  This metric indicates how many cumulative minutes all users have been at the site.  One user visiting the site for an hour would be 60 user-minutes. 60 users visiting a site for one minute each would also be 60 user-minutes.
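The bookkeeping is trivial once you have session start and end times. Here's a minimal sketch in Python, assuming a hypothetical log of (user, session start, session end) tuples - the log format and field layout are my own invention for illustration:

```python
from datetime import datetime

# A hypothetical session log of (user, session_start, session_end);
# the format is invented for illustration.
sessions = [
    ("alice", datetime(2006, 10, 18, 9, 0), datetime(2006, 10, 18, 10, 0)),
    ("bob",   datetime(2006, 10, 18, 9, 30), datetime(2006, 10, 18, 9, 31)),
]

def user_minutes(sessions):
    """Cumulative minutes all users have spent at the site."""
    return sum((end - start).total_seconds() / 60 for _, start, end in sessions)

print(user_minutes(sessions))  # one 60-minute visit plus one 1-minute visit
```

Whether the sessions come from server logs or client-side heartbeats, the metric itself is just this sum.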

So here are some stats for my web mashup:

Days Live:
Unique visitors:
Total HTTP requests:
Page loads: about 10,000
Average visit time per user: 30 minutes

Considering the attention span of a web 2.0 user (remember that a typical YouTube video is about 2 minutes long), I think it is reasonable to equate a web 2.0 user-minute with a web 1.0 page load. So at first glance, with 10,000 page loads, my app is probably not too interesting. But this app is particularly sticky - the average user spends about 30 minutes at the site. The 10,000 page loads don't show that, but the metric of 157,000 user-minutes gives a much better sense of how active the site is.

Perhaps the user-minute isn't the best metric, but it is easy to track, easily understood, and gives a good idea of how active a site is.

There are some songs that just have to go together. Imagine hearing Queen's 'We Will Rock You' without it immediately being followed by 'We Are the Champions'. It just wouldn't be right. Unfortunately, most automatic playlist generators don't know about these pairings, so when you are listening to your iPod on shuffle play you are likely to hear one of these song orphans. To make it easier for playlist generators, I'm attempting to put together the canonical list of song chains. These are song pairs that must occur together, and any DJ (or automatic playlist generator) worth his or her salt will make sure that they are never played separately. This initial list was put together with the help of listeners over at Radio Paradise. Feel free to offer additions to the list.

First Song → Second Song (Artist)

Amazing Journey (The Who)
She Came in Through the Bathroom Window → Golden Slumbers / Carry That Weight / The End (The Beatles)
Sweet Jane (Lou Reed)
No More Trouble (Bob Marley)
Pseudo Silk Kimono → Kayleigh / Lavender
Come On into My Kitchen (Steve Miller)
Countdown to Armageddon → Bring the Noise (Public Enemy)
Jet Airliner (Steve Miller)
Let the Sunshine In
P.Funk (Wants to Get Funked Up) → Mothership Connection (Star Child)
Moving in Stereo → All Mixed Up (The Cars)
People's Parties → The Same Situation (Joni Mitchell)
Troubled Child (Joni Mitchell)
Sailing Shoes → Hey Julia / Sneakin' Sally (Robert Palmer)
Bongo Bong → Je Ne T'Aime Plus (Manu Chao)
Freedom Rider
You Really Got Me (Van Halen)
Fat Lenny → Cold and Wet
Funeral for a Friend → Love Lies Bleeding (Elton John)
Summer's Cauldron
Uncle Albert → Admiral Halsey (Paul McCartney)
Tainted Love → Where Did Our Love Go?
Need You Tonight
Dulcimer Stomp → The Other Side
We Will Rock You → We Are the Champions (Queen)
Living Loving Maid (Led Zeppelin)
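For what it's worth, honoring these chains in a shuffle is easy to sketch: treat each chained pair as a single unit, shuffle the units, and flatten. This is a hypothetical illustration (the chain list and library below are just samples drawn from the list above), assuming chains don't overlap and both halves of a chain are in the library:

```python
import random

# Sample chains and library; a real playlist generator would load these
# from the full canonical list and the user's collection.
chains = [
    ("We Will Rock You", "We Are the Champions"),
    ("Moving in Stereo", "All Mixed Up"),
]
library = ["We Will Rock You", "We Are the Champions",
           "Moving in Stereo", "All Mixed Up", "Sweet Jane"]

def chained_shuffle(library, chains, seed=None):
    """Shuffle the library, but keep each chained pair adjacent and in order."""
    follower_of = {a: b for a, b in chains}
    followers = {b for _, b in chains}
    # Build shuffle units: a chain head carries its follower along, and
    # followers are dropped as standalone entries so each appears once.
    units = [[s, follower_of[s]] if s in follower_of else [s]
             for s in library if s not in followers]
    random.Random(seed).shuffle(units)
    return [song for unit in units for song in unit]

playlist = chained_shuffle(library, chains, seed=42)
```

However the units land, 'We Are the Champions' always comes right after 'We Will Rock You'.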

Tuesday Oct 17, 2006

SpotDJ lets you record an audio comment for a song and listen to the comments made by others.   When you use SpotDJ, at the end of a song, if SpotDJ has a 'spot' for the song, it plays it before going on to the next song. 

I can't think of anything I would want less - I don't want to hear anyone talking between my songs, thank you! The idea of allowing people to annotate music is good, but doing it with audio, at least for me, is bad. Audio is hard to search, and it is disruptive to the flow of the music. All it takes is one wise guy shouting obscenities to ruin that quiet evening listening to music. SpotDJ is not for me. - seen on TechCrunch.
Over at MixedContent, Colin Brumelle is proposing a SXSW panel called 'The Ultimate Music Recommendation Smackdown'. Here's Colin's proposal:

With the unprecedented accessibility of recorded music, how can we discover that hot new band when there are millions of possibilities at our fingertips? Fortunately, many companies address this very question. Find out which service creates playlists worthy of a veteran DJ, and which service recommends tracks like an iPod set on shuffle as they battle it out in the ultimate playlist smackdown. Based on audience feedback, trophies will be awarded.

Read more about it on MixedContent and don't forget to vote for the panel at SXSW.

Monday Oct 16, 2006

Eagle-eyed Jeremy noticed this little tidbit buried in a press release describing the University of Wisconsin  joining Google's effort to scan all printed material:

It will also target American and Wisconsin history, genealogical materials, decorative arts and sheet music, among other subjects, the University of Wisconsin said.

Jeremy wonders if they'll just do it by title OCR, or if they'll actually take advantage of some of the work that has been happening over the last 6 or 7 years on sheet music OMR and sheet music melody search. For instance, at this year's ISMIR, Don Byrd and Megan Schindele presented a paper entitled 'Prospects for Improving OMR with Multiple Recognizers'.

Last week I pointed out that Qloud would have trouble competing with the established incumbents, since the user base of Qloud is so small by comparison, and the key to a good social recommender is to have lots of users. Well, there's some good news for Qloud. In just a week, the number of Qloud listeners to Green Day's 'American Idiot' has grown by 75% - going from 4 listeners to 7 - while the incumbent's 'American Idiot' listeners grew by less than 2% - going from 341,147 listeners to 346,825. Sure, at the end of the week that's 5,678 more listeners for the incumbent, but the trends are clear!
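The arithmetic behind those percentages is simple enough to check (a quick sketch using the listener counts above):

```python
def growth_pct(before, after):
    """Percentage growth between two listener counts."""
    return 100 * (after - before) / before

qloud_growth = growth_pct(4, 7)               # small base, big percentage
incumbent_growth = growth_pct(341147, 346825) # big base, small percentage
absolute_gain = 346825 - 341147
print(f"Qloud: {qloud_growth:.0f}%, incumbent: {incumbent_growth:.2f}%, "
      f"incumbent's absolute gain: {absolute_gain}")
```

Which is, of course, exactly why percentage growth on a tiny base is a misleading trend to brag about.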

Sunday Oct 15, 2006

On the last day of ISMIR, Stephen Downie and the MIREX crew had a panel and discussion session about this year's MIR evaluations.

Stephen at the MIREX panel

Stephen highlighted the fact that he prefers to call these evaluations 'tasks'. He wants to avoid treating them as win/lose contests - instead, they are a way to test how well an algorithm works when compared to others. Treating the evaluations as tasks instead of contests may encourage those who are reluctant to participate to join in. Still, it is difficult, especially for commercial ventures, to participate unless they can be guaranteed a good result.

Some of the MIREX crew (Cameron, Andreas, Kris and Mert) joined Stephen in commenting on how the evaluations were run.

The MIREX 2006 Panel

There are all sorts of results on the MIREX results page.
Rainer Typke has been augmenting some of the result pages with some interesting graphs.

And finally, check out the MIREX audio similarity tee-shirt, ably modelled by conference organizer Holger Hoos.
Holger poses with the MIREX shirt

Saturday Oct 14, 2006

The Filter is a plug-in for iTunes that will generate playlists matching the 'mood and tone' of your seed tracks. Judging from the FAQ, it looks like the playlists are generated by standard collaborative filtering techniques, which means that 'mood and tone' really means 'people who liked this song also liked these songs'. So we can add another collaborative filtering music recommender to the mix. Pretty soon we are going to need a recommender just to help us pick a music recommender.

The Filter currently only works with iTunes for Windows, so I can't try it first hand (a Mac OS X version is coming soon) - but from the screenshots and FAQ it looks like the app tries hard to look and work like iTunes:

One of the difficulties that all of these CF and content-based recommenders face is how to resolve song titles against their music database. Some, such as MusicIP, use a music fingerprinting system to resolve a track to the proper metadata, while other systems, including The Filter, attempt to match songs to their database using the information in the ID3 tags attached to your songs. This means that if the ID3 tags are not correct, The Filter won't be able to build playlists from (or including) that song - the only recourse is for you to fix the ID3 tag of the offending song. For many people, ID3 tags are a mess (even songs that were ripped from CDs can have problems).
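I don't know how The Filter actually matches tags to its database, but the usual first step in this kind of tag-based matching is aggressive normalization before lookup. A minimal sketch - the database entries, field names, and track ids below are invented for illustration:

```python
import re
import unicodedata

def normalize(field):
    """Crudely normalize an ID3 artist/title string for lookup:
    strip accents, case, punctuation, and extra whitespace."""
    field = unicodedata.normalize("NFKD", field)
    field = "".join(c for c in field if not unicodedata.combining(c))
    field = re.sub(r"[^\w\s]", "", field.lower())
    return " ".join(field.split())

# A toy metadata database keyed on normalized (artist, title) pairs.
db = {("the cars", "moving in stereo"): "track-123"}

def lookup(artist, title):
    """Return the database id for a tagged song, or None if no match."""
    return db.get((normalize(artist), normalize(title)))
```

Even this much normalization papers over the common casing, punctuation, and whitespace variations in ripped tags, though it obviously can't rescue a tag that's simply wrong - which is why fingerprinting systems sidestep metadata entirely.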

It seems to me that for a music recommender to distinguish itself from the ever-growing pack of recommenders, it either has to be extremely innovative (like Pandora) or execute really well - like Qloud is trying to do. I'm not sure The Filter has what it takes. - Thanks Jeremy for the tip.

Friday Oct 13, 2006

With the release of iTunes 7.0, it seems that, finally, we are starting to get back in touch with our album art. The latest version of Oboe (an online music locker) now adds cover art to your music and displays it in your online locker.
At ISMIR 2006, I sat on a panel with a few folks discussing music metadata quality, chaired by Adrian Freed. In preparation for this panel I read a few things on metadata quality that I thought were interesting:
  • On the quality of metadata - a blog entry by Stefano Mazzocchi - a semantic web perspective. Stefano says that what's missing is the feedback loop: a way for people to inject information back into the system that keeps the system stable. ... Both the open source development model and the wikipedia development model are examples of such socio-economically feasible systems, although they might not scale to the size we need/want for an entire semantic web.
  • Giving Music More Brains: A study in music metadata management. - a masters thesis by Arjan Scherpenisse that looks at MusicBrainz and how it deals with errors, compares it to a non corrected system (audioscrobbler), and suggests ways to improve error handling.
  • Music MetaData Quality: A multiyear case study using the music of Skip James - by Adrian Freed. Adrian looks at the errors in the various music metadatabases. Adrian talked about this study during the panel. It is an interesting survey of the types of errors and their possible sources.

Thursday Oct 12, 2006

The ISMIR organizers have (in record time), put the proceedings for ISMIR 2006 online. PDFs of all submitted papers and poster sessions are available on the ISMIR 2006 - Detailed schedule page.  Hopefully, these papers will soon be migrated into the full historical ISMIR record, which can be found on the ISMIR proceedings page

Wednesday Oct 11, 2006

The lecture hall
Another great day at ISMIR - lots of great talks on classification, chord and key estimation, databases & algorithms, and transcription/separation. One of the highlights for me (because it relates so directly to what I've been working on) was Michael Casey and Malcolm Slaney's paper on song intersection by approximate nearest neighbor search. Michael and Malcolm introduced LSH (locality-sensitive hashing) to the MIR community. LSH can be used to solve the approximate nearest neighbor problem - in sublinear time.
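To give a flavor of the idea - this is not Casey and Slaney's exact method, just a toy sketch of the random-hyperplane family of LSH, with invented feature vectors - each hash bit records which side of a random hyperplane a vector falls on, so vectors at a small cosine angle tend to land in the same bucket, and a query only scans its own bucket:

```python
import random

def make_hash(dim, n_bits, seed=0):
    """Random-hyperplane LSH: each output bit records which side of a
    random hyperplane the input vector falls on.  Vectors separated by
    a small cosine angle tend to get identical keys."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

    def h(vec):
        return tuple(int(sum(p * x for p, x in zip(plane, vec)) >= 0)
                     for plane in planes)

    return h

h = make_hash(dim=3, n_bits=8)

# Index (invented) song feature vectors by hash key; a query then scans
# only its own bucket instead of the whole collection.
buckets = {}
for song, vec in [("song-a", [1.0, 2.0, 3.0]), ("song-b", [-5.0, 0.1, -2.0])]:
    buckets.setdefault(h(vec), []).append(song)

# A query vector pointing the same direction as song-a hashes identically.
candidates = buckets.get(h([2.0, 4.0, 6.0]), [])
```

In practice you'd use several independent hash tables and longer keys to trade recall against bucket size, but the sublinear lookup comes from exactly this bucketing trick.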

Another highlight at the conference has been the food.  Lots of food, plenty of it  ... and apparently according to George T we can sit down while we eat if we want.

Lunch time

Some interesting posters and demonstrations today:

Ajay Kapur demonstrated human/robot performance.

Human/Robotic musical performance

Audrey Laplant had an excellent poster about the music seeking behaviors of young adults.  I'm hoping to talk to her more about what her findings imply about  how we should be designing music interfaces.
1/2 of Audrey and 1/2 of her poster

Elias Pampalk was showing a very clever interface for exploring a personal music collection. (Note to Joan - Elias was using Processing to build his interface.)

Elias gives a demo

All in all, it was a good day, and I wish I had time to write more .... at the end of it all, some folks had some fun making music too.

After hours music

Tuesday Oct 10, 2006

Om Malik writes about Qloud, a 'people powered music search engine'. They are going to try to do what the established players already do. The Qloud site looks nice, and has lots of web 2.0 eye candy, but their user base right now is embarrassingly small by comparison ... and of course, the more users you have in a social system, the more data ... and the more data, the better the recommendations. For instance, on Qloud, the Green Day song 'American Idiot' has been played by 4 users, while on the bigger service it has been played by 341,147 users. So which one will give better recommendations?

This blog copyright 2010 by plamere