I'm at the International Conference on Music Information Retrieval (ISMIR) in Philadelphia, where I'm presenting two papers:
Combining feature kernels for semantic music retrieval,
Five approaches to collecting tags for music (Doug Turnbull is the first author)
Our entry in the MIREX auto-tagging competition came first in a competitive field of eleven, which is pretty cool.
Also, we're going to be giving the first real-world demo of our upcoming music annotation game "Herd It".
Of course, tons of other people are doing cool stuff here too. Some highlights for me so far:
Paul Lamere and Elias Pampalk's tutorial on collecting tags for music.
This was an awesome overview that included:
cool work on a Sun in-house semantic search engine with a really nice interface where you can rescale tags in a cloud to change your query (I've been thinking about this for ages but they've actually done it!)
the distance between semantic profiles built from last.fm tags, which powers artist, tag and user similarity, and how this can be used to build a structured taxonomy from an unstructured folksonomy (a toy sketch of the rescalable tag query and the profile-distance idea appears after this list)
A survey of 200 users (conducted by Paul and to be published in his upcoming JNMR article) showing that users prefer music recommendations based on similarity to those from collaborative filtering.
Paul's even set up a website - SocialMusicResearch.org - where you can find the slides and more.
Magno & Sable's paper on perceptual similarity using various music models, which showed that MFCC+GMM-derived similarity was preferred to recommendations from last.fm or Pandora! (And I think our system works even better than that.) A rough sketch of the basic MFCC+GMM similarity pipeline appears after this list.
Niitsuma, Takaesu, Demachi, Oono and Saito's work, which uses a shoe sensor to detect the runner's steps per minute and tells their iPod to play music with a matching beats per minute (a toy version of the matching logic appears after this list). The presentation included a hilarious video of a determined Japanese researcher testing the system by running on a treadmill while still keeping it formal in shirt and tie! Check out figure 5 in their paper.
Mark Godfrey and Parag Chordia's paper on improving MFCC+GMM modelling by detecting and removing "anti-hubs" (and thereby also removing hubs): they find GMM components that are very distant from all other components (a guess at this pruning step is sketched after this list).
Matt Hoffman, David Blei and Perry Cook's work on hierarchical Dirichlet process models of music. I think I finally understand HDPs, although I still don't fancy trying to train one...
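A few of these ideas are simple enough to caricature in code, so here are some quick sketches. First, the tag ideas from Paul and Elias's tutorial: the rescalable tag-cloud query and the semantic-profile distance both boil down to scores over tag-count vectors. The toy counts, the names and my choice of cosine similarity below are all my own illustration, not their code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse tag-count dicts."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy last.fm-style semantic profiles: tag -> count per artist (invented).
profiles = {
    "Artist A": {"jazz": 80, "mellow": 40, "piano": 30},
    "Artist B": {"jazz": 60, "bebop": 50, "saxophone": 20},
    "Artist C": {"metal": 90, "loud": 70},
}

# Artist similarity = similarity between semantic profiles.
print(cosine(profiles["Artist A"], profiles["Artist B"]))  # high
print(cosine(profiles["Artist A"], profiles["Artist C"]))  # zero overlap

# Rescaling a tag in the cloud just re-weights the query vector:
query = {"jazz": 1.0, "mellow": 2.5}  # user dragged "mellow" bigger
ranking = sorted(profiles, key=lambda a: cosine(query, profiles[a]),
                 reverse=True)
print(ranking)
```

The same trick covers tag-tag and user-user similarity: build the count vectors over artists (or tracks) instead, and compare those.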
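Next, the MFCC+GMM similarity that keeps coming up (Magno & Sable's comparison, Godfrey and Chordia's paper, our own work). Here's a rough sketch of the standard pipeline; the library choices (librosa, scikit-learn) and every parameter are my stand-ins, and the Monte Carlo estimate of symmetrized KL divergence is just one common way to compare two GMMs:

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def track_gmm(path, n_components=8):
    """Fit a GMM to a track's MFCC frames."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T  # (frames, 20)
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag",
                           random_state=0).fit(mfcc)

def gmm_distance(p, q, n_samples=2000):
    """Monte Carlo estimate of the symmetrized KL divergence between GMMs."""
    xp, _ = p.sample(n_samples)
    xq, _ = q.sample(n_samples)
    kl_pq = np.mean(p.score_samples(xp) - q.score_samples(xp))
    kl_qp = np.mean(q.score_samples(xq) - p.score_samples(xq))
    return kl_pq + kl_qp

# Smaller distance = more perceptually similar (under this model):
# d = gmm_distance(track_gmm("a.mp3"), track_gmm("b.mp3"))
```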
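The step-frequency system is also easy to caricature. The shoe sensor and iPod plumbing are obviously imaginary here; the core matching logic is just nearest-BPM with a tolerance, plus half- and double-time multiples, since a 90 BPM song fits a 180 steps-per-minute cadence:

```python
# Toy library: (title, BPM) pairs -- values invented for illustration.
LIBRARY = [("Track 1", 168), ("Track 2", 120), ("Track 3", 90), ("Track 4", 174)]

def pick_track(steps_per_minute, tolerance=5):
    """Return the track whose BPM best matches the runner's cadence,
    allowing half- and double-time matches."""
    best, best_err = None, float("inf")
    for title, bpm in LIBRARY:
        for mult in (0.5, 1.0, 2.0):
            err = abs(bpm * mult - steps_per_minute)
            if err < best_err:
                best, best_err = (title, bpm), err
    return best if best_err <= tolerance else None

print(pick_track(172))  # -> ("Track 4", 174)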
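Finally, a guess at Godfrey and Chordia's pruning step: drop GMM components whose means sit far from every other component's mean. The distance measure, the nearest-neighbour criterion and the threshold below are my stand-ins, not their actual method, so see their paper for the real criterion:

```python
import numpy as np

def prune_distant_components(means, factor=2.0):
    """Flag GMM components whose nearest neighbour (among the other
    component means) is unusually far away.

    means  : (n_components, dim) array of component means
    factor : a component is dropped if its nearest-neighbour distance
             exceeds factor * the median nearest-neighbour distance
    """
    diffs = means[:, None, :] - means[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))  # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)             # ignore self-distance
    nearest = dist.min(axis=1)                 # distance to nearest neighbour
    return nearest <= factor * np.median(nearest)

means = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
print(prune_distant_components(means))  # -> [ True  True  True False]
```

After dropping the flagged components, you'd renormalize the remaining mixture weights before computing track-to-track distances.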