Interesting topic. I'm all for mathematically better algorithms, but the lack of interoperability is just too huge of an issue for me to bother with changing my recordings from 320kbps MP3s. Between car stereos, DVD/CD/MP3 players in my home entertainment center, iDevices, computers, and other digital devices, I would rather risk losing a little audio fidelity than risk a file simply not playing correctly. Interoperability has definitely gotten better through the years, but IMHO it's still got a long way to go. I've got maybe 1500 CDs ripped to my computer at 320kbps and another 200-300 albums from Amazon and eMusic ripped at ~160kbps, and the sheer volume makes it highly unlikely I'll bother re-ripping all that media until a significantly better industry standard gets widely adopted. By widely adopted, I mean adopted by ALL hardware/software manufacturers, just like the MP3 standard was.

The other real challenge I see is the quality of earbuds/speakers/headphones that most people use. There's zero chance any human can truly hear a difference between a 320kbps MP3 and the same track re-encoded with {insert your favorite lossless audio codec here} when listened to through those inane little earphones that come with Apple iDevices. Through a high quality pair of studio headphones? Sure. But through cheap earbuds? Not in my experience, not even close. Unfortunately, the majority of music isn't listened to through studio quality playback devices - it's listened to through portable digital devices and cheap earbuds, or car stereos that have to compete against road noise. The manufacturers obviously know this & their devices appear to be selling just fine the way they are, so IMHO there's little incentive for them to standardize on a higher quality codec.

Just my $0.02.