Thoughts on Information Architecture: Why We Still Use GUI to Listen to Music

Dan R Bothwell
Nov 8, 2020

My partner and I got an Amazon Alexa a few months back as a bonus feature on a soundbar we purchased. We weren’t intending to get one, but we thought it would be a fun, possibly helpful gadget we could use occasionally. After a week of playing around with it and trying to make it work consistently, we gave up and haven’t used it since. The only time we hear from Alexa now is when she mistakes a sound for her name and reminds us she’s always listening.

For me, one of the most enticing potential uses for Alexa was being able to play whatever song, album, or artist I wanted by simply requesting it verbally. Unfortunately, that didn’t end up being the case. I quickly noticed that Alexa failed to recognize many of the artists or songs I asked her to play. Success depended on how well known and phonetically clear the artist or song name was. Very often she would misunderstand what I said and play something completely different. Other commands, however, like asking about the weather or requesting more information on a random topic, yielded much more consistent results.

Understandably, Voice User Interface (VUI) hasn’t reached its full potential as far as functionality goes, which partly explains why VUI isn’t currently the preferred method for selecting and listening to music. However, I think the information architecture of current apps such as Amazon Music and Spotify also prevents VUI from being as fully functional as music listeners need it to be.

There are many artists whose names, song titles, and album titles don’t work well with VUI. For example, users selecting music via VUI would have a hard, if not impossible, time trying to play Bon Iver’s most recent LP, titled “i,i”, which features songs titled “Yi”, “iMi”, and “U (Man Like)”. Because of the unconventional naming of Bon Iver’s work, asking Alexa to play something specific from that album rarely succeeds. Additionally, asking an English-speaking Alexa to play specific songs titled in another language is nearly impossible. I believe this is where the information architecture of music streaming apps falls short of accommodating VUI like Alexa.

When it comes to making VUI fully functional, information architects are tasked with organizing and delivering content in a way that is more dynamic and closer to natural human interaction, something current voice assistants fail to accomplish. I think the biggest reason GUI is still the preferred method for listening to music is that information architecture’s marriage with VUI hasn’t come to fruition. My hope is that as VUI proliferates, designing products around VUI’s potential will become standard practice, and I’ll have no problem trying to listen to “ECCOJAMC1” by Oneohtrix Point Never or “xXXi_wud_nvrstøp_ÜXXx” by 100 gecs.
