I’m a Virginia Tech alumnus. My thoughts and wishes for recovery go out to the university and Blacksburg community.
There is considerable interest in the amount of damage to one’s hearing that listening to an iPod (or similar portable audio player) can cause. I’ve posted before on Apple’s attempt to prevent harm to hearing from iPods and discussed possible improvements. In the absence of such a solution, it’s important that people understand safe listening levels for iPods and similar devices.
At the recent annual conference of the American Auditory Society, researchers from the University of Colorado and the Children’s Hospital Boston presented the latest data on how long one can safely listen to digital portable music players (Portnuff and Fligor, Output Levels of Portable Music Players).
Firstly, they showed that the danger among all of the players tested (iPod, iPod Mini, iPod Nano, Creative Zen Micro, Sandisk Sansa) is approximately the same. In other words, iPods are no worse for your hearing than any of their competitors.
Secondly, there was no difference in danger to your hearing between different music genres. Believe it or not, R&B music is as potentially damaging as Rock music or Country music. No word on the danger from Opera (although some might suggest that the true danger lies in falling asleep and being exposed to hours of Wagner—or perhaps the danger is in staying awake).
Thirdly, how long you can safely listen to your music player depends on what you are listening with. People choose different earphones to listen to their player: some use the buds provided with the player, some upgrade to expensive insert earphones such as the Etymotic ER6s or Shure E4cs. Some choose to use large headphones that sit over the ears. Depending on which you use, your safe duration of listening is different.
The table below shows the levels, calculated by the University of Colorado and Children’s Hospital Boston researchers, that reach the 50% noise dose per day according to NIOSH standards. Exceeding these levels is not a good idea.
For example, if you use Etymotic ER6s (categorized as “Isolators” in the table below) and you are listening with the music player’s volume control at 80%, then you should not listen for more than 50 minutes a day. The authors of this research report even suggest that “more conservative recommendations may be warranted.” Listener beware.
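The 50% dose figures follow from the standard NIOSH criteria, which permit 8 hours of exposure at 85 dBA and halve the permissible time for every 3 dB increase. A minimal sketch of that calculation (the function names are mine; the dBA output of any specific earphone/volume combination comes from the researchers’ measurements, not from this formula):

```python
import math

def niosh_permissible_minutes(level_dba, criterion_level=85.0, exchange_rate=3.0):
    """Minutes of exposure at level_dba that produce a 100% NIOSH daily noise dose.

    NIOSH allows 8 hours (480 minutes) at 85 dBA; each 3-dB increase
    in level halves the permissible exposure time.
    """
    return 480.0 / 2 ** ((level_dba - criterion_level) / exchange_rate)

def safe_minutes(level_dba, dose_fraction=0.5):
    """Listening time that reaches the given fraction of the daily dose
    (the researchers' table uses a 50% dose as its cutoff)."""
    return dose_fraction * niosh_permissible_minutes(level_dba)

print(round(safe_minutes(85)))  # 240 minutes per day at 85 dBA for a 50% dose
print(round(safe_minutes(94)))  # 30 minutes per day at 94 dBA
```

The steep halving is the key point: a seemingly small bump in volume can cut your safe listening time dramatically.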
The 2007 San Francisco Chronicle Top 100 Restaurant list was just published today, so I have revised my interactive chart for exploring various aspects of the top entries in San Francisco. I did this previously for the 2006 list using IBM’s Many Eyes, analyzing the SF entries and evaluating the relationships among the rating categories.
Click on the figure to the right to be taken to the website, where you can easily plot data on the top SF restaurants as you choose. I’ve chosen to plot Food Rating on the horizontal axis, Price on the vertical axis, and Overall Quality as symbol size. After you click on the figure, you can hover your mouse over each data point to see which restaurant it corresponds to.
As with last year’s list, Ton Kiang continues to be the best-value restaurant in San Francisco (near the bottom right: high food quality, low price), with Chow, Range, and Delfina among the next best values.
One wonders why Kokkari is still in the Top 100 given how it sticks out from the crowd (it’s the tiny dot near the top left: low food quality, high price). Kokkari used to be a top restaurant in the city, and I wonder if its continued Top 100 presence is simply due to lethargy in updating the list. I noticed that every new restaurant added to the list this year had solid ratings of 3 across the board, while Kokkari has mediocre ratings in every category except Atmosphere, a contrast that is painfully obvious in the Many Eyes plot. There are 73 restaurants in San Francisco with an Overall Quality rating of 3 stars or more; why does the Top 100 include only 52 of them? And why do Hog Island Oyster Company and Tartine Bakery, both entries in the Top 100 Restaurant list, have no Overall Quality rating at all from the SF Chronicle? Is it because they do not really qualify as restaurants?
As with last time, I did a correlational analysis of the restaurants to see what was responsible for the high ratings, and to see what higher prices bought a customer and what, if anything, is associated with good service.
The data above (see my previous post for an explanation of what this analysis means) show how each category rating is related to the others. The closer the magnitude of a correlation is to 1, the more strongly the two categories are related; the closer it is to 0, the less they are related.
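For anyone who wants to reproduce this kind of analysis, the whole correlation matrix takes only a few lines to compute. The ratings below are made-up stand-ins, not the Chronicle’s actual numbers (I’ve made Food and Overall identical just to show what a strong relationship looks like):

```python
import numpy as np

# Hypothetical ratings for a handful of restaurants -- illustrative
# stand-ins, not the SF Chronicle's data. Columns follow `labels` below.
ratings = np.array([
    [3.5, 3.0, 2.5, 3.0, 3.5],
    [3.0, 2.5, 3.0, 3.5, 3.0],
    [2.0, 2.0, 3.5, 3.5, 2.0],
    [3.5, 3.5, 3.0, 2.0, 3.5],
    [2.5, 3.0, 2.0, 1.5, 2.5],
])
labels = ["Food", "Service", "Atmosphere", "Price", "Overall"]

# np.corrcoef treats each *row* as a variable, so transpose the table
# so that each rating category becomes one variable.
corr = np.corrcoef(ratings.T)

for i, name in enumerate(labels):
    print(name, np.round(corr[i], 2))
```

Because the Food and Overall columns above are identical by construction, their correlation comes out as 1.0, the extreme version of the 0.88 relationship in the real data.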
Many of the same relationships hold from last year. The Overall Quality score was predominantly determined by Food Rating (correlation 0.88). This reflects the criteria that chief restaurant critic Michael Bauer uses to come up with his Overall Quality score (what the SF Chronicle refers to as a restaurant’s rating).
Noisiness was more strongly (negatively) correlated with the overall score this year (-0.35 this year vs. -0.18 last year). Paying more will now get you slightly better service (0.36 this year vs. 0.22 last year), although a correlation of 0.36 is still pathetically small; I’d like to see this number well above 0.5. Last year, the only thing that paying more seemed to get you was better atmosphere, but this year Price is about as correlated with Atmosphere (0.60) as it is with Overall Quality (0.56).
Note: I have not considered levels of statistical significance in this analysis, nor have I computed partial correlations, which would be a more accurate but more time-consuming approach.
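For the curious, partial correlations are not hard to sketch: they can be read off the inverse of the correlation matrix. The demo below uses two correlations reported above (Overall–Food 0.88, Overall–Price 0.56) plus an assumed Food–Price correlation of 0.5; that last number is made up purely for illustration.

```python
import numpy as np

def partial_correlations(corr):
    """Pairwise partial correlations, each controlling for all remaining
    variables, obtained from the inverse (precision matrix) of the
    correlation matrix: rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
    prec = np.linalg.inv(corr)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Illustrative 3-variable case; the 0.50 Food-Price entry is an assumption.
corr = np.array([
    [1.00, 0.88, 0.56],   # Overall
    [0.88, 1.00, 0.50],   # Food
    [0.56, 0.50, 1.00],   # Price
])
pcorr = partial_correlations(corr)

# Overall-Price association after controlling for Food is much smaller
# than the raw 0.56:
print(round(pcorr[0, 2], 2))  # → 0.29
```

This illustrates why partial correlations would be more accurate here: much of the apparent Price–Overall relationship may simply ride along with Food Rating.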