The Science of iPods and Hearing Loss

After being hit with a lawsuit claiming hearing loss from iPod listening, Apple responded by upgrading the iPod firmware to let people reduce the maximum volume level on their iPod. Great response from Apple, right? Wrong. Infinite Loop asks whether this may be an admission of guilt regarding the lawsuit, as does Lifehacker, and one definitely has to wonder whether this insufficient solution is simply a quick response to the lawsuit.

Apple’s response provides so little guidance on how to set the limit that it is near useless to concerned parents of children who use iPods or to concerned iPod listeners. Here’s why.

First, Apple provides no guidance on what level to set the iPod at and what that means for how long someone can listen at that level. Whether a sound is damaging depends on both the level of the sound and how long it is heard. With respect to hearing damage, listening to a sound for twice as long is equivalent to doubling the power of the sound. What does this mean? To properly set the level, you have to know how long the iPod will be listened to in a day; equivalently, after the level has been set, you need to know how long you can safely listen at that level in one day.

[Image: iPod nano volume settings, output levels, and safe listening times]
National standards from OSHA and other groups give guidelines for sound exposure and hearing protection. A sound at 85 dBA can be safely heard for 8 hours. Great, so what level is that on an iPod? Well, it depends on what you are listening to your iPod with, but for now let’s assume you are using the iPod earbuds. According to Harvard doctor Brian Fligor, 85 dBA is just below the 60% mark on the iPod nano volume control (indications are that it will be at an even lower setting on larger and more powerful iPods). So, at a 60% volume setting you can listen for 8 hours, but that time halves for every 3 dB increase in sound level (that’s a doubling of power). Again according to Fligor, a volume setting of 80% on the nano produces a 98 dBA level, for a safe listening duration of just 23 minutes! What about full volume? 111 dBA and 1 minute of safe listening. Complete data is shown on the right and is from this paper in The Hearing Review.

The case is even worse if you listen to your iPod (like I do) with insert earphones like the Etymotic ER-6is and Shure earphones. Acoustic measurements indicate that levels are approximately 7 dB higher at the eardrum with those devices. This means that for the same volume setting, insert earphones cut the safe listening time by a factor of four or more.
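To put numbers on this arithmetic, here is a minimal Python sketch. It assumes the 8-hour/85 dBA baseline, the 3 dB halving rule, and the roughly 7 dB insert-earphone correction described above, and it uses the nano volume-to-dBA figures quoted from Fligor; other players and earphones will map differently, so treat the output as illustrative.

```python
# Minimal sketch of the safe-exposure arithmetic described above.
# Assumptions: 8 hours is safe at 85 dBA, the safe time halves for every
# 3 dB increase, and insert earphones add roughly 7 dB at the eardrum.

def safe_hours(level_dba, insert_earphones=False):
    """Estimated safe daily listening time at a given output level."""
    if insert_earphones:
        level_dba += 7  # inserts measure roughly 7 dB higher at the eardrum
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

# Approximate iPod nano volume-to-dBA values quoted from Fligor (earbuds)
for setting, dba in [("60%", 85), ("80%", 98), ("100%", 111)]:
    buds = safe_hours(dba) * 60
    inserts = safe_hours(dba, insert_earphones=True) * 60
    print(f"{setting} volume ({dba} dBA): ~{buds:.0f} min with earbuds, "
          f"~{inserts:.0f} min with inserts")
```

Run as written, this reproduces the figures above: roughly 8 hours at the 60% setting, about 24 minutes at 80%, and barely a minute at full volume, with each time cut by a further factor of about five when inserts are used.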

So, you can now see why the firmware upgrade is inadequate. First, it does not ask whether the person is listening with insert earphones or earbuds–Apple’s software should tell people to set the limit lower if inserts are used. Second, it gives no guidance at all as to how long the iPod can be used at that maximum setting. Right now, the volume limit feature is like being given medicine with no guidance on dosage or how often to repeat a dose. If Apple wanted to, they could design firmware that embeds a sound level meter inside the iPod and beeps when the daily safe limit of sound exposure has been reached.
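Purely to illustrate what such a built-in dose meter could do, here is a rough Python sketch that accumulates exposure as a fraction of the allowed daily dose, again using the 8-hour/85 dBA baseline and the 3 dB halving rule; the listening profile is invented for the example and the beep is just a print statement, so this is a sketch of the idea, not anything Apple ships.

```python
# Rough sketch of a "daily dose meter": each minute of listening uses up a
# fraction of the allowed daily exposure (8 h at 85 dBA, halving per 3 dB),
# and the device warns once the running total reaches 100%.

def allowed_hours(level_dba):
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

# Hypothetical day of listening: (minutes, estimated output level in dBA)
listening = [(90, 85), (30, 95), (60, 100)]

dose, elapsed = 0.0, 0
for minutes, dba in listening:
    for _ in range(minutes):
        dose += (1 / 60) / allowed_hours(dba)  # fraction of daily dose per minute
        elapsed += 1
        if dose >= 1.0:
            break
    if dose >= 1.0:
        print(f"Beep: daily safe exposure reached after {elapsed} minutes")
        break
else:
    print(f"Dose used today: {dose:.0%}")
```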

[Image: audiogram of the 15-year-old patient showing a 20 dB threshold difference at 4 kHz]
The latest Hearing Review issue has several excellent articles on music and hearing loss, including "Portable" Music and Its Risk to Hearing Health and The Medical Aspects of Noise Induced Otologic Damage in Musicians. The articles are easy for the average person to understand. One researcher offers the following evidence that portable music players cause hearing damage. Dr. Fligor saw a 15-year-old patient who had significant wax in his right ear. After removing the wax, hearing thresholds were measured in both ears. The ear that didn’t have the wax showed a 20 dB hearing loss at 4 kHz while the ear with the wax was normal. The teenager said that he listened to his portable music player at full volume, and Fligor concludes that the most likely explanation for the loss in the one ear was the music player: the ear with wax showed no loss because the wax naturally reduced the loudness and protected that ear. The audiogram for this child is reproduced on the right, and the yellow highlight shows the 20 dB difference in hearing threshold between the two ears at 4 kHz.

Concerned? Data suggests that you can listen to your iPod at 50% volume indefinitely, and at 60% volume for over 4 hours. You will be tempted to go well above that if listening in an already noisy place, like on a subway or an airplane, but consider the consequences if you do and maybe leave the iPod off until you get to a quieter place.

Navigating to New Worlds of Innovation

Guy Kawasaki, the former Chief Evangelist at Apple and startup guru, has a great blog dishing advice to entrepreneurs. You can read about what to ask a startup if you are being recruited, how to run a board meeting, how to be a great moderator at a conference, how to raise angel capital for your start-up, the list goes on. I highly recommend subscribing to his feed.

Kawasaki recently posted, surprisingly, on a book called What Would Jackie Do?, a self-help book with lessons from the life and style of Jackie Onassis. Not the kind of book that you expect a technology and business expert to post about. The point that Kawasaki really wanted to make (I think) is at the very end of his post:

One of my recommendations for innovators is that they eat information like a bird eats food. (If you had the metabolic rate of a hummingbird, you would ingest approximately 155,000 calories per day.) This means reading voraciously–and not just HTML for Bozos and Encryption for Lovers–but books like these that are seemingly unrelated to “business.”

Looking for new ideas and new opportunities from new sources is an important tactic for finding innovation. If you stick to the usual sources of information for your industry or technology, you may be inspired to develop incremental innovations, but you are unlikely to develop a radically innovative idea. Exposing creative people to fields of expertise different from their own lets revolutionary ideas develop in a lateral-thinking way. What’s not required is a whack to the head, however–what’s required is exposure to ideas, procedures, techniques, approaches, and technology that are incremental innovations in other fields but would be radical ones in yours.

The key to the success of this tactic–harvesting ideas from other fields–is identifying which areas have the greatest potential for opportunity. Sending your key R&D engineers to Fashion Week in NYC may get you some excited high-fives, but it will be unlikely to produce value for your company. Fields outside of your own but with overlapping technologies or areas of interest need to be identified–areas that may be producing new concepts that could be translated to your own products and services. In my field of auditory science, there are many examples of established techniques and theories from vision science that have been adapted to hearing to develop innovative new concepts in audition. The research center that I run is looking outside our field of hearing impairment to concepts in cognitive science to inform technology development in hearing aids.

Is it enough to send R&D people to new and different conferences? No. Every potential new concept from another field has to be examined from the perspective of a technical expert from your own field with knowledge of:

  • your industry’s current and past technology
  • your industry’s market definition
  • your industry’s customer needs
  • your industry’s open research issues.

If your opportunity explorer doesn’t have this knowledge of your industry, opportunities for plundering will go unidentified, because your representative won’t have the expertise to recognize one when it is revealed, and irrelevant concepts will be brought back, because your representative can’t analyze, synthesize, and filter on-site the new approaches they are exposed to.

Each potential opportunity from the other field that you are exploring has to be viewed through your own lens, from the perspective of the needs of your own industry. When you see an idea that is new to you, ask yourself

Is there an opportunity to apply this in my field?

What nugget of new information here can be applied to solving problems that currently exist in my field?

Once you broaden your scope to absorb information from outside of your normal sources (although Jackie O may be a little too far outside), you will be surprised at how much useful information is out there. The key is being able to navigate your way to the new worlds of ideas and having the creativity to identify potential opportunities when you see them.

Presentation Tips

Lifehacker recently had a post called Public Speaking Do’s and Don’ts. It’s a little basic, but there were a couple of points that I liked seeing: (i) know your audience and (ii) be flexible.

The first tip gets violated by speakers who give the same talk regardless of who their audience is, which produces some of the worst audience experiences imaginable: excruciating engineering details to clueless consumers, marketing jargon to annoyed scientists. I wouldn’t, however, phrase this advice the way that Lifehacker did, recommending that you ask yourself, "Will the head of the company be there or just your co-workers?" The CEO may rely on the opinions of "just your co-workers" to judge the value of what you spoke about. Don’t under-estimate or de-value your audience.

The second point is something I’ve mentioned before with respect to VC pitches: if your audience tells you they are really only interested in digging into your financials, don’t spend all of your time reviewing your technology. Be flexible and able to adjust your talk on the fly–which, of course, means that you are not reading from a script.

I have a couple of other tips from my experience:

Once, twice, three times a message

Start your talk by summarizing for your audience what your key message is, then give the talk that explains your key message(s), then finish by summarizing what the key message was. Repetition like this makes it more likely that your audience will walk away from your talk remembering your key points. It also helps them recognize the key points when they occur, because you’ve already told them up front what to look for and they anticipate the message.

Have a conversation

You should be able to give your presentation without your slides. This means that you know your talk well enough that it becomes a conversation with the audience rather than a read script. Every time you give your talk, the words should be different but the content should be the same. A speaker using this technique is more engaging to the audience. To be able to do this, you should be talking about what you know very well. If you don’t know your content inside and out, you won’t be as confident or as able to speak naturally and with passion.

Engage your audience

If at all possible, get some audience interaction. This only works, of course, in a less formal conference/convention/classroom presentation and may not fly in a formal business or scientific presentation. I saw audience engagement done extremely well during a conference at Stanford where audience members could send questions throughout presentations via wi-fi to the presenters. They also had individual response boxes that allowed every audience member to enter letters A-F and Yes/No, allowing presenters to frequently ask the audience questions and quickly show the results from the response boxes in their multimedia presentation. I recently gave an online presentation that allowed me to ask questions of the audience and then show the results after they responded using their computers. Most of us don’t have the luxury of such polling technology during a typical presentation, but PowerPoint can be coaxed into simulating this–a technique that I will talk about in a future post.

Great advice on giving presentations without relying on a slew of bullet points can be found at Presentation Zen and on the Beyond Bullet Points discussion board.

Best in Class for Dummies

Geoffrey Moore recently posted that “’best in class’ is a sucker bet,” meaning that applying resources to develop a best-in-class product does not provide a valuable Return on Innovation.

Moore states that only three innovation strategies are worth a company’s time:

  • Differentiation
  • Neutralization
  • Productivity

This will no doubt start several debates as Moore’s words spread through the blogosphere.

Like most rigid declarations, in some cases Moore’s theory holds and in some cases it doesn’t. I have no doubt that there are many examples where the pursuit of Best in Class was not worth the effort spent (Moore gives IBM’s and HP’s pursuit of the best PC as such an example in his response to a reader’s comment). Randy Cronk disagrees with Moore and suggests that luxury brands are counter-examples.

I suggest that fields with strong indicators of which products are best in class will also provide returns on innovation–the automobile industry is one such example. Another is the field of medical devices and biotech, where product decisions are based on patient indications and clinically proven patient benefits. An increase in efficacy of a new treatment over current solutions can result in large product success without requiring any of the three strategies that Moore defined.

The Value of Outsider Perspectives

An article that I wrote was just published in an industry trade journal called The Hearing Review. The article focuses on how the hearing aid industry is perceived by those outside of the industry and what we can learn from that perception.

I wrote the article after having listened for years to the opinions of people in the Bay Area and around the world on the hearing aid industry. While many of their perceptions were incorrect or out of date, they still revealed to me what we as an industry were doing wrong. I also thought about what the implications of these misperceptions were for our industry–we should not simply dismiss them as wrong and irrelevant, because those opinions have indirect consequences for the growth of our industry.

One of the main messages of the article is that evidence-based product development by companies and evidence-based practice by audiologists can help spur the development of new technology. If companies provide clinical data on patient benefit for their products, and audiologists use that data in their decisions about what technology to give their patients, entrepreneurs will be more motivated to develop new technology for the hearing impaired. They will have confidence that demonstrating patient benefit with their technology will result in success in the marketplace and a profitable return on their investment.

A side-bar in the article discusses how to realistically assess the potential market size of the hearing aid industry, a topic that I discussed relative to any industry in a previous post called How Big Is Your Potential Market.

Here are a couple clips from the article:

Despite all of these positive indicators that our industry is healthy and growing, many people outside of our industry still look upon the hearing aid business as if it were small, antiquated, and uninteresting. These outside opinions affect future customers by influencing their opinion of hearing technology and the process of obtaining a hearing aid. It also affects our industry in more subtle ways: Top university graduates may not be attracted to work in our industry, technology innovators may not be interested to contribute to our industry, and high-tech companies may be reluctant to pursue business opportunities in our arena. These influences can only have the effect of dampening the growth potential for dispensing professionals and companies in the hearing aid industry.


We can embrace the aspects of other industries that are likely to bring about positive change and that promote the integrity and quality of its products and services. Valid clinical data that demonstrate product benefit should become a part of product development industry-wide. This clinical data should be made openly available to audiologists and other dispensing professionals, and these professionals should embrace the evidence-based practice approach. Hearing care professionals should know what benefit new products provide to patients, should demand supportive data from manufacturers, and should be cautious when they see claims that are vaguely worded with no supporting data. This approach is understood by everyone outside of our field, and by embracing these standards industry-wide, we can only increase support for our industry among government, health insurance companies, health care professionals and potential patients.

Mathematical Sleuthing

I heard a great interview last week on NPR with Justin Wolfers, a professor at the Wharton School of Business. The story was also covered by papers such as the San Jose Mercury News. Wolfers has done statistical research concluding that approximately 500 NCAA basketball games over the past 16 years have involved point-shaving: about 6% of games with strong favorites, or roughly 1% of all games. I like his application of simple mathematics to analyze something that everyone can understand, and I also like the name for the field of this application: forensic mathematics.

Wolfers analyzed the scores of tens of thousands of NCAA Division I games and compared their final point differences to the Vegas betting line for those games. He discovered, in a way that can only be done with a huge set of data, that games in which one team was favored by more than 12 points showed a statistically significant anomaly in the distribution of final point differences, indicative of point-shaving.

Point-shaving is when a key team member deliberately plays worse so that his team’s score lands on the low side of the Vegas spread, but typically not so much that his team loses. This is most easily done by a key contributor to a dominant team in a game it is expected to win by a large margin: that player could turn over the ball a couple of times or clunk a few shots off the rim near the end of a game his team will clearly win. For example, a player ensures that his team wins by only 10 points when the Vegas spread had them winning by at least 12.

How can statistics tell whether such a thing happened? By looking at conditional probabilities. Wolfers looked at the final point difference in games where the predicted spread was 12 points or more and found that the winning teams scored less than predicted. The obvious objections are: (i) maybe Vegas was wrong, and the betting line typically over-estimates the spread in lopsided match-ups, or (ii) maybe overwhelming favorites tend not to play as hard when they are clearly the dominant team, so they tend not to score as much as expected. Wolfers raised these reasonable objections in the NPR interview and explained how his analysis ruled them out, but I’m guessing that his quick verbal explanation wasn’t so clear to NPR listeners, so I’m going to provide a visual of what he found.

[Image: simulated point-difference distributions for hypotheses A, B, and C]
Wolfers looked at the distributions of game score differences conditioned on the predicted Vegas spread. The figures at the right are my representations of distributions for three different hypotheses. Figure B in the center shows what such a distribution probably looks like if the predicted spread of 12 points was accurate on average. The Central Limit Theorem tells us that this distribution will be Gaussian if the individual points scored are independent samples with equal probability distributions–certainly not true, but perhaps not an unreasonable assumption. In this case, the Vegas line was accurate, and the number of games with point differences above 12 points equaled the number with point differences below 12 points. Figure A at the top shows what the distribution would look like if Vegas tended to over-estimate the point spread for these games, or if the favored teams tended to slack off and under-achieve when the spread is this high. The distribution is approximately the same shape as Figure B, but shifted to the left. Figure C at the bottom shows what you would expect if favored teams cut a few points out of their win when the game is close to the spread: the number of games that barely beat the spread would be lower than expected and the number of games that barely missed the spread would be greater than expected. The distribution found by Wolfers for lopsided games resembled Figure C.
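To make the three hypotheses concrete, here is an illustrative Python simulation; the 12-point spread, the standard deviation, and the simple "give back four points" shaving rule are invented for this example and are not Wolfers’ actual model or data.

```python
# Illustrative simulation of the three hypotheses sketched above for games
# with a 12-point Vegas spread. Not Wolfers' model; parameters are invented.
import random

random.seed(0)
SPREAD, SIGMA, N = 12, 10, 100_000

def margins(shift=0.0, shave=False):
    """Simulated final winning margins for N heavily favored games."""
    out = []
    for _ in range(N):
        m = random.gauss(SPREAD + shift, SIGMA)
        # Hypothesis C: a favorite that would barely beat the spread gives
        # back a few points and finishes just under it instead.
        if shave and SPREAD < m < SPREAD + 4:
            m -= 4
        out.append(m)
    return out

for label, data in [("B: accurate line", margins()),
                    ("A: line set too high", margins(shift=-2)),
                    ("C: point shaving", margins(shave=True))]:
    beat = sum(m > SPREAD for m in data) / N
    print(f"{label}: {beat:.1%} of favorites beat the spread")
```

In scenario B roughly half the favorites beat the spread, in A the whole distribution shifts left, and in C the games that would have landed just above the spread pile up just below it instead, which is the asymmetry Wolfers found in the real data.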

The mathematical analysis isn’t too rigorous but is compelling nonetheless. What does this all mean? Point shaving probably occurs in NCAA basketball more than some of us thought. More importantly, however, if you are going to bet on a game with a large point spread, bet against the spread!

A Little Help, Avis?

Once again today, I drove my rental car into a gas station and suddenly realized that I had no idea which side the gas tank was on. I checked both side-view mirrors hoping to see the cover in one of them, but with no luck. So, I made a guess, pulled up to a pump, got out of the car, and sure enough I had made the wrong choice. Got back in, backed up, drove to the other side of the island. Wasted time, added annoyance.

This scenario, sadly, gets repeated with an embarrassing frequency for me. On which side of my car rental is the @#$ gas tank? I frequently think about checking which side it’s on while I’m driving down the highway, or in a meeting, or in a hotel, but I always have something else on my mind when I can actually check and inevitably fail to do so until it is too late.

[Image: dashboard sticker indicating which side the gas tank is on]
I can’t believe that I am alone in my repeated “which side” dilemma. It’s a consumer need and a usability issue. Car rental agencies should put stickers like the one shown on the right on their dashboard to help poor customers like me when I’m in a mad rush to fill up the tank on my way to the airport.

I’m tempted to make one myself and bring it with me whenever I travel. But then I’d have to remember to check which side the tank is on to know how to orient the sticker…

The Art of Innovation, Invention, and Creativity

I spent an enjoyable evening today at dinner with friends talking about innovation: specifically, what differentiates being innovative from being creative from being inventive.

Answer the following questions:

A. e e cummings’ poetry is:

  1. creative
  2. innovative
  3. inventive

B. Name a painter who was creative but not innovative.

C. Can an artist be innovative but not creative?

D. Name an invention that was neither creative nor innovative.

These were some of the questions that we discussed, and I’d be curious to hear answers from those reading this and their reasons for giving them.

You’ll notice that our discussion tended away from technology and more towards artistic fields. This was intended as a way to shed new light on how we innately think of these concepts, what representations of these concepts have been built up in our lives. Thinking about which artists you consider creative, or innovative, or inventive, and why you make that distinction can illuminate our application of these terms to business, science and technology–fields where these terms get applied so often that their differentiation has become obscured.

With respect to the arts, the term creative seemed to require that the observer have a visceral response to the artwork, some level of appreciation or aesthetic response. The mere act of creating something does not demand that the act be denoted “creative” in this context. A work could be different and inventive while not inducing in the viewer/reader a response that creates the reaction “creative”. We can understand this in technology through Edison’s quote, “Genius is one percent inspiration and ninety-nine percent perspiration.” This quote from our greatest inventor can also apply to the act of invention. Developing something new can result simply from hard work and does not necessarily require any creativity at all.

Creativity also requires the context of history—a piece of art is judged creative when considering what has been done previously by that artist and by other artists. Context helps define what is creative and what is derivative.

Innovation, however, requires the contexts of both the past and the future. Innovation must be creative (the past, see above), but must also cause a change in the creations of others (the future). If someone creates a piece of art that incorporates a new technique, the piece would only be innovative if it inspired other artists to change how they create art, perhaps by creating a movement based around a new technique or approach. Innovation thus demands a social context of some sort that creativity does not.

So when is something an invention? Obviously it must be new, but if I throw paint at a piece of paper, then I’ve created something new but not something inventive. An invention must be new in the sense that it has both novelty and utility. Unlike creativity, inventiveness seems to require the creation of a tool of some sort that others can use. Invention can be disassociated from creativity in the sense that one can slog one’s way to an invention (or utility creation) without the flash of inspiration and imagination that is associated with creativity. One can create an invention simply by trying something over and over again until something works. This would not be a creative process. Nor would it be an innovation.

Or is this all wrong? I have a suspicion—no, I’m sure—that there are inconsistencies in these arguments and some of the statements are outright wrong. Which ones, I’m not sure. But it certainly is worth thinking about, and it definitely makes for a great dinner discussion.

Google’s World Domination Intact?

That Google was the only company that didn’t comply with the government’s request for search records last year wasn’t surprising, nor was the relief expressed by Google today over the judge’s ruling that attempted to alleviate the public’s concern about the security of online data.

Many commentators have been discussing the story as if the main issue is the privacy rights of  searching, but for Google it’s much more than that. What’s become clear over time is that Google’s long-term business strategy is to get people to search, store and create all of their knowledge online through Google services. Google wants us, ultimately, to store all of our data (documents, pictures, writings, knowledge) and all of our knowledge sources online in a way that they can be easily shared/searched/modified/mashed. Goodbye hard-drive, goodbye local apps: Google wants to provide everything for you. This frees you to access your data whenever you want with whatever machine you want: home PC, work PC, phone, pda-embedded sunglasses, implanted neurochip.

Why does Google have this strategy? Because that’s the logical business plan that one would create on the assumption that data storage will be essentially free, wireless broadband access will be available to everyone at all times, and people will have multiple devices with which they can access, store and create their information. Google’s original innovation–their search engine–is the linchpin of this personal total-knowledge system.

What could kill this business plan and what will be Google’s biggest hurdle to overcome? Distrust by consumers in giving their data/knowledge to Google. Forget people becoming leery of online searches. Google’s dreams of being the consumer’s omnipresent global knowledge partner would die in a flash if people thought their private data could be compromised. Hence their fight to not turn over search records.

A lawyer for the ACLU was perhaps reading too much altruism into Google’s actions when quoted in the New York Times as saying, "The mere fact that Google has stood up to the government is a positive thing." Google was just protecting their long-term business plan.

Kevin Kelly on Trends in Science

Kevin Kelly, co-founder of Wired magazine and The WELL and author of the blog Cool Tools, gave a talk called "The Next 100 Years of Science: Long-term Trends in the Scientific Method" at the Long Now Foundation Lecture Series. I was not really familiar with Kelly’s writings, but I attended the talk because of my interest in the topic and because I was familiar with Kelly’s reputation as a respected commentator. Needless to say, his Cool Tools blog did not prepare me for what to expect from his talk, nor does the blog do his talents justice.

Kelly is a self-proclaimed scientist groupie, being a college drop-out and having never participated in technology as a scientist or engineer. He contributes as a cultural commentator, which is how he approached his lecture. Kelly said that he is more interested in the process of science rather than science itself and noted that most scientists are “clueless” about the topic. His interest in talking about the future of science is in how the process will evolve, rather than what actual breakthroughs will be made. So, there was no speculation on the forthcoming prevalence of jetpacks, flying cars or replicants (those would be technological advances rather than scientific advances, anyway).

Despite the forward-looking title, Kelly spent much of his talk detailing key developments in the history of science. To predict future developments in the scientific method, he would look for patterns in the scientific process over the past 2000 years.

Kelly’s abbreviated history of the scientific process timeline went like this:

2000 BC: First bibliography
250 BC: First catalog
200 BC: First library with an index
1000 AD: First collaborative encyclopedia
1590: First controlled experiment
1600: Introduction of laboratories
1609: Introduction of scopes
1650: Society of Experts created
1665: The concept of necessary repeatability introduced
1665: First scholarly journal published
1675: Peer review introduced
1687: The concept of hypothesis/prediction introduced
1920: Falsifiability introduced
1926: Randomized design created
1937: Controlled placebo approach developed
1946: First computer simulation
1950: Double-blind refinement
1962: Kuhn’s Study of the Scientific Method

All of these are changes to the process of how we know something. The introduction of Falsifiability, for example, affected what we would consider a scientific theory: if a theory could not be proven wrong, then it wasn’t a theory at all (and could more likely be categorized as a belief).

After detailing his view of how the scientific method has evolved up until now, Kelly then went on to present five predictions of how science and the scientific method would change over the next century:

  1. Science will change in the next 50 years as much as it changed in the last 400. No doubt. Everything is accelerating, although we are highly unlikely to achieve a singularity as Ray Kurzweil suggests.
  2. It will be a Bio century. Kelly provided data that demonstrates how biology is already the biggest scientific field today and suggested that the amount that we have to learn over the next several decades will overshadow developments in every other field.
  3. Computers will lead the Third Way of Science. Kelly suggested that the general methods for making scientific progress have so far been Measurement and Hypothesis. He suggests that Computer Simulations will become just as important a tool in the scientist’s arsenal for advancing our knowledge and understanding. Don’t know how something works? Run simulations of every possible parameter set and permutation until you accurately model the behavior of the process that you are observing. I see this already in my field, and certainly simulations play a significant role in our understanding of many different systems today, from economics to physiology.
  4. Science will create new ways of knowing. Kelly (I think) is talking about tools here. He mentioned wikis, distributed computing, journals of negative results, and triple-blind experiments as examples of recent changes to the process of developing and sharing information. Distributed computing is the distribution of a parallel-processed problem to be solved across many connected computers, as is already being done by SETI and for conducting cancer research. Triple-blind experiments refer to the gathering of massive amounts of data and storing it for future experiments that haven’t been specified yet, with such a broad swath obtained that the control data can also be extracted from the database.
  5. Science will create a new level of meaning. Here Kelly extrapolated the concept of distributed computing by speculating on the power of all the computers on the internet as a single computing machine. He created analogies between this massive system and the structure of the brain. I have to admit, my notes are sketchy on this section, but they include discussion of both science and religion as consisting of infinite games and recursive loops, and proclamations that Science Is Holy and that the long-term future of science is a divine trip. I guess you’ll have to wait for his book for an explanation of these concepts.

The Q&A section after his talk was perhaps the most interesting part of the seminar. Kelly has clearly spent a lot of time thinking about these issues, and his thoughts are both entertaining and intellectually interesting even if you think that he has completely missed the boat and take issue with his non-scholarly approach.

Kevin Kelly seems like he would be an interesting guy to meet at a party for a memorable night of discussion.