Jason and I wound up in Washington, DC in the mid-1990s – he came down from Massachusetts when a friend, who was moving overseas, offered him an apartment. I followed a year later to go to graduate school. We had always thought of DC as a temporary home, assuming that we’d stay for a while and then move back to Massachusetts, where both of our families were. Now, we realized how lucky we were to be in DC, where there were so many services and competent professionals in the field of hearing loss and deaf education to support families like ours. Because all of a sudden, we were a family eligible for and in need of services.

Before the end of our ABR appointment, Heidi had taken impressions of Grace’s ears for earmolds, so that she could be fitted with hearing aids. Because Heidi was leaving the area, she reassigned us to a colleague named Susan.

We met with Susan a couple of weeks after the ABR. She was a bit older than Heidi, and seemed to have more experience working with families. She told us about the hearing aid loaner program DC offered – so that Grace could get hearing aids for free until she was finished with them, or until we decided to fit her for permanent ones. Susan also gave us some information on local resources and services, although I have no concrete memory of exactly where she suggested we go. She most likely mentioned the AG Bell Association, whose national headquarters are located nearby in Chevy Chase, as well as the Parent-Infant Program available for free at Gallaudet University. I know she gave us several pamphlets, probably with information on the different types of education and language options we’d soon come to learn a great deal more about: Total Communication, American Sign Language, Cued Speech, Auditory-Verbal Therapy. Despite all of this literature, I felt lost as to how to proceed. I was daunted by my daughter’s newly assigned identity, and by what was now required of me to parent her under these alien circumstances.

We were thrown into the vocabulary of audiology. An audiogram is a mapped-out representation of a person’s response to sounds, achieved through hearing tests in a sound booth. The audiogram depicted Grace’s response – or lack thereof – at a wide range of frequencies. Frequencies had never meant much of anything to me until now. Now, they meant everything, because it was explained to us, from the start, that the severity of a hearing loss depended upon frequencies as much as decibels. In other words, it wasn’t just how loud a sound might have to be for you to hear it that mattered. The frequencies that are important – when it comes to navigating through the human world in a meaningful way – are the ones in which speech and language fall. And these, it turns out, are just about always the first to go. So, technically, even Grace might be able to hear the loud low bass rumble of a garbage truck going by (how romantic, how reassuring…) or a big chopper revving its engine a couple of feet from her head. But these very low frequency sounds were the only ones she had a chance of any access to. The rest were falling on truly deaf ears.

We learned about the differences between an audiologist (who is trained to test hearing and assist in the use of hearing devices), a speech pathologist (who focuses on speech and language development) and an otolaryngologist (an ear, nose, and throat doctor). Eventually, we took Grace to each of these. There were so many appointments between her fifth and twelfth month it was hard to keep them straight.

What had been so innate and inherent – the way we communicated with Grace – was now rendered null. Looking back, four and a half months seems like so little time lost. But from where I was standing, it was painfully long. It was hours and hours of lost time. I felt like a fool. I’d been talking to her constantly for what felt to me like a small lifetime. Now, I was to start over again. We began to read books about different communication options. I learned that even the most skilled and patient lip-readers can only truly “read” 40–50% of the words they’re trying to decipher, if that. After all, there’s essentially no visible difference between lips saying “b” and lips saying “p,” to take just one example. And so the lip-reader must fill in the blanks using rapid-fire deduction and a well-honed sense of context. Like walking through life trying to instantaneously understand half-filled-in hangman games, flashing by and evolving second by second. How exhausting. How unfair.

We learned that American Sign Language is a different language from the signed languages in other countries, and that many deaf people use signed English or “pidgin” sign, rather than ASL, depending upon the circumstances under which they were raised and educated. ASL has its own syntax, and facial expressions are critical. I learned that the size or grandiosity of a gesture is sometimes as important as the gesture itself. I learned, too, that the part of the brain that is tapped when learning spoken language is different from the part of the brain that is activated when sign language is perceived and used. So, teaching a child Spanish would exercise the same language center that teaching her English would; but exposing her to sign language was accessing a different area. The longer Grace went without exposure to spoken language, the more that part of her brain would lie dormant, and ultimately become incapable of easily grasping it. That area of the brain might stagnate, or become reassigned for other purposes; unless we somehow found a way to give her auditory input – and do it soon – her capacity for grasping spoken language – for turning it into something meaningful to her – was going to diminish and become limited.

I became familiar with the anatomy of the ear – to which I’d never before given more than a passing thought, or had memorized just long enough to pass a test in basic high school anatomy, and then forgotten. Now, we needed to learn about hair cells, and auditory nerves, and the cochlea. About different causes of hearing loss, including bacterial and viral infections, genetic mutations, and complex conditions that are often accompanied by other symptoms.

We learned that hearing aids only work to any meaningful degree when there is enough hearing intact for the amplified acoustical information entering the inner ear to register in the first place. The way hearing aids work is simple – they amplify whatever sounds a person can already hear – like a tiny, high-powered auditory telescope. For people with mild, moderate, or sometimes even severe hearing loss, hearing aids can be a very viable and meaningful solution.

For the profoundly deaf, however, they are often worse than ineffective. I remember going downstairs to the fifth floor of our building at work, where the creative department was housed, and typing a question to M_, one of our programmer/designers who happened to be deaf. In writing, I asked him if the reason he didn’t wear hearing aids was because they didn’t offer him any access to sound. He wrote back, “They help me hear enough so that I know I’m missing something. With my hearing aids on, I can sort of hear my co-workers laughing, but I have absolutely no idea what they’re laughing at. I’d rather not hear anything than know how much I’m missing. It’s less of a distraction.” I imagine it was, in some strange way, less isolating, too.