Please read this out loud: “Saturday”
You just voiced a series of “phonemes.”
[Phoneme = Any of the perceptually distinct units of sound in a specified language that distinguish one word from another. (Oxford Languages Dictionary)]
What is the difference between that and the series of phonemes in “notskarliomsk”?
The first sequence of phonemes constitutes a word that has been created and accepted in English-speaking societies. You can find it in the dictionary if you look it up (it is “lexical”).
The second sequence of phonemes is not in the English dictionary. It points at (signifies) only one concept: “meaningless sounds coming out of a mouth.” Both units are sounds, but one is coded as a linguistic unit with a direct meaning; the other is not. Arguments over the meaning of music almost always end up here.
The coded phoneme combinations in language (words) are symbolic signs. Almost all other sounds are indexical signs. What does this mean?
It means that when your brother comes to the front door and shouts “open the door,” you don’t say “since my brother is saying ‘open the door,’ then my brother must be saying ‘open the door.’” Your brother is vocalizing certain phonemes that constitute words which symbolize certain concepts.
But if your brother rings the doorbell instead, you say “since the doorbell is ringing, there must be someone at the door.” You form a cause-effect relationship based on what you have learned from your past experiences. The sound of the doorbell is a signifier just like the uttered words, but its function is to point at a concept one associates it with. That is why it is an indexical sign (see “Modes of Signs”).
We primarily recognize the timbre of sounds and differentiate them accordingly – the sounds of plates, wood, a horn, an explosion, an automobile, a piano, etc. (denotation). In addition to timbre, the other three components of sound – pitch (high-low), duration (short-long), and volume (loud-soft) – are also considered in differentiation, but we pay special attention to timbre because it points at the source (it may be a sign of good or bad news).
However, we don’t stop after recognizing the source and the characteristics of the sound; we find out what it is pointing at (connotation) in order to assign meaning to it. When the sounds of plates and silverware point at the arrival of dinner time, when the alarm sound of the cell phone points at wake-up time, when the sound of an explosion points at danger, they become semiotic signs.
As long as the volume or the pitch of a sound does not create physical disturbance in the ear, the structural differences between nonverbal sounds do not make one sound superior, better, prettier, uglier, or warmer than another. The adjectives we assign to sounds do not result from their physical properties. We say, “I heard a terrifying sound last night,” for example. What is terrifying is not the sound itself; it is the source we associate it with that implies unwanted consequences.
For example, there is no difference between the sound of a slap in the face and the sound of a clap: when we find out the sources, we consider the slapping scary and the clapping positive. For example, sirens sounding at a factory, which we would perceive as an alarm about some problem, may turn out to be announcing the lunch break. For example, some find the sound of the accordion moving while others find it unbearable. The reason is not the physiological effects of the sound waves coming from the instrument, differing from eardrum to eardrum, but the particular associations and connotations those people have developed.
We develop aural associations unknowingly, subconsciously, relating sounds to the contexts in which we hear them. This is because we don’t have a direct, communicative relationship with nonverbal sounds. As Ivan Pavlov demonstrated in his experiments, if you ring a bell whenever you feed a dog, at some point the animal will salivate upon hearing the bell even if food is not served. Similarly, we don’t even notice the presence of a violin solo in the background of a romantic scene between a couple in a movie, but if there is a trombone solo in the background of that scene, we notice it and try to figure out what is “funny.”
Because aural associations operate at subconscious levels, sound is used extensively for the subliminal manipulation of consumers (there is always music playing in all types of stores; there is always background music in all movies, even in the most factual documentaries).
In addition to our efforts at interpretation, our minds also follow some rather interesting strategies in our physical relationships with sounds that surround us at all times:
I first heard John Cage’s name when I was in my first or second year of college, in Istanbul, at a seminar on electronic music by the composer Ilhan Mimaroglu. After hearing some samples from a tape recorder, someone in the audience said something like “but these are not music, they are noise.” Mimaroglu replied, “As John Cage put it, if you are sitting somewhere listening to Mozart, the sounds coming from the street will be noise. But if you are listening to the sounds coming from the street, then Mozart becomes the noise.” I wondered why that very simple and fully realistic observation had never occurred to me before.
There is sound at every moment of our lives. As Cage said, there is no such thing as silence. To be able to deal with that, we have developed a technique of stratification: we bring some sounds to the fore and push others to the back, depending on the situation (“aural segregation”). In other words, we hear all of the sounds but we don’t listen to all of them; we listen only to those we find necessary or worthy. We filter the sound. (Thompson, 10) Those sounds we cannot push to the back because of their volume or intensity become “noise.” The statement “turn that TV down, we can’t hear each other” means that the person is trying to move the TV’s sound to the rear, but its volume makes that difficult; the sound becomes “interference,” that is, “noise.”
For example, one of the most remarkable differences people visiting Turkey from Western countries notice is the call to prayer, which is heard from the loudspeakers on the minarets at full volume five times a day. After being woken up early in the morning for a few days, the visitors stop noticing that sound, just like the locals. The brain pushes the sound to the back – it hears it but doesn’t listen to it.
I am trying to emphasize here that there is a matter of preference in our perception of sounds. We hear all sounds but do not listen to all of them. It is one of the practical solutions we have come up with in our struggle for survival. Yet, this does not mean that we keep those sounds we don’t listen to completely “offline” – our brains link and associate all of the sounds we hear with the visuals of the contexts we are in.