WTF Fun Fact 12768 – English as a Second Language (ESL)

English is the most commonly spoken language – but only because so many people learn it as a second language.

Does the Language You Speak Affect How You Think?

Imagine enjoying a picnic when a friend suddenly informs you that there is a spider on your north shoulder. Would you know which shoulder to check? This might sound bizarre to an uninitiated English speaker, but to a speaker of Gugu Yimithirr, it would be perfectly natural. Speakers of this Australian language (and several others around the world) primarily use absolute directions when describing everyday situations, to the exclusion of relative directions like “left” and “right”. Similarly, for speakers of Tzeltal (a language spoken in mountainous southern Mexico), everyday spatial relationships are described in terms of “uphill” and “downhill,” reflecting the terrain where the language is spoken. Speakers of languages that favour absolute direction are reported to have a highly accurate awareness of absolute directions without the aid of a compass, even indoors or in the absence of obvious landmarks.

This is a striking example of how differences in language may affect our perception of the world around us. The idea that language and thought are intertwined is not a new one. Plato suggested that “The soul when thinking appears to me to be just talking.” In more modern times, Benjamin Lee Whorf described it as “the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds–and this means largely by the linguistic systems in our minds.” This idea that language shapes how we think and see the world goes by several names, including the Sapir-Whorf Hypothesis, Linguistic Relativity, and the Whorfian Hypothesis. But how true is it really?

Traditionally, views have been described as either “universalist” or “relativist,” but these views can encompass a wide range of different ideas. The strongest forms of the relativist view include the notion that language constrains the range of thoughts available to us (often called linguistic determinism) or even that we do all of our thinking through language.

The idea that we think in a language has some intuitive appeal. However, it’s pretty easy to show that this can’t be entirely true. Steven Pinker, a prominent critic of linguistic determinism, pointed out that “Sometimes it is not easy to find any words that properly convey a thought. When we hear or read, we usually remember the gist, not the exact words, so there has to be such a thing as a gist that is not the same as a bunch of words.” Our ability to interpret ambiguous sentences also hints at thought that exists independently of language. When we read (or hear) a headline like the famously ambiguous “British Left Waffles on Falkland Islands” (often attributed to the Guardian), we can apply two different meanings—one involving the indecision of a political party during the 1982 Falklands War and the other involving an abandoned breakfast treat. It could be argued that these multiple interpretations (two or more “thoughts” for one string of words) wouldn’t be possible if language and thought were one and the same. If people couldn’t think beyond the confines of their language, it would also be impossible to create new words—a process that we know occurs regularly; just look at the number of words we invent every year for things on the internet.

On the other hand, we do know that language can facilitate memory and helps us grasp complex concepts like numbers, directions, and understanding the viewpoints of others (also known as Theory of Mind). People who grow up with limited early language exposure (often due to hearing loss and lack of access to sign language) experience difficulties and delays in a variety of cognitive domains. In this sense, language does appear to affect how we think. What is less clear is whether (or how much) learning different languages results in different patterns of thinking.

Debate between the relativist and universalist sides has often been heated, and recent research acknowledges that the strongest forms of both views are unlikely to be true. However, before discussing evidence, it’s worth noting that this field of research remains controversial. The validity and significance of nearly all experimental findings on both sides of the debate have been questioned in some form or another.

But let’s dive into it, shall we?

At a very basic level, language experience almost certainly affects how we perceive certain sounds. For example, children of English and Hindi speakers respond differently to the sounds /t̪/ (dental t, pronounced with the tongue touching the front teeth) and /ʈ/ (retroflex t, pronounced with the tongue further back than English /t/). To English-speaking adults, these sounds can be difficult to distinguish, both sounding like some sort of /t/. However, for Hindi speakers, these sounds are clearly different, as different as, say, /t/ and /d/ in English. Interestingly, infants are born with the ability to tell these sounds apart. At six months, babies with English-speaking parents can distinguish these sounds easily, but by the age of one year, English-learning infants lose the ability to distinguish them, while Hindi-learning infants keep it. Similar patterns have been found for numerous sound contrasts in languages from around the world. These differences show how language can change our perception at a very basic level, but this is not what people usually think of when they think about Linguistic Relativity. What about other domains of perception not directly linked to the mechanics of language itself?

To begin with, let’s discuss time.

Most, if not all, languages use spatial metaphors to describe time, but differ in how they achieve this. English tends to describe time as moving horizontally, with the future in front of us and the past behind. We talk about looking forward to something or putting the past behind us. Mandarin Chinese, on the other hand, often uses vertical metaphors, with the past above and the future below (although it does use horizontal metaphors too). The Mandarin word for “next” is the same as the word for “down” (下, xià), and the word for “previous” is the same as the word for “up” (上, shàng). This leads to the interesting question of whether speakers of these two languages actually think of time differently. And if that’s true, how do we measure it? Fortunately, psycholinguists are here to do this so that you don’t have to.

A few different methods have been used to investigate these questions. In one experiment, participants were asked to arrange pictures, such as photographs of a person at several different ages, sequentially. English speakers almost exclusively arranged the pictures horizontally from left to right, but Mandarin speakers arranged them vertically (from top to bottom) fairly often, around 30% of the time. In similar tasks, speakers of Hebrew (which is written from right to left) arranged pictures from right to left, opposite to the direction English speakers favored. A completely different pattern was found in speakers of Aboriginal Australian languages that use absolute direction (north, south, east, and west, rather than left and right). They tended to arrange the pictures from east to west, mirroring the direction of the sun’s daily journey across the sky. Experiments like this show an intriguing relationship between language and time perception, but it’s not always easy to determine whether language actually causes the differences. What if the language differences exist because of cultural differences in how time is conceptualized? This kind of chicken-and-egg problem can be difficult to avoid when studying the relationship between language and culture.

One potential way to get around this issue is to study bilingual speakers and observe whether they perform differently in their two languages. Experiments with Mandarin-English bilinguals have shown that they are more likely to give responses indicating a vertical perception of time when tested in Mandarin than when tested in English. In another experiment, Mandarin speakers were more likely to indicate vertical thinking about time when answering questions asked with vertical metaphors. These kinds of within-language results (which have also been found in other domains) show that language may affect how we think in the moment, but also that thinking can be quite flexible—not necessarily defined just by what language we speak.

Now let’s move on to the fascinating case of language and color. Color perception has a long history of being studied in relation to cross-linguistic perceptual differences. At a physical level, modern humans all have essentially the same hardware for color vision (color blindness and rare genetic variants aside). We generally have three kinds of cone cells in our retina, each attuned to different wavelengths of light. This means that the colors we are physically capable of seeing don’t vary across populations. However, the way language is used to describe color is highly variable across languages. The case of color demonstrates both striking differences in perception across language groups and evidence of universal patterns in cognition. This is still an active area of research and exactly how universal or variable color perception is remains controversial.

Many scholars have noted that Ancient Greek didn’t describe colors in the same way as modern English—classical texts attributed to Homer famously liken the appearance of the sea to wine and describe sheep using a color term that could also be applied to blood. When discussing the rainbow, the Greek poet and philosopher Xenophanes described it as having only three bands of color. In an omission surprising to many English speakers, it appears that Ancient Greek didn’t have any word corresponding closely to English “blue” (although blue hues fell within the scope of words that also covered other colors, including greens and greys). In fact, this absence of a specific word for blue has been noted in numerous other languages around the world, especially those spoken in ancient times. Writing of ancient Vedic hymns, philologist Lazarus Geiger observed that

“These hymns, consisting of more than 10,000 lines, are nearly all filled with descriptions of the sky. Scarcely any other subject is more frequently mentioned; the variety of hues which the sun and dawn daily display in it, day and night, clouds and lightnings, the atmosphere and the ether, all these are with inexhaustible abundance exhibited to us again and again in all their magnificence; only the fact that the sky is blue could never have been gathered from these poems by any one who did not already know it himself.”

English speakers may find this baffling, but in a worldwide linguistic context, lack of a specific category for blue isn’t particularly unusual. It’s so common for languages to have a single term that encompasses both green and blue that linguists even have a name for it: “grue.”

English has 11 basic color words (red, orange, yellow, green, blue, purple, black, white, pink, grey, and brown), but this set of colors is far from universal. Languages are usually described as having anywhere from two to (approximately) twelve basic color terms. A survey of 20 languages in the 1960s found that color words in languages followed a surprisingly predictable pattern. Languages with only two basic color terms divided colors into “black” and “white,” with darker colors falling under the “black” category and lighter colors falling under the “white” category. In languages with three basic color terms, the third was always “red.” Fourth and fifth color terms were generally either “green” or “yellow.” Only with six color words was “blue” included, followed by “brown” at seven. Languages with higher numbers of color terms included some combination of “purple”, “pink”, “orange”, and “grey”. A few languages make a further division, such as Russian having two separate words for lighter blues and darker blues (much like the difference between red and pink in English). More recent data from larger and more diverse samples of languages (the World Color Survey included over 100) show that the patterns above largely hold true, although there are some exceptions and variability, including substantial variability between speakers of the same language.
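
To make the implicational pattern above concrete, here is a minimal sketch in Python. The staging below is a deliberately simplified, illustrative encoding of the Berlin and Kay-style hierarchy just described (not a reproduction of any actual survey data), and the checking function is a hypothetical helper written for this article.

```python
# A simplified encoding of the implicational color-term hierarchy:
# a language with N basic terms is predicted to fill in earlier
# stages before drawing on later ones.
HIERARCHY = [
    {"black", "white"},                    # stage 1: dark vs. light
    {"red"},                               # stage 2
    {"green", "yellow"},                   # stages 3-4 (either order)
    {"blue"},                              # stage 5
    {"brown"},                             # stage 6
    {"purple", "pink", "orange", "grey"},  # stage 7 (any combination)
]

def consistent_with_hierarchy(terms: set[str]) -> bool:
    """Check whether a set of basic color terms could be built by filling
    in the hierarchy from the top down: no term from a later stage should
    appear unless every earlier stage is fully represented."""
    remaining = set(terms)
    for i, stage in enumerate(HIERARCHY):
        present = stage & remaining
        remaining -= stage
        last_stage = (i == len(HIERARCHY) - 1)
        if remaining and present != stage and not last_stage:
            return False
    return not remaining  # reject unknown terms

print(consistent_with_hierarchy({"black", "white", "red"}))   # True
print(consistent_with_hierarchy({"black", "white", "blue"}))  # False: blue before red/green/yellow
print(consistent_with_hierarchy(
    {"black", "white", "red", "green", "yellow", "blue", "brown"}))  # True
```

Under this toy check, a two-term “black/white” system or the full seven-term inventory passes, while an inventory that adds “blue” before “red,” “green,” and “yellow” is flagged – which is exactly the kind of generalization the survey data described above suggest.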

With all this variety in how languages describe color, it’s hard not to wonder if language affects how we actually see color. Does the sky still look blue if you don’t have a word for it? Would a rainbow have different numbers of stripes depending on the language you speak? The answer (not to mention the question itself) is complicated. Understanding and measuring exactly how different cultures conceptualize color isn’t necessarily a straightforward task. Even defining what constitutes a color term in the first place can be tricky. It’s not always clear if words refer directly to color, or to other visual attributes, or similarity to some other object. Color terms can also be limited to specific domains (such as blonde primarily being used to describe hair). Some languages, such as Walpiri (spoken in central Australia), may not even have a word for the concept of “color” in the first place. All of that having been said, many researchers have attempted to untangle the relationship between language and color perception experimentally.

Several experiments have shown differences between speakers of different languages in tasks related to color perception and memory. For example, speakers of Russian (which has different basic color words for light and dark blue) are faster at identifying the difference between pairs of colors that straddle this boundary than pairs that don’t. When presented with the same stimuli, English speakers show no such differences. Similar results have been found in Himba (a language spoken in Namibia that doesn’t distinguish between green and blue) and Korean (which has a different boundary for green than English). Some recent work has suggested that these effects may be more pronounced in the right visual field, corresponding to the left hemisphere of the brain (where language is primarily processed). Additionally, these effects tend to subside when participants perform a verbal distraction task (such as mentally rehearsing an eight-digit number), suggesting that linguistic mental resources are required for the effects to occur.
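
To make the logic of these reaction-time comparisons concrete, here is a minimal sketch in Python. The trial data and the size of the effect are entirely made up for illustration; only the shape of the analysis (cross-boundary versus within-category mean reaction times) reflects the kind of comparison the studies above rely on.

```python
from statistics import mean

# Hypothetical trials: each pair of blue shades is tagged by whether the two
# shades fall on opposite sides of the Russian siniy/goluboy boundary, along
# with an invented reaction time in milliseconds.
trials = [
    {"cross_boundary": True,  "rt_ms": 610},
    {"cross_boundary": True,  "rt_ms": 585},
    {"cross_boundary": True,  "rt_ms": 630},
    {"cross_boundary": False, "rt_ms": 690},
    {"cross_boundary": False, "rt_ms": 705},
    {"cross_boundary": False, "rt_ms": 672},
]

cross = mean(t["rt_ms"] for t in trials if t["cross_boundary"])
within = mean(t["rt_ms"] for t in trials if not t["cross_boundary"])

# The reported Russian-speaker advantage shows up as faster responses when the
# two shades straddle the lexical boundary; English speakers show no such gap.
print(f"cross-boundary mean RT:  {cross:.0f} ms")
print(f"within-category mean RT: {within:.0f} ms")
print(f"category advantage:      {within - cross:.0f} ms")
```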

On the other side of the debate, universalists have argued that small differences in highly specific experimental tasks don’t necessarily indicate any important difference in everyday thought and perception. We can still see differences between colors within a category, even if we don’t have different words for them. Steven Pinker pointed out that “No matter how influential language might be, it would seem preposterous to a physiologist that it could reach down into the retina and rewire the ganglion cells.” Also, while the number of color categories is highly variable from language to language, they tend to cluster around similar hues. Exact boundaries and “best examples” for specific color categories may differ across languages, but tend to be much more similar than would be expected by chance. Given the infinite number of ways it would be theoretically possible to divide up the color space, this consistency is remarkable. Even infants and children who can’t yet reliably label color categories show categorical perception of color that largely follows these patterns, hinting at some universal aspects of color perception.

This brings us to numbers. Number is another area where language differences can be striking. As with color, number shows a mixture of universal and relativist findings. While fairly unusual, there are languages that have limited to no words for expressing exact numbers. Pirahã, spoken by a small group of hunter-gatherers in the Amazon, is perhaps the most extreme case. It has been described as having no words for exact quantities at all—only approximate terms for “one,” “two,” and “many.” Mundurukú, also spoken in the Amazon, only has numbers up to five (and even “four” and “five” are not always exact). Perhaps unsurprisingly, speakers of these languages perform differently on mathematical tasks than people from cultures that regularly use large, exact numbers.

In a typical experiment, several nuts were placed in a can and then removed one at a time. After each nut was removed, participants were asked whether any nuts remained in the can. For quantities above about 3, Mundurukú participants often didn’t respond accurately. However, in estimation tasks (like comparing sets of dots), they performed much like speakers of other languages. Estimates tended to follow a logarithmic pattern in accordance with Weber’s Law (which predicts that differences of the same magnitude appear smaller for larger sets). That is, sets of 4 and 8 dots would be easier to distinguish than, say, sets of 34 and 38 dots. Even for sets as large as 80 dots, Mundurukú speakers showed largely similar performance to French controls in some estimation tasks. Patterns of estimation consistent with Weber’s law have also been observed in children, infants, and even non-human animals, suggesting humans may have an innate estimation ability with ancient evolutionary origins.
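
Since Weber’s Law is doing real work in this argument, a quick back-of-the-envelope sketch in Python shows why 4 versus 8 dots is an easier contrast than 34 versus 38. The 0.2 “easy/hard” cutoff is an arbitrary illustrative value chosen for this example, not a figure taken from the studies above.

```python
def weber_ratio(a: int, b: int) -> float:
    """Weber's Law: discriminability tracks the relative difference
    between two quantities, not the absolute difference."""
    small, large = sorted((a, b))
    return (large - small) / small

# Same absolute difference of 4 dots, very different relative difference.
for pair in [(4, 8), (34, 38), (40, 80)]:
    ratio = weber_ratio(*pair)
    # Arbitrary illustrative cutoff: large relative differences are easy to
    # tell apart by estimation alone, small ones are hard.
    verdict = "easy" if ratio >= 0.2 else "hard"
    print(f"{pair}: relative difference = {ratio:.2f} -> {verdict} to distinguish")
```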

Overall, findings seem to suggest universal patterns for quantity estimation, but number words may be necessary for tasks involving exact amounts above about three. However, the picture is not entirely clear. Even Mundurukú speakers who had learned some Portuguese showed similar patterns of results to monolinguals, so culture, rather than just language, may play a role in how we think about numbers. On the other hand, there is also research suggesting that both Pirahã and Mundurukú speakers who have received mathematical education (either through schooling or informal instruction) show higher accuracy in some quantitative tasks. It’s also worth remembering that some of these findings are controversial due to limited sample size and the difficulties of controlled experiments in remote settings.

Differences between languages extend further than what they have words for or how they label specific concepts. Can a language’s grammar affect how we think? Many languages assign grammatical gender to nouns, with everyday objects like tables, cups, and bridges being classified, often quite arbitrarily, as masculine or feminine. When asked to assign a voice to various inanimate objects, people tend to choose voices that match the grammatical gender of the noun. For example, Spanish speakers would be more likely to assign a female voice to a house (la casa), which is feminine in Spanish, and a male voice to a book (el libro), which is masculine in Spanish. However, many other studies using less direct tasks, such as those measuring memory or associations, have shown mixed results, so it’s not clear how pervasive the effect of grammatical gender on our thinking really is.

One recent study suggests that grammar could also have an effect on memory. Languages are often classified as predominantly right-branching (with the most important information at the beginning of a sentence or phrase) or left-branching (with the most important information at the end). English primarily uses right-branching structures (such as “turtles who live in the sewers”), but we do use some left-branching patterns as well (“sewer-dwelling turtles”). Both phrases describe a kind of turtle, but differ as to whether we find that out at the beginning or end of the phrase.

This study found that speakers of left-branching languages remembered early items in lists of words or numbers more accurately than late items, while speakers of right-branching languages showed the opposite pattern and instead remembered late items more accurately. As with other areas of study, caution is warranted and results have to be interpreted carefully. Only eight languages were covered in this study, and more research is needed to determine whether these findings generalize to a larger sample of languages. If these results do hold up to scrutiny over time, they suggest that language could have a subtle but pervasive effect on basic thought processes like memory.

So to sum up, despite decades of extensive study, the exact details of how language may affect thought remain slippery and elusive. Language doesn’t appear to constrain how we think, but it does seem to be able to influence our thoughts more subtly. It may nudge us towards certain ways of thinking, make certain distinctions easier to notice, or facilitate our ability to grasp certain concepts. Given the diversity of the world’s languages (many estimates place the number around 7,000) and the difficulty of measuring thought, debate about the specifics of the relationship between language and thought is likely to continue for many decades to come.

Expand for References

Absolute Direction
Gugu Yimithirr
Tzeltal

Review Articles
Linguistic Relativity
Effect of Language on Perception
Wikipedia

Pinker Quote
“Sometimes, it is not easy…”

Early Language Exposure
Risk of Language Deprivation

Infant sound perception
Cross Language Speech Perception
Becoming a Native Listener

Time
How Languages Construct Time
Absolute Direction and Time
Do English and Mandarin Speakers Think About Time Differently
English and Mandarin Time in 3D
Immediate and Chronic Influence of Spatio-temporal metaphors

Color
Language, Thought, and Color
Color Naming Universals
Focal Colors are Universal After All
No Universals in Color Perception
Russian Blue Perception
Categorical Perception of Color in Korean
Human Color Perception
English and Himba Toddlers
Color Perception in Infants
Color Terms
Ancient Greek
Ancient Greek (continued)
Xenophanes
Lazarus Geiger
Wine Dark Sea

Numbers
Exact and Approximate Arithmetic
Effect of Education
Number as Cognitive Technology
Quantity Recognition Among Speakers of an Anumeric Language
Weber’s Law
Independence of Language and Mathematical Reasoning

Gender
Grammatical Gender and Linguistic Relativity
Spanish Grammatical Gender

Memory
Word Order Predicts Working Memory

Number of Languages in the World
How Many Languages?

What is the Record for Most Languages Spoken By One Person?

If you speak one language, you’re a normal, functioning human being. If you speak two, you’re bilingual. If you speak three, you’re an overachiever and everyone hates you. But what about if you speak 10? Or 20? Or 30? Well, then you’re considered a polyglot or a hyperpolyglot depending on how awesome a word you think you deserve to describe your mastery of the spoken word. (Though if we’re being really technical, while colloquially these two terms are often used pretty interchangeably, a polyglot is usually used to describe someone who speaks more than 6 languages, whereas a hyperpolyglot is used to describe someone who speaks over 12. And if you’re wondering, the term polyglot is Greek in origin, coming from the Greek polyglōttos, roughly translating to “many tongued”, which is probably a killer inclusion in a pick-up line if you’re such an advanced linguist.)

Now you’d think that discovering the person who spoke the most languages would be as simple as searching for it on the Guinness World Records site or a quick Google search, but alas, even the almighty Guinness and Google don’t know the answer to this question, which is perhaps why Patron Kyle posed the question to us in the first place.

So why is this such a difficult question to answer? The problem seems to lie in the fact that the definition of what it takes to be able to “speak” a language varies greatly. Is a person who can hold a basic conversation in 100 languages more impressive than a person who has mastered reading and writing in 30? Is being able to speak 10 different regional dialects the same as being able to speak 10 different languages? How different would those dialects need to be for the distinction to be made? And is someone who becomes fluent in over 200 languages in their lifetime, but at any given time can only speak a couple dozen of them fluently, more worthy of the crown than a compatriot who learned and maintained fluency in just 50?

It’s questions like these that make it very unlikely that we’ll ever truly know the identity of the most gifted polyglot in history, but we have a fairly good idea of a handful of people who should at least be considered for the crown.

In terms of living people, a candidate for the record holder is Ziad Fazah, who reportedly speaks around 60 languages, though the exact number isn’t clear. That said, in one television appearance, Ziad was stumped by basic questions in several languages he’d previously claimed to be fluent in. That’s not to detract from the fact that Ziad has proven he’s able to speak a pretty ridiculous number of languages and he may have studied and once been fluent in the languages he was stumped in, and simply forgotten them; but it throws into question his claim of being able to currently speak 60 or more languages at this very moment.

A more verifiable living polyglot is one Alexander Arguelles, who has a proven working understanding of around 50 languages. Again, the exact number isn’t clear; even in interviews, Alexander very rarely puts a hard figure on the number of languages he can speak and understand, stating only that “Now, I can read about three dozen languages and speak most of them fluently, and I’ve studied many more.”

In Alexander’s case, he puts his amazing gift for language down to thousands of hours of study and work, a sentiment echoed by other living hyperpolyglots, such as Timothy Doner, who speaks over 20 different languages. In his case, though, he’s still in his mid-20s, so he has the potential to someday speak and understand as many, if not more, languages than Alexander and perhaps become the greatest of all time. It should be noted that Timothy also refuses to bother learning so-called “easy” languages like Spanish, in lieu of learning more difficult ones like Urdu, Russian, Arabic, Hebrew, Yiddish, etc.

Looking historically, in the book Babel No More: The Search for the World’s Most Extraordinary Language Learners, the 18th and 19th century Cardinal Giuseppe Caspar Mezzofanti is presented as perhaps one of the most accomplished historic polyglots, reportedly being able to speak or understand as many as 72 languages. Again, however, no one is sure of the exact figure. Regardless, Cardinal Mezzofanti’s skills with language were legendary in his time. The wide range in the number of languages Mezzofanti was reported to have spoken stems from the fact that he spoke many different dialects, which some scholars argue were so different in nature that they should technically count as entirely separate languages, while others aren’t willing to give him that credit. Even discounting his dialects, Mezzofanti was known to be able to speak Turkish, Arabic, German, Chinese, Russian, and around two dozen other languages with, to quote the book, “rare excellence”. Considering he lived in the 18th and 19th centuries, the fact that he even came into contact with this many languages and found adequate books on the subjects to study, let alone learned to speak the languages fluently enough to converse with people in them, is hugely impressive.

A slightly more recent example of a hyperpolyglot is the 19th century-born Emil Krebs, who spoke a reported 65 different languages. Fun fact: Krebs took great enjoyment in the fact that he could translate the phrase “kiss my ass” into 40 different languages. When told that it’d be impossible to learn every language on Earth, Krebs asked which language would be the hardest to learn and then mastered the hell out of that one on principle. If you’re curious, the language Krebs eventually settled on as the hardest was Chinese. Krebs’ affinity for language was so great that when he died in 1930, his brain was sent off for scientific study, where it presumably exploded into a cloud of foreign expletives the second a researcher cut into it.

Yet another of the world’s top polyglots was child prodigy William James Sidis, whose famed psychologist father and doctor mother (one of the few women in the world in the 19th century to hold such a medical degree) used him as a bit of a guinea pig to prove the father’s methods of more or less creating a child prodigy from scratch. To help facilitate this, his mother actually quit her medical practice and, together with her husband, more or less trained the child from day one, with the couple successfully teaching him the English alphabet by a few months old and having him start speaking in under six months.

His parents were proud of their son, but possibly more proud that the father, Boris’s, techniques for teaching his son were genuinely working, and they constantly published academic papers showing off their successes. By two years old, William was reading the New York Times and tapping out letters on a typewriter from his high-chair – in both English and French. He wrote one such letter to Macy’s, inquiring about toys.

Unfortunately, his time to act like a child had already passed young William by. Studying seven different languages (French, German, Latin, Hebrew, Greek, Russian, and one he made up himself – Vendergood) and learning a high school curriculum at seven left Billy precious little time to act his age. His parents wanted the whole world to know about their prodigy of a son, as well as their participation in all of it.

He was accepted into Harvard at age nine, but the university refused to allow him to attend due to him being “emotionally immature.” His parents took this perceived slight to the media and William was front page news in the New York Times.  This gave William the notoriety and fame he was not prepared for. Tufts College, though, did admit him and he spent his time correcting mistakes in math books and attempting to find errors in Einstein’s theory of relativity.

His parents pressed Harvard further, and when William turned eleven, the university relented. William Sidis became a student at one of the most prestigious universities on Earth at an age when most kids were perfectly content playing stickball and not worrying about giving a dissertation on the fourth dimension.

This is not hyperbole: on a freezing Boston January evening in 1910, hundreds gathered to hear the boy genius William Sidis in his first public speaking engagement, a talk about fourth-dimensional bodies. His speech, and the fact that it went over most of the audience’s heads, became national news.

Reporters followed William everywhere on campus. He rarely had a private moment. He graduated from Harvard at the age of 16, cum laude. Despite his success, Harvard was not a happy experience for young Billy.  According to Sidis biographer Amy Wallace, William once admitted to college students nearly double his age that he had never kissed a girl. He was teased and humiliated for his honesty. At his graduation, he told the gathered reporters that, “I want to live the perfect life. The only way to live the perfect life is to live it in seclusion. I have always hated crowds.”

After leaving Harvard, society and his parents expected great things from William. He briefly studied and taught mathematics at what later would become known as Rice University in Houston, Texas. His fame and the fact that he was younger than every student he taught made it difficult on him. He resigned and moved back to Boston.

He attempted to get a law degree at Harvard, but he soon withdrew from the program. William, brilliant as he was, struggled with his own self-identity. In May 1919, he was arrested for being a ringleader of an anti-draft, communistic-leaning demonstration. He was put in jail and that’s where he would meet the only woman he would love – an Irish socialist named Martha Foley. Their relationship was rather complicated, mostly due to William’s own declaration of love, art, and sex as agents of an “imperfect life.”

When in court, he announced that he didn’t believe in God, that he admired a socialist form of government, and many of the world’s troubles could be traced back to capitalism.  He was sentenced to eighteen months in prison.

Fortunately for him, his parents’ influence kept him out of prison, but William decided he’d had enough of “crowds” and wanted his “perfect life.” He moved from city to city, job to job, always changing his name to keep from being discovered. During this time, it’s believed he wrote dozens of books under pseudonyms (none of which were particularly widely read), including a twelve-hundred-page work on America’s history and a book entitled “Notes on the Collection of Streetcar Transfers,” an extremely in-depth look at his hobby of collecting streetcar transfers. It was described by one biographer as the “most boring book ever written.”

And just for fun, here’s a small excerpt to give you a taste:

“Stedman transfers: This classification refers to a peculiar type turned out by a certain transfer printer in Rochester, N. Y. The peculiarities of the typical Stedman transfer are the tabular time limit occupying the entire right-hand end of the transfer (see Diagram in Section 47) and the row-and-column combination of receiving route (or other receiving conditions) with the half-day that we have already discussed in detail.”

We’re pretty sure Sidis’ real intent with this book was to once and for all cure insomnia; it’s just that this intent went over the rest of us mere mortals’ heads.

In any event, seclusion fit William just fine. He wanted nothing more than for him and his genius to be left alone.

In 1924, with William no longer talking to his parents and out of contact with anyone who truly cared for him, the press caught up to him. A series of articles was printed describing the mundane jobs and meager living conditions of the supposed genius William Sidis. Ashamed and distressed, he withdrew further into the shadows. But the public remained infatuated with the former boy wonder’s apparently wasted talents. In 1937, The New Yorker printed an article titled “April Fool!” which described William’s fall from grace in humiliating detail.

The story resulted from a female reporter who had been sent to befriend William. The article described William as “childlike” and recounted a story about how he wept at work when given too much to do. Sidis sued The New Yorker for libel and the case went all the way to the Supreme Court before eventually being settled seven years later. But the damage had been done. William Sidis, for all the potential he showed as a child prodigy, would never become the man he was supposed to be.

On a summer day in July 1944, William’s landlady found him unconscious in his small Boston apartment. He had had a massive stroke, his amazing brain dying on the inside. He never regained consciousness and was pronounced dead at the age of 46 with a picture of the now-married Martha Foley in his wallet.

So how many languages did he speak? During his life, he became fluent in about 40 languages, though how many he remained fluent in at a given point isn’t clear.

Moving on from the sad tale of Sidis, perhaps the greatest number of languages ever claimed to be spoken by a single person is over 100. Yes, 100, with two zeroes. This claim was made by one Sir John Bowring, the 4th Governor of Hong Kong. In his life, Bowring was reportedly familiar with 200 languages and was supposedly able to converse in over 100 of them. However, other than the fact that he and others close to him claimed he could speak this many languages, little else has ever been recorded about how proficient he was in any of them at a given point in time. Although, seeing as he lived his entire life as an obsessive student of language, and given what his compatriots said of him, it is at least generally accepted that he was likely one of the world’s most successful polyglots. If the claims are true, maybe even the most accomplished in history.

Bonus Fact:

Languages obviously need not involve the spoken word, with the various sign languages being perhaps the first thing people think of when hearing that statement. But it turns out there also exist languages made up entirely of whistles. Perhaps the most talked about one is Silbo Gomero – a whistling language “spoken” on La Gomera in the Canary Islands (which incidentally may have been named after dogs, and certainly wasn’t named after birds as you might have expected from the name Canary Islands).

The language was used by the Guanches—the aboriginal people of the Canary Islands—long before Spanish settlement. It is a whistled form of the original Guanche language, which died out around the 17th century. Not much is known about the spoken language of these people save for a few words recorded in the journals of travellers and a few others that were integrated into the Spanish spoken on the Canary Islands. It is believed that spoken Guanche had a simple phonetic pattern that made it easily adaptable to whistling. The language was whistled across the Canary Islands, popular on Gran Canaria, Tenerife, and El Hierro as well as La Gomera.

It’s likely that the first Guanches were from North Africa and brought the idea of a whistled language with them, as there are several different whistling languages that have been recorded there.  From the time of Guanche settlement, the language evolved into Guanche whistling, and then to silbo.

Today, silbo is a whistled form of Spanish. It was adopted in the 16th century after the last of the Guanches adapted their whistled language to Spanish. The language works by replicating timbre variations in speech. One study showed that silbo is recognized in the “language center” of the brain by silbo whistlers, though regular Spanish speakers who were not silbo whistlers simply recognized it as whistling.

As to why such a version of a language would originally be developed at all, it’s thought that silbo was developed as a form of long distance communication. The island of La Gomera is awash with hills, valleys, and ravines. A whistle can travel up to two miles across such a landscape, and the whistler doesn’t have to expend as much energy as he would by hiking or shouting and, in the latter case, the whistled message is heard further away besides. When La Gomera was largely an agricultural island, crops and herds of animals like sheep would be spread out across the hills, and herders would use the language to communicate with one another across these large distances.

Speaking via whistling still saw widespread use as late as the 1940s and 50s. Unfortunately, economic hardship around the 1950s sent silbo into decline, as many of the whistlers were forced to move away to find better opportunities. The introduction of roads and the invention of the mobile phone also contributed to the decline, as they made silbo largely unnecessary. By the end of the twentieth century, the whistled language was dying out.

However, as it is an integral part of the island’s history, there was interest in reviving the language to preserve the culture, and today every primary school child on La Gomera is required to learn the whistling language.

Why Do We Call a Software Glitch a ‘Bug’?

“It’s not a bug, it’s a feature.” At one point or another we’ve all heard someone use this phrase or a variation thereof to sarcastically describe some malfunctioning piece of equipment or software. Indeed, the word “bug” has long been ubiquitous in the world of engineering and computer science, with “debugging” – the act of seeking out and correcting errors – being an accepted term of art. But why is this? How did an informal word for an insect become synonymous with a computer error or glitch?

According to the most often-repeated origin story, in 1947 technicians working on the Harvard Mark II or Aiken Relay Calculator – an early computer built for the US Navy – encountered an electrical fault, and upon opening the mechanism discovered that a moth had flown into the computer and shorted out one of its electrical relays. Thus the first computer bug was quite literally a bug, and the name stuck.

But while this incident does indeed seem to have occurred, it is almost certainly not the origin of the term, as the use of “bug” to mean an error or glitch predates the event by nearly a century.

The first recorded use of “bug” in this context comes from American inventor Thomas Edison, who in a March 3, 1878 letter to Western Union President William Orton wrote: “You were partly correct. I did find a “bug” in my apparatus, but it was not in the telephone proper. It was of the genus “callbellum”. The insect appears to find conditions for its existence in all call apparatus of telephones.”

The “callbellum” Edison refers to in the letter is not an actual genus of insect but rather an obscure Latin joke, “call” referring to a telephone call and bellum being the Latin word for “war” or “combat” – implying that Edison was engaged in a struggle with this particular hardware glitch. In a letter to Theodore Puskas written later that year, Edison more clearly defined his use of the word: “It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that “Bugs”—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.”

Where Edison himself got the term is not known, though one theory posits that it originated from a common problem plaguing telegraph systems. For almost 40 years after their introduction, electric telegraphs were limited to sending a single message at a time over a single wire. As the popularity of telegraphy rose through the mid-19th century, this limitation became a serious problem, as the only way to allow more messages to be sent was to install more telegraph wires – an increasingly inelegant and expensive solution. This led inventors around the world to seek out methods for transmitting multiple signals over a single wire – a practice now known as multiplexing. By the 1870s several inventors had succeeded in perfecting workable multiplex or “acoustic” telegraphs, which generally worked by encoding each individual signal at a particular acoustic frequency. This allowed multiple signals to be sent along a single telegraph wire, with only a receiver tuned to the sending frequency of a particular signal being able to extract that signal from among the others. Among the many inventors to develop multiplex telegraphs were Alexander Graham Bell and Elisha Gray, whose work on sending acoustic frequencies over telegraph wires would eventually lead them to discover the principles that would be used for the telephone.
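
The core idea behind those acoustic telegraphs – give each message its own frequency, mix everything onto one wire, and let each receiver pick out only its own frequency – is easy to sketch in modern terms. The short Python example below (using NumPy, with made-up frequencies, on/off message patterns, and a detection threshold chosen purely for illustration) mixes two tone-keyed messages on a simulated wire and recovers each one by correlating against its assigned frequency.

```python
import numpy as np

FS = 8000                             # samples per second on the simulated "wire"
SYMBOL = FS // 10                     # each on/off symbol lasts 0.1 seconds
CHANNELS = {"A": 440.0, "B": 700.0}   # made-up sending frequencies in Hz

def transmit(bits: str, freq: float) -> np.ndarray:
    """Encode a string of 0/1 symbols as an on/off tone at one frequency."""
    t = np.arange(SYMBOL) / FS
    tone = np.sin(2 * np.pi * freq * t)
    return np.concatenate([tone * int(b) for b in bits])

def receive(wire: np.ndarray, freq: float) -> str:
    """Recover one channel by measuring how strongly each symbol-length chunk
    of the shared signal correlates with that channel's frequency."""
    t = np.arange(SYMBOL) / FS
    ref = np.sin(2 * np.pi * freq * t)
    bits = ""
    for start in range(0, len(wire), SYMBOL):
        chunk = wire[start:start + SYMBOL]
        energy = abs(np.dot(chunk, ref)) / SYMBOL
        bits += "1" if energy > 0.1 else "0"
    return bits

# Two "operators" share a single wire by using different frequencies.
wire = transmit("10110", CHANNELS["A"]) + transmit("01101", CHANNELS["B"])

print(receive(wire, CHANNELS["A"]))  # 10110
print(receive(wire, CHANNELS["B"]))  # 01101
```

Each receiver only “hears” the tone it is tuned to, which is, in spirit, what the acoustic telegraph’s mechanically tuned receivers did.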

In any event, while these early multiplex telegraphs worked reasonably well, they had a tendency to generate phantom signals in the form of loud “clicks” that reminded many telegraph operators of the sound of an insect. Thomas Edison himself patented an electronic workaround to this problem in 1873, which he referred to as a “bug catcher” or “bug trap” – suggesting this phenomenon as a likely origin for the term.

Another hypothesis points to the word “bug” being derived from the Middle English bugge, meaning “a frightening thing” or “monster.” This root is also the source of the English words bogeyman, bugaboo, and  bugbear – the latter originally referring to a malevolent spirit or hobgoblin but today used to mean a minor annoyance or pet peeve. Advocates for this hypothesis therefore posit that “bug” in this context was used in much the same manner as “gremlins,” the mythical goblins that WWII aircrews blamed for malfunctions aboard their aircraft.

Whatever the case, Edison’s frequent use of the term in his letters and notebooks led to it being widely repeated in the press, with a March 11, 1889 article in the Pall Mall Gazette reporting: “Mr. Edison…had been up the two previous nights working on fixing ‘a bug’ in his phonograph—an expression for solving a difficulty, and implying that some imaginary insect has secreted itself inside and is causing all the trouble.”

The habit of Edison and his so-called “insomnia squad” of staying up all night to fix particularly stubborn technical problems was of particular fascination to the press, with Munsey’s Magazine reporting in 1916: “They worked like fiends when they [were] ‘fishing for a bug.’ That means that they are searching for some missing quality, quantity, or combination that will add something toward the perfect whole.”

The term was first formally standardized by engineer Thomas Sloane in his 1892 Standard Electrical Dictionary, which defined a “bug” as: “Any fault or trouble in the connections or working of electric apparatus.”

Three years later Funk and March’s Standard Dictionary of the English Language defined the term for the general public as: “A fault in the working of a quadruplex system or in any electrical apparatus.”

Thus by the early 20th century the term was well-established in engineering circles, and soon began making its way into everyday usage. One notable early appearance was in a 1931 advertisement for Baffle Ball – the world’s first commercially-successful pinball machine – which proudly proclaimed “No bugs in this game.” Science fiction writer Isaac Asimov further popularized the term in his 1944 short story Catch That Rabbit, writing: “U.S. Robots had to get the bugs out of the multiple robots, and there were plenty of bugs, and there are always at least half a dozen bugs left for the field-testing.”

Despite the term having been in use for over 70 years, it was not until the aforementioned moth incident in 1947 that “bug” would become inextricably associated with the field of computer science. The insect in question was discovered lodged in Relay #70 of the Harvard Mark II in the early morning hours of September 9. Later that day the night shift reported the incident to Navy Lieutenant Grace Hopper, a computing pioneer who would later go on to develop FLOW-MATIC, a direct ancestor of COBOL and among the very first high-level programming languages.

In any event, at 3:45 PM Hopper taped the slightly crispy moth into the computer’s logbook, gleefully noting beside it: “First actual case of bug being found.”

As British cybersecurity expert Graham Cluley notes, Grace Hopper’s whimsical logbook entry clearly indicates that the term “bug” was well-known at the time, but:

“…while it is certain that the Harvard Mark II operators did not coin the term ‘bug’, it has been suggested that the incident contributed to the widespread use and acceptance of the term within the computer software lexicon.”

The historic logbook page, complete with preserved moth, survives to this day in the collection of the Smithsonian’s National Museum of American History in Washington, DC, though it is not currently on public display. And in commemoration of the infamous incident, September 9 is celebrated by computer programmers around the world as “Tester’s Day” – a reminder of the vital role played by those who tirelessly hunt and slay the various glitches, bugs, gremlins, and ghosts in every machine.

Bonus Fact

 While we tend to think of software bugs as minor annoyances and inconveniences at worst, depending on what a piece of software is controlling, they can have serious real-life consequences. Among the most notable examples of this is the tragic case of the Therac-25, a computer-controlled cancer therapy machine produced by Atomic Energy of Canada Limited starting in 1982. The unit contained a high-energy linear electron accelerator which could either be aimed directly at the patient or at a retractable metal target, generating an x-ray beam that could reach tumours deeper inside the body. The machine could also be operated in “field light” mode, in which an ordinary light beam was used to line up the electron or x-ray beam on the patient.

While AECL had achieved a perfect safety record with its earlier Therac-6 and Therac-20 machines through the use of mechanical interlocks and other physical safety features, the Therac-25 dispensed with these entirely, its designers relying solely on the machine’s control software to ensure safety. Unfortunately, this software contained two serious bugs which soon resulted in tragedy. The first allowed the machine to be set to x-ray mode without the metal x-ray target being in place, while the second allowed the electron beam to be activated while the machine was in field light mode. In both cases, the result was patients being bombarded with an electron beam roughly 100 times more powerful than intended. The initial effect of this was a powerful sensation of electric shock, which led one patient, Ray Cox, to leap from the table and run from the treatment room. Between 1985 and 1987 six patients in Canada and the United States received massive radiation overdoses, resulting in severe radiation burns, acute radiation poisoning, and – in the case of three of the patients – death.

A subsequent investigation revealed the truly shocking depths of AECL’s negligence in developing the Therac-25. While the two lethal bugs had been reported during the control software’s development, as the software was directly copied from the earlier Therac-6 and Therac-20 and these machines had perfect safety records, the report and bugs were ultimately ignored.

Of course, the earlier machines relied on mechanical interlocks for safety and their software was written to reflect this, leaving the Therac-25 control software with almost no built-in failsafes and no way of communicating potentially lethal errors to the operator. Even more disturbingly, the software was never submitted for independent review and was not even tested in combination with the Therac-25 hardware until the machines themselves were installed in hospitals. Indeed, throughout the Therac-25’s development cycle little thought appears to have been given to the possibility of software error leading to dangerous malfunctions, with a Failure Modes Analysis conducted in 1983 focusing almost exclusively on potential hardware failures. Software failure is mentioned only once in the report, with the probability of the machine selecting the wrong beam energy given as 10⁻¹¹ and the probability of it selecting the wrong mode as 4×10⁻⁹ – with no justification given for either number. This absolute confidence in the software ultimately served to prolong the crisis. Following the first two overdose incidents in 1985, AECL was ordered by the FDA to investigate and submit a solution. Refusing to believe that the software could be to blame, AECL concluded that the issue lay with a microswitch used to control the positioning of the machine turntable, and in 1986 submitted this fix to the FDA. This, of course, did nothing to solve the problem, leading to three further overdoses before the actual cause was finally tracked down.

Once the fault was uncovered, the FDA declared the Therac-25 “defective” and ordered AECL to develop a full suite of corrective modifications. These were all implemented by the summer of 1987, but no sooner had the Therac-25 been returned to service than another patient in Yakima, Washington, received a massive overdose, dying of radiation poisoning three months later. This incident was caused by yet another software error – a counter overflow – which caused the updated software to skip a critical safety step and withdraw the x-ray target from the electron beam. In the wake of the six incidents AECL was hit with multiple lawsuits by the families of the victims, all of which were settled out of court. Since then no further accidents have been reported, with the original Therac-25 units continuing to operate for nearly two decades without incident.
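
For the programmers in the audience, the counter overflow mentioned above is worth sketching, because the failure mode is so easy to reproduce. The Python below is a simplified illustration of the general bug class (a one-byte flag that is incremented rather than set, gating a safety check), not a reconstruction of the actual Therac-25 code, which was written in assembly and differed in its details.

```python
# Buggy pattern: a one-byte "check needed" flag is incremented on every pass
# instead of being set to a fixed non-zero value, and the safety check only
# runs when the flag is non-zero. Every 256th pass the counter wraps to zero
# and the check is silently skipped.

def mark_check_needed(flag: int) -> int:
    return (flag + 1) % 256   # 8-bit counter wraps around

def safety_check_runs(flag: int) -> bool:
    return flag != 0          # the check is gated on a non-zero flag

flag = 0
skipped_passes = []
for pass_number in range(1, 600):
    flag = mark_check_needed(flag)
    if not safety_check_runs(flag):
        skipped_passes.append(pass_number)

print(skipped_passes)  # [256, 512] -- the passes where the check never ran
```

The correction is reportedly as simple in hindsight as the bug: set the flag to a fixed non-zero value instead of incrementing it, so it can never wrap around to zero.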

The Therac-25 affair has become a seminal case study in safety and systems engineering, dramatically illustrating the dangers of blindly trusting pre-existing software and of not thoroughly testing hardware and software together as a complete system. It also serves as a stark reminder that in our modern, hyper-connected world, the effects of software are not limited to the inside of a computer; sometimes, they can slip out into the physical world – with devastating results.

Expand for References

McFadden, Christopher, The Origin of the Term ‘Computer Bug’, Interesting Engineering, June 12, 2020, https://interestingengineering.com/the-origin-of-the-term-computer-bug

Was the First Computer Bug A Real Insect? Lexico, https://www.lexico.com/explore/was-the-first-computer-bug-a-real-insect

Whyman, Amelia, The World’s First Computer Bug, Global App Testing, https://www.globalapptesting.com/blog/the-worlds-first-computer-bug-global-app-testing

Laskow, Sarah, Thomas Edison was an Early Adopter of the Word ‘Bug’, Atlas Obscura, March 16, 2018, https://www.atlasobscura.com/articles/who-coined-term-bug-thomas-edison

Magoun, Alexander and Israel, Paul, Did You Know? Edison Coined the Term “Bug”, IEEE Spectrum, August 1, 2013, https://spectrum.ieee.org/the-institute/ieee-history/did-you-know-edison-coined-the-term-bug

Leveson, Nancy and Turner, Clark, An Investigation of the Therac-25 Accidents, IEEE 1993, https://web.archive.org/web/20041128024227/http://www.cs.umd.edu/class/spring2003/cmsc838p/Misc/therac.pdf

Fabio, Adam, Killed by a Machine: the Therac-25, Hackaday, October 26, 2015, https://hackaday.com/2015/10/26/killed-by-a-machine-the-therac-25/

Is it pronounced “Jif” or “Gif”?

It is the single most profound question of the 21st Century, a debate which has dominated intellectual discourse for more than three decades. Some of the greatest minds and institutions in the world have weighed in on the issue, from top linguists and tech giants to the Oxford English Dictionary and even the President of the United States. Yet despite 30 years of fierce debate, controversy, and division, we are still no closer to a definitive answer: is it pronounced “gif” or “jif”?

At its face, the answer might seem rather straightforward. After all, the acronym G-I-F stands for Graphics Interchange Format. “Graphics” has a hard G, so G-I-F must be pronounced “ghif.” Case closed, right? Well, not quite. As is often the case, things aren’t nearly as simple as they might appear.

The Graphics Interchange Format was first introduced in June of 1987 by programmer Steve Wilhite of the online service provider Compuserve. The format’s ability to support short, looping animations made it extremely popular on the early internet, and this popularity would only grow over the next two decades, with the Oxford English Dictionary declaring it their ‘Word of the Year’ in 2012.

As its creator, Wilhite should be the first and final authority on the word’s pronunciation. So how does he think we should say it?

“Jif.”

Yes, that’s right: despite all arguments to the contrary, the creator of everyone’s favourite embeddable animation format insists that it is pronounced with a soft G. According to Wilhite, the word is a deliberate reference to the popular peanut butter brand Jif; indeed, he and his colleagues were often heard quipping “choosy developers choose JIF” – a riff on the brand’s famous slogan “choosy mothers choose JIF.” And he has stuck to his guns ever since. When presented with a Lifetime Achievement Award at the 2013 Webby Awards, Wilhite used his 5-word acceptance speech – presented, naturally, in the form of an animation – to declare: “It’s pronounced ‘jif’, not ‘gif’.”

In a subsequent interview with the New York Times, Wilhite reiterated his stance: “The Oxford English Dictionary accepts both pronunciations. They are wrong. It is a soft ‘G,’ pronounced ‘jif.’ End of story.”

While the debate should have ended there, language is a strange and fickle thing, and despite Wilhite’s assertions a large segment of the population continues to insist that the hard “G” pronunciation is, in fact, the correct one. In 2020 the programmer forum StackExchange conducted a survey of more than 64,000 developers in 200 countries, asking how they pronounce the acronym. A full 65% backed the hard G and 26% the soft G, with the remainder spelling out each individual letter – “G-I-F.” This seems to agree with a smaller survey of 1,000 Americans conducted by eBay Deals in 2014, in which hard G beat soft G 54% to 41%. However, as The Economist points out, people often base their pronunciation of new or unfamiliar words on that of similar existing words, and the prevalence of the hard or soft G varies widely from language to language. For example, Spanish and Finnish have almost no native soft G words, while Arabic almost exclusively uses soft Gs. Those in countries that predominantly use hard Gs make up around 45% of the world’s population and around 79% of the StackExchange survey respondents. Nonetheless, even when these differences are corrected for, hard G still narrowly beats out soft G by 44% to 32%.

In the wake of Wilhite’s Webby Award acceptance speech, many prominent figures and organizations have publicly come out in favour of the hard-G pronunciation. In April 2013 the White House launched its Tumblr account with a graphic boldly announcing that its content would include “Animated GIFs (Hard G),” while during a 2014 meeting with Tumblr CEO David Karp, U.S. President Barack Obama threw his hat into the ring, declaring: “[It’s pronounced GIF.] I’m all on top of it. That is my official position. I’ve pondered it a long time.”

Many backers of the hard-G pronunciation, like web designer Dan Cederholm, focus on the pronunciation of the acronym’s component words, with Cederholm tweeting in 2013: “Graphics Interchange Format. Graphics. Not Jraphics. #GIF #hardg”

However, this argument ignores the many other instances in which the pronunciation of an acronym does not line up with that of its component words. For example, while the As in “ATM” and “NATO” stand for “Automated” and “Atlantic,” respectively, we do not pronounce them as “Awe-TM” or “Nah-tow.” Many also point out that there already exist words such as “jiffy” in which the same sound is produced using a J, but this too ignores exceptions such as the now-archaic spelling G-A-O-L for “jail.”

So if common sense and everyday usage can’t settle the debate, then how about the rules of the English language? As noted by the good folks at Daily Writing Tips, words in which the G is followed by an e, i, or y – like giant, gem, or gym – are more often than not pronounced with a soft G, while all others are pronounced with a hard G. According to this rule, then, “G-I-F” should be pronounced the way Steve Wilhite originally intended: as “jif.” However, there are many, many exceptions to this rule, such as gift, give, anger, or margarine. In an attempt to clear up the matter, in 2020 linguist Michael Dow of the University of Montreal surveyed all the English words containing the letters “G-I” and grouped them according to pronunciation. The results seemed to indicate that the soft G is indeed more common, as many claim, with about 65% of these words using that pronunciation rather than the hard G. However, many of these soft-G words – like elegiac, flibbertigibbet, and excogitate – are rarely used in everyday communication. When the actual frequency of each word’s use is taken into account, the numbers of commonly used hard- and soft-G words come out roughly equal.
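
For the curious, here is a toy sketch in Python – our own illustration, not part of Dow’s study, with an invented function name and word list – of the “soft G before e, i, or y” heuristic described above. The point is simply how often the rule misfires on everyday words:

```python
# Toy illustration of the "G followed by e, i, or y is usually soft" heuristic.
# Not a linguistic tool; it only shows how leaky the spelling rule is.
def predict_g(word: str) -> str:
    """Guess 'soft' or 'hard' for the first G in a word using the e/i/y rule."""
    w = word.lower()
    i = w.find("g")
    if i == -1 or i + 1 >= len(w):
        return "hard"          # default when nothing follows the g
    return "soft" if w[i + 1] in "eiy" else "hard"

examples = {                   # actual pronunciation of the first G
    "giant": "soft", "gem": "soft", "gym": "soft",    # rule gets these right
    "gift": "hard", "give": "hard", "anger": "hard",  # rule wrongly predicts soft
    "margarine": "soft",                              # rule wrongly predicts hard
    "gif": "disputed",                                # the whole point of the debate
}

for word, actual in examples.items():
    print(f"{word:10s} rule says {predict_g(word):5s} actually {actual}")
```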

The fundamental problem with such rules-based approaches is that, unlike many other languages, English evolved rather chaotically, without the guidance of a central regulatory authority like the Académie Française. Consequently, English has little in the way of a consistent set of pronunciation rules, and the pronunciation of any given word depends largely on its specific etymology, common usage, or even the geographic region where it is spoken. Thus, as far as the gif/jif debate is concerned, the linguistic jury is still very much out.

But of course, it wouldn’t be America without a major corporation weighing in on the issue. On May 22, 2013, shortly after Steve Wilhite received his Webby Award, Jif brand peanut butter took to Twitter with a post reading simply: “It’s pronounced Jif®.”

Seven years later, the brand teamed up with gif website GIPHY to release a limited-edition peanut-butter jar labeled “GIF” instead of “JIF.” In an interview with Business Insider, Christine Hoffman explained: “We think now is the time to declare, once and for all, that the word ‘Jif’ should be used exclusively in reference to our delicious peanut butter, and the clever, funny animated GIFs we all use and love should be pronounced with a hard ‘G.’”

Alex Chung, founder and CEO of Giphy, agreed, stating in a press release: “At Giphy, we know there’s only one ‘Jif’ and it’s peanut butter. If you’re a soft G, please visit Jif.com. If you’re a hard G, thank you, we know you’re right.”

Yet despite such efforts to force a consensus, the debate continues to rage and shows no signs of stopping anytime soon. While deferring to Steve Wilhite’s originally intended pronunciation might seem like the most logical solution, that just isn’t how language works – as John Simpson, Chief Editor of the Oxford English Dictionary, explains: “The pronunciation with a hard g is now very widespread and readily understood. A coiner effectively loses control of a word once it’s out there.”

As evidence, Simpson cites the example of “quark,” a type of subatomic particle. The word, derived from a passage in James Joyce’s 1939 novel Finnegans Wake, was coined in 1963 by physicist Murray Gell-Mann and originally rhymed with “Mark.” Over the years, however, the word evolved and is today pronounced more like “cork.”

Closer to the web, the creator of the world’s first wiki, WikiWikiWeb, Howard G. Cunningham (better known as Ward Cunningham), also pronounced that word differently than most people do today. As for the inspiration for the name: during a trip to Hawaii, Cunningham was informed by an airport employee that he needed to take the “wiki wiki” bus between the airport’s terminals. Not understanding what the person was telling him, he inquired further and found out that “wiki” means “quick” in Hawaiian; repeating the word adds emphasis, so “wiki wiki” means “very quick.”

Later, Cunningham was looking for a suitable name for his new web platform. He wanted something unique, as he wasn’t copying any existing medium, so something simple like how email was named after “mail” wouldn’t work. He eventually settled on calling it something to the effect of “quick web,” modelled after Microsoft’s QuickBASIC. But he didn’t like the sound of that, so he substituted the Hawaiian “wiki wiki” for “quick,” using the doubled form as it seemed to fit; as he stated, “…doublings in my application are formatting clues: double carriage return = new paragraph; double single quote = italic; double capitalized word = hyperlink.” The program was also extremely quick, so the “very quick” doubling worked in that sense as well.
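
Just for illustration, here is a minimal sketch in Python – not Cunningham’s actual code, and with invented function and variable names – of how those three doubling conventions could be turned into HTML:

```python
import re

# Minimal, illustrative renderer for the three "doubling" conventions described above:
# double carriage return = new paragraph, double single quote = italic,
# doubled capitalized word (CamelCase) = hyperlink. Not Cunningham's real implementation.
def render_wiki(text: str) -> str:
    # ''word'' becomes italics
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    # A doubled capitalized word such as WikiWikiWeb becomes a link to a page of that name
    text = re.sub(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b", r'<a href="\1">\1</a>', text)
    # A blank line (double carriage return) separates paragraphs
    paragraphs = re.split(r"\n\s*\n", text.strip())
    return "\n".join(f"<p>{p}</p>" for p in paragraphs)

print(render_wiki("Visit WikiWikiWeb for ''quick'' edits.\n\nNamed after the wiki wiki bus."))
```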

The shorter version of the name – calling a wiki just “wiki” instead of “Wiki Wiki” – came about because Cunningham’s first implementation of WikiWikiWeb named the original CGI script “wiki,” all lowercase and abbreviated in the standard Unix fashion. Thus, the first wiki URL was http://c2.com/cgi/wiki. People latched on to this and simply called it a “wiki” instead of a “Wiki Wiki.”

So how was “wiki” originally pronounced? “We-key,” rather than the way most people pronounce it today, “wick-ee.” However, given how widespread the mispronunciation has become – much as “gif” is now popularly pronounced differently than its creator intended – Cunningham and others have long since stopped trying to correct people.

Going back to gif vs. jif: in the end, the choice is entirely a matter of personal preference. As with all language, and as many a linguist will tell you, how you say a word ultimately doesn’t matter so long as you are understood – and few are going to be confused by this one. But if you’d like to pronounce it the way its creator intended, go with “jif”; if you’d rather follow the crowd like sheep, go with “gif.”


References

Locker, Melissa, Here’s a Timeline of the Debate About How to Pronounce GIF, Time Magazine, February 26, 2020, https://time.com/5791028/how-to-pronounce-gif/

Biron, Bethany, Jif is Rolling Out a Limited-Edition Peanut Butter to Settle the Debate Over the Pronunciation of ‘GIF’ Once and For All, Business Insider, February 25, 2020, https://www.businessinsider.com/jif-campaign-settle-debate-pronunciation-of-gif-2020-2

Gross, Doug, It’s Settled! Creator Tells Us How to Pronounce ‘GIF,’ CNN Business, May 22, 2013, https://www.cnn.com/2013/05/22/tech/web/pronounce-gif/index.html

GIF Pronunciation: Why Hard (G) Logic Doesn’t Rule, Jemully Media, https://jemully.com/gif-pronunciation-hard-g-logic-doesnt-rule/

Nicks, Denver, WATCH: Obama Takes a Stand in the Great GIF Wars, Time, June 13, 2014, https://time.com/2871272/obama-tumblr-gif-wars/

McCulloch, Gretchen, Why the Pronunciation of GIF Really Can Go Either Way, WIRED, October 5, 2015

Belanger, Lydia, How Do You Pronounce GIF? It Depends on Where You Live, Entrepreneur, June 20, 2017, https://www.entrepreneur.com/article/296674

Webb, Tiger, Is it Pronounced GIF or JIF? And Why Do We Care? ABC Radio National, August 9, 2018, https://www.abc.net.au/news/2018-08-10/is-it-pronounced-gif-or-jif/10102374

The post Is it pronounced “Jif” or “Gif”? appeared first on Today I Found Out.

Source