What Is Intelligence?
Just what is intelligence? Is it the ability to acquire knowledge from books or formal schooling?
Or might it be “street smarts” — practical intelligence of the kind we see in people who survive by their wits rather than by knowledge acquired in school?
Is it the ability to solve problems? Or is it the ability to adapt to the demands of the environment? Psychologists believe intelligence may be all these things and more. Though definitions of intelligence vary, a common thread running through them is that intelligence involves the ability to adapt to the environment.
Intelligence is the capacity to think and reason clearly and to act purposefully and effectively in adapting to the environment and pursuing one’s goals.
Perhaps the most widely used definition of intelligence is the one offered by psychologist David Wechsler (1975): “Intelligence is the global capacity of the individual to act purposefully, to think rationally, and to deal effectively with the environment.”
Perhaps there are multiple intelligences. Some theorists believe there are many different forms of intelligence; for example, some hold that the ability to recognize and manage emotions is a form of intelligent behavior, called emotional intelligence.
Before we explore theories of intelligence, let us consider the history and nature of intelligence testing in modern times, as well as extremes of intelligence.
How Is Intelligence Measured?
In 1904, school officials in Paris commissioned Alfred Binet to develop methods of identifying children who were unable to cope with the demands of regular classroom instruction and required special classes to meet their needs. Today, we might describe such children as having learning disabilities or mild mental retardation.
To measure mental abilities, Binet and a colleague, Theodore Simon, developed an intelligence test consisting of memory tasks and other short tasks representing the kinds of everyday problems children might encounter, such as counting coins. By 1908, Binet and Simon decided to scale the tasks according to the age at which a child should be able to perform them successfully. A child began the testing with tasks scaled at the lowest age and then progressed to more difficult tasks, stopping at the point at which he or she could no longer perform them. The age at which the child’s performance topped off was considered the child’s mental age.
Mental age is the representation of a person’s intelligence based on the age of people who are capable of performing at the same level of ability.
Binet and Simon calculated intelligence by subtracting the child’s mental age from his or her chronological (actual) age. Children whose mental ages sufficiently lagged behind their chronological ages were considered in need of special education.
In 1912, a German psychologist, William Stern, suggested a different way of computing intelligence, which Binet and Simon adopted. Stern divided mental age by chronological age, yielding a “mental quotient.” It soon was labeled the intelligence quotient (IQ).
At its simplest, intelligence quotient (IQ) is a measure of intelligence based on performance on tests of mental abilities, expressed either as a ratio between one’s mental age and chronological age or as the deviation of one’s scores from the norms for one’s age group.
IQ is expressed by the following formula, in which MA is mental age and CA is chronological age:

IQ = (MA ÷ CA) × 100

Thus, if a child has a mental age of 10 and a chronological age of 8, the child’s IQ would be 125 (10 ÷ 8 = 1.25; 1.25 × 100 = 125). A child with a mental age of 10 who is 12 years of age would have an IQ of 83 (10 ÷ 12 = 0.8333; 0.8333 × 100 ≈ 83).
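Stern’s ratio formula is easy to verify in a few lines of Python (the function name `ratio_iq` is our own label for the computation, not a standard term):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> int:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return round(mental_age / chronological_age * 100)

print(ratio_iq(10, 8))   # mental age 10 at chronological age 8  -> 125
print(ratio_iq(10, 12))  # mental age 10 at chronological age 12 -> 83
```

The rounding in the second case reproduces the 83 reported in the text (10 ÷ 12 ≈ 0.8333).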
Researchers following in Binet’s footsteps developed intelligence tests that could be used with groups other than French schoolchildren. Henry Goddard (1865–1957), a research director at a school for children with mental retardation, brought the Binet-Simon test to the United States and translated it into English for use with American children.
Early in his career, Goddard taught at the University of Southern California (USC), where he also briefly served as the school’s first head football coach and remains to this day the only undefeated head coach in USC football history (Benjamin, 2009). (His 1888 team had a record of two wins and no losses against a local athletic club team).
During World War I, the U.S. Army developed group-administered intelligence tests used to screen millions of recruits. At about the same time, Stanford University psychologist Lewis Terman (1877–1956) adapted the Binet-Simon test for American use, adding many items of his own and establishing criteria, or norms, for comparing an individual’s scores with those of the general population. The revised test, known as the Stanford-Binet Intelligence Scale (SBIS), was first published in 1916.
The Stanford-Binet Intelligence Scale is still commonly used to measure intelligence in children and young adults. However, tests developed by David Wechsler (1896–1981) are today the most widely used intelligence tests in the United States and Canada.
What Are the Characteristics of a Good Test of Intelligence?
Like all psychological tests, tests of intelligence must be standardized, reliable, and valid. If they do not meet these criteria, we cannot be confident of the results.
Standardization is the process of establishing norms for a test by administering it to a large number of people who make up a standardization sample. The standardization sample must be representative of the population for whom the test is intended.
As noted earlier, norms are the criteria, or standards, used to compare a person’s performance with the performance of others. You can determine how well you do on an intelligence test by comparing your scores with the norms for people in your age group in the standardization sample.
As noted above, IQ scores are based on the deviation of a person’s score from norms for others of the same age, and the mean (average) score is set at 100. IQ scores are distributed around the mean in such a way that about two-thirds of the scores in the general population fall within an “average” range of 85 to 115.
In Figure X we can see that the distribution (spread) of IQ scores follows the form of a bell-shaped curve. Relatively few people score at either the very high or the very low end of the curve; most scores bunch up in the middle of the distribution.
Figure X. Normal Distribution of IQ Scores
The mean (average) IQ score is 100, with a standard deviation of 15 points. The percentages shown are rounded off.
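The “about two-thirds between 85 and 115” figure follows directly from the normal curve: that range spans one standard deviation (15 points) on either side of the mean of 100. A quick check using only Python’s standard library (a sketch of the arithmetic, not part of any actual test):

```python
from statistics import NormalDist

# Deviation IQ scores are scaled to a normal distribution
# with mean 100 and standard deviation 15.
iq_scores = NormalDist(mu=100, sigma=15)

# Proportion of the population in the "average" range of 85-115,
# i.e., within one standard deviation of the mean.
within_average = iq_scores.cdf(115) - iq_scores.cdf(85)
print(f"{within_average:.1%}")  # -> 68.3%, about two-thirds
```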
Standardization has another meaning in test administration. It also refers to uniform procedures that must be followed to ensure that the test is used correctly.
Reliability refers to the consistency of test scores over time. You wouldn’t trust a bathroom scale that gave you different readings each time you used it. Nor would you trust an IQ test that gave you a score of 135 one day, 75 the next, and 105 the day after that. A reliable test is one that produces similar results over time.
One way of assessing reliability is the test-retest method, where the subject takes the same test again after a short interval. Because familiarity with the test questions can result in consistent performance, psychologists sometimes use the alternative-forms method. When this method is used, subjects are given a parallel form of the test.
Validity is the degree to which a test measures what it purports to measure. A test may be reliable—producing consistent scores over time—but not valid. For example, a test that measures head size may be reliable, yielding consistent results over time, but invalid as a measure of intelligence.
There are several types of validity. One type is predictive validity, the degree to which test scores accurately predict future behavior or performance. IQ tests are good predictors of academic achievement and performance on general aptitude tests, such as the Scholastic Aptitude Test (SAT) and the Graduate Record Examination (GRE) (Neisser et al., 1996; Wadsworth et al., 1995).
But that’s not all. It turns out that IQ also predicts long-term health and longevity, perhaps because people who tend to do well on IQ tests have the kinds of problem-solving and learning skills needed to acquire and practice healthier behaviors (Gottfredson & Deary, 2004).
Misuses of Intelligence Tests
Even Binet, the father of the modern IQ test, was concerned that intelligence tests may be misused if teachers or parents lose interest or hope in children with low IQ scores and set low expectations for them. Low expectations can in turn become self-fulfilling prophecies, as children who are labeled as “dumb” may give up on themselves and become underachievers.
Misuse also occurs when too much emphasis is placed on IQ scores. Though intelligence tests do predict future academic performance, they are far from perfect predictors and should not be used as the only basis for placing children in special education programs.
Some children who test poorly may be able to benefit from regular classroom instruction. Placement decisions should be based on a comprehensive assessment that takes into account not only the child’s performance on intelligence tests but also his or her cultural and linguistic background and ability to adapt to the academic environment.
Intelligence tests may be biased against children who are not part of the majority culture. Children from different cultural backgrounds may not have had any exposure to the types of information assessed by standard IQ tests, such as knowledge of famous individuals.
Several culture-fair tests—tests designed to eliminate cultural biases—have been developed. They consist of nonverbal tasks that measure visual-spatial abilities and reasoning skills. However, these tests are not widely used, largely because they don’t predict academic performance as well as standard IQ tests. This is not surprising, since academic success in the United States and other Western countries depends heavily on linguistic and knowledge-acquisition skills reflected in standard IQ tests.
Moreover, it may be impossible to develop a purely culture-free IQ test because the skills that define intelligence depend on the values of the culture in which the test is developed (Benson, 2003). Nor should we assume that all people in the same culture have the same experience with test-taking skills or familiarity with the types of materials used on the tests. At best we may only be able to develop culture-reduced tests, not tests which are entirely culture-fair (Sternberg & Grigorenko, 2008).
- Adapted from: Jeffrey S. Nevid, Psychology: Concepts and Applications (pp. 269–272).