
The “one-size-fits-all,” “drill and kill” approach being pushed is creeping into early childhood education. Chicago Public Schools has expanded required and optional testing for young children, placing importance on continuous assessment through standardized tests and benchmarks. These tests include REACH, Teaching Strategies GOLD and Quarterly Benchmark Performance Tasks. Matching the standardization trend in high school education, testing is being used as the foremost indicator of student performance.

Questionable Validity

With so much riding on test scores, one would expect clear evidence that the tests are, in fact, evaluating what they intend to evaluate. It is important to remember that young children are constantly developing. This fact calls into question the predictive validity of tests in early childhood, since a 5-year-old may be completely different after completing the school year than he was when he took an introductory test. Samuel Meisels of the Erikson Institute writes:

“Given that young children are undergoing significant changes in their first eight years of life in terms of brain growth, physiology, and emotional regulation, and recognizing that children come into this world with varied inheritance, experience, and opportunities for nurturance, it is not difficult to imagine that a brief snapshot of a child’s skills and abilities taken on a single occasion will be unable to capture the shifts and changes in that development. To draw long-term conclusions from such assessments seems baseless.” (Meisels, 9).

Meisels uses two studies to support his claims. LaParo and Pianta (2000) found that only a quarter of the variance in academic/cognitive skills on first- and second-grade tests was accurately predicted by preschool or kindergarten tests (LaParo & Pianta, as quoted by Meisels, 9). Social/behavioral variation was even more difficult to predict, with only ten percent accounted for by preschool or kindergarten testing. The authors conclude that because children develop so rapidly, standardized testing has little consistent predictive validity. For young children, they argue, “instability or change [in cognitive and behavioral ability] may be the rule rather than the exception during this period” (LaParo & Pianta, as quoted by Meisels, 9). Kim and Suen (2003) performed a similar study and found that “the predictive power of any early assessment from any single study is not generalizable, regardless of design and quality of research” (Kim & Suen, as quoted by Meisels, 9). While some results may indeed turn out as predicted, both of these studies show that the data are insufficient to establish any generalizable correlation for predictability, given the constant developmental changes in a child’s brain.

The Effect on Curriculum

In a paper published in the Journal of Education, Louisiana State Professor Renee Casbergue argues that standardized tests incorrectly assume that “complex processes like those required for reading and writing can be simplified into component parts, each of which can be tested separately” (Casbergue, 18). The developmental skills needed in early childhood cannot be evaluated as small, individual components; assessment must instead consider the whole picture.

Standardized tests assess only “constrained skills,” which are “skills that are limited to small sets of knowledge that are mastered in relatively brief periods of time” (Paris, as quoted in Casbergue, 13). “Unconstrained skills”—skills that are continuously learned and developed throughout life, such as vocabulary, symbolism and story themes—are rarely evaluated in these tests.

Meisels uses the Head Start National Reporting System (NRS), a standardized test to be administered every two years to all four-year-olds in Head Start, as a case study. The program would cost $25 million annually. While NRS was quickly suspended after widespread criticism, it had many of the same problems as current standardized tests. Although NRS was not yet a high-stakes test that would greatly affect the allocation of Head Start funding (it was eventually planned to become one), the U.S. Government Accountability Office (GAO) warned in 2005 that 18 percent of programs studied had already altered their curriculum to accommodate the skills tested by NRS. GAO also found that the analysis necessary to establish the validity and reliability of the test had not been done (GAO, as quoted by Meisels, 13). The test used complex questions involving causality, subtraction, metric units, and subjunctive case—all of which were viewed by child development experts as beyond the understanding of a four-year-old (Meisels, 12). Overall, the test represented a misunderstanding of the way young children learn:

“[NRS] is a model of passive reception, of pouring into a vessel knowledge and skills that are needed for competence, rather than recognizing learning as active and teaching as a joint process of interaction between child and adult. An active view of learning, fundamentally based on enhancing relationships between teachers, children, and challenging materials, is nowhere to be seen in this test. … Yet, when you know that the results of a test will be used to make decisions that may affect your program’s continuation and other things you value, you are sorely tempted to begin teaching to the test. … It was a failure because it ignored the complexity of early development that teaches us that no single indicator can assess a child’s skills, achievements, or personality.” (Meisels, 13, 16).

Kevin Russell is a research consultant at the Chicago Teachers Union. Read the rest of the six-page report on the CTU’s website at CTUnet.com/Research.