The question of merit: How education and access have been shaped by preferential treatment of the status quo

By Z’Hanie Weaver, Liberty High School

Merit. It’s a word that evokes images of earned success, intellectual prowess and individual grit. In theory, it should be an equalizer – a fair measurement of ability, potential and talent. But in practice, especially in American education, the concept of merit has long been engineered to uphold a status quo rooted in racism, classism, and exclusion. At the center of this design is the legacy of standardized testing – a tool often praised as objective but born from ideology far from neutral. To understand how “merit” was socially and politically constructed, we have to look back to the early 20th century.

Carl Brigham was a psychologist and a eugenicist – not just by association. He was a proponent of theories that sought to rank racial and ethnic groups by intelligence. After World War I, Brigham analyzed data from the Army Alpha test, the first mass-administered IQ test, given to over 1.7 million soldiers. His interpretation concluded that white Americans of Nordic descent were intellectually superior to immigrants from Eastern and Southern Europe and to Black Americans. His 1923 book, “A Study of American Intelligence,” claimed the U.S. was “diluting” its gene pool and that immigration and racial mixing would lead to national decline. It wasn’t long before he adapted these ideas into the Scholastic Aptitude Test (SAT), hoping to standardize college admissions and reduce what he saw as the problem of subjective judgment. But instead of leveling the playing field, the SAT became a powerful gatekeeping mechanism – one shaped not to lift the best minds, but to elevate the “right” ones. Brigham would later retract some of his views, admitting his work rested on flawed assumptions and that social and environmental factors played a larger role than he had thought. By then, the system was built.

The rise of standardized testing coincided with the expansion of elite higher education institutions like Harvard, Princeton and Yale – schools that historically admitted students based on family connections and social standing. With the SAT, universities found a justification for exclusion: supposed intellectual superiority. In the mid-20th century, this led to the emergence of what some call the “cognitive elite” – a term popularized by Charles Murray and Richard Herrnstein in “The Bell Curve” (1994). The book argued that intelligence, which the authors claimed was inherited, determined socioeconomic outcomes. Though marketed as scientific, “The Bell Curve” echoed the same eugenic ideas Brigham once embraced.

The book’s claims have been widely criticized for their racial bias. Still, the idea of a cognitive elite persists – those who excel at standardized tests, attend prestigious universities and move into positions of power and influence. What is rarely questioned is the flawed methodology by which those elites are selected – and, more importantly, who was left out.