Measure Student Learning

Tests are essential because they provide a consistent measure of whether students are meeting and
exceeding the standards we have set.

But tests aren’t playing the role we need them to play, especially in high school. First, there are too many tests
and they are often disconnected in purpose from one another. Students take high school exit exams, final exams
or end-of-course exams, SAT, ACT, PSAT, AP, IB—and the list goes on. The ones that matter for students—the
ones that have currency in higher education and the job market—rarely factor in school evaluations, and the
ones that matter for schools often don’t matter to students. Perhaps most importantly, none of the state tests are
especially useful for teachers who want to evaluate student progress and make mid-course corrections.

The second big problem is that state tests don’t measure college and career readiness adequately. As a consequence,
students who do well on them are not necessarily prepared for their next steps. College-bound students learn this
the hard way when they arrive on a college campus and take placement tests—about 40 percent of these students
are told they’re not ready for college-level work, even though they likely passed all the tests they were given in
high school.

More testing is not the answer. Smarter testing is.

There are nine core questions state leaders need to ask themselves about assessment:

Question 4-1: What tests do high school students in the state take right now?


As states evolve their assessment systems toward the goal of college and career readiness, they need to be
sensitive to the fact that many educators and high school students already feel “overtested.” Some tests are
developed and administered by the state, others by school districts, and still others are taken by students as part of
the college application process. The time spent preparing for these exams is not insignificant—both for students
and educators—and most would agree the various assessments are not well aligned with each other, either in their
content or purposes.

Before adding new assessments, we recommend that state leaders take stock of what tests students are currently
taking—at both the state and local levels—and what those tests seek to measure. Taking stock will help point out
gaps and redundancies in the assessment system.

As state and district leaders review existing assessments and plan for new ones, they should be very clear about the
purposes the assessments are designed to serve.

A coherent assessment system will include a combination of measures designed to meet the following goals:

• informing and improving the quality and consistency of instruction;

• indicating whether students are meeting mileposts that signify readiness; and

• holding schools accountable for readying students for postsecondary education and careers.

It would be unfortunate if states simply layered more tests on top of existing ones. The irony, of course, is that
everyone wants less testing but no one wants to give up “their” test. States need to fight this tendency, working
hard to ensure that if some tests are added, others are eliminated or replaced. States and districts need to work
together both to streamline the volume of testing and to ensure greater coherence within the assessment system.

• What tests do high school students currently take in the state? Which are given by the state? Which by districts?

• What are the purposes of each test? Are these duplicative or complementary? Where could tests be subtracted as
new ones are added?

Question 4-2: Is the state’s high school testing system firmly anchored by an assessment of the knowledge and skills students need to be college- and career-ready?

Achieve’s analyses have found that most state high school tests reflect knowledge and skills students should
learn early in high school—few state tests indicate whether students are college- and career-ready by the
end of high school. As a result, students can score “proficient” on these exams and still be unprepared for life after
high school.

It is therefore not surprising that the vast majority of postsecondary institutions and employers pay no attention to
performance on state tests. Instead, they give their own tests to see whether students are ready. However, these
tests are rarely well-aligned with state standards or school curriculum. We can’t afford this inefficiency, or the mixed
signals it sends to schools and students.

State assessments at the high school level must do a better job of measuring the real-world knowledge and skills that
students will need to be successful after high school. And the rest of the assessment system must be aligned with the
high school assessments—so that proficient means prepared—all the way up and down the line.

Every state should have an “anchor” assessment (or assessments) that measures college and career readiness. Tests
given earlier in high school need to signify progress toward that standard as well.

• Does the state have a college- and career-ready anchor assessment? If so, is there a statewide readiness cut score
and is it used to place incoming students into credit-bearing college courses?

• If the state has established a common readiness score, how was it established and by whom?

• Are college admissions and/or placement incentives attached to the assessment for two-year and four-year
institutions? Are there additional incentives for exemplary performance (such as financial aid for low-income
students or diploma endorsements)?

Question 4-3: If the state doesn’t currently administer a test of college and career readiness to all students, should the state augment the existing state tests to add the readiness dimension, or design or purchase a test that is built to assess college and career readiness?

States are experimenting with a variety of approaches to incorporating college- and career-ready measures into
their statewide testing systems. Three possible approaches include the following: (1) end-of-course tests (EOCs)
in advanced subject areas that are validated for use by postsecondary institutions and employers; (2) rigorous
comprehensive end-of-grade (EOG) 10th or 11th grade tests developed collaboratively by K-12 and higher
education leaders; and (3) college admissions tests that are “augmented” so they align with or complement state
standards.

The costs and benefits of each approach are relatively clear but not necessarily easy to resolve:

• College admissions tests already have currency with higher education and with students. Incorporating
admissions tests into a state’s assessment system as the college- and career-ready anchor may not significantly
increase the overall testing burden on students, as many students were already planning to take the tests. The
results are common across the country and therefore portable for students to postsecondary institutions,
creating strong incentives for students to perform well. But college admissions tests were not designed
to align with state standards for what will be taught and learned in high school.29 This undermines their
credibility as evaluative and accountability measures and makes it more difficult to ensure coherence with
the rest of the K-12 assessment system.

• End-of-grade tests, on the other hand, were designed specifically to measure state standards. They are
typically administered in 10th grade, or occasionally in 11th grade, though using these tests as a college- and
career-ready anchor assessment will be significantly easier with an 11th grade test. These assessments are
a known quantity in many states and, therefore, could be modified or augmented without substantially
increasing testing time or costs. On the downside, these tests were not designed to measure college and career
readiness and it would be a real effort to get either higher education or employers to use them.

• End-of-course exams are becoming increasingly popular across the states. The clear benefit is that these
tests are tied to specific courses and therefore have the potential to be better aligned to that course content
than other tests. End-of-course tests also allow for more flexible scheduling because students take the test
whenever they complete the course, rather than at a set grade level. In addition, states that are putting
college- and career-ready graduation requirements in place may find EOCs particularly attractive, as they can
help the state monitor the quality and consistency of instruction in high school courses and guard against
course title inflation or the watering-down of curriculum. The challenge with these exams is to build them
so that they signal college and career readiness. This requires having EOCs in more advanced courses, such
as Algebra II or English III, as well as collaboration between the K-12 and postsecondary systems in their
development, so that performance on the exams opens opportunities for students. Using an end-of-course
approach may increase the overall amount of testing required by a state, but is not likely to increase the
testing experienced by students because such exams can simply replace teacher- or district-generated exams.

We recommend that states study the advantages and limitations of these three approaches and select a strategy
that will best meet the state’s own needs.

• Which strategy will lead to the most effective measure of college and career readiness? Which strategy will
enable the strongest alignment with the state’s high school standards?

• Can the state build from the assessments in place, or is wholesale change required?

• How important is it for the assessments to be taken in close proximity to when students learn the material?
Does the state want a system that allows for more or less flexibility in terms of when students take the tests?

• Which assessments will provide the best feedback to teachers and schools? To students and parents?

Question 4-4: Have colleges in the state agreed on a common placement standard that can guide the development of high school assessments?

In some states, colleges have come together to define common placement standards—not for admission, but for
entry into credit-bearing courses. Most states, however, have yet to take that critical step.

Imagine how frustrating it is for high school faculty members in those states. They are told that we want them to
prepare students for success in college, but there are many different definitions of “ready” depending on which
colleges their students attend. This situation is more reflective of provincialism than principle. Achieve reviews
show that actual expectations are not all that different across different campuses, even across different states. But
that fact doesn’t deter each college from wanting its own placement tests and cut scores.

Having different standards for placement into credit-bearing courses makes little sense in an era when so many
students are transferring credits between institutions, and it makes it impossible for K-12 leaders to know what
they are aiming for. As one high school principal said to us not so long ago, “I’m happy to be held accountable for
getting my students college-ready. But not if there are 34 different definitions of college-ready.”

The ADP standards alignment process creates a vehicle for higher education faculty to articulate the core
competencies and knowledge they expect students to possess for entry-level courses. Where this work hasn’t yet
taken place, state higher education executive officers, system presidents, chancellors, and other higher education
leaders should make it a priority.

The ADP Algebra II End-Of-Course Exam

In May 2005, leaders from the ADP Network States began to explore the possibility of working together, with
support from Achieve, to develop a common end-of-course exam in Algebra II. These states were planning to require
or strongly encourage students to take Algebra II (or its equivalent) to better prepare them for college and careers.
State leaders recognized that using an end-of-course exam would help ensure a consistent level of content and rigor
in classes within and across their respective states. They also understood the value of working collaboratively on a
common test: the potential to create a higher quality test faster and at a lower cost to each state, and to compare their
performance and progress with one another. From the outset, the intent was for the ADP Algebra II EOC Exam to
serve three main purposes: to improve curriculum and instruction; to help colleges determine if students are ready to
do credit-bearing work; and to compare performance and progress among the participating states over time.

Fourteen states—Arizona, Arkansas, Hawaii, Indiana, Kentucky, Maryland, Massachusetts, Minnesota, New
Jersey, North Carolina, Ohio, Pennsylvania, Rhode Island, and Washington—joined together to develop and use
the common end-of-course exam in Algebra II. This is the largest multi-state collaborative assessment effort ever
undertaken. It is a dramatic departure from past testing practices in which states developed their own tests, based on
their own standards and often at considerable expense. With increasingly common end-of-high school expectations
among the states, collaborative efforts to develop assessments make good policy and economic sense.

In the spring of 2008, nearly ninety thousand students across twelve of the fourteen states in the partnership took the
ADP Algebra II end-of-course exam for the first time. In subsequent years, the number of exam takers is expected to
grow significantly.

At a time when many bemoan whether tests—and their scores—are an accurate reflection of what students need to
know to succeed, these fourteen states have chosen voluntarily to raise the bar to ensure that their students graduate
from high school prepared. The partnership states anticipated that this would be difficult work, but work that had to
be done to ensure that students graduate from high school ready for the real world.