Teacher, December 2009
As you read this article, thousands of Australian senior secondary school students have just completed their final exams. Within a month, most of them will receive a tertiary entrance ranking. The ranking systems go by a variety of names, but their common purpose is to provide tertiary education institutions with a means of determining who should gain entry to their courses.
The rankings are intended to provide tertiary
institutions with an accurate and nationally
comparable assessment of students' ability
for the purpose of student selection.
Tertiary entrance rankings are determined in a variety of ways because each state has its own method of assessing students' final-year studies. The essential process, however, commonly involves taking raw scores, scaling them for subject weighting and standardising them so they can be compared equally across the nation. From this convoluted process, a score is derived that is then used to rank students from highest to lowest. From these rankings, tertiary institutions decide which students they are prepared to accept into their various courses.
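The scale-then-rank process described above can be sketched in a few lines. This is purely an illustrative toy, assuming a simple z-score standardisation over a single cohort; it is not the algorithm used by any actual admissions centre, and the names and scores are invented.

```python
# Toy sketch of the scale-then-rank idea described above.
# NOT the actual method used by any state admissions centre;
# cohort names and raw scores are invented for illustration.
from statistics import mean, stdev

def standardise(scores):
    """Convert raw scores to z-scores so results with different
    means and spreads can be compared on a single scale."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def rank(students):
    """students: dict mapping name -> raw score for one cohort.
    Returns names ordered from highest standardised score to lowest."""
    names = list(students)
    z = standardise([students[n] for n in names])
    return [name for _, name in sorted(zip(z, names), reverse=True)]

cohort = {"Alex": 62, "Bree": 81, "Chen": 74, "Dana": 55}
print(rank(cohort))  # → ['Bree', 'Chen', 'Alex', 'Dana']
```

In the real systems a further inter-subject scaling step adjusts scores for subject weighting before a single rank is produced; the sketch collapses all of that into one standardisation.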
The various state and territory systems, except Queensland's, are to be replaced by the Australian Tertiary Admission Rank in 2009 and 2010, but the shame is that we're missing an opportunity for a more radical rethink of the deeper impact of such a stupid and inadequate system.
What's so bad about this system? It's a
highly unsophisticated and grossly inaccu-
rate measure of students' real abilities.
The problem is that it's calculated on the
basis of one final exam. True, the exam in most cases accounts for only about half of a student's score, but the final exam score is weighted very heavily in the statistical manipulations because it provides a common level of overall comparison. School-assessed coursework is supposed to account for a significant portion of the overall score, but it is adjusted against final exam results. In Victoria, this is further complicated by the General Achievement Test, an additional measure designed to enable objective verification of students' exam results.
The first stupid assumption on which this system is based is that exams are a valid and reliable method of assessing students' capacities and abilities. The reality is that common systems of exams are incredibly primitive methods of assessment. To begin with, the element of luck is profound. Everyone who has ever sat an exam is aware that generally only a very small portion of what has been studied will actually appear on the exam. Students lucky enough to have studied the exact topics that appear on the exam are obviously highly advantaged compared to students who are just as capable but who focussed on different topics from those that featured on the exam.
Next, consider the fact that students from elite schools continually appear in the highest rankings. Many teachers from these schools are involved in exam development and correction. How advantaged are their students compared to students in, say, rural areas who have difficulty attracting teachers, let alone teachers with this level of involvement? University enrolment figures consistently show under-representation from disadvantaged areas.
A further aspect of exam stupidity is that overall raw-score exam results may in fact be quite low, even below 50 out of 100, yet scaling methods will still give top marks to those students who perform highest.
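The point about low raw scores can be made concrete with a toy percentile calculation (again an invented example, not any official scaling method): because ranking is purely relative, the best performer in a cohort lands at the top even when every raw mark is below 50 out of 100.

```python
# Toy illustration: ranking is relative, so the highest raw score in a
# cohort reaches the top percentile even when every mark is below 50/100.
# Invented numbers; not any official scaling or moderation method.

def percentile_rank(scores):
    """Map each raw score to the fraction of the cohort it equals or beats."""
    n = len(scores)
    return {s: sum(x <= s for x in scores) / n for s in scores}

raw = [31, 38, 42, 45, 49]   # every raw mark is a fail on a 100-point scale
ranks = percentile_rank(raw)
print(ranks[49])             # → 1.0: top of the cohort despite a failing raw mark
print(ranks[31])             # → 0.2: bottom of the cohort
```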
It's the exams,

One of the most stupid features of our Australian education system is the way we assess students' capacity for tertiary study, says Brian Brennan.