Hi all! I'm just re-posting a thread I wrote on the previous forums in 2018, as I thought it might be useful or interesting for people on the new website.
Note that all of the information presented here is purely based on my understanding of public documents in 2018, and there could be errors.
-
Recently, Australia’s chief scientist, Dr Alan Finkel, suggested the ATAR is “completely obscure” and “is leading to students being given poor advice” in regard to their subject selection. [1]
After six years of high school and about that many working for VCE organisations, I can’t help but agree that the ATAR and the technical side of VCE aren’t that transparent. It’s generally not taught in schools, it’s confusing for students, and I’m quite sure it leads to a lot of unnecessary anxiety through high school.
The point of this post is to try to allay some of those concerns, and to spell out clearly how VCE works from a technical point of view.
Note:
None of this is crucial knowledge. Like, you don’t need to know this stuff to do well.
What is an ATAR?
Let’s start from the start. The ATAR (Australian Tertiary Admission Rank), renamed in 2010 from the ENTER (Equivalent National Tertiary Entrance Rank), is a number calculated by VTAC to allow tertiary institutions to compare students’ levels of performance across subjects. [2] Universities and other tertiary institutions can use the ATAR as one criterion for entry to various courses. ATARs are reported between “less than 30” and 99.95, the maximum possible ATAR, in increments of 0.05 (so 30.00, 30.05, 30.10, 30.15… 99.80, 99.85, 99.90, 99.95) – the second decimal place is always a 0 or a 5.
The ATAR does not reflect your raw marks. That is, an ATAR of 80.00 does not indicate you have averaged 80% across the year, nor does it indicate you achieved 80% on your exams. Instead, it suggests VTAC deemed you to have outperformed 80% of your cohort (those you started Year 7 with) – it’s a percentile ranking. VTAC explains the ATAR as “the estimate of the percentage of the population in the relevant age group that the student outperformed”. [3] That is, where you ranked in your cohort – not your raw marks on tests and exams. It’s for this reason that you can’t achieve an ATAR of 100; you can’t outperform 100% of your cohort (which includes yourself).
The ATAR is not the only way to gain entrance into tertiary education, and it’s certainly not make-or-break. This largely falls outside the scope of this thread, but you can read more about the importance or otherwise of the ATAR here.
How is the ATAR calculated?
As you can probably imagine, it’s quite difficult to establish how well a single student has performed in the context of the entire state. Why? Because that student will be exposed to different subjects, different SACs (School Assessed Coursework – more on this later), and different stringency of marking throughout the year. Different SACs and different approaches to marking both threaten a fair system. Naturally, VCAA wants a level playing field, and doesn’t want to unjustly advantage or disadvantage any student. For this reason, we can’t just say, “we’ll add up all of your scores throughout the year, and rank you that way!”, because that wouldn’t necessarily be a true indication of students’ achievement. As such, we need a more objective way of ranking students across subjects and conditions, and for this we use the aggregate.
The "aggregate"
Calculation
Many students are aware of the ATAR; far fewer are aware of the aggregate. According to VTAC, “[t]he aggregate is a number between 0 and a value that can reach over 210.” [4] As above, your aggregate is what VTAC uses to rank you against other students in the state. You can picture it as your “ultimate high score”, with each study score being analogous to your high score for an individual level of a game.
So, how is the aggregate calculated, exactly? Remember, we need this to be able to work out each student’s ATAR. Well, your aggregate is basically the sum of:
• your highest-scoring English subject (English, English Language, Literature or English as an Additional Language);
• your next three highest-scoring subjects; and,
• if applicable, 10% of your fifth and sixth highest-scoring subjects.
That is, a maximum of six subjects can count toward your ATAR. Your top English subject plus your next three best subjects form your “top four”; your fifth and sixth subjects are referred to as your “bottom two”. The top four (or “primary four” in VTAC language) is required in order to receive an ATAR; the bottom two is not. Anything beyond the sixth subject (so, say you studied a seventh, eighth, ninth etc. subject) is not considered toward calculation of the ATAR. [5]
NOTE: It is not only Unit 3/4 VCE subjects that can count toward your ATAR. You also have the option of:
• “Higher Education Studies” (colloquially, uni extension subjects), which can count as either your fifth or sixth subject. An increment of 3.0 – 5.0 is awarded depending on achievement in that subject; more information can be found here.
• Alternatively, VCE VET sequences (either scored or unscored). The processes for unscored VET subjects are a little different, but they essentially add to your ATAR based on your other scaled study scores. [6]
Example
So, let’s demonstrate how this top four/bottom two jazz works. Let’s take an example of Joseph, who studies seven Unit 3/4 subjects. Note that these subjects don’t need to be completed in Joseph’s year of graduation in order to contribute to his ATAR. Joseph’s subjects, plus his final scaled scores (more on this later), are listed below.
• Biology: 46
• Chemistry: 25
• Health & Human Development: 49
• Literature: 28
• Physical Education: 33
• Specialist Maths: 42
• Studio Art: 30
If we order these from best to worst, we’re left with:
• Health & Human Development: 49
• Biology: 46
• Specialist Maths: 42
• Physical Education: 33
• Studio Art: 30
• Literature: 28
• Chemistry: 25
But remember that an English subject has to be included in the top four, so let’s put Literature (Joseph’s only English subject) right to the top by default:
• Literature: 28
• Health & Human Development: 49
• Biology: 46
• Specialist Maths: 42
• Physical Education: 33
• Studio Art: 30
• Chemistry: 25
From these scores, we can now calculate Joseph’s aggregate. Remember, that’s the top four subjects (including an English), plus 10% of the fifth and sixth subjects if applicable. So, in this case, that would be:
28 + 49 + 46 + 42 + 3.3 (10% of Physical Education) + 3.0 (10% of Studio Art) = 171.3.
Note that the score of 25 for Chemistry doesn’t count at all, because it was Joseph’s seventh subject.
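If you prefer to see those rules spelled out explicitly, here’s a minimal Python sketch of the same calculation using Joseph’s scaled scores (it ignores the combination restrictions covered a little later, and the subject names and helper function are just for the demonstration):

```python
# Minimal sketch of the aggregate rule: best English subject at full value, the
# next three best subjects at full value, then 10% of the fifth and sixth best.
# Subjects beyond the sixth are ignored.

ENGLISH_SUBJECTS = {"English", "English Language", "Literature",
                    "English as an Additional Language"}

def aggregate(scaled_scores):
    """scaled_scores: {subject name: scaled study score}."""
    best_english = max((s for s in scaled_scores if s in ENGLISH_SUBJECTS),
                       key=scaled_scores.get)
    rest = sorted((v for s, v in scaled_scores.items() if s != best_english),
                  reverse=True)
    primary_four = scaled_scores[best_english] + sum(rest[:3])
    increments = 0.1 * sum(rest[3:5])   # 10% of the fifth and sixth subjects
    return primary_four + increments

joseph = {"Biology": 46, "Chemistry": 25, "Health & Human Development": 49,
          "Literature": 28, "Physical Education": 33, "Specialist Maths": 42,
          "Studio Art": 30}
print(aggregate(joseph))  # 171.3 - Chemistry (the seventh subject) never counts
```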
Conversion to ATAR
Now, this aggregate of 171.3 probably doesn’t mean much to you – nor should it, because aggregates are rarely spoken of or considered. But what now happens is all aggregates are ordered, giving VTAC a way in which to rank students, irrespective of what subjects they took. Each student, based on their aggregate, is then attributed a percentile rank, and this percentile rank is converted into an ATAR. [7]
How do they do this? Well, they have what’s called an “aggregate to ATAR conversion table”. It is how it sounds – it shows the minimum aggregate needed to achieve each possible ATAR. You can find the 2017 aggregate to ATAR table here.
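To give a rough sense of the mechanics, here’s a minimal sketch of what that lookup could look like in code. Every threshold below is a made-up placeholder rather than a value from any real VTAC table – you’d substitute the published table for the relevant year.

```python
import bisect

# Hypothetical (minimum aggregate, ATAR) pairs, sorted by aggregate. These
# numbers are placeholders only - substitute the real published VTAC table.
CONVERSION_TABLE = [
    (0.0, "less than 30"),   # ATARs below 30 are simply reported as "less than 30"
    (60.0, 30.00),
    (100.0, 50.00),
    (130.0, 70.00),
    (150.0, 80.00),
    (170.0, 90.00),
    (196.0, 99.00),
    (210.0, 99.95),
]

def aggregate_to_atar(aggregate):
    """Return the ATAR whose minimum-aggregate threshold the aggregate meets."""
    thresholds = [minimum for minimum, _ in CONVERSION_TABLE]
    index = bisect.bisect_right(thresholds, aggregate) - 1
    return CONVERSION_TABLE[index][1]

print(aggregate_to_atar(171.3))  # 90.0 under these made-up thresholds
```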
Restrictions on the ATAR
Just a quick note here: there are a couple of other things that might affect you, too. For example, there is a limit to the number of similar subjects that can count toward your ATAR. VTAC says [8]:
There are some restrictions on how certain combinations of studies may be counted for an ATAR.
In each of the study areas of English, mathematics, history, contemporary Australian studies, information technology, languages and music:
• at most two results can contribute to the Primary Four
• at most three results can contribute to the ATAR, the third being counted as a 10% increment for a fifth or sixth subject
This also includes Higher Education studies.
So, let’s say you’re studying the following subjects, and get these scaled results:
• Further Maths: 50
• Maths Methods: 44
• Specialist Maths: 42
• Biology: 41
• English: 36
• Uni Extension Maths: 3.5
In this case, there are four maths subjects (Further, Methods, Spesh and uni extension). Even though the first three are your best three subjects, only two of them can count in your top four. And because only three maths subjects can count toward your ATAR, the uni extension subject won’t count at all. So basically:
• English: 36 [top 4]
• Further Maths: 50 [top 4]
• Maths Methods: 44 [top 4]
• Biology: 41 [top 4]
• Specialist Maths: 4.2 [10% increment]
• Uni Extension Maths: N/A
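If it helps, here’s a minimal sketch of how that selection could be carried out in code. It’s a simple two-pass illustration that reproduces this example; it is not VTAC’s actual algorithm, and the subject names, study-area labels and helper function are all made up for the demonstration.

```python
# Sketch of the restriction logic: at most two results from one study area in
# the primary four, and at most three from one area counting toward the ATAR
# overall. Illustration only - not VTAC's actual procedure.

from collections import Counter

ENGLISH_SUBJECTS = {"English", "English Language", "Literature",
                    "English as an Additional Language"}

def select_contributions(scaled, areas):
    """scaled: {subject: scaled score}; areas: {subject: restricted study area or None}."""
    by_score = sorted(scaled, key=scaled.get, reverse=True)
    best_english = max((s for s in scaled if s in ENGLISH_SUBJECTS), key=scaled.get)

    # Pass 1: fill the primary four - best English locked in, then the best of the
    # rest, skipping anything that would be a third result from one study area.
    primary_four = [best_english]
    in_primary = Counter({areas[best_english]: 1})
    for subject in by_score:
        if subject == best_english or len(primary_four) == 4:
            continue
        area = areas[subject]
        if area is None or in_primary[area] < 2:
            primary_four.append(subject)
            in_primary[area] += 1

    # Pass 2: up to two 10% increments from the rest, skipping anything that
    # would be a fourth result from one study area overall.
    overall = Counter(in_primary)
    increments = []
    for subject in by_score:
        if subject in primary_four or len(increments) == 2:
            continue
        area = areas[subject]
        if area is None or overall[area] < 3:
            increments.append(subject)
            overall[area] += 1

    aggregate = sum(scaled[s] for s in primary_four) + 0.1 * sum(scaled[s] for s in increments)
    return primary_four, increments, round(aggregate, 2)

scaled = {"Further Maths": 50, "Maths Methods": 44, "Specialist Maths": 42,
          "Biology": 41, "English": 36,
          "Uni Extension Maths": 3.5}   # a fixed increment in reality; excluded by the maths cap here anyway
areas = {"Further Maths": "maths", "Maths Methods": "maths", "Specialist Maths": "maths",
         "Uni Extension Maths": "maths", "Biology": None, "English": "english"}
print(select_contributions(scaled, areas))
# -> English, Further Maths, Maths Methods and Biology in the primary four;
#    Specialist Maths as a 10% increment (4.2); Uni Extension Maths not counted.
```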
Okay, great. So now we know that your ATAR is determined by your percentile rank, and that your percentile rank is determined by your aggregate. We also know that your aggregate is the sum of your study scores (with stipulations), which leads us to the question: what is a study score, exactly?
What is a study score?
A study score is sort of like a mini-ATAR: it’s a standardised score from 0-50 you’ll receive for each of your individual subjects. The median study score is 30. [9]
Like with the ATAR, though, study scores aren’t really scores at all – at least, not in the traditional sense. Again, they’re rankings of how you’ve performed in context of the cohort for that subject. So, a study score of 40 certainly doesn’t mean you’ve averaged 40% (or even 80%, as study scores are only out of 50).
Study scores are standardised, meaning they work on a bell curve. The standard deviation is about 7, meaning that approximately two-thirds of students in a subject will receive a study score between 23 and 37. Anything below a 23 or above a 37 is more than one standard deviation away from the median study score of 30.
So you can see that the majority of students will receive a study score of between 23 and 37 (for any subject).
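To see where the 23-to-37 figure comes from, here’s a quick check using a normal approximation with a median of 30 and a standard deviation of 7. The real distribution is only approximately normal, so treat the outputs as rough.

```python
from math import erf, sqrt

def normal_cdf(x, mean=30.0, sd=7.0):
    """P(study score <= x) under a normal approximation."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

print(f"between 23 and 37: {normal_cdf(37) - normal_cdf(23):.0%}")  # ~68%, i.e. about two-thirds
print(f"40 or above: {1 - normal_cdf(40):.0%}")  # ~8% here; the published figure is closer to 9%
```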
How are study scores calculated?
The basic process is this:
• You are awarded a letter grade (from E to A+) for each Graded Assessment. There are three Graded Assessments (or GAs) per subject, and what each one covers varies from subject to subject. For example, for Further Maths (a subject with two exams), GA 1 refers to overall SACs, GA 2 is the first exam, and GA 3 is the second exam; for something like Psychology (just the one exam), GA 1 refers to Unit 3 SACs, GA 2 is Unit 4 SACs, and GA 3 is the exam.
• These scores all contribute to a total score of sorts, which is standardised (against the bell curve mentioned earlier) by VCAA, taking into consideration things such as weighting (as not all GAs are worth the same amount).
• The score is converted to be out of 50. [10]
The standardisation means that, every year, the distribution of study scores across the state looks much the same. [11]
So, as you can see, only 9% of students receive a study score of 40+, and only 2% of students receive a study score of 45+. To receive a raw study score of 30, you need to be in the top half of the cohort.
What this means is that, in theory, you probably could roughly predict your study scores if you had all of the necessary information (as you could roughly work out how many standard deviations you were from the median). Here is an old ATAR Notes thread on the matter, but like, quite honestly, I think it’s a waste of time. I know it can be frustrating not really knowing where you sit etc., but ultimately, intricately going through the system and trying to predict your scores won’t actually increase them. Studying might, so that’s a much better use of time!
Now, you might be thinking, “okay, cool – but like, not all SACs are the same; how is that fair?” Well, it wouldn’t be if we didn’t have SAC moderation.
SAC moderation
VCAA wants a fair system. The current system reflects that; it has been devised in a way that aims to eliminate all unfair advantage or disadvantage given to any particular student. As you can no doubt imagine, there are a couple of things that threaten the fair system that VCAA is hoping for.
One of these is the fact that different schools have different SACs and could, therefore, give out extremely easy or extremely difficult tests compared with the VCAA standard. For example, my school might give really easy SACs, and the school in the next suburb might give very difficult SACs. This would obviously be to my advantage – it would be easier for me to get a good mark than those students at the other school.
Another potential flaw in the system is the idea that teachers mark differently. So even if, hypothetically, every school in the state administered the same SACs, we’d still face the issue that some teachers mark stringently, whilst others are more lenient. It would genuinely suck to lose points from your results just because of the way your teacher marked, and VCAA knows this very well.
So, what VCAA does is aligns (or moderates) each school’s SAC results to be more in line with VCAA’s own standards. Let’s consider a few different examples.
We have three different schools here: School A, School B and School C.
Throughout the year, students from School A received a median mark of 50% on their SACs. But in the end-of-year exam, the median mark for that cohort was much higher: 75%. What this suggests is either that School A’s SACs were more difficult than VCAA’s standard, or that they were marked too harshly. This wouldn’t be fair on the students from School A, so VCAA boosts their collective SAC marks accordingly.
Throughout the year, students from School B received a median mark of 80% on their SACs. But in the end-of-year exam, the median mark for that cohort was much lower: 50%. What this suggests is either that School B’s SACs were easier than VCAA’s standard, or that they were marked too leniently. This wouldn’t be fair on other students who sat more difficult SACs, so VCAA lowers School B’s collective SAC marks accordingly.
School C’s median SAC mark and exam mark are about the same, so VCAA makes no change; those SACs already seem to be reflective of VCAA’s standard. This comes directly from VCAA [12] (http://www.vcaa.vic.edu.au/Documents/statmod-brochure.pdf):
Students now have moderated School-based Assessments, which are comparable across the entire VCE system. For example, if we compare moderated School-based Assessments in Geography, we can be sure that a student in Horsham with a mark of 75 has a higher achievement than a student in Richmond with a mark of 60. Before moderation we would not have been able to tell which student had the higher achievement because they are on different scales…
Comparing SAC marks before moderation just isn’t logical. In the same way, consider two tubs of bricks: one weighs 50 kilograms, and the other weighs 100 pounds. It’s illogical to say that the second tub of bricks is heavier just because 100 is greater than 50; we can’t really compare them, because they’re set on different scales (kilograms and pounds). When we convert them both to the same measurement scale, we can actually see that 50kg is heavier than 100lbs, but we couldn’t tell that until we’d moderated the scale.
That’s exactly what VCAA is doing with SAC moderation. As such, comparing your raw SAC marks with those of people you know from other schools is a pretty fruitless exercise. It’s a little more complicated than that, as indicated by the quote below, but that’s the basic premise.
[T]he following marks are determined for both the School-based Assessment scale and the external score scale: the highest achievement, the upper quartile, the median and the lower quartile.³ These scores are used as fixed points for aligning the two scales…
[13]
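To make the “fixed points” idea a little more concrete, here’s a simplified sketch of the alignment step. It matches the quartiles and extremes of two small made-up score lists, so it’s an illustration of the principle rather than the real VCAA procedure.

```python
import numpy as np

def moderate_sacs(sac_scores, exam_scores):
    """Align a school's SAC marks to that same cohort's exam scale by matching
    the minimum, lower quartile, median, upper quartile and maximum.
    (A simplification: the real procedure works from its own set of fixed points.)"""
    fixed = [0.0, 0.25, 0.50, 0.75, 1.00]
    sac_points = np.quantile(sac_scores, fixed)    # fixed points on the SAC scale
    exam_points = np.quantile(exam_scores, fixed)  # fixed points on the exam scale
    # Map each raw SAC mark onto the exam scale, piecewise-linearly between fixed points.
    return np.interp(sac_scores, sac_points, exam_points)

# A "School A"-style cohort: modest SAC marks (median 50%) but strong exam marks (median 75%)
sacs = np.array([30.0, 45.0, 50.0, 55.0, 70.0])
exams = np.array([55.0, 70.0, 75.0, 80.0, 95.0])
print(moderate_sacs(sacs, exams))  # every SAC mark is shifted up toward the exam distribution
```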
The other take-home point here is that it’s in your interest for your cohort to perform well on the exam(s); trying to sabotage your cohort’s exam marks is really just sabotaging yourself.
Where it gets really interesting, though, is that this sort of thing also happens on an individual level, and that’s what we’ll look at now in the form of rankings. Also note that these raw study scores don’t consider the competitiveness of that subject in that given year, and we’ll consider this in the “Subject scaling” section later on.
SAC rankings
You might have heard on these forums that “it’s not really your raw marks that matter; what matters is your ranking.” What does this mean?
Well, throughout the year, a hierarchy is established within your cohort for SAC performance. That is, the student in your cohort with the best SAC average will be rank #1, the next best average will be rank #2 and so on. A lot of things can change through SAC moderation/scaling etc., but SAC ranking is not one of them. So, if you finish as rank #1, you’ll stay as rank #1.
[T]he rank order of students’ assessments in a school is not altered by statistical moderation… The VCAA acknowledges that teachers are in the best position to measure students’ academic achievement in School-based Assessment.
[14]
Having a high rank is important because it means you benefit from your cohort’s exam performance. This is most easily explained with an example.
(Please note that the following section is very rough, and in reality it doesn't work quite this neatly. Having the second-highest ranking doesn't mean you'll get the second-highest exam mark as your scaled SAC mark, and so on - it's just an easy way to explain the basic process. In reality, the gaps between ranks and other factors are also considered, so keep this in mind.)
Okay, so there’s a bit to take in here. Basically, imagine a cohort of 10 students studying one subject – it doesn’t matter which subject. They’re listed by SAC ranking – so, Student A had the highest SAC average, Student B had the next highest average, then Student C, and so on. We also know each student’s end-of-year score on the VCAA exam.
What you might notice straight away is that the students who performed best on SACs throughout the year didn’t necessarily perform best on the exam – which can happen in reality, too. To run you through what happens:
• Rank #1 gets the best exam score as their moderated SAC score. In this case, Student A achieved a 52% on the exam, but benefitted from Student D’s 99% as they’re rank #1. Student A’s exam score isn’t affected, but they get 99% as their moderated SAC score.
• Rank #2 gets the second best exam score as their moderated SAC score. In this case, Student B achieved a 96% on the exam, which just happened to be the second best score of their cohort. As such, their moderated SAC score is 96%.
• Rank #3 gets the third best exam score as their moderated SAC score. In this case, Student C achieved an 89% on the exam, which just happened to be the third best score of their cohort. As such, their moderated SAC score is 89%.
• Rank #4 gets the fourth best exam score as their moderated SAC score. In this case, Student D achieved a 99% on the exam. However, as this was the best exam score of their cohort, it actually goes to rank #1. Student D gets allocated 85% as their moderated SAC score, as this was the fourth best exam score.
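Under this simplified “rank k gets the k-th best exam mark” picture, the reassignment looks something like the sketch below. Students E to J and their exam marks are made-up filler so the cohort has ten members, as in the example; again, this is an illustration of the idea, not the real calculation.

```python
# Simplified illustration: the student ranked k-th on SACs is assigned the k-th
# best exam mark in the cohort as their moderated SAC score. Exam marks
# themselves are untouched. Students E to J are filler values.

def moderated_sac_scores(exams_in_rank_order):
    """exams_in_rank_order: {student: exam mark}, listed in SAC-rank order (rank #1 first)."""
    best_to_worst = sorted(exams_in_rank_order.values(), reverse=True)
    return dict(zip(exams_in_rank_order, best_to_worst))

exams_in_rank_order = {
    "Student A": 52, "Student B": 96, "Student C": 89, "Student D": 99,  # from the example
    "Student E": 85, "Student F": 70, "Student G": 65, "Student H": 60,  # filler values
    "Student I": 55, "Student J": 40,                                    # filler values
}
print(moderated_sac_scores(exams_in_rank_order))
# Student A (rank #1) gets 99, B gets 96, C gets 89, D gets 85 - matching the walkthrough.
```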
As you can see, having a high ranking can be a pretty big advantage. If you’re stuck down near the bottom of your cohort, your moderated SAC score is, at least to a degree, a little contingent on exam performance of those ranked above you.
Statistical moderation does not change the rank order of students, as determined by the school’s coursework scores. A student given the top score for coursework by his/her school will have the top score after statistical moderation, no matter how they perform on the examination(s).
[15]
Now, once all of this has been done to find your raw study scores, they get scaled according to competitiveness.
Subject scaling
As we know, VCAA is looking for a fair system, and a big consideration in that is how to compare vastly different subjects. For example, how can you really compare Visual Communication & Design, and Chemistry? Very difficult.
The idea is that the raw scores you receive don’t actually take into account how difficult it was to achieve a mark in the middle of the cohort in various subjects. For clarity – and this is something VTAC also stresses – by “difficulty”, I don’t mean how inherently difficult the subject is, or how hard the content is. After all, those are extremely subjective measures. Instead, “difficulty” here is measured by competitiveness in the subject.
All Study Scores are scaled to adjust for the fact that it is more difficult to obtain a high Study Score in some studies than others. This is NOT because some studies are inherently harder than others. This IS because some studies attract a more competitive cohort of students than others.
[16]
This is the reason that subject scaling changes a little from year to year: not because VTAC is arbitrarily changing its mind on how difficult subjects are, but because competitiveness in each subject varies from cohort to cohort. There is nothing inherent about Specialist Maths that means it scales up, and it could just as easily scale down if the cohort were right. Similarly, Health & Human Development could scale up if the cohort were right.
Now, how is “competitiveness” measured? Basically, by how well students in a particular subject do in their other subjects. This might seem a bit counter-intuitive, but let me explain it in this way.
Say you’re going to compete in two different 100-metre races. In the first, you’re up against a bunch of slow, old, overweight non-athletes. In the second, you’re up against Usain Bolt, Tyson Gay, Yohan Blake, Asafa Powell, and a bunch of other professional sprinters. Neither of these races is inherently more difficult – they’re both simply a 100-metre running race. But the competition is vastly different, thereby making it much easier to achieve a mid-place ranking in the first race than the second.
The competition, here, is determined by how well the competition does in their other races. So, Bolt, Gay, Blake, Powell etc. are, on average, going to do very, very well in their races; the non-athletes from the first race, probably not so much. The same sort of principle holds with the VCE system.
So, according to the 2017 scaling report, English Language scaled from a raw 30 to a scaled 33. To get to this point:
• We take all of the English Language students studying the subject that year.
• We look at how those students have performed in all of their other subjects.
• We take whatever the EngLang students averaged in all of their other subjects, and that becomes the new average for English Language.
• So, in this case, it was determined that the English Language cohort performed above the average by about three points, which is why the subject scaled up by three.
Equally, we could look at, say, Geography, which scaled from a raw 30 to a scaled 29.
• What we can extrapolate from this is that Geography students, on average, scored about a 29 across their other subjects.
• And the average study score for Geography, therefore, has been scaled down by one to account for this below-average competition.
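As a very rough sketch of that averaging idea – the real VTAC scaling adjusts the whole distribution of study scores, not just the mean, so all names and numbers here are illustrative only:

```python
# Very rough sketch of the competitiveness idea: shift a subject's mean study
# score toward the average that its students achieved in their *other* subjects.
# Real VTAC scaling adjusts the whole distribution, not just the mean.

def scaled_mean(results_by_student, subject):
    """results_by_student: {student: {subject: raw study score}}."""
    other_scores = [
        score
        for scores in results_by_student.values() if subject in scores
        for other_subject, score in scores.items() if other_subject != subject
    ]
    return sum(other_scores) / len(other_scores)

# Tiny made-up cohort: the "English Language" students do well in their other
# subjects, so the subject's mean is nudged up (raw mean 30 -> scaled mean 33).
results = {
    "student_1": {"English Language": 33, "Chemistry": 36, "Maths Methods": 34},
    "student_2": {"English Language": 27, "Chemistry": 32, "Maths Methods": 30},
    "student_3": {"English Language": 30, "Biology": 35, "History": 31},
}
print(scaled_mean(results, "English Language"))  # 33.0 with these made-up numbers
```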
Some people see subject scaling as rewarding or punishing those taking certain subjects, but it’s simply not the case. All VTAC is doing is levelling the playing field. Without this scaling, those studying subjects where it’s more competitive and, therefore, more difficult to achieve a middle ranking, would be unfairly treated.
With this in mind, the best thing you can possibly do is to choose subjects you a) need as pre-requisites for your university course; b) are passionate about and enjoy; and c) are good at. There is no way to “game” the VCE system in this way.