The Case for Authentic Assessment in Actuarial Education
An article by Dr David Pitt, Jim Farmer, John Evans and Adam Butt, published in AJAP (1:63–75) this year, focused on graduates’ views of the applicability of their university-taught skill set to the workforce.
It revealed that almost 50% of the students surveyed found their technical Part 1 studies only ‘a little bit useful’ or ‘not useful at all’ in their current employment. To many students this is quite a confronting fact: half our peers believe they have not gained a significant amount of applicable skills in three or four years of study.
Even accounting for the fact that many actuarial students do not enter ‘traditional’ actuarial roles, one would assume that most actuarial graduate roles involve some sort of analytical work that could utilise actuarial techniques. Either way, isn’t it an actuarial graduate’s analytical ability that makes them valuable to many employers?
The AJAP article lists several recommendations stemming from the research. One of the key recommendations is that the Part 1 syllabus should place a greater emphasis on improving non-technical skills, mainly communication. Though this addresses one problem, it is not a direct solution to many graduates’ perceived lack of applicable skills.
I propose that it is instead a lack of focus on the possible creative applications of the skills taught in the programme that leads many graduates to deem them not useful.
In 2009, a large-scale, two-year empirical study was launched by the Ministry of Education (MoE) in Singapore (Koh 2012), building the case for the implementation of authentic assessment in the Singaporean education system. The tagline for the initiative was ‘Teach Less, Learn More’.
In brief, while conventional pencil-and-paper tests focus on and reward the reproduction of factual and procedural knowledge in artificially constructed contexts, authentic assessment tasks emphasise knowledge construction, complex thinking and communication in real-world scenarios.
Many academics have noted that conventional assessments may not be the most effective method of assessing a student’s higher-order thinking, nor a clear indicator of a student’s capacity to perform workforce tasks (Resnick 1987).
Though the context of the study may not seem directly applicable, the insights gained from the study should definitely be noted.
Using control and target schools to test the implementation of the programme, the study found that teachers trained in authentic assessment methods produced significantly different student outcomes from those at comparison schools.
Control school students performed consistently better in only two areas: knowledge reproduction and presentation of given knowledge.
Target schools saw strong gains in students’ ability to understand advanced concepts, critique knowledge, apply problem solving to new situations and generate knowledge. These differences held across the board for mathematics, English and science examinations. Doesn’t this mirror the skill set for which actuaries are traditionally valued?
The heavy focus on final assessments in the Part 1 course, which do not vary significantly in content from year to year, may be a barrier to students discovering the possibilities of what they have learnt.
The fact that almost a quarter of graduates find their Part 1 studies ‘very useful’, yet on average 10% found them ‘not useful at all’, suggests that it may be an inability to see the different applications for their skill set which holds students back from realising its potential in any given work context.
It is also interesting to note that although 50% of students found their Part 1 studies of little or no use, only about 20% wrote that they believed there was any weakness in the course structure or content. If students believe the content is not practically applicable in their work context, yet see no weakness in the content provided, then it must be the inability to apply their learnt skills that is the issue.
Anderson and Krathwohl (2001) outline that there are three types of knowledge:
- procedural – rules, techniques etc.;
- factual – bits of information; and
- conceptual – an understanding of the interconnection between different types of knowledge.
Authentic assessment focuses on the final type, which allows students to critique and manipulate knowledge because of a systematic understanding of how it is formed.
For university actuarial education, this could mean two things. The first is a shift in focus from written examinations to workforce-related, project- and problem-solving-based assessment, something the Actuaries Institute is already implementing in its Part III examinations through the introduction of computer-based examinations. The second is a shift in teaching methods from mainly knowledge- and procedure-based tests to long-term problem-based learning in which academics work alongside students.
It seems reasonable that such changes could be incrementally introduced with gains being seen in graduate satisfaction at each stage of implementation.
Perhaps this will lead to graduates with a smaller knowledge base, but each will have improved confidence in their ability to manipulate the knowledge they have to suit any particular situation.
CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.