A comparison of students' responses to automated and manual computer literacy assessments.
The aim of this research was to determine the differences in students' responses to two forms of assessment, automated and manual, in measuring student capability in the computer literacy programme, the International Computer Drivers Licence. Computer literacy studies are an integral part of many academic programmes and have become a basic requirement for securing certain employment. Many academic programmes utilise recognised computer literacy qualifications rather than developing their own. This case study focuses on assessment within one of the most prestigious of these programmes, the International Computer Drivers Licence (ICDL), a qualification that has become a benchmark for computer literacy certification. Formal assessments must be completed to gain the certification. The certifying body, the ICDL Foundation, allows institutions to select between two modes of assessment: paper-based 'manual' (traditional) assessments, or approved automated assessment software that is commercially available from different software suppliers. Manual assessments are supplied by the ICDL Foundation and conducted by external examiners, whilst the automated assessments are designed by software companies and approved by the ICDL Foundation. This case study compares students' responses to an automated assessment, which uses simulations of major software packages such as Microsoft Word and Excel, and a manual assessment. The focus of the study was to gain insight into students' experience of taking the automated assessment and how it compares with a manual assessment. A group of volunteer students was asked to take two assessments on a particular section of computer literacy: first the automated assessment, followed by a manual assessment that covered the same outcomes.
During these assessments certain phenomena were observed and recorded. These observations were then qualitatively analysed and organised into themes. Scores on the two assessments were also compared to establish whether students showed marked differences between them; however, the small sample size meant that no conclusions could be drawn on the basis of statistical differences. Immediately after the two forms of assessment, six of the students were interviewed using semi-structured questions. The questions revolved around the students' perceptions of their responses to the automated and manual assessments, and in particular how they perceived each assessment. The transcriptions of these interviews were then qualitatively analysed and common themes were extracted. The results of the study show that students' abilities were not always assessed accurately by the automated assessment. The data also show that the automated assessment, whilst highly reliable and objective, does not present an authentic assessment environment. This resulted in high scores being awarded to students who were unable to perform the same tasks successfully in the manual assessment, which calls into question the validity of the automated assessment and its ability to assess students' practical skills accurately. The interview data further suggest that the use of multiple-choice questions and discrete tasks in the automated assessment led students to adopt a surface approach to learning in their preparation for this summative assessment.