Assessing Students in HE
Assessment is integral to good learning and teaching, and effective assessment and moderation is a key foundation of the confidence that students, staff, the University and the community have in our learning and teaching processes and outcomes.
The following information will guide and assist HE teaching staff in designing, implementing and moderating assessments at CDU.
The design and selection of assessment tasks needs to include consideration of CDU policies, the approved accreditation documentation for the course, and the learning outcomes of the course and units of study.
CDU has developed key policies that inform how assessments are designed and implemented in HE courses and units of study, to ensure they are fair and equitable, valid and reliable, and engage students while also improving their learning skills and outcomes. The following CDU policies are relevant throughout the assessment cycle, or at specific stages within it:
- Academic Assessment and Moderation Policy (.pdf)
- Academic and Scientific Misconduct Policy (.pdf)
- Higher Education Assessment Procedures (.pdf)
- Grading Policy (.pdf)
- Students - Academic Grievance Procedures (.pdf)
Course accreditation requirements
All HE units of study that are accredited for delivery have had the format, weighting, and learning outcomes for each assessment task reviewed and approved by various teaching and learning committees during the Course Accreditation and Re-accreditation Process. Ensure you refer to the unit’s approved accreditation documents when developing new assessment items or re-designing existing items (contact the Course Coordinator in your school for this information).
The design and selection of assessment tasks needs to ensure that students’ progress towards achieving the unit learning outcomes can be measured and verified, which is often referred to as curriculum alignment. Several approaches can be taken to ensure assessment tools are valid and correctly assess the established learning outcomes.
Example: The link below shows how a final exam can be constructed to ensure it covers the unit learning outcomes, distributes questions fairly across the unit’s topics, and allocates marks that reflect the required effort and complexity of the subject material. This approach also makes it easier to see whether adjustments are needed to the content, weighting or spread of the exam questions.
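For coordinators comfortable with a little scripting, this kind of blueprint check can also be automated. The sketch below is illustrative only: the questions, marks, outcome codes (ULO1-ULO4) and exam total are all hypothetical, not drawn from any CDU unit.

```python
# Hypothetical exam blueprint: each question maps to one or more
# unit learning outcomes (ULOs) and carries a mark allocation.
questions = [
    {"q": 1, "outcomes": {"ULO1"}, "marks": 10},
    {"q": 2, "outcomes": {"ULO2", "ULO3"}, "marks": 15},
    {"q": 3, "outcomes": {"ULO3"}, "marks": 15},
    {"q": 4, "outcomes": {"ULO4"}, "marks": 10},
]
unit_outcomes = {"ULO1", "ULO2", "ULO3", "ULO4"}
exam_total = 50

# Which outcomes are examined, and do the marks add up?
covered = set().union(*(q["outcomes"] for q in questions))
missing = unit_outcomes - covered  # outcomes no question assesses
allocated = sum(q["marks"] for q in questions)

print("Outcomes not examined:", sorted(missing))        # prints: Outcomes not examined: []
print("Marks allocated:", allocated, "of", exam_total)  # prints: Marks allocated: 50 of 50
```

A non-empty `missing` list, or an `allocated` total that differs from the exam total, signals that the blueprint needs adjusting before the paper is finalised.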
If you would like assistance with evaluating your unit to ensure it meets all of the above design considerations, contact the HE Developer assigned to your School.
Assessment in higher education is a focus for much study and innovation.
When designing a specific assessment task, consider what you are trying to achieve. What do you expect the student to know after completing the task, and is the primary purpose knowledge acquisition, skill acquisition, synthesis of knowledge, or creation of a product? It is important to ensure that the task leads to the achievement of the unit’s learning outcomes. It is also important to consider assessment tasks that engage students, and this can be achieved by setting tasks that students are likely to value because of the output as well as its obvious links to employability and professional skills.
One strategy to engage students in productive learning is to ‘incorporate authenticity’. This means framing the task or activity so that students can see the intrinsic value of the output; for example, a poster displayed to peers or in a shopping centre to convey knowledge, a work of art for a competition, or a patient diagnosis. Such outputs are more likely to engage students than ones they are less likely to value, such as a scientific report or a multiple-choice test. Allowing students to practise skills they recognise as important to employment in their field increases their motivation and performance on these tasks (Mueller, 2005, p. 5).
Another strategy is to incorporate the principle of ‘authentic assessment’. Gulikers, Bastiaens, and Kirschner (2004, p.69) define authentic assessment as:
An assessment requiring students to use the same competencies, or combinations of knowledge, skills, and attitudes that they need to apply in the criterion situation in professional life.
To learn more about authentic assessment and how to innovate your assessment, you may be interested in the following resources:
- How do you create authentic assessments? - North Central College, IL, 2014
- OLT resource library on Authentic Assessment
- Student Assessment for learning in and after courses (pdf) - University of Technology, Sydney, 2010
There is a range of methods and formats that can be used to assess student learning. The table below lists common types of assessment tasks. This list is not exhaustive, but may spark some ideas; if you would like to discuss them, contact the HE Developer assigned to your School.
Types of assessment
| Type of assessment task | Methods/formats for assessment |
| --- | --- |
| Written task | Essay, journal article (for publication), short answer questions, short stories, poetry, short responses, newspaper articles, legal submissions |
| Practical competency | Simulations, case studies, practice logs, demonstrations, role plays |
| Tests | Final exams, external exams, online quizzes, mobile-compatible tests, self-assessment, invigilated exams, take-home exams, review questions at the start of lectures or tutorials |
| Reflection | Journals, discussion boards, blogs, images/pictures, paintings, poems, posters, musical compositions, diary entries |
| Presentations | In-class presentation, recording, multimedia submission, voice tools, video, online classroom (powered by Blackboard Collaborate), symposium, performance |
- Does the assessment address the relevant learning outcomes (as identified in the accreditation documents)?
- Does the assessment format (test, essay, presentation) align with the accreditation document?
- Is the assigned task clear and well defined?
- Are the resources the students will require accessible and available (held in the library, available on e-reserve, access to specific equipment, computer programs etc)?
- Will students be able to see value or relevance in completing this task?
- Have you defined the success criteria?
Assessment with Large Groups
When you are preparing to teach large groups of students it is particularly important to plan and design the assessment carefully. This can help to avoid the five assessment challenges often faced with large student groups (Centre for the Study of Higher Education, 2002):
- Avoiding assessment that encourages shallow learning
- Providing high quality, individual feedback
- Fairly assessing a diverse mix of students
- Managing the volume of marking and coordinating staff
- Avoiding plagiarism.
The following video shows how some lecturers at CDU are managing these assessment challenges.
Text transcript (.docx)
Example assessment tasks
Maher, S. (2013). SBI245 Biochemistry Assessment 2: Magazine assignment. Darwin: CDU (.docx 267KB)
To ensure students are able to perform optimally on assessment tasks it is critical to clearly state your expectations for an item. This can be done in a variety of ways:
- Provide a rationale for the evaluation criteria you use
- Include students in the development and evaluation of the assessment tools (or at least allow them to comment and suggest changes)
- Align evaluation tools to current school, university, and professional (national or international) standards
- Use exemplar submissions at high, middle and low levels to frame discussions with students on quality and expectations.
Rubrics, marking grids, and assessment matrices make it easier for students to see the alignment of the assessment task to one or more unit learning outcomes. They also ensure consistency between student and staff expectations, improve reliability between different markers, and remove some of the subjectivity from assignment marking, as each level of achievement is clearly defined.
A marking schema or rubric of pre-established criteria against which each assessment task is marked should be provided to both students and markers. The marking schema or rubric should note the relative importance of the criteria and should allow for the identification of areas done well and areas that need improvement. It should also consider English language proficiency requirements such as those recommended in the English Language Proficiency Framework (.pdf 1.83MB) (requires CDU login).
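To illustrate how a weighted marking schema converts criterion-level judgements into a mark, here is a minimal sketch. The criteria, weights and levels are hypothetical examples, not CDU's:

```python
# A hypothetical four-level rubric: each criterion has a weight (relative
# importance) and the level awarded by the marker (0 = not met ... 3 = excellent).
rubric = {
    "argument":   {"weight": 0.4, "level": 3},
    "evidence":   {"weight": 0.3, "level": 2},
    "structure":  {"weight": 0.2, "level": 3},
    "expression": {"weight": 0.1, "level": 2},
}
max_level = 3

# Weights should cover the whole task.
assert abs(sum(c["weight"] for c in rubric.values()) - 1.0) < 1e-9

# Weighted mark out of 100, reflecting the relative importance of criteria.
mark = 100 * sum(c["weight"] * c["level"] / max_level for c in rubric.values())
print(round(mark, 1))  # prints 86.7
```

Making the weighting explicit like this also gives students a transparent view of where their marks come from.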
Students should be awarded marks and/or grades that reflect the extent to which the required learning outcomes are achieved, together with individualised and substantive comments based on the criteria. Marks should not be deducted for incorrect or missing answers, and a negative mark cannot be awarded for an assessment task.
In the examples section you will see a collection of assessment tasks with associated evaluation tools and lecturer comments. If you would like to learn more about rubrics, complete the self-paced Professional Development unit Introducing Rubrics, or contact the HE Developer assigned to your School.
- Have you identified the key elements of the assessment task?
- Do you have an idea of what the model answer will look like?
- Have you clearly communicated expectations and marking schemes to the student?
- Is the marking scheme weighted to account for the relative importance of different elements?
- Will the marking scheme be easy for you and your markers to use (ordered in an appropriate manner, clear definition of success criteria, no or very limited interpretation required)?
Once an assessment item has been submitted, it needs to be marked (this includes providing feedback), moderated, and returned to the student.
For the student, feedback is one of the most critical elements of assessment, as it identifies areas where they can grow and provides ideas about how to improve. The quality of feedback (what comments are made, how they are phrased, and which areas are identified as strengths and weaknesses) allows the student to reflect and improve.
Feedback can include quantitative and/or qualitative information that provides a clear indication of the student’s progress towards meeting the unit’s learning outcomes. The information can be quantified in the form of marks or grades for assessment tasks, and/or in qualitative form such as comments, model answers, or suggestions for further reading.
Feedback is most effective when it is timely: provided soon after students complete the work, while their interest in their performance is still high. It is also important that students have sufficient time to act on the feedback, for example by undertaking further learning or seeking further assistance (Gibbs & Simpson, 2005, p. 17).
The CDU Higher Education Assessment Procedures (.pdf) establish recommendations on the timing and format of feedback provided by academic staff to students. The excerpt below identifies some of these:
Assessment feedback is essential to assist students to achieve desired learning outcomes. Feedback on assessment tasks will normally be provided to students within two (2) to three (3) weeks of submission and must be:
- Informative, constructive, timely
- Provided throughout the learning process
- Fair, justifiable, reasonable, and
- In keeping with acceptable workload expectations for both students and staff members.
When developing assessment tasks, staff members should consider both staff and student workloads to ensure that assessment due dates and turn-around times provide students with sufficient time to use the feedback in order to prepare for the next assessment.
Wherever practicable, each unit will have an assessment task which is weighted less than 25% of the final mark due within the first four (4) weeks of semester in order to provide students with early feedback and guidance.
What is good feedback?
Nicol and Macfarlane-Dick (2006) have developed an evidence-based framework for good feedback, and argue that it should:
- Clarify what good performance is (goals, criteria, expected standards)
- Facilitate the development of self-assessment (reflection) in learning
- Deliver high quality information to students about their learning
- Encourage teacher and peer dialogue around learning
- Encourage positive motivational beliefs and self-esteem
- Provide opportunities to close the gap between current and desired performance
- Provide information to teachers that can be used to help shape the teaching
How can I provide feedback?
Substantive comments that are encouraging and supportive of a student’s efforts and provide detailed individual feedback will enable a student to improve his or her work. These comments may include identifying areas that require further study and any other strategies that may assist the student in the learning process.
Generic comments provided to groups of students may be useful, but would not normally substitute for substantive individualised feedback particularly during the early stages of a course. Individualised feedback should give sufficient detail explaining not only what needs improvement but also why and how it can be improved. For example, 'Compare and contrast this example to the literature' is a more instructive comment than 'Issues not sufficiently analysed'.
Holistic feedback addresses overall content structure and formatting, and gives students higher level interpretation and feedback. However, this type of feedback can be difficult for some students to interpret and address:
"Use complex sentence structures to increase the interest in your writing."
Some students will be able to read this and immediately identify how this can be applied to their writing. Other students will see this comment and not understand what it means – there is no example of a complex structure provided and no direct reference to their assignment where it would help.
Some students require specific error identification to assist them to improve:
“Aaccommodation in line 3 should be accommodation. Proofreading your assignments and using the spell check will help ensure these types of mistakes are corrected before you submit assignments in the future.”
This comment addresses a very specific error and suggests how to resolve and avoid it in the future. Depending upon individual student need, either type of feedback might be appropriate or required. For other students, simply highlighting the word will convey that there is an error, which they should be able to identify and resolve without further explanation.
Discussion is also an important aspect of providing feedback to students, and this can be face-to-face, by telephone or other agreed upon methods. Learnline can be used to allocate and notify students of times when feedback can be discussed.
Example: Learnline itself can also be a tool to provide individualised feedback to students. Ensure that students are able to find the information by providing instructions on where and how to access feedback:
- Have you allowed yourself enough time to review assessments and provide feedback before the next assessment is due?
- Have you provided specific examples or strategies of how to overcome any identified problems?
- If a student is frequently misusing colons, provide one example of how they could be used more appropriately.
- If a student is struggling with their referencing, you could link to the library guide or let them know when the next referencing session will be held.
- Have you identified both the strong and weak elements of an assessment task? For example:
- "Good effort, but most of your evaluative points are about the positive value of this research. Can you make any critical points about the limitations of this research? For example have things moved on since new technologies were implemented?"
- Well-written assessment items often receive less feedback because there are no obvious areas for improvement. These students should still be provided with feedback, for example highlighting areas where they have provided a critical evaluation of the literature.
Tips from experts
If there is a team of markers for an assessment item, ensure there are clear guidelines on the marking criteria, expected return times, and the volume and quality of feedback.
Limiting feedback to one format (or avenue) can improve turnaround times without disadvantaging the student; if feedback is spread across multiple places, students may not find and reflect on it all. If using multiple markers, ensure clear expectations, training and support are provided; consider cross-marking and establishing a shared comment bank.
Do as much online as possible to mitigate the risk of losing or missing assignments.
Be reasonable in your time allocations: 10 minutes of active marking may still require 5 minutes of administrative handling (printing, scanning, uploading, data entry). For longer assignments, 30-60 minutes per script may be more realistic.
Evaluation of feedback types
Feedback does not need to be handwritten notes on a page; it can make the most of new technologies and reflect the task and mode of assessment. The methods and mechanisms for providing feedback are almost as varied as the types of assessment possible.
| Feedback location | Pros | Cons | Useful for/alternative |
| --- | --- | --- | --- |
| Within a Word document | Easy to read; targeted comments. | File management; download/upload/data entry time. | Try inline marking. |
| Marking up a printed copy | Can do anywhere. | Returning feedback to online students (requires posting, or scanning and uploading); managing paper/filing upon completion; environmental concerns when printing large numbers of assignments; comments may not be legible. | Hard copy submissions. |
| Within a rubric | Comments linked to sections of the assessment; typed, easy to read; automatic entry into grade book (Learnline). | Students may have difficulty finding the feedback. | Useful for large cohorts (ensure you provide assistance to find feedback). |
| Within the comments field at the submission point | Typed, easy to read; easy for students to find and access. | Comments may be presented out of context (students have difficulty identifying why a comment was made). | Good for holistic comments; risk of providing less specific feedback. |
| Audio recording | Can provide more dynamic feedback, emphasising particular points; builds rapport with students; can demonstrate the failings of punctuation. | Students may not listen to the file; cannot use a comment bank to populate; long comments produce very large files. | Good for holistic and higher-level comments; can be effective for written and non-written assessment items. |
| Automated responses | Instant, specific feedback to all students (if set up when the test bank is created). | Unable to customise for individual students; ineffective for essay responses or complex short answers. | Great for MCQ or single-word questions; not good for essays. |
| Comment bank | Prevents typing the same advice to multiple students; ensures consistency of response; can be built up from year to year if the same assessment item is used; allows more detailed advice because you are not typing it out each time. | Comments may be inappropriate for a specific situation; if comments are not carefully crafted they may provide misleading information. | Great for common problems with longer written assignments (essays, reports etc.). |
- I have identified potential markers (before semester starts)
- I have confirmed contracts and availability for marking periods (week 1-2)
- I have identified how I will provide feedback
- I have provided students with instructions on how to access feedback
- I have informed students about when feedback will be available
- I have allocated an achievable workload to markers
Experienced markers may be able to evaluate and provide feedback in 10-15 minutes for simple assessment items, and 20-30 minutes per assessment for longer or more complex tasks.
- I have provided clear guidance to my markers
- Marking assistants and students know when results will be available
Keep in touch with markers to confirm they are able to complete the tasks assigned in the timeframe established.
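A quick arithmetic check can confirm that the workload allocated to markers is achievable within your feedback timeframe. The figures below are hypothetical; substitute your own cohort size and per-script times:

```python
# Hypothetical cohort: estimate total marking hours so workloads and
# return deadlines (e.g. feedback within 2-3 weeks) stay realistic.
students = 120
minutes_per_script = 20          # mid-range for a moderately complex task
markers = 3
hours_per_marker_per_week = 6

total_hours = students * minutes_per_script / 60
weeks_needed = total_hours / (markers * hours_per_marker_per_week)
print(f"{total_hours:.0f} hours total, ~{weeks_needed:.1f} weeks with {markers} markers")
# prints: 40 hours total, ~2.2 weeks with 3 markers
```

If the estimated weeks exceed your promised feedback window, either recruit more markers, simplify the marking scheme, or adjust the return date before semester starts.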
There are a variety of moderation methods, and the most appropriate will depend upon the situation and local School procedures. Moderation can occur for a single element of an assessment, or it can be reflective to improve the item for future cohorts. Occasionally there may be a need to undertake whole class moderation, and this will be discussed and approved at the examination meeting for your school. If you would like to learn more about moderation and marking consider taking the Marking & Moderation PD unit.
The University of Southern Queensland brochure provides information on the different types of moderation (.pdf) which should occur for each assessment item.
Monitoring and evaluation of assessment tasks is important to ensure they are meeting their objective. Monitoring can include reflecting upon: student performance on tasks, student attitudes, or the relative ease or difficulty in marking/providing feedback.
A review can be informal and include making notes on: how you feel the assessment went, student performance, timing, or critical-reflections made during marking. Reviews can also be more structured, with an evaluation of student achievement on an assessment task that looks at performance distributions to identify anomalies.
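One simple structured check is to compare per-question cohort means and flag outliers for review. The sketch below uses hypothetical data and a rough 1.5-standard-deviation threshold; the flagged questions are candidates for inspection (ambiguous wording, content not taught), not automatic evidence of a problem:

```python
from statistics import mean, stdev

# Hypothetical per-question cohort means, as a fraction of available marks.
question_means = {"Q1": 0.78, "Q2": 0.74, "Q3": 0.31, "Q4": 0.70, "Q5": 0.76}

overall = mean(question_means.values())
spread = stdev(question_means.values())

# Flag questions whose cohort mean sits well away from the rest.
anomalies = [q for q, m in question_means.items()
             if abs(m - overall) > 1.5 * spread]
print(anomalies)  # prints ['Q3']
```

Here Q3's mean is far below the other questions, so it would be worth reviewing its wording and the related teaching material before the item is reused.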
Exam Timetabling Information Collector (eTIC)
Where a unit requires a centrally administered exam, academic staff (usually the unit coordinator) will receive an email request in the first half of each semester (usually week 3 or 4) to update the examination requirements of their course in eTIC. This will enable the examination to be included in the examination timetable, and the information collected (examination length, reading time, and resources required) will be used to arrange examination centres for external students, and to ensure that sufficient booklets, seats, tables, invigilators and exam sessions are available.
Note: You will only be required to update eTIC if you have a centrally administered final exam for your unit approved on the current accreditation documents.
Exam production process
The unit coordinator is responsible for submitting the exam paper to the Academic Liaison Unit (ALU) by the submission deadline (usually week 6 or 7 of semester). It must adhere to the required style guide and quality checklist provided by ALU. Exam papers must be approved by the unit coordinator and a peer, as well as the Head of School.
ALU completes a final format check, ensures all relevant documents are attached, and arranges printing and distribution to the exam locations (local, national and international).
While the unit coordinator is responsible for submitting the exam, it is recommended that the content be produced by the academic staff responsible for teaching the unit. It is good practice to have the exam checked by a colleague who is knowledgeable in the field, to determine whether the tasks are relevant to the material delivered and reasonable as an assessment at an appropriate level of understanding and complexity, and to confirm that all resources or information required to complete the exam (such as appropriate formulae) are included. If you are using machine-readable MCQ answer sheets, ensure you attach these to the exam paper.
Preparing your marking guide while you are producing your exam can help to ensure the mark allocation per question is accurate.
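If the marking guide is kept in a structured form, the per-question totals can be cross-checked against the marks stated on the paper. A small sketch with hypothetical figures (Q2 contains a deliberate mismatch to show what the check catches):

```python
# Hypothetical marking guide: marks for each sub-part of each question.
marking_guide = {
    "Q1": {"a": 4, "b": 6},
    "Q2": {"a": 5, "b": 5, "c": 5},
    "Q3": {"a": 10},
}
# Totals as printed on the exam paper (Q2 is deliberately wrong here).
stated_totals = {"Q1": 10, "Q2": 16, "Q3": 10}

# Any question where the guide's sub-part marks disagree with the paper.
mismatches = [q for q, parts in marking_guide.items()
              if sum(parts.values()) != stated_totals[q]]
print(mismatches)  # prints ['Q2']
```

Catching a discrepancy like this before the paper goes to ALU is far easier than correcting marks after the exam has been sat.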
Note: Exam papers (even in draft form) should not be sent as an emailed attachment.
Unit coordinators or their proxies are expected to attend at least the reading time of any examination. It is advisable to take your ID with you when entering the exam room as the invigilators will not necessarily know who you are and may actively prevent you from entering the room ahead of time. During this time students may ask questions and seek clarification on material included in the exam. At the completion of the examination session, the students’ scripts are available for collection from the lead invigilator. Any external papers will be received by the External Examinations Officer, who will contact academic staff with collection instructions.
At CDU the majority of HE units are delivered online, with much of this occurring within Learnline. New technologies overall, and particularly the technologies embedded within Learnline, have the potential to transform the assessment processes. The following table provides examples of how technology can be incorporated into different types of assessment activities and purposes:
| Purpose of assessment | Tools to facilitate learning |
| --- | --- |
| Collaboration and group work | Open Collaborate rooms, wiki, journal, blogs, file exchange, Skype, email |
| Large student cohort | Student response systems (clickers), interactive sections based on small groups, comment banks for written assessments, inline grading, self and peer review, group tasks, groups with different deadlines or assessment tasks |
| Presentations and oral tasks | Collaborate recordings or live sessions, voice-over PowerPoints, narrated animations, role-plays, movies, skits, formal presentations, debates |
| Diverse student groups | Study skills, orientation website, campus tour, support, flexibility, alternative formats |
The Inline grading tool within Learnline enables assessment items to be submitted electronically by students, and for the teacher to edit and add feedback, and record marks in the Grade Centre, before returning it to the individual student.
The Inline grading tool can convert, display, and annotate Word, PowerPoint, Excel, and PDF file formats within a web browser, making it possible to mark assessments without downloading or scanning the document. The original and the edited documents will remain available to both the student and the lecturer.
For items which need to be marked offline, there is the capacity to bulk download assessment items, which attaches a unique identifier (the student number) to each submission. When marking offline you can use the Grade Centre in Learnline to keep track of items that have been marked. It can be useful to have nested folders, one containing unmarked documents and the other containing documents with feedback, to minimise the chance of missing an assignment or marking the same assignment twice.
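The nested-folder approach can even be checked with a short script. This sketch assumes the two-folder layout described above and that submissions are .docx files; folder names and filenames are hypothetical:

```python
from pathlib import Path

def marking_status(unmarked_dir, marked_dir):
    """Compare the 'unmarked' and 'marked' folders so no bulk-downloaded
    submission is missed or marked twice."""
    unmarked = {p.name for p in Path(unmarked_dir).glob("*.docx")}
    marked = {p.name for p in Path(marked_dir).glob("*.docx")}
    return {
        "still_to_mark": sorted(unmarked - marked),
        # A file in both folders was marked but not removed from 'unmarked',
        # so it risks being marked a second time.
        "marked_twice_risk": sorted(unmarked & marked),
    }
```

Running `marking_status("unmarked", "marked")` at the end of each marking session gives a quick inventory of what remains to be done.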
For more information on online marking or strategies for marking assignments contact the HE Developer assigned to your School.
The CDU process
Callista Grade entry
Final student results should not be released via Learnline. Performance in individual tasks may be returned this way, but any results released through Learnline are not final and are subject to change. HE grade entry is done in Callista. If you do not have access to Callista or the grade entry module, make a request through eCentre and contact the Callista Team for training.
Grades for each item should be confirmed by the academic, and a letter grade for overall achievement awarded to each student in accordance with the CDU Grading Policy. It is advisable to have a colleague check your spreadsheet for any formula errors before commencing data entry.
Working with a colleague, enter a letter grade against each student. If there is no result available (e.g. a delay in an exam paper arriving), leave the grade blank. If an extension has been granted, enter Assessment Continues (AC).
To check the grades entered, the results for each unit can be downloaded through Area 52, or a final candidates list can be prepared.
School Examiners meeting
All grades are moderated at the School Examiners, Faculty Examiners and University Examinations or Academic Board levels prior to being finalised and released to students.
All student assessment results should be finalised and entered into the system prior to the School Examiners meeting. Unit coordinators are required to prepare a report on the grades awarded and submit it for review at the examiners meeting.
After each stage, grades may need to be adjusted; updated results are processed using a change of grade form.
Once the moderation process has been completed and student grades have been finalised and approved, they are released to students.
Biggs, J. (2008). Aligning teaching for constructing learning. Retrieved from http://www.heacademy.ac.uk/assets/documents/resources/database/id477_aligning_teaching_for_constructing_learning.pdf
Blackboard 9.1 Tool Guide. (n.d.). Retrieved from http://www.ipfw.edu/dotAsset/d4dda0a2-cd6b-4d51-9df5-d8784543025e.pdf
Boud, D. (2010). Student assessment for learning in and after courses. Final report for Senior Fellowship. Retrieved from http://www.olt.gov.au/system/files/resources/Boud%20D%20UTS%20Fellowship%20Report%202010.pdf
Centre for the Study of Higher Education. (2002). Assessing large classes. Australian Universities Teaching Committee. Retrieved from http://melbourne-cshe.unimelb.edu.au/__data/assets/pdf_file/0011/1770707/Large.pdf
Centre for University Teaching. (2014). IRU e-assessment forum (videos), Flinders University
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3-31.
Gulikers, J., Bastiaens, T., & Kirschner, P. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67-85.
Masters, G. (2013). Reforming educational assessment: Imperatives, principles, and challenges. In Australian Education Review (No. 57). Australian Council for Educational Research. Retrieved from http://research.acer.edu.au/cgi/viewcontent.cgi?article=1021&context=aer
Mueller, J. (2005). The authentic assessment toolbox: Enhancing student learning through online faculty development. Journal of Online Learning and Teaching, 1(1). Retrieved from http://jolt.merlot.org/vol1_no1_mueller.htm
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Nicol, D. J., & Milligan, C. (2006). Rethinking technology-supported assessment practices in relation to the seven principles of good feedback practice. In G. Gibbs, C. Bryan and K. Clegg (Eds.), Innovative Assessment in Higher Education (pp. 110-119). London: Routledge.
Race, P. (2010). Learning through feedback. In Making Learning Happen: A guide for Post-Compulsory Education (2nd ed., pp. 95-112). Thousand Oaks, Calif.: SAGE
The University of Queensland (2011). A UQ Assessment Brief on "Assessment of large groups". Brief No. 19 March 2011. Retrieved from http://www.uq.edu.au/tediteach/assessment/docs/brief-19-mar2011.pdf
Vanderbilt University (2014). Writing good multiple choice test questions. Retrieved December 16, 2014, from http://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/#higher
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2(2). Retrieved from http://PAREonline.net/getvn.asp?v=2&n=2