Assessment innovation

In a wide-ranging review, Knight (2002) has argued that university assessment is in a “state of disarray” – typically failing in matters of validity, reliability, utility, accountability, and engagement – along with a range of other shortcomings. One matter overlooked by Knight is the shortcoming discussed in the present section: the vulnerability of assignments to undetected false ownership claims – in particular, submissions that arise from acts of social plagiarism. Being alert to this possibility makes it important that tutors reflect upon the ways in which they design, evaluate and administer assessment tasks.

While individual tutors do have this responsibility, some issues can only be addressed at an institutional level, because they concern the overarching culture of assessment management. Hodgkinson et al (2016) have recruited Crime Opportunity Theory to generate a range of institution-wide tactics for combating forms of academic misconduct around assessment. Their Table 1 identifies 25 techniques of ‘situational prevention’, structured around five overarching imperatives: increase the effort, increase the risks, reduce rewards, reduce provocations, and remove excuses. Five strategies are associated with each imperative, along with three possible examples of practice. These are valuable prescriptions for managing the context in which assessment is organised, yet they do rather dwell on detection and punishment. We are concerned here more with innovating the design of what students are actually asked to create through an assignment and, thereby, with how those tasks can be made more resistant to outsourcing.

Coursework vs. exams: constraining the task

Packed and silent examination rooms may still be the popular image of ‘assessment’. However, Richardson (2015) has charted how coursework has emerged in higher education to challenge that image. Coursework is generally preferred by students: grades tend to be higher, time pressure is absent, expression of ideas need not be through writing alone, problems of student anxiety or health may be reduced, and the protracted nature of coursework can stimulate students towards deeper reflection and analysis. The alternative – examinations – is said to entail only testing and not teaching: examinations impose unnatural time pressures, depend on (hand)writing, are a cause of anxiety, and are unfairly vulnerable to fluctuations in student health. Of course, it is the very openness of coursework that creates tensions around the legitimacy of authorship – while the closed-door nature of examinations seems to protect against this.

It is this last point that led a leading watcher of contract cheating (Lancaster, 2014) to argue for more widespread use of examination-based assessment. However, recent developments in personal digital devices for exam-based cheating may have prompted a reversal of this judgement (Lancaster and Clarke, 2017). Adams (2011) illustrates how these devices can readily penetrate the door of the examination room, commenting: “…the introduction of ubiquitous connectivity and wearable computing make a complete mockery of any concept of closed book exams”. Moreover, traditional examinations are also vulnerable to outsourcing in the form of candidate impersonation. This is a particularly severe problem for distance or e-learning courses, for which authentication is a challenge to all forms of assessment (Mellar et al, 2018). One project (Malesky et al, 2016) conducted a field study in which A-grades were achieved for an online course entirely on the basis of essay mill submissions and without instructor awareness. Indeed, the vulnerability of such courses to fraud is a problem that makes some governments resistant to online courses that do not include some face-to-face teaching and/or some traditional examination formats.

However, while online proctoring methods might seem a possibility for confronting accreditation scepticism – and those methods have been developed for online courses – Dawson (2016) illustrates the various ways in which they can be hacked by the more sophisticated candidate.

A considered approach to assignment setting

Whether for coursework or examination, the tutor setting an assignment must reflect on the required academic demands of the task, as well as the setting in which it is attempted. Hrasky and Kronenberg (2011) regret that, when ensuring assessment is managed with respect to academic integrity, academics’ effort in assignment task design can become secondary to the prevailing assumption of ‘student fault’ and, therefore, to a strong catch-and-punish management attitude. Nevertheless, they asked staff to identify the assessment designs that they supposed were most resistant to outsourcing. The tasks suggested were those that: (1) integrate theory/examples/experience, (2) avoid description only, (3) change from year to year, and (4) depart from standard essay formats, i.e., posters, wikis, weblogs etc. In a related study, Bretag et al (2019) questioned students on their understanding of what would define outsourcing-resistant assignments. Those perceived as least likely to prompt contract cheating were: in-class tasks, personalised and unique tasks, vivas, short-turnaround tasks, and reflections on practical placements. The designs most likely to encourage cheating were: heavily weighted assessment, tasks that assess research, analysis and thinking, those with short turnaround times, and tasks that integrate knowledge and skills core to a programme. These two sets of assumptions do not match closely.

In particular, it may seem curious that short-turnaround tasks are judged to be both vulnerable and resistant to contract cheating. This may reflect a mix of assumptions: that short notice would not appeal to contract writers and, mixed with this, that fast-approaching deadlines lead to desperate student measures. However, Wallace and Newton (2014) have demonstrated that setting short turnaround times is unlikely to help, because mystery shopping exercises show contract services to be skilled at fast response.

Otherwise, the advice above on good and poor assignment design may seem to be no more than unexceptional common sense. However, it may not be sense that is commonly acted upon by staff. David Tomar – who has become the articulate voice of professional assignment ghostwriters – makes the following observation about assignment-setting in a recent blogpost:

When I worked as a ghostwriter, lazy students helped me to make my living, but it was the lazy professors that made my life easier. The task of pretending to be a student in somebody’s class is greatly simplified when the professor takes no special steps to differentiate the course, its content, or its assignments from the many millions of other courses that have been taught on the same exact subject from time immemorial. In other words, if I am assigned the same five page paper asking the same three or four general questions about Plato’s Republic that I’ve been assigned and asked four dozen times in the past, I can presume the professor is taking as much care to grade the assignment as he or she did to write it.

In the remainder of this section we consider some alternatives that staff might explore in relation to assignment innovation.

Varying assignment media

It is a natural mistake to equate ‘coursework’ with ‘essay’ – perhaps because web-based companies marketing social plagiarism are popularly referred to as ‘essay mills’. Nevertheless, the essay is a popular genre for assessment. Does so much academic assessment need to be this way? After all, the hallowed status of the essay is sometimes challenged by academics. Moreover, essay mill rhetoric often proclaims writing is an easily outsourceable skill because “the importance of written work does not carry forward into the job scenario”. Yet, contrary to such views, critical and expository writing remains a key experience in attaining disciplinary identity (Lea and Street, 1998) and is likely to remain so.

Nevertheless, it is true that educational theory (and popular culture) is increasingly preoccupied with multimodal and multimedia expression. Thus, advice to combat essay mills often focuses on assignment tasks that demand working in non-standard media or formats (e.g., videography – Jorm et al, 2019). Sadly, this is not an easy way forward. Agents of assignment outsourcing are alert to these trends and are now quite able to provide authored materials that respond to them – for example, presentations, case studies, reflective practice, posters, lab reports, annotated bibliographies, financial analyses, source code, book reviews, mind maps, discussion board posts, and even poems.

Voice also counts as a medium for intellectual expression. Under most regimes of integrity management, a student suspected of social plagiarism would be summoned, and then questioned in ways that explored their understanding of what they had submitted. One might, therefore, ask whether such conversations could be made central to the original assessment. The student would be invited to enter into a live oral exposition, one that is potentially available for credit. For example, Sotiriadou et al (2019) describe a successful assignment partly designed in these terms for 93 online students of a Sports Management module. The final part was an assessed interview with the client for whom their main report was written. Such practices may still be unfamiliar to academics in an assessment context. However, Joughin (2010) has written a handbook of oral assessment which considers the logistics of this method in detail. As working examples, Iannone and Simpson (2012) illustrate how it can be carried out for mathematics assessment, Rawls et al (2015) demonstrate it in use for business studies, and Haque (2016) for medicine. Sinclair (2016) illustrates how the process can be achieved using audio with distance learners and, finally, Nash et al (2016) describe how students can be successfully inducted into confident oral practices that are assessed.

For many assignments, the incorporation of viva-type procedures may effectively diagnose depth of understanding as it applies to tasks that were submitted as expository writing. However, traditional conversational vivas may not be able to do this confirmation effectively for certain academic disciplines. For example, Simon (2016) reports the confusion that many students experience around legitimate ownership claims in disciplines such as Computer Science, Visual Arts and Music. Vivas may be more complex encounters in these contexts.

Assignments that engage

Many studies in which students are asked to reflect on circumstances that might discourage cheating find that engagement with an assignment task is an important protective factor. Unfortunately, ‘engagement’ is a term readily used but not so easily defined. A simple starting point might be to give students more choice in the assignments that are offered. Indeed, Patall and Leach (2015) find that this can reduce the incidence of cheating.

Beyond that, there must be a range of views as to what features make an assignment task ‘engaging’. However, this direction of practice has recently attracted some systematisation – through the phrase “authentic assessment”. Villarroel et al (2018) identify three features that make such an assignment feel engaging through its authenticity: realism (linking knowledge to everyday life), contextualisation (situations where knowledge can be applied in an analytical and thoughtful way), and problematisation (sensing that what is learned can solve a problem or meet a need). Evidently such a pattern may particularly appeal to those who seek a more employment-focussed orientation towards assessment; on the other hand, that very perspective will be contested by others. Examples of positive student responses to assessment managed in this way have been reported, although they tend to come from disciplines with a more vocational orientation. For instance, James and Casidy (2018) report a successful ‘authentic assignment’ project in Business Studies.

The integration of assignment activities

The term ‘integration’ is used generously here. First of all, it may refer to an integration that is organised between key actors in the assessment process – tutors, students, and peers. Consider the case of coordination among learning peers: earlier studies tended only to concentrate on the reliability with which students can grade the work of other students (e.g., Kearney et al, 2016). However, the responsibility to comment on the assignment drafts of a fellow student (as ‘critical friend’) is an activity that can be folded into the assessment process. If such pairings are reciprocal, credit can be attached to the insights that a critic brings to their review, but also to the way in which the author then responds to that review. Requiring that the draft, critique, and final piece are all submitted gives a rich collection for assessment, while its mutuality and interpersonal flavour could deter attempts at outsourcing.

However, ‘integration’ identifies a more general approach to assessment that takes seriously the process of authorship, rather than only the product. Assessment tutors will not normally have access to the research journey by which a student reaches the ‘destination’ of their final submission. There are several reasons why making that journey visible is useful. First, it will have a learning function. Students will be positioned to reflect more deeply, or meta-cognitively, about their own inquiry practices. Second, the resulting submission will allow the reader to more fully understand the final piece and thereby give richer feedback. The present situation allows Wrigley (2017) to comment:

…most of what students write is assessed: there seems precious little time to focus on the students as writers. Their development as authors is stymied by not having a chance to grow outside the write-submit-mark regime of university academic writing. In another context, this would surely be absurd: the professional footballer who only plays football matches without engaging in any other form of training and the learner driver who can only practise in a driving test.

Finally, and more relevant to integrity concerns discussed here: if the documentation of a student’s research progress is designed to be protracted, and if the activity is required to be time-stamped and made visible to a tutor, then the student will find outsourcing a process record of this kind much more difficult.

Such ‘logs’ of project progress have traditionally been described and championed by information scientists and academic librarians (e.g. review by Fluk, 2015). However, documenting the development of a student assignment in this way should also provide tutors with telling access to those wider cognitive processes that underpin students’ research strategies. While this process-orientation remains a rare assignment design choice, there are nevertheless some published examples. For instance, Walden and Peacock (2006) describe an assessment method they call “i-map” which requires students to make detailed reports of their research process. Students are encouraged to use strongly visual representations of their inquiry; to create “a working record of the way ideas have been developed and information gathered”. The authors report finding a rich diversity of such visual expression; but this richness may also point to the method’s problem (and perhaps why it is not yet widely adopted, at least in this visual format): inquiry journeys depicted visually can be hard for staff to evaluate in a consistent manner.

Winter (2003) describes a method termed ‘patchwork text’ in which an assignment is constructed over time from a sequence of small sections. Towards the end of the assessment period the student is required to write an integrating reflection – making what is submitted more a ‘pattern’ than a ‘collection’. Evaluations have been reported by Trevelyan and Wilson (2011) and Dalrymple and Smith (2008). A patchwork text may also be more difficult to outsource. However, assessment methods of this kind generate a trail of activity that may prove hard to oversee and time consuming to evaluate.

Portfolio as integration

The examples above refer to the integration of assessment activity within the space of a single assignment. However, integration can be pursued at a higher level: a whole sequence of assignments assembled by a student into a portfolio. The portfolio might be held in a digital service – such as Mahara or Pebblepad, both of which are popular within the higher education sector.

There are strong reasons for developing this practice and some of those reasons relate to how cases of social plagiarism are detected and confronted. While it is to be expected that students will develop intellectually across their university career, that development trajectory would normally be relatively gradual. Equally, there may be some continuity to be expected in their chosen style of expression for assignments. In a survey of over 1000 Australian university teachers, Harper et al (2018) report that: “almost 70% of teaching staff have suspected outsourced assignments at least once. The most common signals that prompted their suspicions were their knowledge of students’ academic and linguistic abilities”. This should encourage the development of methods that make the level of such performance visible for monitoring.

Therefore, a portfolio in which anomalous assessment outcomes are found (in either stylistic or grade terms) could be a signal requiring tutorial attention. In the best of worlds, such progress spikes (or dips) are important information for the diligent personal tutor who wishes to monitor a student and provide support where it might be needed. In the worst of worlds, spikes or dips could mean something more sinister relating to assignment ownership. Developing assessment portfolios should therefore be a priority for many reasons. This does not demand that portfolios are graded as a whole (although such practices are possible), but it would mean that a student could enjoy a more authoritative form of progress monitoring.

Conclusions on assignment innovation

This discussion has, regrettably, highlighted the versatility of contract outsourcing services in responding to new forms of assessment. Accordingly, Peytcheva-Forsyth (2019) has challenged any expectation that assignment design alone might be the solution to cases of lapsed assessment integrity. When students were asked about the likelihood of contract cheating across a wide variety of assessment tasks, all such tasks were judged to be potentially vulnerable. Indeed, research based upon scrutinising outsourced assignments (Ellis et al, 2019) reveals that essay mill sites are accomplished at meeting the demands of apparently innovative tasks.

The most promising forms of assignment innovation seem to be the following. (1) Those where students’ knowledge is grounded in live classroom activity. (2) Where knowledge is explored in the privacy of the viva conversation. (3) Practices that recruit critical student peers into a (documented) draft reviewing process. (4) Process trails, whereby the development of assignment inquiry is recorded and submitted along with the final product. (5) Portfolios of work, each element of which can be assessed in a traditional manner, but the assembling of which acts as a monitor on a student’s productive progress (or destructive regress).

Slade et al (2019) report on an academic workshop convened to address the issues discussed in this section. Their experience provides a sobering caution. Although the educators who attended the meetings were well aware of contract cheating, they were distressed by the idea of changing their assessment practice “to combat the sophistication and unbounded nature of the contract cheating services”. The biggest concern they still held was how to enact the solutions within a manageable workload.

Finally, a study by Bretag et al. (2019) reminds us not to get totally preoccupied with assignment design. They report that the more satisfied students were with the teaching and learning in their courses, the less likely they were to think that contract cheating would take place. Perhaps this simple truth needs to be set against a daunting struggle to make assignment design tamper-proof.