Let's do a classic word association test: what is the first thing that comes to mind when you hear the term "artificial intelligence"? All too often in an academic environment, the spontaneous response is "cheating". And vice versa: artificial intelligence is what comes to mind for far too many people when they hear the word "cheating". How did this happen?
Cheating is still cheating when AI is involved. But responsible use of AI is not cheating.
At the University of Turku, we keep statistics on cases of misconduct at the University level, based on the faculties' reports. In 2024, a total of 40 cases of misconduct were reported, more than half of which were classic cases of cheating in an exam, with no AI involved. Six of the cases concerning unsupervised written work, such as theses, involved the use of AI in violation of the guidelines. There are no statistically significant differences between faculties.
The Board of the University of Turku recently decided on a policy of zero tolerance for indisputable and purposeful cases of misconduct. If a student commits this kind of clear misconduct, the penalty is always the most severe one known to university law: temporary suspension from the University, i.e. revoking the right to study for a few months or even up to a year. The Board decides on the suspension. While both teachers and students have mostly welcomed the tightened policy, it has also created uncertainty and even fear: can misconduct be committed accidentally?
Show your true competence
What is essential in the policy is that the misconduct is purposeful and indisputable. If the situation is unclear, suspension will not be imposed lightly; instead, the matter will be carefully investigated. Only an indisputably proven purposeful act is punishable. All the cases that have led to the suspension of study rights have involved classic, systematic cheating in an examination. In the current academic year, disciplinary action has been taken in one case of misconduct, and it did not involve the use of artificial intelligence. So, you still cannot commit purposeful misconduct by accident.
At the University, we have taken the approach that the definition of misconduct does not need to be changed because of the expanding use of artificial intelligence. In the assessment of competence, a student commits misconduct when they claim to have competence that is not actually their own. The situation is very similar to plagiarism. What matters is not whether the thesis is produced by a human or a machine, but whether it is produced by the student themselves or by someone else.
Prohibited plagiarism becomes a permitted citation when it is properly referenced. The same applies to the use of artificial intelligence. The starting point is clear: the use of AI in learning is allowed, and its use must be reported. Especially now, as new technology is being adopted, it is important to be as open and transparent as possible about how AI has been used in the thesis writing process. However, there are no established practices yet, and we are living in a time of expanding use and early experimentation. The University's common guidelines will be updated as we gain experience and insight.
Study for learning, not for credits
More important than formulating and following guidelines is improving everyone's understanding of and competence in the responsible use of AI across the University. It is essential to remember that the primary objective of studying is to learn, not to pass a course and collect credits. The aim of evaluation, in turn, is a fair assessment of what has been learned, not merely the approval or rejection of an attainment. I encourage students to use AI in their studies in responsible ways that they feel support their own learning.
Universities have freedom of teaching, and the teacher responsible for a course can encourage, instruct or restrict the use of AI in different ways. The aim of this kind of pedagogical guidance is to improve and enhance learning. The use of AI on a course can also be prohibited altogether when this is appropriate for achieving the learning objectives; the prohibition must be duly justified. Mutual trust is key to the interaction between teacher and student. Let our zero tolerance for purposeful misconduct provide a foundation for ethical conduct and the equal treatment of students, creating a climate that encourages the responsible use of AI.
I encourage students to actively seek their own individual ways of using AI to support their learning. Different methods suit different people and different situations. It is also worth sharing good practices, as this improves learning for the whole community; bad experiences are worth sharing too. In the bigger picture, this is not only about the success of the individual, but also about that of the University and the nation as a whole, and even about the future of Europe. It is worth remembering that AI does not take people's jobs; rather, it seems that people who use AI insightfully take the jobs of people who resist change.
Responsibility for evaluating learning cannot be outsourced
The AI revolution is shaking up not just students but also teachers. It has been reassuring to notice that, once again, we are facing a great change together as a community. AI offers a wide range of new tools for teaching, and also for the evaluation of learning outcomes. Some teachers are quick to embrace new technology and are excited to try out new methods, while others shy away from the very idea. I encourage teachers to experiment with AI in their work in a creative and open-minded way, but also to be tolerant of those who prefer old, tried-and-tested methods. Creativity is hard to force, and diversity can be a winning strategy for the community here as well.
Caution is particularly important when assessing academic performance: it must be remembered that, as an employee of the University, the teacher exercises public authority over the students. The teacher(s) responsible for the course always hold the authority and responsibility for the evaluation of coursework and the related decision-making. This responsibility cannot be outsourced, and AI cannot be the decision-maker. With this in mind, I encourage teachers to adopt new ways of incorporating AI into their teaching and, with the reservations mentioned above, to let it assist in evaluation as well.
The challenges that AI brings to evaluation are not always new. In group work-based learning, assessing individual competence on the basis of the group's output is a challenge familiar to teachers. In fact, the use of AI can often be compared to teamwork: if a thesis is produced in collaboration between the student and AI, the challenges of assessing the contribution and skills of the parties are similar to those of traditional teamwork. In addition to the final result, it can be helpful to examine the process through version history, a learning diary or discussions with the student.
In fact, the use of AI in the evaluation of theses is not entirely new either: plagiarism detection software has been used for years. The idea is that the (AI) software shows the teacher the points in the thesis that should be checked when making the decision on approving it. The software itself therefore does not make the decision on prohibited plagiarism.
Specific AI software has also been developed to identify the use of generative AI. Its use is not forbidden either, and in some situations it can be helpful in the same way as plagiarism detection: by focusing the teacher's attention on the relevant issues to be taken into account in the decision-making. However, such detectors raise problems precisely from the point of view of responsibility in decision-making: AI must not be allowed to make the decision even on whether AI was used in a thesis. In many situations, common sense may well trump artificial intelligence. The values of the University and the support of the community will carry us through this transition.
To ensure that the use of AI is developed in the best possible way to support all work at our University – teaching and learning, research and administration – we have developed the University's own generative AI service, chat.ai.utu.fi. The service was opened to staff and doctoral researchers in the summer, with the intention of expanding its use to the whole community in a controlled way in the near future. The service makes it possible to use several commercial AI models as easily, safely and cost-efficiently as possible.
In this darkening autumn, I sincerely wish our entire University community inspiring experiments with AI.
Tapio Salakoski
The author is the Vice Rector of the University, responsible for education and artificial intelligence.