Integrating Generative AI-based assessment with Thesis Writer
As a proof of concept, this project develops a Generative AI-based text quality assessment tool for Thesis Writer (a ZHAW-based dissertation writing support platform). The approach uses prompt engineering to produce relevant assessments of texts written on the digital platform, at scale, via an API.
Description
The thesis writing process involves iterative text development, in which cycles of feedback, assessment, and revision are essential. For writing instructors, providing this support is challenging given their high workloads. Automated essay scoring (AES) tools can help, but because they require a large corpus of human-rated texts to train an algorithm on automatically extracted and processed text features (NLP), they come with their own limitations in cost and effort.
Developments in Generative AI have opened new possibilities for automated assessment. One approach is to fine-tune LLMs for specific purposes. Another, more flexible approach is to use prompt engineering to build an effective, cost-efficient system that provides the needed assessment and feedback. As a proof of concept, this project develops such a prompt-based assessment system designed to support thesis writing, using API integration and code-based use of generative AI to assess student texts at scale.
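To illustrate what such a prompt-based assessment pipeline might look like, the following minimal sketch (in Python) sends a student text to an LLM together with an assessment rubric. It uses the OpenAI Python SDK as a stand-in provider; the model name, rubric criteria, and function names are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of prompt-based text assessment via an LLM API.
# Assumptions: the OpenAI Python SDK is used as a stand-in provider;
# the model name, rubric, and function names are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC_PROMPT = (
    "You are a thesis-writing tutor. Assess the following draft section "
    "on (1) argument structure, (2) coherence, and (3) academic style. "
    "Return a rating (1-5) and one concrete suggestion per criterion."
)

def assess_text(student_text: str) -> str:
    """Send one student text to the LLM and return the assessment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": student_text},
        ],
        temperature=0,  # keep output as consistent as possible across texts
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(assess_text("In this chapter, I discuss the research gap ..."))
```

In an integration of this kind, the writing platform would call such a function for each submitted draft section and store the returned feedback alongside the text, allowing assessment at scale without a pre-trained scoring corpus.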
Key data
Project lead
Project team
Jakob Ott, Dr. Christian Rapp, Prof. Dr. Otto Kruse
Project status
Completed, 08/2024 - 07/2025
Institute/Centre
Institute of Language Competence (ILC)
Funding partner
ZHAW digital / Digital Futures Fund for Research
Project budget
20 CHF