AI-generated Practice Content
Goals
The creation of high-quality educational content, particularly practice questions and assessments, requires a significant time investment for lecturers in higher education. Traditional approaches require educators to manually craft questions that not only cover course material comprehensively but also address different cognitive levels of learning. This process becomes increasingly challenging as class sizes grow and course content evolves.
Large Language Models (LLMs) have emerged as powerful tools for automating educational content generation, offering the potential to significantly reduce the workload on educators while maintaining pedagogical quality. These models can generate diverse question types (including single choice, multiple choice, kprim, free text, and numerical response questions), ranging from basic recall to complex analytical problems and aligned with established educational frameworks such as Bloom's taxonomy. Research shows that LLM-generated questions can achieve quality comparable to manually crafted ones, with some metrics even indicating potential improvements in areas such as content coverage and learning objective alignment.
The integration of AI-powered content generation into educational platforms enables a more systematic and efficient approach to creating learning materials: questions are embedded directly into platforms familiar to students, eliminating the need for export and import processes.
Furthermore, students benefit significantly from AI-powered content generation. The system provides questions of increasing difficulty, allowing learners to progressively develop their understanding from basic concepts to complex applications. Varied question formats and comprehensive coverage of course materials keep students engaged while ensuring no critical topics are missed in their learning journey.
Looking ahead, this technology could potentially be made available directly to students, empowering them to generate their own practice materials based on specific topics they want to review or areas where they need additional reinforcement. Such a self-directed approach to content generation would further enhance the personalized learning experience while maintaining pedagogical quality through structured question generation aligned with educational frameworks.
Background
AI technologies, particularly generative AI, have the potential to improve content creation in higher education. Generative AI can produce diverse and immersive educational content, facilitating the development of interactive learning materials that cater to various learning styles and preferences. By leveraging AI, educators can create personalized quizzes and practice questions that align with individual student needs, thereby enhancing engagement and retention (Kadaruddin, 2023; Murtaza et al., 2022).
However, the use of AI in generating educational content is not without its challenges. Concerns regarding the accuracy and reliability of AI-generated materials are prevalent in the literature. For instance, biases inherent in AI models can lead to the production of misleading or inappropriate content, raising questions about the use of such technologies in educational settings (Alrayes, 2024; Alasadi & Baiz, 2023). Additionally, the ethical implications of AI-generated content, including issues of plagiarism and academic integrity, are critical concerns for educators and institutions (Alasadi & Baiz, 2023; Kanont, 2024).
As AI technologies become more sophisticated, the risk of students relying on AI-generated materials without proper attribution or understanding increases. It is therefore essential for educational institutions to establish clear policies and guidelines regarding the use of AI-generated content to mitigate these risks (Jose, 2024). Furthermore, the lack of transparency in AI algorithms can hinder educators' ability to assess the quality of the generated content (Kanont, 2024), which makes guidelines and best practices for the ethical use of AI in content generation all the more important.
In addition to ethical concerns, the acceptance of AI technologies by students and educators plays a significant role in their successful implementation. Research indicates that factors such as perceived usefulness, ease of use, and trust in AI systems influence students' willingness to engage with AI-generated content (Kanont, 2024). Understanding these factors can help educators design AI tools that are more likely to be embraced by learners, ultimately enhancing the effectiveness of AI in educational contexts.
Moreover, the integration of AI in content generation can facilitate personalized learning experiences, which are increasingly recognized as essential for student success. AI systems can analyze student performance data to tailor content delivery, ensuring that learners receive materials that match their proficiency levels and learning goals (Murtaza et al., 2022; Roshanaei, 2023). This personalized approach not only fosters greater engagement but also supports diverse learning pathways, accommodating students with varying backgrounds and abilities (Jian, 2023).
Collaborating with AI technologies allows educators to enhance their teaching practices while keeping education human-centered. AI can support tasks like creating practice quizzes aligned with course objectives, freeing instructors to focus on facilitating deeper learning experiences. This partnership positions educators as learning facilitators, promoting a more interactive and engaging educational environment.
Scenario Description with KlickerUZH
By leveraging AI-driven content generation in KlickerUZH, lecturers can develop diverse educational materials. This setup allows lecturers to efficiently generate questions and learning materials while retaining full control over the final content.
1. Content Upload and Processing
As a lecturer, you begin by uploading your teaching materials (PDF lecture scripts, slides, or other documents) to KlickerUZH. The system uses text segmentation algorithms to maintain the hierarchical structure and coherence of your content, preserving context and relationships between topics for more effective question generation.
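KlickerUZH's segmentation pipeline is not public, but the idea of hierarchy-preserving segmentation can be illustrated with a minimal sketch: split extracted lecture text on its headings and attach the full heading path to every chunk, so that each segment carries its context into question generation. All names below are hypothetical.

```python
import re
from dataclasses import dataclass

@dataclass
class Segment:
    """A chunk of course material with its heading path preserved."""
    heading_path: list[str]  # e.g. ["Chapter 2", "Supply and Demand"]
    text: str

def segment_by_headings(raw: str) -> list[Segment]:
    """Split extracted lecture text on markdown-style headings,
    keeping the full heading hierarchy for each resulting chunk."""
    segments: list[Segment] = []
    path: list[str] = []
    buffer: list[str] = []

    def flush() -> None:
        body = "\n".join(buffer).strip()
        if body:
            segments.append(Segment(heading_path=list(path), text=body))
        buffer.clear()

    for line in raw.splitlines():
        match = re.match(r"^(#{1,6})\s+(.*)", line)
        if match:
            flush()
            level = len(match.group(1))
            # Truncate the path to the parent level, then append this heading.
            del path[level - 1:]
            path.append(match.group(2).strip())
        else:
            buffer.append(line)
    flush()
    return segments

if __name__ == "__main__":
    sample = "# Microeconomics\n## Supply and Demand\nPrice is determined..."
    for seg in segment_by_headings(sample):
        print(" > ".join(seg.heading_path), "|", seg.text[:40])
```

Keeping the heading path lets later prompts reference where in the course a chunk belongs, which helps the model generate questions that respect topic boundaries.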
2. AI-Powered Content Analysis
The system analyzes the provided materials to create a comprehensive topic overview and extract key knowledge points. This analysis helps identify learning objectives and suggests appropriate question types for different content segments. You can review this analysis and adjust the focus areas or learning objectives as needed, ensuring alignment with your course goals. You can also further parametrize the generation to, e.g., focus on questions of a specific type or format.
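As a rough illustration of how such an analysis step might be prompted, the sketch below asks an LLM for a topic overview with knowledge points, learning objectives, and suggested question types per segment. It assumes an OpenAI-compatible chat API; the prompt, model name, and JSON shape are illustrative assumptions, not the actual KlickerUZH implementation.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANALYSIS_PROMPT = (
    "You are an instructional designer. Analyze the course material below.\n"
    "Return JSON of the form:\n"
    '{"topics": [{"title": ..., "knowledge_points": [...],\n'
    '             "learning_objective": ..., "suggested_question_types": [...]}]}\n'
    "Allowed question types: SC, MC, KPRIM, FREE_TEXT, NUMERICAL, FLASHCARD.\n\n"
    "Material:\n"
)

def analyze_material(material: str) -> dict:
    """Ask the model for a topic overview with knowledge points and
    suggested question types per content segment."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": ANALYSIS_PROMPT + material}],
        response_format={"type": "json_object"},  # force valid JSON output
    )
    return json.loads(response.choices[0].message.content)
```

The returned structure is what a lecturer would then review and adjust, e.g. by removing topics or changing the suggested question types, before any questions are generated.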
3. Question Generation and Selection
Based on the content analysis, KlickerUZH generates various question types (Single Choice, Multiple Choice, Kprim, Free Text, Numerical Response, Flashcards, and Content Elements) that align with different levels of Bloom's taxonomy. The system ensures balanced coverage across cognitive levels while maintaining pedagogical effectiveness. You can review, edit, or reject suggested questions, and your feedback helps improve future generations.
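The exact data model used by KlickerUZH is not shown here; the sketch below illustrates one plausible shape for a generated question, plus a prompt builder that fixes the question type and Bloom level per content segment. All field names and labels are assumptions.

```python
from dataclasses import dataclass

# Bloom levels the generator can target (illustrative labels).
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

@dataclass
class GeneratedQuestion:
    """Illustrative shape of a generated question."""
    question_type: str   # "SC", "MC", "KPRIM", "FREE_TEXT", "NUMERICAL"
    bloom_level: str     # one of BLOOM_LEVELS
    stem: str            # the question text
    choices: list[str]   # empty for FREE_TEXT / NUMERICAL
    correct: list[int]   # indices into choices
    explanation: str     # feedback shown after answering

def build_generation_prompt(segment_text: str, question_type: str,
                            bloom_level: str, n: int = 3) -> str:
    """Compose a prompt that asks for n questions of a fixed type and
    Bloom level, grounded strictly in the given material."""
    assert bloom_level in BLOOM_LEVELS
    return (
        f"Generate {n} {question_type} questions at the Bloom level "
        f"'{bloom_level}' based ONLY on the material below. For each "
        "question return the stem, the answer choices, the indices of "
        "the correct choices, and a short explanation.\n\n"
        f"Material:\n{segment_text}"
    )
```

Fixing the type and Bloom level in the prompt, rather than leaving both open, is one way to obtain the balanced coverage across cognitive levels described above.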
4. Quality Assurance and Integration
Generated questions undergo automated quality checks for relevance, fluency, and answerability. You maintain full editorial control, with the ability to modify questions or generate alternatives as needed. The approved questions can be directly used in learning activities, which integrate seamlessly with your course structure in KlickerUZH or your learning management system.
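How these automated checks are implemented internally is not documented here; as an illustration, even simple structural heuristics can catch many defective drafts before human review. The rules and the example question below are hypothetical.

```python
def quality_issues(question: dict) -> list[str]:
    """Cheap structural checks run before any human review;
    hypothetical rules, not the actual KlickerUZH checks."""
    issues: list[str] = []
    stem = question.get("stem", "").strip()
    choices = question.get("choices", [])
    correct = question.get("correct", [])

    if len(stem) < 15:
        issues.append("stem too short to be answerable on its own")
    if question.get("question_type") in ("SC", "MC", "KPRIM"):
        if len(choices) < 2:
            issues.append("needs at least two answer choices")
        if len(set(choices)) != len(choices):
            issues.append("duplicate answer choices")
        if not correct or any(i >= len(choices) for i in correct):
            issues.append("correct-answer indices out of range")
        if question.get("question_type") == "SC" and len(correct) != 1:
            issues.append("single choice must have exactly one correct answer")
    if not question.get("explanation"):
        issues.append("missing explanation / feedback")
    return issues

# Example: a draft that fails several checks.
draft = {"question_type": "SC", "stem": "Price?", "choices": ["up", "up"],
         "correct": [0, 1], "explanation": "Demand shifts."}
print(quality_issues(draft))
```

Checks like relevance and fluency are harder to capture with heuristics and would typically require an additional LLM-as-judge pass or the lecturer's manual review.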
Our Learnings
In collaboration with the Department of Informatics (IFI) and a student team working on their master's project, we are currently exploring the potential of generating learning materials with AI-based approaches directly in KlickerUZH.
To systematically validate and further extend these findings, we will conduct comprehensive pilot studies during the spring term of 2025. Should you be interested in participating, please fill out the form at https://forms.office.com/e/K8CXM2pKhJ so that we may contact you. The results of the piloting will be evaluated and summarized as part of this use case.
Our initial assessment of this use case has also provided several significant insights and preliminary learnings regarding the general use of AI that are relevant for lecturers implementing AI use cases. Information about the associated challenges, limitations, and remediation strategies for IT can be found here.
Some of our most important preliminary findings include:
- Automating question generation through course structure understanding: Question generation works well when asking for a specific topic and/or question type based on given material. However, to achieve significant gains in efficiency and to improve content coverage, an approach that further automates this step is required. The AI system needs to grasp the overall learning goals and structure of a course/domain (based on, e.g., a lecture script) and derive practice material in a balanced way, making sure that all the content is covered by appropriate questions and question types. This essentially results in a two-stage process: the first stage is purely about understanding the domain and planning the didactical approach, while the second stage generates content as specified by that plan. The process can be facilitated by using a dedicated prompt for each stage, by providing good examples for both stages, and by using reasoning models (a minimal sketch follows this list).
- Addressing challenges in generating higher-order and difficulty-specific questions: While question generation using prompting strategies works well (especially for foundational material), parametrizing for, e.g., a specific target difficulty can prove more challenging due to the model's limited reasoning about this parameter. Creating questions on a higher level of Bloom's taxonomy that require networked thinking can therefore become a challenge. This could be improved by using reasoning models or by giving models additional examples (i.e., few-shot prompting) of what would be classified at which level of Bloom's taxonomy or difficulty.
- Critically analyzing AI-generated questions: While AI excels at generating diverse questions efficiently, it remains crucial for lecturers to manually review each question to ensure that it aligns with the course objectives and makes sense within the educational context. This critical analysis by educators ensures that the AI-generated content meets the required quality standards and effectively supports the learning process.
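To make the two-stage idea from the first point concrete, here is a minimal sketch: one prompt (ideally sent to a reasoning model) plans the didactic approach for the whole script, and a second prompt generates questions according to that plan, with few-shot Bloom-level examples as suggested in the second point. Model names, prompts, and the single-call plan handling are illustrative assumptions, not the KlickerUZH implementation.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()

PLANNING_PROMPT = (
    "Stage 1 - didactic planning. Read the lecture script below and return "
    "a plan: for each topic, the learning goal, a target Bloom level, a "
    "question type (SC, MC, KPRIM, FREE_TEXT, NUMERICAL), and how many "
    "questions to generate, so that all content is covered in a balanced way.\n\n"
)

# Few-shot anchors showing the model what the Bloom levels mean here.
BLOOM_EXAMPLES = (
    "Example (remember): 'Which curve shifts when input prices rise?'\n"
    "Example (analyze): 'Given the data below, explain why the equilibrium "
    "price fell even though demand increased.'\n\n"
)

GENERATION_PROMPT = (
    "Stage 2 - content generation. Follow the plan below exactly: generate "
    "the requested number of questions of the given type and Bloom level, "
    "grounded only in the provided material.\n" + BLOOM_EXAMPLES
)

def two_stage_generation(script: str) -> str:
    """Stage 1 plans the didactic approach for the whole course;
    stage 2 generates questions according to that plan."""
    plan = client.chat.completions.create(
        model="o3-mini",  # a reasoning model helps with the planning stage
        messages=[{"role": "user", "content": PLANNING_PROMPT + script}],
    ).choices[0].message.content

    # A production system would iterate over plan entries one by one;
    # for brevity this sketch passes the whole plan in a single call.
    questions = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   GENERATION_PROMPT + plan + "\n\nMaterial:\n" + script}],
    ).choices[0].message.content
    return questions
```

Separating planning from generation keeps each prompt focused, and the plan itself gives the lecturer a compact artifact to review before any questions are produced.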