Students tend to rely on AI rather than learn from it, study finds

A recent study examined how students respond to AI-powered assistance during peer feedback activities and how that assistance affects their ability to self-regulate their learning. The researchers found that while AI tools improved the quality of students’ feedback, students tended to rely on these systems rather than learn from them. When AI assistance was removed, students’ feedback quality declined unless it was supported by self-regulation strategies such as checklists. The research was published in Computers & Education.

Recent years have seen rapid development of artificial intelligence (AI) technologies and their swift integration into many aspects of life and work. It can be argued that the emergence of AI is profoundly changing how people obtain information, how they work, and how they live.

One of the areas particularly affected by AI is education. AI-powered education technologies (AI-EdTech) are increasingly used to automate and support learning by suggesting personalized learning resources, providing feedback on students’ work, reminding students of upcoming deadlines, and even tailoring instruction to the needs of individual students.

However, an open question is whether AI systems will support students’ ability to organize and regulate their own learning, or whether students will instead become dependent on AI, weakening their own self-regulation.

Study author Ali Darvishi and his colleagues set out to investigate the effects of AI assistance on student agency in peer feedback. They wanted to know whether AI, employed to monitor the quality of peer feedback, helps students develop the ability to provide effective feedback on their own, without relying on AI assistance.

To conduct the study, the researchers used RiPPLE, an adaptive educational system in which students create and evaluate learning resources. Each student-created resource goes through a review process conducted by other students, whom the system assigns to the task. Reviewers rate the resource on several characteristics, provide feedback to the author, and indicate how confident they are in their ratings. If an instructor is available, the system routes resources that could benefit from expert review to the instructor, and these expert reviews are final.
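The article does not describe RiPPLE’s internal design, but the workflow it outlines can be sketched roughly as follows. All names, fields, and the confidence-based routing rule below are hypothetical stand-ins, not the system’s actual implementation:

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical model of the peer-review workflow described above.
# RiPPLE's real data structures and routing rules are not given in the article.

@dataclass
class Review:
    reviewer_id: str
    ratings: dict[str, int]    # e.g. {"accuracy": 4, "clarity": 3}
    comment: str               # written feedback to the resource's author
    confidence: float          # reviewer's self-reported confidence, 0.0-1.0

@dataclass
class Resource:
    author_id: str
    reviews: list[Review] = field(default_factory=list)

def needs_expert_review(resource: Resource, confidence_floor: float = 0.5) -> bool:
    """Stand-in for the allocation step: route a resource to an instructor
    (whose review is final) when peer reviewers report low average confidence."""
    if not resource.reviews:
        return True
    return mean(r.confidence for r in resource.reviews) < confidence_floor
```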

Recently, AI tools have been added to this system to evaluate and enhance the quality of the feedback students provide. These AI tools analyze the feedback for important characteristics, and if the feedback is lacking, they provide prompts to encourage students to improve it.
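The article does not say which characteristics the AI checks or how. As a minimal sketch of the idea of a quality gate that prompts reviewers to improve weak feedback, one might use simple heuristics; the length and genericness checks below are invented for illustration and are far cruder than the models such a system would actually use:

```python
def check_feedback(comment: str, min_words: int = 10) -> list[str]:
    """Return prompts nudging a reviewer to improve a weak draft comment.

    Placeholder heuristics only: the real system reportedly uses AI models,
    and both checks below are invented for this sketch.
    """
    prompts = []
    words = comment.split()
    if len(words) < min_words:
        prompts.append("Could you explain your ratings in more detail?")
    generic = {"good", "nice", "great", "ok", "fine", "bad"}
    if words and all(w.lower().strip(".,!") in generic for w in words):
        prompts.append("Try pointing to something specific in the resource.")
    return prompts

print(check_feedback("nice"))  # both prompts fire: too short and too generic
print(check_feedback("The second example contradicts the definition introduced in step one."))  # 10 words -> []
```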

The study participants were students enrolled in 10 courses that used RiPPLE during the second semester of 2020, spanning a range of disciplines. The experiment lasted eight weeks. In the first four weeks, all students received AI prompts while writing feedback. In the second four weeks, the researchers randomly divided them into four groups. One group continued to receive AI assistance as before. The second group did not receive any AI prompts. In the third group, AI prompts were replaced with a self-monitoring checklist and a set of guidelines in the peer review interface. The fourth group had both the AI prompts and the self-monitoring checklist in their peer review interface.
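The four groups form a 2 × 2 design: AI prompts on or off, crossed with the self-monitoring checklist on or off. Here is a sketch of how students might be randomly assigned to such conditions; the paper’s actual randomization procedure (for instance, any stratification by course) is not described in this article:

```python
import random

# 2 x 2 design: AI prompts (on/off) crossed with self-monitoring checklist (on/off).
CONDITIONS = [
    {"ai_prompts": True,  "checklist": False},  # AI assistance as before
    {"ai_prompts": False, "checklist": False},  # no support
    {"ai_prompts": False, "checklist": True},   # checklist replaces AI prompts
    {"ai_prompts": True,  "checklist": True},   # hybrid: prompts plus checklist
]

def assign_conditions(student_ids: list[str], seed: int = 0) -> dict[str, dict]:
    """Randomly split students into the four groups in roughly equal numbers.
    Illustrative only; the paper's actual procedure is not described here."""
    rng = random.Random(seed)
    shuffled = student_ids[:]
    rng.shuffle(shuffled)
    return {sid: CONDITIONS[i % 4] for i, sid in enumerate(shuffled)}
```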

During the second four weeks, 1,625 students submitted 11,243 peer reviews on 3,573 resources across the 10 courses. The researchers tracked various indicators of review quality, such as whether the review was flagged as requiring revision, how similar it was to students’ previous comments, how relevant it was to the reviewed material, its length, how long the student took to write it, and how many other reviewers liked the review, indicating that they found it helpful.
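The article lists these indicators without saying how they were computed. As illustrative stand-ins, two of them, review length and similarity to a student’s previous comments, could be approximated as simply as this (the paper’s actual measures are presumably more sophisticated):

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard overlap: a crude proxy for how similar a new
    comment is to earlier text (the paper's actual metric is not given here)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def self_similarity(new_comment: str, previous: list[str]) -> float:
    """Highest overlap between a new comment and any of the student's
    earlier comments; values near 1.0 suggest recycled feedback."""
    return max((jaccard_similarity(new_comment, p) for p in previous), default=0.0)

def review_length(comment: str) -> int:
    """Review length in words, another of the tracked indicators."""
    return len(comment.split())
```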

The results showed that, in the second four weeks, students who worked without AI assistance produced much poorer reviews than those who worked with AI. Their reviews were flagged more often, were more similar to their own previous comments, and were less relevant to the reviewed materials. The reviews were also shorter, although the time spent writing them was not significantly less than the time invested by the AI-assisted group.

The group working with self-monitoring checklists instead of AI assistance produced similarly poor reviews; their feedback was worse than that of the group that kept AI assistance. Interestingly, the group that had access to both AI assistance and the self-monitoring checklist did not show a significant improvement in review quality compared to the group using AI assistance alone.

“Our study showed that the integration of AI in learning environments could impact students’ agency to take control of their own learning,” the study authors concluded. “Through a randomized controlled experiment, we found that while students can effectively self-regulate their learning with the aid of AI, removing this support would significantly change their performance.”

“While the hybrid human-AI approach in the SAI group [the group using both AI assistance and self-monitoring checklists] had the highest average performance among other groups, its improvement was not significant compared to the AI-only approach. These findings suggest that as AI becomes more prevalent in education, it is important to consider the role it plays in shaping student agency. There is a rising awareness of the hazards of outsourcing self-regulated learning to technology, which may impede students’ cognitive and metacognitive growth.”

The study sheds light on how AI assistance affects the way students review learning resources. However, it could be argued that writing peer reviews of learning resources may not fully represent students’ broader learning activities, and the results might differ if other types of learning tasks were examined.

The paper, “Impact of AI assistance on student agency,” was authored by Ali Darvishi, Hassan Khosravi, Shazia Sadiq, Dragan Gašević, and George Siemens.