Human-computer interaction (HCI) is the study of how humans interact with computers, with a particular focus on designing effective interfaces and technologies. One aspect of HCI that has gained prominence is the integration of large language models (LLMs), such as OpenAI’s GPT models, into educational settings, specifically undergraduate programming courses. These AI tools have the potential to transform how programming is taught and learned, but their adoption also raises questions about their impact on student learning, confidence, and career aspirations.
Traditionally, programming education has relied on lectures, textbooks, and interactive coding assignments. While some institutions have adopted simpler AI tools to assist with code generation and debugging, the use of full-scale LLMs is still relatively new. These models can support students’ learning by generating, debugging, and explaining code, so researchers need to understand how students adapt to them and how they affect learning outcomes and confidence.
A comprehensive study by the University of Michigan explored the social dynamics influencing the integration and adoption of LLMs in undergraduate programming courses. Using a mixed-methods approach that included surveys, interviews, and detailed data analysis, the researchers examined how student perceptions, peer influences, and career expectations shape LLM usage. One key finding was that students’ career expectations and their perceptions of peers’ usage heavily influenced their decision to use LLMs. In addition, self-reported early LLM usage correlated with lower confidence and lower midterm scores. Notably, it was the perception of over-reliance on LLMs, rather than actual usage, that was associated with decreased confidence later in the course.
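To make the quantitative side of such an analysis concrete, the sketch below shows how a correlation between self-reported early LLM usage and course outcomes might be computed. It is illustrative only: the column names, Likert scales, and data are hypothetical and do not come from the study’s actual dataset or analysis pipeline.

```python
# Minimal sketch (hypothetical data, not the study's pipeline) of correlating
# self-reported early LLM usage with midterm scores and confidence.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical survey responses: usage and confidence on 1-5 Likert scales,
# midterm scores out of 100.
df = pd.DataFrame({
    "early_llm_usage": [1, 4, 2, 5, 3, 1, 4, 2],
    "midterm_score":   [88, 72, 85, 65, 78, 91, 70, 83],
    "confidence":      [4, 2, 4, 1, 3, 5, 2, 4],
})

# A negative r here would mirror the reported pattern: higher early usage
# associated with lower scores and lower confidence.
for outcome in ["midterm_score", "confidence"]:
    r, p = pearsonr(df["early_llm_usage"], df[outcome])
    print(f"early_llm_usage vs. {outcome}: r = {r:.2f}, p = {p:.3f}")
```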
The research indicated that integrating LLMs into the programming curriculum had mixed results. While some students found that these AI tools helped them better comprehend complex coding concepts and error messages, others felt their self-confidence suffered: students who perceived themselves as over-dependent on LLMs reported lower confidence. This highlighted the importance of balancing the use of such tools so that they do not undermine the learning process.
In conclusion, integrating LLMs into an undergraduate programming course is a complex process, strongly shaped by social factors such as peer usage and career aspirations. While these models can significantly enhance learning experiences, over-reliance, or even the perception of it, can harm student confidence and performance. A balanced approach to using LLMs is therefore key, one that ensures students build strong foundational skills while still benefiting from AI tools. This calls for thoughtful integration strategies that account for both the technical capabilities of LLMs and the social context of their use in educational settings.