About the project

Banner with the text "Student Perspectives on AI in Higher Education. AIinHE.org.", followed by the logos of Monash, UQ, Deakin and UTS.

Many studies are emerging on student use of Generative Artificial Intelligence (GenAI) technologies. Our project is unique because we are:

  • Talking to students about their everyday uses and perceptions.
  • Reaching out to a broad group of students from different subjects, disciplines and levels of study.
  • Collaborating across institutions. 
  • Taking a relational view, in which students’ beliefs, understandings, backgrounds and learning conditions form part of how we understand AI in use.

Let’s take a closer look at each of these. 

Talking to Students

Our approach to understanding student uses of, and perspectives on, GenAI means giving students a safe space to talk about their everyday uses, thoughts, beliefs and worries.

We believe it’s important to talk to students about their diverse motivations, ideas, and – importantly – the tensions in their minds about how, when and why to use AI in their learning. Our study has already shown that students don’t make choices about GenAI lightly. They take the risks and benefits seriously, and struggle with the challenge of weighing them against each other.

Our Breadth of Focus

Another thing that makes us unique is our broad focus. Many studies currently focus on nursing students, or law students, or some other subset of students distinguished by course, mode of enrolment, or the task in question. There is a lot of value in these projects (indeed, some of us within the research team are running discipline-specific projects). However, our study broadens the focus to all students within four Australian universities: online and in-person, postgraduate and undergraduate, young and old, across subjects and disciplines. The results we are finding are all the richer for this diversity. If you are a student over 18 and currently enrolled in one of our universities, you can take part in our study (we will be sending out links to our survey between August and September 2024).

Our Collaborative Foundation 

Our project is intentionally built on a broad collaborative foundation, bringing together expertise from four Australian universities. This collaboration allows us to draw on a wide knowledge base, including experts who have been studying AI since well before 2022 (when ChatGPT was released), enriching our research with diverse insights.

By working across institutions, we are also leveraging an innovative, shared funding model to foster evidence-informed understandings of student experiences with AI. This collective effort enables us to pool resources, share methodologies, and engage in meaningful dialogue, ensuring a comprehensive analysis of how AI is impacting higher education.

Our Relational View

We take our definition of AI from Bearman and Ajjawi (2023, p. 4): 

“We define AI according to a relational epistemology, where, in the context of a particular interaction, a computational artefact provides a judgement about an optimal course of action and that this judgement cannot be traced.” 

This means that any AI technology (e.g. ChatGPT, Claude, Research Rabbit) is always part of a bigger sociotechnical ensemble – a combination of multiple technologies engaged with in a social context (see Bijker, 1993). Rather than seeing a student’s use of an application in isolation, we see it as always entangled with other artefacts, behaviours, social arrangements and meanings (Johnson and Verdicchio, 2017).

Methods

We agreed that direct conversation with students is essential for getting a sense of their experiences and understandings of AI. We started with focus groups, running 20 sessions with 79 students across our four Australian universities. Participants were selected to represent a diverse range of disciplines, study levels, and demographic backgrounds. Each focus group was conducted online, enabling broader participation and ensuring a mix of voices from different campuses. The sessions were structured around key themes, including the use of AI platforms, student-AI interactions, trust and emotions, and the future implications of AI in education.

We employed thematic analysis following Braun and Clarke’s (2021) six-phase process to analyse the data. This method allowed us to identify patterns and themes within the focus group transcripts. Through discussions with the research team and reflexive engagement with the data, we developed themes, based on our relational understanding of AI, that we believed captured the diversity of perspectives shared by the students.

We are now well into our next phase: a large-scale survey exploring student use, perceived usefulness, AI literacy and trust in generative AI. To date, we have collected more than 7,000 responses across the four universities, representing one of the largest surveys of this kind in the higher education sector. Analysis will begin at the end of September.

Our Timeline

  • Quarter 1 2024: ethics and setup
  • Quarter 2 2024: focus groups and analysis
  • Quarter 3 2024: survey and analysis
  • Quarter 4 2024: reporting, publications and presentations
  • Quarter 1 2025: further analysis, publications and presentations
  • Quarter 2 2025: stay tuned…

We’ve still got a long way to go, but we’re seeing some fascinating insights from students so far.