Generative AI Use Among Students in CPED Doctor of Education Programs

Doctoral research study by Lindsay O’Neill

How are Doctor of Education students really using AI?

Generative AI tools like ChatGPT are already embedded in doctoral education. Yet many institutions are still reacting to AI rather than proactively shaping how it is used.

My research explored how Doctor of Education (EdD) students across Carnegie Project on the Education Doctorate (CPED) institutions are using generative AI, how they make ethical decisions about its use, and how institutional guidance (or the lack of it) shapes their behavior.

This work is designed to help education leaders move beyond fear-based policy toward thoughtful, strategic integration.

What I Found

1. AI Use Is Frequent but Often Quiet

Most EdD students use generative AI regularly. However, unclear policies and perceived stigma influence how openly they disclose that use.

2. Students Are Setting Their Own Rules of Engagement

In the absence of clear guidance, students create personal boundaries about what is “appropriate” use of generative AI. These decisions are often grounded in identity rather than institutional policy.

3. AI Is Becoming Frontline Academic Support

Many students describe AI as a tutor, writing coach, collaborator, or even a “team member.” In many cases, AI is consulted before faculty. This suggests a reorganization, though not replacement, of mentoring relationships.

4. Evaluation Confidence Is High, but Technical Knowledge Is Uneven

Students report strong confidence in evaluating AI-generated content. However, gaps in technical understanding and prompting strategies remain.

What This Means for CPED Leaders and Faculty

Institutions cannot afford to ignore generative AI or rely solely on prohibition.

Based on this research, effective institutional responses should:

  • Provide clear, operational guidance on AI use and acknowledgment
  • Address stigma directly and normalize open conversation
  • Teach critical evaluation and AI literacy, not just compliance
  • Design mentoring structures that account for AI-mediated support
  • Connect AI use to professional identity and ethical practice

The question is no longer whether students will use AI.
The question is whether institutions will shape that use intentionally.

About the Study

This mixed-methods study included:

  • A survey of 190 doctoral students across CPED institutions
  • Four focus groups
  • Analysis grounded in metaliteracy and generative AI literacy frameworks

The goal was to generate actionable insights for policy and practice.

Read the full dissertation on ResearchGate.

Interested in Bringing This Conversation to Your Institution?

I work with schools, universities, and leadership teams to:

  • Develop AI policy frameworks
  • Design AI literacy e-learning and workshops
  • Facilitate faculty conversations
  • Support ethical integration strategies