GenAI Survey for Higher Education Assessment Professionals

Summer 2025 Data Snapshot

Opened Jul 29 – Closed Aug 31, 2025 • 227 completes, 52 in-process • 164 included (148 complete, 16 partial)

Executive Summary: Where Adoption Stands Today

The summer 2025 pulse survey (n=164) indicates a field moving past initial hype into a more pragmatic, and at times challenging, phase of GenAI adoption. Regular use is growing among a core group of practitioners, but persistent barriers related to accuracy, policy, and time are tempering broad enthusiasm and leading to a call for better institutional guidance and infrastructure.

Change in Use

Use of GenAI in assessment work is relatively similar to the spring 2025 capture. Twenty-eight percent of professionals now report using GenAI regularly, and 41% report occasional use. The most notable difference in the summer capture is a 7% increase in reports of no use.

Institutional Tools Gaining Traction

Reliance on institutionally paid tools like Copilot is increasing, with 90% of its users accessing it through their institution, signaling a shift toward sanctioned platforms.

Privacy & Ethics are Top of Mind

Privacy and ethical considerations are the primary concern for 74% of respondents, followed by environmental impact (50%), highlighting a demand for responsible AI frameworks.

Policy & Guidance Remain a Blocker

Lack of or unclear institutional policy is a key concern for 45% of professionals. While 31% report a policy is in development, only 18% have a rolled-out policy to guide their work.

Who Responded

The Summer 2025 Pulse Survey (n=164) drew mostly staff/administrators (82%), with one-fifth identifying as faculty (20%). Most work in academic affairs (71%), though more than a quarter represent student affairs (26%). Notably, almost half (46%) have 11+ years in higher education, indicating a seasoned group of professionals. Compared to spring, the summer survey had slightly more faculty participation and lower academic affairs representation, perhaps reflecting normal seasonal variation.

Primary Role Comparison

Primary Area Comparison

Years in Higher Education

Gender Identity
Other categories <5% include: gender queer, gender fluid, gender non-conforming, non-binary, queer, transgender, two spirit

Race/Ethnicity
Other categories <5% include: Indigenous North American, Middle Eastern/North African, Mixed, Native Hawaiian/Pacific Islander, Prefer to self-describe

Use & Tools

In summer 2025, 28% of respondents reported regular use of GenAI in assessment work, 41% occasional use, and 30% no use. ChatGPT was the most commonly used tool (83%), followed by Copilot (56%), Gemini (32%), Claude (21%), and Perplexity (11%). Respondents described GenAI as enhancing their assessment work somewhat (45%), greatly (39%), or minimally (16%). Institutionally paid subscriptions were highest for Copilot (90%), Gemini (38%), and ChatGPT (27%).

Current Use Comparison

Tools Used Comparison

Perceived Enhancement (n=112)

Who Pays (Total n=110)

Assessment & Administrative Tasks

In summer 2025, respondents reported using GenAI in assessment tasks for learning outcomes (61%), rubrics (53%), assessment planning (50%), and data analysis (47%). In administrative work, use was noted for drafting communications (68%), ideation/innovation (60%), meeting-related work (43%), and survey design (41%). Note that the spring survey asked about frequency of use for each task, while the summer survey asked only whether each task was used.

Assessment Tasks Comparison

Administrative Tasks Comparison

Benefits & Needs

In summer 2025, the most frequently reported benefits of GenAI were time savings (88%), communications (54%), reduced workload (48%), data analysis and reporting (44%), assessment task quality (40%), and innovation in assessment design (32%). Only 2% reported no benefits. Supports identified as most needed to move forward included training or workshops (69%), peer examples or templates (68%), guidelines (66%), subscriptions (43%), time (41%), and budget (29%). Only 8% reported no supports needed.

Biggest Benefits Experienced (n=106)

Supports Needed to Move Forward (n=152)

Concerns (n=155)

The top concerns are privacy and ethics (74%), environmental impact (50%), and lacking or unclear policies (45%). Other concerns include lack of training, limited usefulness, faculty resistance, job threat/de-skilling, and cost. Only 3% expressed no concerns.

Training & Policy

Most respondents are learning on their own (81%) or attending trainings or workshops (67%), with a smaller group developing and/or leading trainings or workshops (23%). On policy, only 18% report an institutional policy in place, while most report that a policy is in development or expected soon. A quarter of respondents either do not expect a policy in the foreseeable future or do not know the status of an institutional policy.

Training Involvement (n=154)

Institutional Policy Status (n=154)

Collaboration with Faculty on GenAI

Out of 154 respondents, 12% reported frequently collaborating with faculty on GenAI as part of assessment-related work, 28% occasionally, 30% rarely, and 30% never. When asked whether this collaboration should be considered part of their role, 21% strongly agreed, 27% agreed, 21% neither agreed nor disagreed, 10% disagreed, 5% strongly disagreed, and 16% selected not applicable. On the effectiveness of collaboration, 2% described it as very effective, 9% as effective, 26% as neither effective nor ineffective, 13% as ineffective, 4% as very ineffective, and 46% indicated not applicable.

Frequency (n=154)

Part of Assessment Role (n=154)

Effectiveness (n=154)

Purpose

To track adoption, application, and implications of GenAI in assessment practice and identify needs for policy, training, and infrastructure. Given the rapidly evolving technology landscape, the survey recurs approximately every four months (Spring, Summer, Fall).

Scope & Methods

Respondents represent diverse institutional types and assessment roles. The initial launch (January-April 2025) yielded 199 valid responses after data cleaning. The second pulse survey (open July 29 through August 31, 2025) produced 164 valid responses. The study employs a mixed-methods approach combining descriptive statistics with qualitative thematic coding of open-ended responses using constant comparative analysis.

Limitations

Convenience sampling may favor certain practitioner types. Recruitment likely introduced selection bias toward professionally engaged individuals. Geographic and demographic distributions may not fully represent all U.S. assessment professionals.

Survey Instrument

Download the full survey instrument: GenAI Pulse Survey August 2025 Instrument

Next Survey

The fall pulse survey will be available mid-October 2025.

Thank You

Thank you to Tyton Partners for granting permission to adapt their policy question from Time for Class 2025.

Definitions

Assessment professional: Individual with central role in developing, implementing, managing, and reporting academic, co-curricular, or student affairs assessment practices in higher education

Generative AI (GenAI): Large language model-based tools that create new content (text, images, music, video, code). Examples: ChatGPT, Claude, Copilot, Gemini

Research Team

Ruth Slotnick, Ph.D. (lead PI); Joanna Boeing, Ed.M.; Bobbijo Grillo Pinnelli, Ed.D.; Yu Bao, Ph.D.; John Hathcoat, Ph.D.; Will Miller, Ph.D.; Nama Wells, Ph.D.

Results generously hosted by the Center for Leading Improvements in Higher Education, Indiana University Indianapolis, home of the Assessment Institute in Indianapolis. For more information, contact Dr. Ruth Slotnick, rslotnick@bridgew.edu. IRB #2025055, Bridgewater State University.