Executive Summary
Compared to prior administrations, more respondents identify as assessment leads: 76% (234 of 307). Among Fall 2025 leaders (n=201, 84% of respondents), 71% use GenAI regularly or occasionally, most of them self-taught (84%), while navigating concerns about privacy, accuracy, and limited support. The data reveal a collaboration gap (only 10% frequently collaborate with faculty) and a possible influence paradox (46% office/team-level vs. 10% institutional-level influence), suggesting structural barriers to broader integration.
Theme 1: Efficiency & Creative Capacity
The dominant benefit remains efficiency (87%), but respondents describe deeper value: GenAI "allows us to quickly become the expert in the room on a diverse range of topics" and "increases my creativity and skills level in areas where I am typically weak." Communication-related benefits surged from 54% in summer to 78% in fall.
"I am excited to learn and be a part of using AI to assist with assessment."
Theme 2: Tool Diversification & Payment Realities
From spring to fall, ChatGPT's dominance cooled (82%→74%) as professionals discovered alternatives. Claude use more than doubled (13%→27%), and Copilot reached 61%. Overall, fewer respondents report using free versions. Copilot leads in institutional support (86%), followed by Gemini (46%) and ChatGPT (33%).
"Easier to find troubleshooting help for data visuals/data analysis tools."
Theme 3: The Faculty Collaboration Gap
Frequent faculty collaboration dropped from 12% in summer to 8% in fall, while "never collaborate" surged from 30% to 45%. Yet 58% believe that supporting faculty GenAI integration should be part of their role, somewhat or to a great extent.
"My main concern is lack of training on how to use AI as a tool and how to correctly prompt."
Theme 4: High Interest, Limited Influence
Assessment professionals have the most influence within their immediate office or teams (46%) and less at the institutional (10%) and external (2%) levels—a 36-point gap.
"Primarily I need a community of others validating my own usage and collaborating on strategies both for use by my office and implementation institution-wide."
Demographics
Respondents are predominantly staff/administrators (82%) working in academic affairs (79%), with strong representation of mid-career professionals (6–20 years: 56%). Women comprise about two-thirds; approximately three-quarters identify as White.
Primary Role
Reporting Lines
Lead Assessment at Institution?
Years in Higher Ed Assessment
Gender Identity
Race/Ethnicity
Institution
Respondents represent diverse institutional contexts: nearly half work at public institutions (47%), about a quarter at private non-profits (27%), and a smaller share at community colleges (17%). Medium to large institutions (10K–50K students) account for about half. Geographic reach spans all regions, with notable shifts between survey administrations.
Student Enrollment
Institution Type*
Top 15 States
Key Shifts Spring to Fall
Massachusetts ↓8 pts: 12% (n=21/173) → 4% (n=8/210)
California ↑4 pts: 6% (n=11/173) → 10% (n=22/210)
Indiana ↑4 pts: 2% (n=3/173) → 6% (n=12/210)
Texas ↓6 pts: 9% (n=15/173) → 3% (n=6/210)
Highlighted = ≥3% in either wave
Use & Tools
GenAI adoption appears to be maturing: regular use increased to 33% while non-use dropped to 7%. ChatGPT remains dominant but declining (82%→74%) as alternatives gain traction. Copilot benefits from institutional support (86–90%), while Claude and others rely heavily on free tiers or personal payment.
Current GenAI Use in Assessment Work
How Much Has GenAI Enhanced Your Practice?
GenAI Tools/Platforms Used
Summer Payment
Fall Payment
Tasks
Assessment professionals use GenAI most for learning outcomes (56%), rubrics (54%), and data analysis and reporting (54% each). Administrative applications center on communications (78%) and ideation (62%). Notably, coding/Excel tasks jumped from 1% to 44% between Spring and Fall.
Assessment Tasks Using GenAI
Administrative Tasks Using GenAI
Benefits & Training
Efficiency dominates (87%), with the communications benefit surging 24 points (54%→78%). Training is largely self-directed: 84% are self-taught, though 72% have attended formal sessions. Top needs: more training (79%), peer examples (72%), and clear guidelines (58%).
Benefits from GenAI Use
Supports Needed for More Effective Use
GenAI Training Activities
Training: Taken vs Developed vs Neither
Concerns
Privacy and ethical concerns lead (71%), followed by lack of institutional policy (50%) and environmental impact (43%). Qualitative responses emphasize accuracy issues (12+ mentions of "hallucination"), concerns about cognitive offloading, and equity implications of AI's environmental footprint.
Concerns About Using GenAI
Policy
Only 23% report having an institutional GenAI policy, though 29% say one is actively being developed. Spring data revealed a policy vacuum at the division (66% no policy) and office (75% no policy) levels. The gap between institutional attention and local guidance persists.
Has Institution Developed Formal GenAI Policy?
Policy at Each Organizational Level
Spring 2025 Policy Landscape
Institution: 39% Yes, 39% No, 23% Unsure
Division: 8% Yes, 66% No, 27% Unsure
Office: 9% Yes, 75% No, 16% Unsure
Collaboration
Faculty collaboration is declining—frequent collaboration dropped from 12% in summer to 8% in fall while "never" rose from 30% to 45%—despite majority agreement (58%) that such collaboration should be part of assessment roles.
How Often Collaborate with Faculty on GenAI
Fall: Should Faculty Integration Be Part of Role?
Summer: Role Includes Supporting Faculty
Fall: Effectiveness
Summer: Effectiveness
Influence
Forty-six percent of assessment professionals report substantial influence within their own office or team, but only 10% report such influence at the institutional level and 2% externally.
Degree of Influence on GenAI Integration
36-point gap reveals localized authority without systemic voice.
Purpose
To track adoption, application, and implications of GenAI in assessment practice and identify needs for policy, training, and infrastructure. Given the rapidly evolving technology landscape, the survey recurs approximately every four months (Spring, Summer, Fall).
Scope & Methods
Respondents represent diverse institutional types and assessment roles. The initial launch (January–April 2025) yielded 199 valid responses after data cleaning. The second pulse survey (launched and closed in August 2025) produced 164 valid responses. The third pulse survey (October–November 2025) produced 234 valid responses from assessment leaders. The study employs a mixed-methods approach combining descriptive statistics with qualitative thematic coding of open-ended responses using constant comparative analysis.
Limitations
Convenience sampling may favor certain practitioner types. Recruitment likely introduced selection bias toward professionally engaged individuals. Geographic and demographic distributions may not fully represent all U.S. assessment professionals.
Survey Instrument
Download the full survey instrument: GenAI Pulse Survey Fall 2025 Instrument
Next Survey
The Spring 2026 pulse survey will be available in late February.
Thank You
Thank you to Tyton Partners for granting permission to adapt their policy question from Time for Class 2025.
Definitions
• Assessment professional: Individual with central role in developing, implementing, managing, and reporting academic, co-curricular, or student affairs assessment practices in higher education
• Generative AI (GenAI): Large language model-based tools that create new content (text, images, music, video, code). Examples: ChatGPT, Claude, Copilot, Gemini
Research Team
Leads: Ruth Slotnick, Ph.D.; Joanna Boeing, Ed.M.; Bobbijo Grillo Pinnelli, Ed.D.
Team: Yu Bao, Ph.D.; John Hathcoat, Ph.D.; Will Miller, Ph.D.; Naima Wells, Ph.D.
Results are generously hosted by the Center for Leading Improvements in Higher Education at Indiana University Indianapolis and the Assessment Institute in Indianapolis. For more information, contact Dr. Ruth Slotnick, rslotnick@bridgew.edu. IRB #2025055, Bridgewater State University.