Executive Summary: A Field at a Crossroads
This dashboard synthesizes findings from the Spring 2025 GenAI in Assessment Pulse Survey, triangulating quantitative data from 199 assessment professionals (of 285 total respondents prior to data cleaning) with qualitative themes and the project’s strategic framing. It reveals a field that is actively experimenting with GenAI—but also navigating significant professional uncertainty. The most urgent finding is a critical disconnect between internal practitioner anxieties and the field’s outward-facing mission: supporting student learning and outcomes.
Widespread but Shallow Adoption
A large majority have tried GenAI, but most report only light or pilot-level use in assessment workflows.
Practitioners are Upskilling
At least 75% have engaged in GenAI training; 20% are providing training or consultation to others, signaling emerging internal expertise.
Professional Identity Stress
Open-ended data show strong anxiety about deskilling, professional devaluation, and misalignment with core assessment values.
The Critical Gap
Qualitative responses reveal dominant anxieties around “deskilling” and professional “devaluation”—with relatively little attention to GenAI’s impact on student learning, readiness, or equity.
Student Learning Gap
Far fewer comments link GenAI to student learning, readiness, or equity than to practitioner workload — a critical disconnect in the field’s mission.
Policy and Access
A lack of clear institutional policies on data privacy and security, coupled with inequitable access to licensed tools, significantly hinders deeper implementation.
A Field Divided
Nearly half (45%) of respondents are “enthusiastic” about GenAI’s potential, while 19% remain “skeptical.” This mirrors a split between those leaning into efficiency gains and those prioritizing ethical and validity concerns.
Immediate Need
Balanced, evidence-based guidance that protects data, preserves rigor, and re-centers student learning while leveraging efficiency gains.
Who Responded
This panel shows who participated in the GenAI in Assessment survey. These background data help you interpret differences in GenAI adoption and readiness across functional areas, sectors, and career stages. Percentages are based on those who answered each item. Because some questions allowed more than one response, totals in certain charts may exceed 100%.
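The two reporting conventions above (valid percent based on item respondents, and multi-select totals exceeding 100%) can be illustrated with a minimal Python sketch. All data here are hypothetical, invented purely for illustration; none of these numbers come from the survey.

```python
# Hypothetical data illustrating the dashboard's two percentage conventions.

# Single-response item: the denominator is respondents who answered the item,
# not the full sample (None = skipped the question).
responses = ["Yes", "No", None, "Yes", "Yes"]
answered = [r for r in responses if r is not None]
pct_yes = 100 * answered.count("Yes") / len(answered)  # 75.0, not 60.0

# Multi-select item: each respondent may check several options, so option
# percentages are each out of all respondents and can sum to more than 100%.
selections = [{"ChatGPT", "Copilot"}, {"ChatGPT"}, {"Gemini", "ChatGPT"}]
n = len(selections)
pct_by_tool = {
    tool: 100 * sum(tool in s for s in selections) / n
    for tool in ("ChatGPT", "Copilot", "Gemini")
}
# ChatGPT 100%, Copilot ~33%, Gemini ~33% -> totals exceed 100%
```

This is why a chart of tool usage can total well over 100% even though each individual percentage is correct.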
(Note: HE = Higher Education; Private NP = Private Non-Profit; Private FP = Private For-Profit)
Primary Role
Area of Assessment
Years in HE Assessment
Age Range
Institution Type
Gender Identity
Race/Ethnicity
Use & Interest
How widely GenAI is being used in assessment work, how interested or cautious professionals feel about it, which tools they are trying, where it is most applied, and how much training they have engaged in.
Level of Use
Level of Interest
Perceived Usefulness
Types of Training
Top Tools Used
Task Usage by Professional Area
Comparing use of GenAI for top tasks between Academic Affairs and Student Affairs professionals.
Challenges & Concerns
What makes GenAI hard to use now, and what worries assessment professionals about using it. Includes implementation challenges, reported resistance, and key risk domains. Policy presence is shown as a readiness marker.
Resistance Encountered
GenAI Policy Presence
Key Qualitative Themes: Challenges & Concerns
Click the arrow below each theme to view supporting quotes.
1. Accuracy & Reliability
Outputs can be wrong or shallow; verification erodes efficiency.
View Supporting Quotes
“GenAI sometimes creates more work for me than if I just did the work without it.”
“There are many inaccuracies for specific data. AI has generated content that is false and conclusions that are lacking.”
“ChatGPT is not guaranteed to be reliable in it's data analysis, so I have to double check every single thing.”
2. Data Privacy & Policy
Unclear rules prevent uploading institutional data, stalling experimentation.
View Supporting Quotes
“My university does not allow us to us AI using university files or anything considered proprietary, so it is not clear how it would be possible to move forward.”
“Institution won't let us use anything but Co-Pilot, which is inferior. And, won't let us do API work.”
3. Access Inequities
Limited subscriptions and IT constraints create barriers.
View Supporting Quotes
“I’d love an institutional subscription so I can upload large data sets and get recommendations—securely.”
“Licensed access to Gen AI tools and a clear institutional AI policy.”
4. Time & Capacity
Learning and validation compete with already thin staffing.
View Supporting Quotes
“I need time, more than anything.”
“More time. My office is short staffed, and we don't have the capacity for professional development in this area.”
5. Cultural Resistance
Skepticism, especially from faculty, frames GenAI as "cheating."
View Supporting Quotes
“50% of colleagues or co-workers see it as 'cheating'.”
“We have a lot of improper student use of AI (i.e., plagiarism) that's messing up our student work samples.”
6. Integrity & Validity Risks
Fear of "AI evaluating AI" and loss of authentic student evidence.
View Supporting Quotes
“I am concerned about AI being used to evaluate assessment reports written with AI, on data analyzed by AI....what's the point???”
“We have a much harder time gathering authentic evidence of student learning because of GenAI.”
7. Ethical & Environmental Concerns
Energy use, bias, and authorship matter; some delay adoption on principle.
View Supporting Quotes
“Ethical considerations about intellectual property and energy use.”
“Less environmental impact, more discussion of ethics, clarity on why we need GenAI in some instances at all.”
8. Principled Non-Use
A vocal minority rejects GenAI until core issues are addressed.
View Supporting Quotes
“I have never put data into any GenAI product and think it would be unethical to do so.”
“I hope this group will seriously consider whether supporting the wide-spread adoption of GenAI in assessment is CURRENTLY a good idea.”
Future & Action
Where assessment professionals think GenAI is headed, what support they need, and which response strategies and collaborations are working. Use this tab to prioritize campus action plans.
Top Support Needed to Move Forward
1. Hands-On, Assessment-Specific Training and Exemplars
“Too few people explain how they use GenAI… I need specific, concrete instruction for assessment with assurances on accuracy and privacy.”
2. Secure, Licensed Access, and Policy Permission
“I’d love an institutional subscription so I can upload large data sets and get recommendations—securely.”
3. Time, Staffing, and Funding to Experiment
“I need time, more than anything.”
How Resistance is Addressed
Collaborations Advancing Student Success
- Rubric and scoring pilots: AI-drafted rubrics and first-pass artifact scoring to streamline faculty review.
- Capacity building → governance: PD exchanges and cross-unit committees translating pilot learning into AI policy.
- Workflow augmentation and student support: AI-generated report feedback, chatbots for advising, and formative feedback loops.
Top Collaboration Patterns
Quotes
“We mapped 2,500 courses to Bloom using AI—faculty just cleaned up the outliers.”
“Three student affairs exchanges, 90% participation; we drafted campus AI policy the next week.”
Future Considerations: What Practitioners Expect
- Automation of drafting, scoring, and summarizing frees staff for interpretation, improvement, and faculty coaching.
- Reliable large-scale qualitative analysis is the most coveted—and least solved—capability.
- Performance and authentic assessments gain importance as AI blurs authorship of written work.
- Adoption speed hinges on data privacy guardrails and embedded tools in LMS/assessment platforms.
- GenAI becomes a force multiplier for small or shrinking assessment teams (equity and workforce implications).
About this Study
The GenAI in Assessment Survey is a national, recurring study of higher education assessment professionals. This research addresses a significant gap in documenting how practitioners are adapting their practices in response to generative AI.
Purpose
To track adoption, application, and implications of GenAI in assessment practice and identify needs for policy, training, and infrastructure. Given the rapidly evolving technology landscape, the survey recurs approximately every six months.
Scope & Methods
Respondents represent diverse institutional types and assessment roles. The initial launch (January–April 2025) yielded 199 valid responses after data cleaning. The study employs a mixed-methods approach combining descriptive statistics with qualitative thematic coding of open-ended responses using constant comparative analysis.
Limitations
Convenience sampling may favor certain practitioner types. Recruitment likely introduced selection bias toward professionally engaged individuals. Geographic and demographic distributions may not fully represent all U.S. assessment professionals.
Next Survey
A shortened pulse survey is available now. Take the survey here.
Definitions
• Assessment professional: An individual with a central role in developing, implementing, managing, and reporting academic, co-curricular, or student affairs assessment practices in higher education
• Generative AI (GenAI): Large language model-based tools that create new content (text, images, music, video, code). Examples: ChatGPT, Claude, Copilot, Gemini
Research Team
Ruth Slotnick, Ph.D. (lead PI); Joanna Boeing, Ed.M.; Bobbijo Grillo Pinnelli, Ed.D.; Yu Bao, Ph.D.; John Hathcoat, Ph.D.; Will Miller, Ph.D.; Naima Wells, Ph.D.
Results generously hosted by the Center for Leading Improvements in Higher Education, Indiana University Indianapolis. For more information, contact Dr. Ruth Slotnick, rslotnick@bridgew.edu. IRB #2025055, Bridgewater State University.