GenAI Survey for Higher Education Assessment Professionals

Pulse Longitudinal Survey 2025–2026

Executive Summary

Assessment leaders responding to the 2025–2026 GenAI Pulse Survey report strong GenAI adoption, clear efficiency gains, and largely self-directed learning. By Spring 2026, 79% reported regular or occasional use, 87% reported efficiency gains, 87% said they were self-taught, and 71% still reported privacy and ethical concerns. At the same time, the results point to a broader higher education tension: GenAI use is growing faster than institutional support, faculty collaboration, and cross-campus influence. What distinguishes this dashboard is its focused longitudinal view of assessment professionals as active users navigating both real value and real structural limits.

79%
Regular + Occasional Users
87%
Report Efficiency Gains
87%
Self-Taught
71%
Privacy and Ethical Concerns
How to Read These Findings in Context

This dashboard adds a focused longitudinal view of assessment professionals to a broader higher education conversation about GenAI. The results suggest that assessment leaders are not sitting on the sidelines: use is high, perceived value is strong, and learning remains largely self-directed. At the same time, the findings reflect a wider institutional pattern in higher education, where experimentation is moving faster than coordinated support, faculty partnership, and broader strategic influence.

What Stands Out

Assessment leaders report substantial use, strong efficiency gains, and a high degree of self-teaching. Taken together, these patterns suggest a profession actively adapting to GenAI in real time rather than waiting for formal structures to catch up.

What Remains Constrained

The same data point to ongoing structural limits. Institutional support is uneven, collaboration with faculty remains inconsistent, and influence weakens considerably beyond the immediate office or team. The issue is not simply whether assessment professionals are using GenAI, but whether institutions are positioned to support and scale that work well.

*Note: Spring 2026 results shown in this dashboard represent respondents identifying as the primary assessment leader or as part of the team leading assessment work (n=278). This is a convenience sample rather than a nationally representative one, but it provides a rare longitudinal view of how assessment professionals are experiencing GenAI across roles, institutions, and contexts.

Demographics

Across the four survey administrations, respondents show a notably consistent professional profile. Most are staff or administrators working in academic affairs, with mid-career professionals forming the largest share of the sample. This section provides context for interpreting the rest of the dashboard by showing who is represented in the survey over time.

Primary Role

Primary Role

Spring 2025 n=199, Summer n=164, Fall n=234, Spring 2026 n=278
Reporting Line

Reporting Lines

Spring 2025 n=199, Summer n=164, Fall n=234, Spring 2026 n=278
Assessment Leadership

Lead Assessment at Institution?

Spring 2026 n=410 pre-filter • Dashboard = leaders only (n=278)
Years in Profession

Years in Higher Ed Assessment

Spring 2025 n=177, Summer n=151, Fall n=218, Spring 2026 n=260
Gender Identity

Gender Identity

Spring 2025 n=218, Summer n=151, Fall n=216, Spring 2026 n=250

Other categories (<5% each) include: Gender Queer, Gender Fluid, or Gender Non-Conforming; Nonbinary; Queer; Transgender; Two Spirit; Prefer to self-describe

Race/Ethnicity

Race/Ethnicity

Spring 2025 n=171, Summer n=150, Fall n=216, Spring 2026 n=251

Other categories (<5% each) include: Indigenous North American; Middle Eastern or North African; Mixed; Native Hawaiian/Pacific Islander; Prefer to self-describe

Institution

Respondents represent a diverse range of institutional settings, with public institutions, private non-profits, and community colleges all well represented. Spring 2026 includes the strongest community college participation to date, alongside continued concentration in medium and large institutions. This section situates the findings by showing the institutional and geographic contexts in which assessment leaders are working.

Institution Size

Student Enrollment

Spring 2025 n=178, Fall n=216, Spring 2026 n=259 • Not asked Summer
Institution Type

Institution Type*

Spring 2025 n=177, Fall n=217, Spring 2026 n=260 • MSI includes HBCUs, HSIs, AANAPISIs • Not asked Summer
Geographic Distribution

Top 15 States

Spring 2025 n=173, Fall n=210, Spring 2026 n=244 • Not asked Summer

Key Shifts Spring 2025 to Spring 2026

Massachusetts ↓8 pts

12% (n=21/173) → 4% (n=9/244)

New York ↓4 pts

9% (n=16/173) → 5% (n=13/244)

Florida ↑3 pts

4% (n=7/173) → 7% (n=16/244)

Virginia ↑3 pts

3% (n=6/173) → 6% (n=14/244)
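
To reproduce these shifts from the raw counts, here is a minimal sketch in Python (the counts and sample sizes are transcribed from this dashboard; the script is illustrative and not part of the study's analysis code):

    # Percentage-point shifts in state representation, Spring 2025 -> Spring 2026.
    N_SP25, N_SP26 = 173, 244  # sample sizes for the geographic item

    # state: (respondent count Spring 2025, respondent count Spring 2026)
    counts = {"MA": (21, 9), "NY": (16, 13), "FL": (7, 16), "VA": (6, 14)}

    for state, (c25, c26) in counts.items():
        pct25 = round(100 * c25 / N_SP25)
        pct26 = round(100 * c26 / N_SP26)
        # Shifts are differences of the rounded percentages, matching the
        # convention used in the callouts above.
        print(f"{state}: {pct25}% -> {pct26}% ({pct26 - pct25:+d} pts)")

Running this reproduces the four headline shifts above (MA −8, NY −4, FL +3, VA +3 points).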

All States (Spring 2025 → Spring 2026)
AL 1% → 1%
AK 0% → 0%
AZ 3% → 3%
AR 1% → 0%
CA 6% → 8%
CO 0% → 1%
CT 2% → 0%
DE 1% → 1%
DC 0% → 2%
FL 4% → 7%
GA 3% → 2%
HI 1% → 0%
ID 0% → 1%
IL 4% → 5%
IN 2% → 4%
IA 1% → 1%
KS 1% → 0%
KY 2% → 1%
LA 1% → 2%
ME 1% → 1%
MD 2% → 3%
MA 12% → 4%
MI 2% → 2%
MN 2% → 3%
MS 1% → 0%
MO 1% → 3%
NE 3% → 1%
NV 1% → 0%
NH 1% → 0%
NJ 3% → 3%
NM 1% → 0%
NY 9% → 5%
NC 3% → 3%
ND 0% → 0%
OH 3% → 2%
OK 1% → 0%
OR 1% → 1%
PA 5% → 4%
RI 1% → 0%
SC 1% → 3%
SD 0% → 0%
TN 2% → 2%
TX 9% → 10%
UT 0% → 1%
VT 0% → 0%
VA 3% → 6%
WA 1% → 1%
WV 0% → 0%
WI 0% → 1%
WY 0% → 0%
Terr 0% → 0%
Intl 1% → 3%

Highlighted = ≥3% in either administration

* Institution type is “select all that apply.” Geographic data not collected Summer 2025.

Use & Tools

GenAI use is now well established in the day-to-day work of many assessment leaders. ChatGPT remains the most widely used platform, while Copilot, Gemini, and Claude show continued growth and a more differentiated tool landscape. This section highlights both adoption patterns and the uneven institutional support shaping which tools professionals can access.

Current Use

Current GenAI Use in Assessment Work

Spring 2025 n=199, Summer n=162, Fall n=233, Spring 2026 n=276
Perceived Enhancement

How Much Has GenAI Enhanced Your Practice?

Summer n=112, Fall n=164, Spring 2026 n=216 (users only) • Not asked Spring 2025
Tools Used

GenAI Tools/Platforms Used

Spring 2025 n=166, Summer n=112, Fall n=163, Spring 2026 n=216 (users)
Payment Source

Summer 2025 Payment

Who pays? (Summer users)

Fall 2025 Payment

Who pays? (Fall users)

Spring 2026 Payment

Who pays? (Spring 2026 users)
Open-ended responses in Spring 2026 (n=58) pointed to a broader range of tools than the checklist captured. Beyond the listed options, leaders frequently mentioned NotebookLM, along with DeepSeek and Gamma. Several respondents also noted that GenAI capabilities are increasingly appearing inside platforms they already use — including Qualtrics, Excel, Zoom, and Blackboard Ultra. These responses suggest that GenAI is being adopted not only as standalone tools but also as features quietly woven into existing workflows.

Tasks

Assessment leaders are using GenAI across both core assessment work and routine administrative tasks. Common applications include rubric development, learning outcomes work, data analysis, communications, and presentations, with technical uses such as coding and Excel support becoming more visible over time. This section shows where GenAI is entering practice most directly.

Assessment Tasks — Core Applications

Assessment Tasks Using GenAI: Core Applications

Spring 2025 n≈190, Summer n=109, Fall n=162, Spring 2026 n=208
Assessment Tasks — Additional Applications

Assessment Tasks Using GenAI: Additional Applications

Spring 2025 n≈190, Summer n=109, Fall n=162, Spring 2026 n=208
Administrative Tasks

Administrative Tasks Using GenAI

Spring 2025 n≈190, Summer n=109, Fall n=161, Spring 2026 n=207
Open-ended responses added further insight. Leaders described using GenAI for qualitative coding, data summaries, meta-assessment of existing reports, and syllabus analysis. One respondent captured a thoughtful, bounded approach: “I give AI my ideas and prompt for streamlining, removal of redundancies, communication clarity, comparative analysis to industry & other institutions. I try not to rely on AI for ideas or building materials.” This reflects a common theme — some of us are deliberately using GenAI as a supportive tool rather than a replacement for our own expertise and judgment.

Benefits & Training

Reported benefits from GenAI remain substantial, especially in efficiency, communication, and data-related work. At the same time, skill-building continues to be driven primarily by self-teaching, supplemented by workshops and other forms of professional learning. This section highlights both the value respondents see and the supports they say would help them use GenAI more effectively.

Benefits Experienced

Benefits from GenAI Use

Summer n=106, Fall n=160, Spring 2026 n=204 • Not asked Spring 2025
Supports Needed

Supports Needed for More Effective Use

Summer n=152, Fall n=219, Spring 2026 n=256 • Open-ended in Spring 2025
Training Participation

GenAI Training Activities

Summer n=154, Fall n=227, Spring 2026 n=261
Open-ended responses highlighted some needs that go beyond the checklist, particularly the importance of greater GenAI literacy among institutional decision-makers. A few leaders also shared that no amount of training would address their concerns, as their reservations were rooted in deeper values-based objections rather than skill gaps.

Concerns

Concerns about GenAI remain prominent even as use has grown. Privacy and ethical issues continue to lead, while environmental impact, unclear policy, and uneven training also remain significant. This section shows that adoption and concern are developing side by side rather than as opposites.

Concerns

Concerns About Using GenAI

Summer n=155, Fall n=228, Spring 2026 n=262
71%
Privacy/Ethics (Sp26)
52%
Environment (Sp26)
43%
Policy Lacking (Sp26)
37%
Lack of Training (Sp26)
Open-ended responses revealed important additional layers. Many comments focused on accuracy and hallucination issues, with leaders noting the extra time required to double-check outputs. Other concerns that surfaced included equity in access to paid tools across institutions and a sense of professional identity tension — the feeling that using GenAI sometimes feels like “cheating.” These responses show that our concerns are not only practical but also deeply tied to questions of trust, fairness, and professional integrity.

Policy

Assessment leaders are seeing a gradual increase in the presence of formal institutional policies related to GenAI. In the most recent administration, more leaders reported that their institution had developed and rolled out a formal policy, while another sizable group indicated that policy work was actively underway.

Institution-Wide Policy

Has Institution Developed Formal GenAI Policy?

Summer n=154, Fall n=227, Spring 2026 n=262
32%
Have Policy (Spring 2026)
30%
Actively Developing
11%
Not Expecting Policy
Open-ended responses added important texture. Several leaders emphasized that having a policy in place does not necessarily mean it is clear or enforceable. One noted, “We have policies, but they are murky and ill-defined. Almost impossible to implement and even harder to enforce.” Others described how current policy decisions are directly shaping tool access — for example, institutions developing in-house systems and temporarily restricting external applications. These comments highlight the gap between policy on paper and policy in practice.

Collaboration

Collaboration with faculty around GenAI remains uneven across the sample. Some assessment leaders are engaging regularly in this work, while many report only limited collaboration or note that faculty-facing partnership is not central to their role. This section highlights both the promise and the structural complexity of collaboration at the intersection of assessment, teaching, and learning.

Collaboration Frequency

How Often Collaborate with Faculty on GenAI

Summer n=154, Fall n=227, Spring 2026 n=199 (excl. N/A) • Not asked Spring 2025
Role and Effectiveness

Fall 2025 & Spring 2026: Should Faculty Integration Be Part of Role?

Fall n=226, Spring 2026 n=260

Fall 2025 & Spring 2026: How effective is your collaboration?

Fall n=125, Spring 2026 n=127 (collaborators only)
Open-ended responses added rich context to the collaboration findings. Faculty openness to GenAI was often described as higher than expected when conversations began with idea generation rather than prescriptive examples. One respondent captured this well: “Faculty are surprisingly open to AI once you start with ‘you do not need to use any of these examples, sometimes it’s just a helpful starting point... it enables you to remain an expert in your field and not in assessment. It gives them permission to try.’” At the same time, several leaders highlighted an ongoing tension between efficiency and relationship-building. Simply sharing AI-generated outputs, they noted, can sometimes diminish the human interaction that strengthens collaborative partnerships and helps faculty see the deeper value of assessment. As one person put it: “I’m not sure that just sending an output back to faculty is the best way to ensure that faculty see the value of assessment as a tool to continuously improve student learning. It removes some of the human factor and interaction that comes with developing collaborative partnerships.”

Influence

Assessment leaders report their strongest influence on GenAI integration at the most local level, within their immediate office or team. Influence drops noticeably as the setting expands to departments, institutions, and external contexts, revealing a persistent gap between local practice and broader strategic reach. This section shows where assessment professionals perceive real authority and where that influence remains constrained.

Sphere of Influence (Spring 2026)

Degree of Influence on GenAI Integration

Spring 2026 n=255–256 per item • Excludes “Unsure”
Substantial Influence by Level (Spring 2026)
47%
Team
9%
Course
10%
Program
11%
Dept
11%
Institution
3%
External

36-point gap (47% team vs. 11% institution) between team-level and institutional-level substantial influence.

Open-ended responses offered an important nuance: having influence does not always mean choosing to exercise it by pushing for widespread adoption. One leader captured this perspective well: “I have influence in processes at the various levels (program, institution, etc.), but I’m not pushing (and the Assessment Committee is not pushing) for routinized integration... Just like we’re not going to tell someone whether they have to use Excel or R or Python to do quantitative analysis, we’re not currently planning on telling someone they must use AI in their process.” This tension between local influence and limited broader reach remains one of the most significant findings for the field.

Purpose

To track adoption, application, and implications of GenAI in assessment practice and identify needs for policy, training, and infrastructure. Given the rapidly evolving technology landscape, the survey recurs approximately every four months (Spring, Summer, Fall).

Scope & Methods

Respondents represent diverse institutional types and assessment roles. The initial launch (January–April 2025) yielded 199 valid responses after data cleaning. The second pulse survey (launched and closed in August 2025) produced 164 valid responses. The third pulse survey (October–November 2025) produced 234 valid responses from assessment leaders or those identifying as part of the team leading the assessment work. The fourth pulse survey (January–April 2026) produced 278 valid responses from respondents identifying as the primary assessment leader or as part of the team leading assessment work (68% of 410 total respondents). The study employs a mixed-methods approach combining descriptive statistics with qualitative thematic coding of open-ended responses using constant comparative analysis.

Who Responds

Respondents span the full spectrum of relationships to GenAI — from those who are active and enthusiastic daily users, to those who are deliberately experimenting, to those who are uncertain or cautious, to those who have principled objections and choose not to use GenAI at all. This range is a feature of the data, not a flaw. It reflects the actual distribution of perspectives within the assessment profession at this moment in time, and it means that percentages reporting use or non-use should be read in the context of a field in active deliberation — not a field with a settled consensus.

Limitations

Convenience sampling may favor certain practitioner types. Recruitment likely introduced selection bias toward professionally engaged individuals — those already attending conferences, engaged in professional networks, or following assessment-focused communications. No comprehensive membership list of U.S. assessment professionals exists from which to draw a probability sample; unlike EDUCAUSE, AIR, or NASPA, the assessment field does not have a single large-membership organization that can generate a representative sampling frame. Geographic and demographic distributions may not fully represent all U.S. assessment professionals. Findings are best understood as a field mirror rather than a nationally representative census.

Survey Instrument

Download the Spring 2026 survey instrument: GenAI Pulse Survey Spring 2026 Instrument

Next Survey

TBD

Thank You

Thank you to Tyton Partners for granting permission to adapt their policy question from Time for Class 2025. Results are disseminated with the generous support of Dr. Stephen Hundley and the Assessment Institute at Indiana University Indianapolis. The survey is an independent research initiative and is not sponsored by the Assessment Institute.

Definitions

Assessment professional: Individual with a central role in developing, implementing, managing, and reporting on academic, co-curricular, or student affairs assessment practices in higher education

Generative AI (GenAI): Large language model-based tools that create new content (text, images, music, video, code). Examples: ChatGPT, Claude, Copilot, Gemini

Research Team

Primary Survey Leads: Ruth Slotnick, Ph.D.; Joanna Boeing, Ed.M.; Bobbijo Grillo Pinnelli, Ed.D.

Extended Research Team: Yu Bao, Ph.D.; John Hathcoat, Ph.D.; Will Miller, Ph.D.; Naima Wells, Ph.D.

For more information, contact Dr. Ruth Slotnick, rslotnick@bridgew.edu. IRB #2025055, Bridgewater State University.