Required General Education Program Evaluation

This article analyzes Required General Education Program Evaluation as a communicative and collaborative challenge rather than a purely technical process. Through a co-designed online data-gathering system, the study shows how faculty engagement, shared meaning, and institutional alignment can bridge the gap between educators and administrators, improving assessment quality, faculty buy-in, and accreditation readiness.
[Figure: Allegorical illustration of educators and administrators standing on opposite sides of a gap, connected by a bridge made of data, dialogue, and shared assessment tools within a university setting.]

Bridging the Gap Between Educators and Administrators

By James Lipuma, Ph.D., Cristo León, Ph.D., and Jeremy Reich, last reviewed January 6, 2026
Blog post about the article published in the Journal of Systemics, Cybernetics and Informatics (2025)

Introduction

Programmatic evaluation of General Education Programs (GEPs) is often treated as a technical or compliance-driven task. In practice, however, it is an intensely communicative and relational challenge. Faculty are asked to produce assessment data, while administrators are tasked with reporting outcomes to accreditation agencies; the two groups frequently operate within disconnected systems of meaning, incentives, and timelines.

This article examines how a collaborative, co-designed online data-gathering system can function as a bridge between educators and administrators, transforming evaluation from a bureaucratic obligation into a shared academic practice.

The Problem: A Structural Disconnect

The study identifies a persistent gap between the two groups:

  • Educators are responsible for teaching, mentoring, and assessing students in authentic learning contexts.
  • Administrators are responsible for aggregating, standardizing, and reporting data for institutional accountability and accreditation.

The literature review revealed that this disconnect often undermines faculty buy-in, leading to assessment fatigue, skepticism, and minimal engagement. Evaluation becomes something done to faculty rather than with them.

Methodological Approach: Collaborative Co-Design

To address this issue, the authors implemented a Collaborative Co-Design (CCD) process. Faculty were actively involved in:

  • Defining evaluation goals
  • Designing the data-gathering instrument
  • Testing and refining digital tools through feedback sessions

Rather than imposing a pre-built system, the evaluation infrastructure was co-created, aligning institutional requirements with pedagogical realities.

Pilot Case: Oral Communication Outcome

The pilot focused on a single General Education outcome: Oral Communication. A four-point Likert-style rubric was developed collaboratively and implemented through an online data-gathering system.
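
To make this concrete, the sketch below shows one way such a rubric and its aggregation could be modeled in code. It is a minimal, hypothetical illustration: the class names, course codes, criterion labels, and scale descriptors are assumptions for demonstration, not the instrument or system the authors actually deployed.

    from dataclasses import dataclass, field
    from statistics import mean

    # Hypothetical labels for a four-point Likert-style scale; the article
    # does not publish the rubric's actual descriptors.
    SCALE = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Exemplary"}

    @dataclass
    class RubricScore:
        """One instructor's rating of one student artifact on one criterion."""
        course: str
        criterion: str  # e.g., "Organization" or "Delivery" (illustrative)
        score: int      # 1-4 on the four-point scale

        def __post_init__(self) -> None:
            if self.score not in SCALE:
                raise ValueError(f"score must be one of {sorted(SCALE)}")

    @dataclass
    class OutcomeReport:
        """Collects rubric scores so faculty and administrators see the same numbers."""
        outcome: str
        scores: list[RubricScore] = field(default_factory=list)

        def add(self, score: RubricScore) -> None:
            self.scores.append(score)

        def summary(self) -> dict[str, float]:
            """Mean score per criterion: the program-level view reported upward."""
            by_criterion: dict[str, list[int]] = {}
            for s in self.scores:
                by_criterion.setdefault(s.criterion, []).append(s.score)
            return {c: round(mean(v), 2) for c, v in by_criterion.items()}

    # Example: two courses submit ratings for the Oral Communication outcome.
    report = OutcomeReport(outcome="Oral Communication")
    report.add(RubricScore(course="COMM 101", criterion="Organization", score=3))
    report.add(RubricScore(course="HUM 202", criterion="Organization", score=4))
    report.add(RubricScore(course="COMM 101", criterion="Delivery", score=2))
    print(report.summary())  # {'Organization': 3.5, 'Delivery': 2.0}

The design point mirrors the article's argument: the record an instructor enters for reflection is the same record that rolls up into the administrator's report, so both groups read from one shared source.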

Key results included:

  • Increased faculty clarity about evaluation criteria
  • Greater alignment between instructional practice and reporting requirements
  • Improved trust in the evaluation process
  • Actionable feedback loops for both faculty and administrators

Faculty observations emphasized that the system supported reflection rather than surveillance.

Findings and Implications

The study demonstrates that effective GEP evaluation depends less on technical sophistication and more on communication design. When educators are treated as epistemic partners rather than data providers, evaluation systems gain legitimacy and durability.

The approach outlined in this article can be adapted to:

  • Other General Education outcomes
  • Program-level assessment
  • Institutional accreditation processes
  • Any context requiring alignment between teaching practice and administrative reporting

Cite This Article

Lipuma, J., León, C., & Reich, J. (2025). Required General Education Program Evaluation: Bridging the Gap Between Educators and Administrators. Journal of Systemics, Cybernetics and Informatics, 23(4), 57–61. https://doi.org/10.54808/JSCI.23.04.57

ISSN: 1690-4524
Publisher: International Institute of Informatics and Systemics
Keywords: General Education Programs, Program Evaluation, Faculty Buy-In, Collaborative Co-Design, Accreditation, Online Data-Gathering

Acknowledgements

Special thanks to the anonymous reviewers, whoever you may be, for the notes and observations that improved the final version.
Non-blind Peer Reviewer
Marcos O. Cabobianco. Head Teaching Assistant (History). Universidad de Buenos Aires, Buenos Aires, Argentina.
ORCID: https://orcid.org/0000-0002-9178-6840

Disclosure statement
The authors declare no conflict of interest related to the research presented above.



Spanish Version

This article analyzes the design of a digital data-collection system for the programmatic evaluation of General Education programs at a public polytechnic university in the United States. Through a collaborative co-design process, the study shows how institutional accreditation requirements can be aligned with faculty's actual pedagogical practices.

The results show that evaluation is, above all, a communicative process. When faculty participate actively in the design of the instruments, ownership, clarity, and the usefulness of the resulting data all increase.

Interested in exploring or collaborating?

If you work in educational evaluation, accreditation, curriculum design, or institutional innovation, this research offers a replicable model for strengthening the relationship between teaching practice and academic governance.

📩 Contact me, share this post, or cite the article to keep building bridges between education and administration.

Copyright

© International Institute of Informatics and Systemics 2025