Planning for Higher Education Journal

Connecting the Dots

Accountability, Assessment, Analytics, and Accreditation
From Volume 46 Number 1 | October–December 2017
By Linda L. Baer

Challenging times continue for higher education: calls for more accountability, assessment of outcomes, evidence of institutional performance, relevance for life and careers, and student success. Integrated strategic planning and decision making in this ever-changing environment are critical for campus leadership. To maximize accountability, assessment, analytics, and accreditation, it is imperative for campus leaders to develop an integrated approach that begins with standard metrics, aligns with performance-based goals, and connects with accreditation requirements. Analytics, learning analytics, and learning management systems provide the crucial link to learning standards that support the reporting, monitoring, and improvement of student learning.


There is now a renewed sense of urgency to improve accountability, transparency, and performance in higher education—the result of a perfect storm of state budget challenges, the ongoing transition from a manufacturing to a knowledge economy, and the inability to appropriately articulate the value of a postsecondary education. Stakeholders are demanding more from higher education, searching for an overall return on this investment from the student, state, and federal perspectives. People are now asking: Is college worth it? (Barone 2017; Dossani 2017). These challenges cannot be met with simple changes. Institutions must strive to develop analytics, or actionable intelligence, in all institutional areas—particularly those related to learning (Baer and Campbell 2012). Strategic planning and decision making in this ever-changing environment are critical.

“Our country is moving from a national, analog, industrial economy to a global, digital information economy. And our social institutions and education system were built for the former—a world that is dying” (Kauffman 2017, ¶ 3). Arthur Levine, president of the Woodrow Wilson Foundation, made these comments at the first meeting of HLC Partners for Transformation, a blue ribbon panel created by the Higher Learning Commission. Noting that accreditation dates back to the late 1800s, he continued, “When accreditation was created, higher education looked like the wild west and required standardization. Now we are at the beginning of a revolution. How do we encourage innovation and create standards for a new era?” (Kauffman 2017, ¶ 4). Levine believes we must work to determine the standards, processes, and opportunities that will support the changing needs of society, the changing face of higher education, and the changing educational environment.

As MacTaggert (2017, p. 1) noted, “Twentieth-century leadership approaches will no longer suffice. Skepticism over the value of a college degree, higher expectations for performance from institutions at all levels, student unrest, intense competition for students and resources, and political divisions are among the most prominent challenges. In addition, a new wave of technological change will most likely alter higher education as we know it. Artificial intelligence, virtual reality, big data, and cognitive mapping are more than buzz words. They will define the future of higher education and society just as the Internet does now.”

This changing environment calls for more efforts to build integrated planning models that take into account the many sectors of the campus. Particularly important are the connections between institutional accountability, assessment, analytics, and accreditation. Too often, these dimensions have been disconnected, resulting in less efficiency and effectiveness. Ultimately, student success suffers in a fragmented institutional environment.



Accountability

Accountability is demonstrating to stakeholders the effectiveness of a college, program, service, or initiative in meeting its responsibilities (Suskie 2015). However, as Lingenfelter (2003, p. 20) noted, “Policy makers and educators have been struggling for decades to design satisfactory approaches to educational accountability. Yet progress has been slow, both in developing satisfactory approaches and in improving performance. The objective of accountability systems generally is to stimulate more effective, innovative approaches and greater effort and discipline in implementation.”

There has been a long history of conversation about accountability across higher education. Burke (2004, p. 1) described the many faces of accountability:

Accountability is the most advocated and least analyzed word in higher education. Everyone uses the term but usually with multiple meanings. Writers say it faces in every direction—“upward,” “downward,” “inward,” and “outward.” It looks, in turn, bureaucratic, participative, political, or market centered. [It may appear] two-faced, with sponsors and stakeholders demanding more services while supplying less support.

He continued, “The conflict over accountability is eroding what was once a national consensus—that higher education is a public good for all Americans and not just a private benefit for college graduates” (Burke 2004, p. 1).

Public higher education was built on the premise of a social compact: that is, access to a college education was both a public good for society and a private good for students. Access to college was seen as the gateway to quality and equality. Taxpayers accepted the obligation to provide adequate operating funding to public colleges and universities while expecting they would keep tuition relatively low. Public support of basic scientific research was part of the compact, which held up until the early 1970s when revenues and enrollments began to decline and demands for increased outcomes and performance started to emerge. Over time, as fiscal issues grew, demands for accountability grew as well: “Like most compacts, the one between American society and higher education became strained when rights and responsibilities moved from vague generalities to specific demands and competed for funding with other public services” (Burke 2004, p. 6).

In the mid-1980s, states began to require colleges and universities to report on performance, and in the 1990s, Congress developed the Student Right-to-Know Act, which mandated significant new disclosure of information on graduation rates and school safety. Policy makers expressed further concerns regarding higher education outcomes as related to cost. In 1983, A Nation at Risk was published, which warned of declining learning standards in both primary and secondary schools (National Commission on Excellence in Education 1983). In 1986, concerned governors published a report titled Time for Results that extended the call to examine the quality of learning to the collegiate level (National Governors Association 1986). These efforts resulted in an important nationwide movement that led to the development of systematic research on student learning outcomes. Driven by the federal government, regional accreditors began to increase their emphasis on quality and improved graduation rates (Burke 2004; Carey 2007).

Ewell (2014, ¶ 1) noted,

For many years, judgements about “quality” in higher education were determined almost solely by institutional reputation, productivity, and factors such as fiscal, physical, and human resources. Regional accreditors, charged with examining the adequacy of public and independent institutions alike, looked mostly at the overall level of institutional resources and at internal shared-governance processes. Over the past three decades, however, interest on the part of external stakeholders in the actual academic performance of colleges and universities has steadily risen.

Ewell determined that there are a number of reasons for this. One is the growing emphasis on accountability, particularly as it relates to student learning outcomes. There is increased competition in higher education, and the environment is putting a premium on visible evidence of academic performance. In addition, the ongoing fiscal constraints under which most colleges and universities operate demand strong evidence-based academic management practices as much as fiscal discipline.

In 2006, the Secretary of Education’s Commission on the Future of Higher Education offered a strong indictment of American higher education (U.S. Department of Education 2006). The commission focused on costs that were too high, graduation rates that were too low, especially among low-income and minority students, and learning outcomes that remained a mystery. Overall, higher education responded as it had to previous calls for more accountability by developing strong defenses against the criticism and ultimately indicating that it was already accountable in many ways. In addition, leaders argued that higher education institutions are so diverse and unique that no single form of accountability could be used to assess all fairly (Carey 2007).

The No Child Left Behind legislation imposed unprecedented federal requirements on the K–12 system to use regularly administered standardized tests to document annual improvements in all student ethnic and socioeconomic subpopulations, and many thought higher education was next. However, since higher education still didn’t have standards by which to measure learning outcomes, the proposals focused on graduation rates.

Indeed, many panels, commissions, acts, and proposals were created to move toward a stronger sense of accountability. States tried to build accountability systems that mattered, and during the 1990s, the number of state-level accountability systems grew. Yet, in determining metrics, states often used information they already had, such as graduation rates, which resulted in mountains of data without context or meaning.

Mark Warner, chair of the National Governors Association in 2005, worked on a governors’ compact on high school graduation rates. He stated,

Clearly better data alone will not increase graduation rates or decrease dropout rates, but without better data states cannot adequately understand the nature of the challenge they confront. Knowing the scope of the problem, why students are leaving, and what their educational and personal needs are can help leaders target resources more effectively in support of those young people who are at-risk or who have already dropped out. (National Governors Association 2005, ¶ 9)

Lingenfelter (2016, pp. 50–51) described four high-profile initiatives that were created to improve the scope, quality, and utility of information used to inform policy and practice in the measurement of outcomes:

  • “Measuring Up, an effort to measure state-level performance in higher education and generate effective policy responses to improve performance.”
  • “The Data Quality Campaign, State Longitudinal Data Systems, and Common Education Data Standards, all closely related initiatives to improve the availability and quality of educational data.”
  • “Common Core State Standards for College and Career Readiness, an initiative to establish common learning objectives for K–12 education and assess student achievement throughout elementary and secondary education in order to promote more widespread attainment.”
  • “Assessing Higher Education Learning Outcomes (AHELO), a feasibility study launched by the Organization for Economic Cooperation and Development (OECD).”

He also listed a number of fundamental questions that could be used to frame further meaningful discussion (Lingenfelter 2016, p. 51):

  • “What should be measured? What is important? What matters to policy and practice?”
  • “What data, collected with what definitions and procedures, and what combinations of data into metrics, will be credible and widely accepted?”
  • “What meaning can legitimately be derived from the information provided by the measures?”
  • “How should the information be used, by whom, and in what ways?”

The Accountability Triangle (figure 1) provides insight into the expectations of multiple stakeholders and gives context to the environment in which the quest to improve accountability, assessment, analytics, and accreditation resides (Burke 2004).

Figure 1 The Accountability Triangle

Source: Burke 2004, p. 23

As the Accountability Triangle illustrates, higher education resides in a space where there are many internal and external stakeholders, all expecting accountability. However, the demands and interests of these stakeholders differ:

  • State priorities reflect the public need and desire for higher education programs and services, often as expressed by state officials but also by civic leaders outside government.
  • Academic concerns involve the issues and interests of academic communities, particularly professors and administrators.
  • Market forces cover customer needs and the demands of students, parents, businesses, and other clients of colleges and universities (Burke 2004).

State priorities represent political accountability, academic concerns reflect professional accountability, and market forces push market accountability.

Clearly, there are conflicting and dynamic demands in terms of accountability to whom, for what purpose, and with what outcomes. These multiple forces create tension between civic and collegiate interests and between commercial and entrepreneurial cultures, all of which reside within the higher education environment. These tensions are manifested as a series of accountability paradoxes:

  • Institutional improvement versus external accountability
  • Peer review versus external regulation
  • Inputs and processes versus outputs and outcomes
  • Reputation versus responsiveness
  • Consultation versus evaluation
  • Prestige versus performance
  • Trust versus evidence
  • Qualitative versus quantitative evidence (Burke 2004)

Since higher education is accountable to multiple stakeholders, a variety of metrics must be used in its assessment. These metrics are fundamental to building linkages between assessment and accountability.


Assessment

In education, the term “assessment” refers to the wide variety of methods or tools that educators use to evaluate, measure, and document the academic readiness, learning progress, skill acquisition, or educational needs of students (Glossary of Education Reform 2015).

Related to accountability is the assessment of what is important. In fact, assessment is the foundation upon which accountability and accreditation reside. Questions include what is assessed, by whom, and for what purpose.

However, it must be determined whether the intent is to use assessment for accountability, for improvement, or for both. These two dimensions of assessment often reside in creative tension.

Peter Ewell of the National Center for Higher Education Management Systems (NCHEMS) has added significant insight into the assessment, accountability, and accreditation conversation. Ewell (2001) developed a helpful taxonomy for understanding assessment in terms of units of analysis, ways of looking at performance and outcomes, and ways to review performance (figure 2).

Figure 2 Taxonomy of Terms Commonly Used in Connection with Student Learning Outcomes

Source: Ewell 2001, p. 8

The assessment movement, as Ewell (2009, abstract ¶ 1) characterized it, emerged in the mid-1980s from “the demand by policymakers for better and more transparent information about student and institutional performance, the press by accreditors on institutions to collect and use student learning outcomes data, and the availability of more and better assessment instruments and approaches.”

As noted by Schray (n.d., p. 6), “Many proponents of greater public accountability in higher education and accreditation argue that the most important evidence of quality is performance, especially the achievement of student learning outcomes. This has led to a number of national and state efforts to identify a broad range of performance indicators or measures including access, productivity and efficiency, student learning, degree completion, and economic returns from postsecondary education. Many of these performance measures and indicators are represented in Measuring Up: The National Report Card on Higher Education.”

Historically, higher education has relied on input measures such as student enrollment, number of faculty, investment in new buildings, and research grants and contracts received. A report by the Hechinger Institute on Education and the Media (n.d., p. 2) presented a similar conclusion: “Examinations of the quality of higher education usually focus on statistics representing the number of books in the library, the size of the endowment, test scores of incoming freshmen, graduation rates and the like.”

These are lagging metrics: they report past activity and often have little direct bearing on student learning. More recently, emphasis on outcomes-based performance has led to metrics such as student retention and persistence rates, graduation rates, job placement rates, and post-graduate income levels. However, these are still “after-the-fact” measures, used largely because standardized learning outcomes data are lacking. Today, analytics tools give higher education greater ability to monitor at-risk student behavior, factors related to persistence, and which interventions are working for which students. As outcomes-based education expands, more standards for learning are becoming available. The list of metrics now includes student success, student access and diversity, meeting workforce needs, and research and innovation that benefit the academic community and society (Miller 2016).
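To make the lagging nature of these measures concrete, a cohort-level rate can only be computed after the outcomes have already occurred. A minimal sketch, using invented field names and data rather than any real institutional system:

```python
# Hypothetical sketch: computing "after-the-fact" outcome metrics
# (retention and graduation rates) for a small, invented student cohort.
# Field names and records are illustrative, not from any real data system.

def cohort_metrics(students):
    """Return retention and graduation rates for a list of student records."""
    total = len(students)
    retained = sum(1 for s in students if s["returned_year_2"])
    graduated = sum(1 for s in students if s["graduated_6yr"])
    return {
        "retention_rate": retained / total,
        "graduation_rate": graduated / total,
    }

cohort = [
    {"returned_year_2": True,  "graduated_6yr": True},
    {"returned_year_2": True,  "graduated_6yr": False},
    {"returned_year_2": False, "graduated_6yr": False},
    {"returned_year_2": True,  "graduated_6yr": True},
]

print(cohort_metrics(cohort))  # retention 0.75, graduation 0.5
```

Note that both numbers describe a cohort that has already left the pipeline; nothing in them tells an advisor which currently enrolled student needs help.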

In addition, “as societal and economic factors redefine what skills are necessary in today’s workforce, colleges and universities must rethink how to define, measure, and demonstrate subject mastery and soft skills such as creativity and collaboration. The proliferation of data mining software and developments in online education, mobile learning, and learning management systems are coalescing toward learning environments that leverage analytics and visualization software to portray learning data in a multidimensional and portable manner. In online and blended courses, data can reveal how student actions contribute to their progress and specific learning gains” (Adams Becker et al. 2017, pp. 8–9).

More recently, independent private colleges and schools organized as for-profit institutions have been confronted by additional outcomes measures, including the rate at which students default on loans post-graduation (Cohort Default Rates) and the numerical relationship between the price of education and the post-completion earning performance of the completer (so-called Gainful Employment regulations). However, it is unclear what the future of these regulations will be (Barrett 2017; Mayotte 2015).

The American Association of State Colleges and Universities (AASCU) publishes an annual Top 10 Higher Education State Policy Issues list. Among the many issues on the list across recent years is performance-based funding. The AASCU January 2017 Policy Matters brief stated, “As a discretionary state budget item, higher education will be among lawmakers’ top targets to balance state budgets.… Higher education’s role in economic and workforce development will be a top-tier concern for lawmakers looking to guide state residents into available jobs” (AASCU Government Relations 2017, p. 1). Given the limited revenues of many states, increased emphasis is being placed on incentivizing improved institutional outcomes using existing resources, and several models of performance-based funding have emerged. Linking state funding to performance has also been a top-tier policy recommendation of many major foundations, such as the Gates and Lumina Foundations. Evidence is still being collected on the actual effectiveness and unintended consequences of performance-based funding (AASCU Government Relations 2017).

Over time, there have been many attempts to develop performance measures. National and state efforts have identified several measures including access, productivity and efficiency, degree completion, and economic returns from postsecondary education. For example, Measuring Up 2008: The National Report Card on Higher Education, authored by the National Center for Public Policy and Higher Education (2008, p. 4), focused on six measures that apply to sets of institutions, an entire community or state, or a set of communities. These measures by implication integrate social and economic conditions into the performance evaluation of postsecondary education. The key indicators were selected because they are broad gauges for understanding success in key performance areas:

  • “Preparation for college: How well are high school students prepared to enroll in higher education and succeed in college-level courses?”
  • “Participation: Do young people and working-age adults have access to opportunities for education and training beyond high school?”
  • “Affordability: How difficult is it to pay for college when family income, the cost of attending college, and student financial aid are taken into account?”
  • “Completion: Do students persist in and complete certificate and degree programs in college?”
  • “Benefits: How do college-educated and trained residents contribute to the economic and civic well-being of each state?”
  • “Learning: How do college-educated residents perform on a variety of measures of knowledge and skills?”

Another source of information about the aggregate performance of colleges and universities is Complete College America (CCA). Established in 2009, CCA “is a national nonprofit with a single mission: to work with states and consortia to significantly increase the number of Americans with quality career certificates or college degrees and to close attainment gaps for traditionally underrepresented populations” (Complete College America, n.d., ¶ 1). Thirty-four states currently participate in CCA, which advocates structural changes to an institution’s approach to student course-taking behavior. The CCA model promotes more intentionality in course taking, course scheduling, and developing pathways for student success.

CCA has identified six institutional “game changers” that appear to contribute to student success:

Through research, advocacy, and technical assistance, we help states put in place the six GAME CHANGERS that will help all students succeed in college: 15 to Finish, Math Pathways, Corequisite Support, Momentum Year, Academic Maps with Proactive Advising, A Better Deal for Returning Adults. (Complete College America, n.d., ¶ 4)

While the “15 to Finish” approach is intended to move students to completion in a more timely manner, it is important to understand that full-time enrollment does not work for all students. A more comprehensive metric includes time to degree and on-path progress to completion for part-time students. “Time to degree is a major concern for students, one that colleges often do not take seriously enough. Research shows that students who can take more classes on a focused path to a degree, should, because it helps them succeed at higher rates. Whether it’s 15 in a term, 30 in a year, or just one more class,” said Dr. Davis Jenkins, Civitas Learning advisor and senior research scholar at the Community College Research Center (Civitas Learning 2017, ¶ 5).

How can higher education improve the impact of performance measurement as related to student success? By developing and using outcomes measures for student learning.


Ewell (2001) provided a strong summary of this topic in the report Accreditation and Student Learning Outcomes: A Proposed Point of Departure, listing six core questions that need to be addressed:

  • “What is a ‘student learning outcome’?”
  • “What counts as evidence of student learning?”
  • “At what level (or for what unit of analysis) should evidence of student learning outcomes be sought?”
  • “To what extent should particular student learning outcomes be specified by accreditors?”
  • “What models are available to accreditors when choosing an approach?”
  • “What issues should be anticipated:
    • What standards of evidence will be used?
    • How will evidence be used in determining quality?
    • How will faculty be involved?
    • How will the interests and concerns of external stakeholders be addressed?” (Ewell 2001, pp. 14–15)

Note that this was written in 2001. Over a decade and a half later, we are still having the same discussions about student learning outcomes and adequate standard measures for reporting those outcomes.

More recently, Lingenfelter (2016) used the table of performance indicators and sub-indicators from Measuring Up 2008 (National Center for Public Policy and Higher Education 2008), which includes metrics in five areas:

  • Preparation: high school completion, K–12 course taking, K–12 student achievement, and teacher quality
  • Participation: young adults graduated from high school and enrolled in college, and working-age adults enrolled in postsecondary education
  • Affordability: family ability to pay, strategies for affordability, and reliance on loans
  • Completion: persistence and completion measures
  • Benefits: educational achievement, economic benefits, and civic benefits

Significant in the field of learning assessment has been the rise of competency-based education (CBE). While more concrete fields such as nursing, engineering, and technology have long assessed students’ skill levels in their particular areas, CBE programs have not been adopted widely. However, this may change given the demand for more accountability, the rising cost of education, and the focus on streamlining pathways to credentials. CBE is an approach that allows students to progress toward mastery of content, related skills, or other competencies:

Competency-based education (CBE) awards academic credit based on mastery of clearly defined competencies.… CBE is built around clearly defined competencies and measurable learning objectives that demonstrate mastery of those competencies.… CBE replaces the conventional model in which time is fixed and learning is variable with a model in which the time is variable and the learning is fixed. (EDUCAUSE Learning Initiative 2014, sections 1, 2, 4)

The Competency-Based Education Network, supported by the Lumina Foundation, released the report Quality Framework for Competency-Based Education Programs in September 2017. This work developed definitions of quality related to CBE in order to establish “Shared Design Elements” and “Emerging Practices of Competency-Based Education.” It listed eight elements of quality related to CBE programs:

  • “Demonstrated institutional commitment to and capacity for CBE innovation”
  • “Clear, measurable, meaningful and integrated competencies”
  • “Coherent program and curriculum design”
  • “Credential-level assessment strategy with robust implementation”
  • “Intentionally designed and engaged learner experience”
  • “Collaborative engagement with external partners”
  • “Transparency of student learning”
  • “Evidence-driven continuous improvement” (Competency-Based Education Network 2017, p. 4)

The work in CBE is paving the way for more foundational definitions and standards that institutions and academic programs can use to develop the infrastructure for improved learning. Research indicates that students are more active, engaged, and motivated when involved in coursework that is challenging but within their capacity to master. CBE accomplishes this by linking progress to mastery (EDUCAUSE Learning Initiative 2014).
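The CBE principle that time is variable and learning is fixed can be sketched in code: a student advances past a competency only on demonstrated mastery, however many attempts that takes. A hypothetical illustration, with an assumed mastery threshold and invented competency names:

```python
# Hypothetical sketch of mastery-based progression in CBE: advancement
# depends on demonstrated competency, not seat time. The threshold,
# competency names, and scores are invented for illustration.

MASTERY_THRESHOLD = 0.80  # assumed cut score for "mastery"

def next_competency(program, scores):
    """Return the first competency not yet mastered, or None if all are."""
    for competency in program:
        best = max(scores.get(competency, [0.0]))
        if best < MASTERY_THRESHOLD:
            return competency  # student keeps working here, regardless of time
    return None  # all competencies mastered

program = ["write_argument", "analyze_data", "present_findings"]
attempts = {
    "write_argument": [0.62, 0.85],  # mastered on the second attempt
    "analyze_data": [0.70],          # not yet mastered
}

print(next_competency(program, attempts))  # → "analyze_data"
```

The design choice is the point: the loop never consults a calendar, only evidence of mastery, which is exactly the inversion of the credit-hour model described above.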

Learning Analytics

While student learning outcomes are still considerably under development, improved measures for assessing student learning are evolving. The emerging field of learning analytics brings tools for the analysis of learning behavior to decision makers to improve student learning in real time.

Learning science, smart technology, and the pressure for more accountability have created a perfect storm for the development of a learning analytics environment. In fact, in its annual Horizon Report on current and future trends in higher education, the New Media Consortium (NMC) noted that learning analytics is one of the trends it has been following (Adams Becker et al. 2017).

There is a wide continuum of activities within the ecosystem of analytics. Long and Siemens (2011, p. 36) noted, “Analytics spans the full scope and range of activity in higher education, affecting administration, research, teaching and learning, and support resources. The college/university thus must become a more intentional, intelligent organization, with data, evidence, and analytics playing the central role in this transition.”

The growing focus on measuring learning encompasses the development of methods and tools to evaluate, measure, and document academic readiness, learning progress, skill acquisition, and other educational needs of students. This is critical as societal and economic factors redefine what skills are necessary in today’s workforce. Colleges and universities need to rethink what demonstration of skills and mastery of subject matter look like:

Twenty-first century learning outcomes emphasize academic skill along with interpersonal and intrapersonal competencies for complete learner success. To evaluate these learning gains, next-generation assessment strategies hold the potential to measure a range of cognitive skills, social-emotional development, and deeper learning, giving students and instructors actionable feedback to foster continued growth. The foundation for facilitating this kind of assessment is learning analytics (LA)—the collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. LA continues to gain traction at institutions as a means to assess and fundamentally improve student learning. Data mining software captures rich datasets that enable learners and instructors alike to monitor learning and generate personalized feedback to ensure continued progress. As the LA industry matures, the emphasis has shifted from data accumulation to garnering nuanced insights on student engagement through data aggregated across multiple sources and courses. (Adams Becker et al. 2017, p. 14)

More campuses are participating in gathering and analyzing data on student learning in order to recognize learning challenges, improve student outcomes, and personalize the learning experience:

A recent report by the National Institute for Learning Outcomes and Assessment found that student assessment is emerging as a leading priority for institutions of higher education because of pressure from accrediting and governing entities and the growing need for more and better evidence of student achievement. They reported that in 2013, nearly 84% of colleges and universities surveyed adopted stated learning outcomes for all of their undergraduates, up from 10% in 2009, and the range of tools and measures used to assess student learning has expanded greatly. (Johnson et al. 2015, p. 12)

There has been significant growth in data mining software, and learning management systems are developing that provide analytics and visualizations to report and monitor learning. These learning management platforms provide the foundation for instructors to determine and evaluate learning metrics, learning behavior, student performance, and individual interventions. As Shacklock (2016, p. 23) noted, “Academic performance can be further enhanced by more timely data being accessible to students and their academic mentors (personal tutors), so that interventions to enhance and support student learning can be built into the student interaction more regularly during a period of study.”

Academic performance is enhanced “through data-informed solutions that reduce the time to degree completion, improve student outcomes, and target students for recruitment … learning analytics are benefiting a range of stakeholders beyond learners and instructors, to bodies of governance, researchers, and institutions. Learning analytics has developed in three stages, moving from a focus on hindsight to foresight; the first stage was describing results, the second stage was diagnosing, and the third and current stage is predicting what will happen in the future. Creating actionable data is a hallmark of adaptive learning, which is the latest focus of experiments and pilot programs within various educational settings” (Johnson et al. 2016, p. 38).
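The three stages described above can be made concrete with a minimal sketch. The activity records and thresholds below are entirely hypothetical and illustrative; no field names or cutoffs are drawn from any real LMS or institutional model:

```python
# Hypothetical per-student activity records (illustrative only)
records = [
    {"student": "A", "logins": 12, "quiz_avg": 88},
    {"student": "B", "logins": 3,  "quiz_avg": 52},
    {"student": "C", "logins": 8,  "quiz_avg": 75},
    {"student": "D", "logins": 1,  "quiz_avg": 40},
]

# Stage 1 - describing: summarize what has already happened
mean_quiz = sum(r["quiz_avg"] for r in records) / len(records)

# Stage 2 - diagnosing: connect low scores to a likely cause (low engagement)
diagnosed = [r["student"] for r in records
             if r["quiz_avg"] < 60 and r["logins"] < 5]

# Stage 3 - predicting: project forward with a simple engagement-based rule
predicted_at_risk = [r["student"] for r in records if r["logins"] < 5]

print(f"class quiz average: {mean_quiz:.1f}")
print(f"low scores explained by low engagement: {diagnosed}")
print(f"projected at risk next term: {predicted_at_risk}")
```

The point of the sketch is the progression itself: each stage reuses the same underlying data but answers a different question, moving from hindsight toward foresight.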

Metrics development and use continue to mature, supported by data mining techniques, learning management system use, and the development of predictive analytic models that assist faculty and advisors in determining areas of concern and demonstrating the effectiveness of specific interventions. At-risk behavior can be anticipated, and interventions tailored to individual student learning needs are now possible.
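As a sketch of how such a predictive model works in principle, a simple classifier can be trained on historical engagement features to estimate each student's likelihood of completion; low predicted likelihood triggers an early alert. The data here are synthetic and the model deliberately tiny (a hand-rolled logistic regression); a production system would use far richer features, validation, and an institutional data platform:

```python
import math

# Synthetic history: (logins_per_week, assignments_submitted, completed 1/0)
history = [(12, 9, 1), (10, 8, 1), (9, 7, 1), (2, 1, 0), (3, 2, 0), (1, 0, 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a tiny logistic model by gradient descent on the historical records.
w0, w1, w2 = 0.0, 0.0, 0.0
for _ in range(2000):
    for x1, x2, y in history:
        p = sigmoid(w0 + w1 * x1 + w2 * x2)
        err = y - p
        w0 += 0.05 * err
        w1 += 0.05 * err * x1
        w2 += 0.05 * err * x2

def completion_probability(logins, assignments):
    """Predicted probability that a student with this profile completes."""
    return sigmoid(w0 + w1 * logins + w2 * assignments)

# Score current students; a low probability triggers an advisor alert.
for student, logins, assignments in [("A", 11, 8), ("B", 2, 1)]:
    p = completion_probability(logins, assignments)
    if p < 0.5:
        print(f"early alert: student {student} (completion probability {p:.2f})")
```

The design choice worth noting is that the output is actionable at the individual level: rather than an after-the-fact aggregate, the model yields a per-student score an advisor can act on before the term ends.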

In a recent article, Mark Milliron stated, “We have at our fingertips the capabilities to have more students succeed than ever before by leveraging the technology tools we have at our disposal” (Roscorla 2014, ¶ 6). The article continued, “The problem is actually getting student learning data to the front lines where faculty can use it to test innovations, create interventions and predict actions such as likelihood of course completion and graduation” (Roscorla 2014, ¶ 7). Campuses need to have the right infrastructure to get the right data to the right people in the right way. If faculty, advisors, and students have access to learning data, then they can make more informed decisions.

This is the missing connection between assessment, accountability, analytics, and accreditation. With the advent of learning outcomes, learning analytics, and predictive analytics, decision makers can identify and access student learning data and determine appropriate interventions. This brings the demands of accreditation full circle to increase emphasis not only on student learning outcomes but also on what the institution does to act on the information and demonstrate continuous improvement in instruction.

Accreditation

For more than 100 years, accreditation has been the primary vehicle for defining and ensuring quality in U.S. postsecondary and higher education: “In this complex public-private system, recognized accreditation organizations develop quality standards and manage the process for determining whether institutions and programs meet these standards and can be formally accredited” (Schray, n.d., p. 1). A long-standing debate persists regarding accreditation’s role in ensuring the government and the public that higher education institutions and programs are effective in achieving results, especially student learning outcomes: “Currently, accreditation standards focus primarily on resource and process standards (e.g., faculty qualifications, facilities and support services)” (Schray, n.d., p. 6). However, regional accreditation agencies are now working to establish quality standards for student assessment to ensure that institutions can and do provide valid and reliable evidence of student learning.

Thus, accreditation resides in a creative tension between an audit function and one that supports continuous quality improvement. The audit function includes the regular review, assessment, and reporting of whether an institution is maintaining and sustaining quality; it focuses on after-the-fact reporting on what an institution is doing to fulfill its mission and serve its students. The continuous improvement side of accreditation is intended to help the institution improve demonstrated academic performance, institutional effectiveness, and fiscal stability. In addition, accrediting organizations play a “gatekeeper” role in higher education because accreditation determines whether an institution’s students are eligible for federal and state grants and loans. This gatekeeping provides the primary means of protecting consumers against fraud and abuse (Schray, n.d.). The many functions of accreditation are illustrated in figure 3.

Figure 3 Accreditation Functions

In addition, accreditation operates within a triad of overlapping state, federal, and accreditor interests. States are responsible for licensing institutions to protect against consumer fraud, the federal government recognizes accrediting agencies and ensures compliance with Title IV on financial aid, and accrediting agencies ensure quality and effectiveness (Burke 2004).

Higher education has always been accountable, whether to religious orders, the government, or the public. The questions remain: accountable for what, and to whom? In the era following World War II, the emphasis was on educating returning veterans, and the G.I. Bill of 1944 increased the focus on expanding campuses, balancing missions, and designing statewide governance and coordinating structures. As societal and economic needs evolved, institutions shifted their focus to increasing college access and opportunity and to developing a skilled, educated workforce to support economic expansion (Burke 2004). Today, institutions focus on developing a globally competitive workforce with the skills a restructured economy demands and on maintaining a high quality of life and a healthy democracy.

Given the strong movement toward objective, performance-based activities, a paradigm shift was in order: the focus of performance measures had to move from indicators of teaching to indicators of learning. Today’s measures support institutional effectiveness, quality improvement, and student learning outcomes.

Ewell and Steen (n.d., ¶ 10) noted, “Accreditors were first mandated to look at learning outcomes as a condition of recognition in Department of Education rules established in 1989, but these directives were not very specific.” Accreditors are now being asked not just whether they examine student learning outcomes in light of an institution’s mission, but also why they do not establish and enforce common standards of learning that all must meet (Ewell and Steen, n.d.). The answer is that there are still no common standards for learning outcomes, so institutions must continue to rely on the measures they do have: graduation rates, persistence rates, and, in some cases, employment statistics.

Connecting the Dots

The fundamental questions for creating connections across the higher education ecosystem between assessment, accountability, analytics, and accreditation come down to a common language and a shared focus: who is involved, for what purpose, using what methods, and with what outcomes and actions. While higher education has always been accountable in one way or another, assessment targets and goals have changed as states have increasingly moved to performance-based funding, although the actual performance targets vary across states. Assessment models define the metrics. Accountability systems report results to various stakeholders. Accreditation ensures that an institution’s work is of high quality and continually improving.

While these dynamics have been in play for some time, analytics now provide a strong platform for reporting, monitoring, and evaluating progress as well as for acting on outcomes to improve student success. The science of learning, the development of powerful data systems, and the advancement of predictive student solution platforms have come together to enhance our ability to assess, account for, and accredit actual student learning and institutional performance. The competency-based learning model lays the groundwork for establishing what students must demonstrate in terms of mastery of competencies and skills, how they will demonstrate that mastery, and what the learning environment can do to support timely progress toward it.

Based on the research around learning science and competency-based learning, we know that learning is measurable and can be more flexible than traditional models allow. We are close to supporting teachers and mentors in ways that improve student success. The new analytics platforms can monitor and assess student learning behavior and accomplishments, alerting faculty and advisors to high, adequate, and inadequate levels of accomplishment. The focus is on competencies and mastery, with accomplishments certified through micro-credentials (Kelly 2016).

Learning management systems are integral in this effort, providing the behind-the-scenes platform in a student’s learning experience and serving as the course hub and connector for management and administration, communication and discussion, creation and storage of materials, and assessment of subject mastery (Lang and Pirani 2014). These systems enable the enhancement of learner information in real time for faculty, students, and advisors.

Figure 4 connects the components of accountability, assessment, accreditation, and analytics. The model begins with the analytics dimension, which provides a data platform from which decision makers can access information, develop insight, and review data on what is working in the institution. Learning analytics specifically provide invaluable information on student behavior and learning, including insights into what works to support student learning. The assessment dimension provides the foundation for improvement and is based on the strength of the analytics environment. Stronger institutional evidence results when cross-functional units agree on what is assessed, how it is assessed, and what can be accomplished from standardizing outcomes metrics. At this point, agreed-upon performance-based measures of accountability are strengthened beyond after-the-fact graduation, persistence, and employment rates. The institution can move from data overload to targeted measures that can make a difference to individual students. Further, adaptive assessment can assist students in their current learning environment. The institution can move from fragmented student success measures to a fully integrated set of student success efforts spanning the student life cycle. This integrated approach provides the institution with strong evidence to meet the multiple demands of stakeholders and accrediting bodies.

Figure 4 Connecting the Dots: A Model for Integrated Decision Making

As stakeholders demand more of education, a coordinated, aligned approach to assessment, accountability, analytics, and accreditation will result in stronger, more sustainable outcomes. Data must move from reportability to action.

A critical component is moving from data to insight. As Kamal (2012, ¶ 2) wrote, developing metrics is easy; developing insights is hard: “In contrast to this abundant data, insights are relatively rare. Insights here are defined as actionable, data-driven findings that create business value. They are entirely different beasts from raw data. Delivering them requires different people, technology, and skills—specifically including deep domain knowledge. And they’re hard to build.”

Why is this important? In order to connect the dots of assessment, accountability, analytics, and accreditation, insight makers are required. These are people across functional areas of the institution who can go beyond the numbers to understand the implications, impact, and inspiration behind the numbers. They can lead collaborative conversations that use statistics, reporting, and visualization tools to help maximize data alignment across the institution’s accountability agenda. Good data are fundamental, but analysis for impact is crucial for real change. This is true for each of the dimensions, whether assessment, accountability, analytics, or accreditation.

So, what are the steps to connecting the dots?

  • Start by assessing your institution’s approach to integrated planning.
  • Review what metrics, data, and indicators are being used for which part of the accountability and accreditation requirements.
  • Come to an agreement on data definitions and standards while moving to actionable outcomes.
  • Leverage the power of data through deeper insights and action.
  • Inventory which stakeholder groups require what data and information and align these with the integrated planning process. These include the requirements of state performance mandates, federal data mandates, and both regional and disciplinary accreditation mandates.
  • Stay intentional about the focus on improving student learning and institutional performance.
  • Pay attention to the tools, applications, and services that are available to support analytics and decision making based on data.
  • Create a culture of measurement, performance, and action.
  • Continue to assess outcomes. Establishing an environment that encourages the use of research in seeking the answers to questions about student success will enable your institution to thrive.

It is crucial that we rethink our educational models. We need to ask ourselves to whom we are accountable and whether we are making the best use of the data we have. We need new relationships with diverse partners. Michael Crow of Arizona State University speaks of moving from the industrial age, one-size-fits-all model of education toward one that focuses on inclusion rather than exclusion (Millichap and Dobbin 2017). This will require greater collaboration across the institution and the establishment of partnerships with K–12 schools, other higher education institutions, communities, and business/industry. Data collaboration and federation will bring increased strength to the integrated planning environment.

References

AASCU Government Relations. 2017. Top 10 Higher Education State Policy Issues for 2017. Policy Matters, January. Accessed December 5, 2017:

Adams Becker, S., M. Cummins, A. Davis, A. Freeman, C. Hall Giesinger, and V. Ananthanarayanan. 2017. NMC Horizon Report: 2017 Higher Education Edition. Austin, TX: The New Media Consortium. Accessed December 5, 2017:

Baer, L., and J. Campbell. 2012. From Metrics to Analytics, Reporting to Action: Analytics’ Role in Changing the Learning Environment. In Game Changers: Education and Information Technologies, ed. D. G. Oblinger, 53–65. Washington, DC: EDUCAUSE.

Barone, M. 2017. Is College Worth It? Increasing Numbers Say No. Washington Examiner, June 8. Accessed December 5, 2017:

Barrett, B. 2017. How Much Would It Cost for For-Profit Colleges to Pass Gainful Employment? New America, June 15. Accessed December 5, 2017:

Burke, J. C., ed. 2004. Achieving Accountability in Higher Education: Balancing Public, Academic, and Market Demands. San Francisco: Jossey Bass.

Carey, K. 2007. Truth Without Action: The Myth of Higher Education Accountability. Change 39 (5): 24–29.

Civitas Learning. 2017. New Data Reveal Key Opportunities to Improve Part-Time Student Success. News release, October 11. Accessed December 5, 2017:

Competency-Based Education Network. 2017. Quality Framework for Competency-Based Education Programs. Accessed December 5, 2017:

Complete College America. n.d. About. Accessed December 5, 2017:

Dossani, R. 2017. Is College Worth the Expense? Yes, It Is. The Rand Blog, May 22. Accessed December 5, 2017:

EDUCAUSE Learning Initiative. 2014. 7 Things You Should Know About Competency-Based Education. February 11. Accessed December 5, 2017:

Ewell, P. T. 2001. Accreditation and Student Learning Outcomes: A Proposed Point of Departure. CHEA Occasional Paper, September. Washington, DC: Council for Higher Education Accreditation. Accessed December 5, 2017:

———. 2009. Assessment, Accountability, and Improvement: Revisiting the Tension. Occasional Paper #1. Urbana, IL: National Institute for Learning Outcomes Assessment. Accessed December 5, 2017:

———. 2014. The Growing Interest in Academic Quality. Trusteeship, January/February. Accessed December 5, 2017:

Ewell, P., and L. A. Steen. n.d. The Four As: Accountability, Accreditation, Assessment, and Articulation. Mathematical Association of America. Accessed December 5, 2017:

Glossary of Education Reform. 2015. Assessment. Accessed December 5, 2017:

Hechinger Institute on Education and the Media. n.d. Beyond the Rankings: Measuring Learning in Higher Education. An Overview for Journalists and Educators. New York: Hechinger Institute on Education and the Media. Accessed December 19, 2017:

Johnson, L., S. Adams Becker, M. Cummins, V. Estrada, A. Freeman, and C. Hall. 2016. NMC Horizon Report: 2016 Higher Education Edition. Austin, TX: The New Media Consortium. Accessed December 19, 2017:

Johnson, L., S. Adams Becker, V. Estrada, and A. Freeman. 2015. NMC Horizon Report: 2015 Higher Education Edition. Austin, TX: The New Media Consortium. Accessed December 5, 2017:

Kamal, I. 2012. Metrics Are Easy, Insight Is Hard. Harvard Business Review, September 24. Accessed December 5, 2017:

Kauffman, S. 2017. Higher Learning Commission Blue Ribbon Panel Tasked to Set Agenda for New Initiatives and Innovation in College and University Accreditation. News release, July 19. Accessed December 7, 2017:

Kelly, R. 2016. 7 Things Higher Education Innovators Want You to Know. Campus Technology, March 14. Accessed December 5, 2017:

Lang, L., and J. A. Pirani. 2014. The Learning Management System Evolution. ECAR Research Bulletin, May 20. Accessed December 5, 2017:

Lingenfelter, P. E. 2003. Educational Accountability: Setting Standards, Improving Performance. Change 35 (2): 18–23.

———. 2016. “Proof,” Policy, and Practice: Understanding the Role of Evidence in Improving Education. Sterling, VA: Stylus Publishing.

Long, P., and G. Siemens. 2011. Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, September/October, 31–40. Accessed December 5, 2017:

MacTaggart, T. 2017. The 21st Century Presidency: A Call to Enterprise Leadership. Washington, DC: Association of Governing Boards. Accessed December 5, 2017:

Mayotte, B. 2015. What the New Gainful Employment Rule Means for College Students. U.S. News & World Report, July 8. Accessed December 5, 2017:

Miller, T. 2016. Higher Education Outcomes-Based Funding Models and Academic Quality. Lumina Issue Papers, March. Accessed December 19, 2017:

Millichap, N., and G. Dobbin. 2017. 7 Recommendations for Student Success Initiatives. EDUCAUSE Review, October 11. Accessed December 5, 2017:

National Center for Public Policy and Higher Education. 2008. Measuring Up 2008: The National Report Card on Higher Education. San Jose, CA: National Center for Public Policy and Higher Education. Accessed December 5, 2017:

National Commission on Excellence in Education. 1983. A Nation at Risk: The Imperative for Education Reform. Washington, DC: National Commission on Excellence in Education. Accessed December 5, 2017:

National Governors Association. 1986. Time for Results: The Governors’ 1991 Report on Education. Washington, DC: National Governors Association.

———. 2005. Governors Sign Compact on High School Graduation Rate at Annual Meeting. News release, July 16. Accessed December 5, 2017:

Roscorla, T. 2014. How Analytics Can Help Colleges Graduate More Students. Center for Digital Education: Converge, July 15. Accessed December 19, 2017:

Schray, V. n.d. Assuring Quality in Higher Education: Key Issues and Questions for Changing Accreditation in the United States. Issue Paper, Secretary of Education’s Commission on the Future of Higher Education. Accessed December 5, 2017:

Shacklock, X. 2016. From Bricks to Clicks: The Potential of Data and Analytics in Higher Education. London: Higher Education Commission. Accessed December 19, 2017:

Suskie, L. 2015. Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability. San Francisco: Jossey-Bass.

U.S. Department of Education. 2006. A Test of Leadership: Charting the Future of U.S. Higher Education. Washington, DC: U.S. Department of Education. Accessed December 5, 2017:

Author Biography

Dr. Linda Baer is a senior consultant with Civitas Learning. She has served over 30 years in numerous executive-level positions in higher education, including senior program officer, postsecondary success for the Bill & Melinda Gates Foundation, senior vice chancellor for academic and student affairs in the Minnesota State College and University System, senior vice president and interim president at Bemidji State University, and interim vice president for academic affairs at Minnesota State University, Mankato. Her ongoing focus is to inspire leaders to innovate, integrate, and implement solutions to improve student success and transform institutions for the future. She presents nationally on academic innovations, educational transformation, the development of alliances and partnerships, the campus of the future, shared leadership, and building organizational capacity in analytics. Recent publications have been on smart change, shared leadership, successful partnerships, innovations/transformation in higher education, and analytics as a tool to improve student success.