• Quality Assessment of Online Learning (9)

    • Introduction

    With the proliferation of online learning providers, and the challenges presented by the distance education sector to state regulators and accrediting bodies, it is not surprising that “buyer beware” is the watchword for students, institutions, and public agencies alike. In the current environment, organizations must demonstrate the quality of their services in ways that are intelligible to potential students and their employers; faculty and staff; regulators; and government agencies. The admirable attempts to define quality standards and best practices for online education, however, have done little to assuage the scepticism of representatives in the academy who are more accustomed to face-to-face delivery, directed to bounded communities. Fully addressing the roots of such scepticism is beyond the scope of this paper; however, its presence informs much of the technical discussion around quality assurance frameworks, in higher education in general, and in online delivery in particular.

    Purveyors of online learning programs may be inclined to attribute a lack of broad acceptance among their colleagues to the paradigm shift that higher education has been undergoing in the past 15 years. In many cases, however, it must be admitted that the potential of electronic delivery modes has not been fully realized in the implementation of online courses. Some have suggested that these shortcomings are the result of trying to replicate the classroom environment, instead of maximizing the new configurations of knowing and community formation possible in an interactive online environment (Schank, cited in Caudron, 2001). Others have traced some of the potentials and limitations of online education to issues resident in the founding principles of distance education (Larreamendy-Joerns & Leinhardt, 2006).

    Finding appropriate comparators for the efficacy of any particular mode of delivery is difficult when the broader questions of quality assurance in higher education are far from settled. The spectrum ranges from detailed critiques of the regulatory burdens and dubious outcomes of quality assurance audits in Australia (Reid, 2005) and England (Harvey, 2005) on one end, to the accreditation debates spawned by the Spellings Commission in the United States on the other (Zemsky, 2007). An examination of definitional issues points to a long-standing conflict in values between business modelling and public services. It is important to acknowledge these tensions fully before turning to the more technical, but admittedly value-laden, exercise of reviewing the standards proposed by different quality assurance agencies.

    After a discussion of the contexts of quality assurance activities in higher education in general, and of the competing paradigms highlighted by online learning, this chapter examines quality standards that have been proposed for the delivery of online instruction in four different jurisdictions. The full range of state licensing, voluntary accreditation, and market-driven seals of approval reveals tensions between externally driven compliance and internally driven improvements. Although the regulatory frameworks for quality assurance vary dramatically in Australia, England, Canada, and the United States, there is still enough common ground to establish some general characteristics for a scholarly approach to online teaching and learning. At a basic level, the characteristics of quality educational delivery demonstrated in these frameworks include providing clear statements of educational goals; sustaining the institutional commitment to support learners; and engaging in a collaborative process of discovery that contributes to improving the teaching and learning environment.

    Another area of commonality is the fact that, while self-review can be a key component for any of the frameworks, to a large degree they are being driven by external concerns. Changes in the sources and levels of funding, the rise of an international market, and the ever-present concern over “rogue operators” have challenged higher education institutions and their state regulators alike. These issues, in turn, have spawned an international dialogue around accreditation processes and guidelines for the transnational – or cross-border – delivery of higher education made viable through web-based technologies (UNESCO/OECD, 2005). While the articulation of standards may propose base levels of operational integrity, the rhetoric of most regulatory bodies and accrediting agencies suggests much more than minimal compliance.

    On a wider level, each of the projects seeking to establish quality standards for online education appears to aim at inculcating a set of values that prizes management by measurement. A confluence of what might be considered “best practices” is mixed in with suggestions for regulatory minimums in a number of these statements of standards. In the past decade, the process of measurement has gained greater complexity, with the various iterations of e-learning benchmarking projects undertaken in New Zealand (Marshall, 2006), Australia (Bridgland & Goodacre, 2005), and England (Morrison, Mayes & Gule, 2006). A consistent area of contention, the degree to which quality assurance activities can or should be targeted to outcomes, as opposed to internal processes, is addressed in a separate section. Recognizing that the terms quality and online education are burdened with assumptions enough to create their own problematic is a necessary prelude to what follows.

    • Definitional Issues

    The greatest challenge in trying to define quality in any product or service is that quality remains a relative experience, realized in large part through an individual’s level of expectation. Since quality rests in the eye of the beholder, systems developed around the concept appear, at first glance, to be exercises in subjectivity. In higher education, quality is a construct

        relative to the unique perspectives and interpretations of different stakeholder groups (students, alumni, faculty, administrators, parents, oversight boards, employers, state legislatures, local governing bodies, accrediting associations, transfer institutions, and the general public). (Cleary, 2001, p. 20)

    It follows, therefore, that the effectiveness of any quality improvement activities will be as much a function of the ability to foster agreement around common goals as of any substantive input or process adjustments attempted by an institution. Fostering agreement, however, is much more difficult when the term quality is burdened with the legacy of failed management fads.

    In many circles, quality is understood as shorthand for Total Quality Management (TQM) or its close cousin, Continuous Quality Improvement (CQI). Some may believe that these fads peaked and retreated in the last century (Birnbaum, 2001). However, recent modeling (Widrick, Mergen & Grant, 2002), and examples of the pursuit, by individual institutions, of the Malcolm Baldrige Awards (Spahn, 2000) or ISO 9000 recognition suggest that TQM still has a foothold in higher education, in spite of the problems posed by the fact that its language carries a corporate flavour (Banta & Associates, 2002). The Sloan Consortium “Quality Framework” explicitly references CQI in its aims to “establish benchmarks and standards for quality” for asynchronous learning networks (Moore, 2005, p. 1). The pressure to apply management techniques to higher education came from a perceived crisis in confidence with post-secondary systems, and from the growth of state-sponsored accountability systems.

    For supporters, it “has long been understood in organizations that when you want to improve something, you first must measure it” (Widrick, Mergen, & Grant, 2002, p. 130). But measurement systems are about much more than the technical specifications of various indicators – they are about control. The first iteration of TQM/CQI provoked a debate about its social as well as technical implications, and demonstrated the “disconnect between the philosophy of the management process and the purposes of the institution[s] for which it was being proposed” (Birnbaum, 2001, p. 107). The engineering (or re-engineering) of systems designed to guarantee that manufacturing processes would meet technical specifications seems to imply a uniformity that may not be possible, or even desirable, in the dynamic and heterogeneous environment of higher education. The International Organization for Standardization (ISO) makes clear the central principle of the pursuit of quality: to establish processes that will maximize service to customers. To many within the academy, the “learner as consumer/information as commodity” world presupposed by the business model of higher education remains antithetical to independent scholarship in pursuit of the advancement of knowledge (Bok, 2003).

    Traditionally, universities achieved quality in intellectual endeavours through the professionalism of academics, the principles of scholarship, and the rigours of peer review; they gained standing in society by communicating those standards to political and social elites. More recently, massification, diversity, and cuts to funding, along with a wider political movement to demonstrate efficiency and responsiveness, have spawned different conceptions of accountability (Brennan & Shah, 2000). The attempt to lift the meaning of quality education to something beyond short-term fiscal efficiencies and taxpayer benefits is a matter of trying to regain some of the ground lost in previous decades. It is also an encounter with what has been represented as a paradigm shift in higher education, highlighted by the advent of online education.

    It must also be admitted at the outset that, with the shift to mobile wireless technologies, “online” education may well appear to be outmoded shorthand for computer or web-enabled activities. The term has appeal, however, since it carries the sense of a linked community of learners. It still evokes bounded communities with the possibilities of transformative experiences, rather than the sporadic or strictly utilitarian viewing of information on screens. It has been suggested that online learning is best conceptualized as “an environment that integrates collaboration, communication, and engaging content with specific group and independent learning activities and tasks” (Sims, Dobbs, & Hand, 2002, p. 138). More particularly, the ability of students to engage in “asynchronous interactive learning activities” has been described as the “signature characteristic of this technology” (Phipps & Merisotis, 2000, p. 6). The importance of the flexibility inherent in asynchronous activities challenges the assumption that emulating the classroom constitutes best practice in online teaching and learning environments. However, the degree to which technology has driven, or simply enabled, the paradigm shift in higher education is debatable. Whether their adherents have overstated the changes that have taken place as a result of web-enabled learning technologies is another question worthy of consideration.

    • Paradigm Shift

    Although there had been many examples of applications of computer technology in classrooms for at least a decade before 1995, Michael Dolence and Donald Norris have been credited with issuing a wake-up call for higher education administrators. In Transforming Higher Education, Dolence and Norris (1995) purport to offer ways for colleges and universities to survive the transition from the Industrial Age to the Age of Information. Even though their vision for the future has not been realized on a wide scale, many of the conceptual juxtapositions they offer have gained currency in higher education. These juxtapositions include a shift from episodic access to clusters of instructional resources toward integrated perpetual learning, a separation of teaching from the certification of mastery, and a re-conceptualized role for faculty – from deliverers of content to mentors and facilitators of learning. The most pervasive of these changes is the shift from a provider focus to a learner focus, with its suggestions for mass customization through individualized learning systems.

    Elaborations on this theme indicate that the capabilities of the Internet have overturned “the traditional roles of the college or university as the leading (1) research source and knowledge creator, (2) archivist and gateway to knowledge, (3) disseminator of advanced knowledge, and (4) referee and evaluator of truth” (Quinn, 2001, p. 32). If the production and dissemination of knowledge are no longer the restricted purview of higher education, the roles of post-secondary institutions in the worldwide network are increasingly vulnerable. Students and faculty alike need to be more open and to promote capacities to analyze, interrelate, and communicate about facts gleaned from network-based knowledge.

    The traditional quality measures associated with accreditation or state-administered quality assurance frameworks do not match this new climate of teaching and learning. One of the most common measures, “seat time,” does not translate to an online or even a blended environment. Even when adapted to an online environment, other common measures rely on inputs (averages of entering students; number of students; qualifications of instructors; systems development) or outputs (numbers completing courses; satisfaction ratings by students and alumni; revenue generated from tuition, intellectual property, or commercial partnerships), but lack measures that address the fundamental integrity of the online learning environment.

    Wallace Pond (2002) summarizes some of the old and new paradigms for accreditation and quality assurance as follows. The old paradigm measures could be characterized as teacher-institution-centred, centralized, hegemonistic, “one-size-fits-all,” closed “us versus them,” quantitative, prescriptive, time-as-constant with learning-as-variable, teacher-credentialed, consolidated experience, regional/national, static, single-delivery mode, process, infrastructure. In contrast, the new paradigm measures can be seen as learner-centred, local, tailored, open, collaborative, respectful, qualitative, flexible, learning-as-constant with time-as-variable, teacher-skilled, aggregated experience, international/global, dynamic, distributed-delivery model, outcomes, services (Pond, 2002). The degree to which these measures might apply is discussed in the next section, but they do not address some of the other questions generated by the university’s entry into online course delivery.

    The first questions must address the degree to which online learning environments have delivered, or can deliver, on their promises. The greater access afforded through web-based delivery systems has been one of the key advantages cited by observers of the technological transformation in higher education. Whether depicted as an advantage in developing greater economies of scale for delivery systems or in ameliorating social inequalities, broader access has been lauded as a key feature of the new paradigm. Electronic learning systems, however, are not always as billed. Academic leaders doubt the faculty’s acceptance of the legitimacy of online education (Allen & Seaman, 2006). Potential employers also remain doubtful (Adams & DeFleur, 2006). Despite student-focused rhetoric, the administrative momentum for distance delivery can overwhelm the voices of mature students who may not be as confident with technologies, and of younger students with expressed preferences for face-to-face instructional contact (Arthur, Beecher, Elliot, & Newman, 2006). Some faculty doubt that the necessary social integration, particularly needed to improve the success of first-generation students, can be provided in a distributed environment (Allen, 2006). Another caution rests in the comparative completion rates between online and classroom delivery. If intended economic and social transformations are to be realized, access must be examined at more than just the point of entry.

    The promise that economies of scale will make education more affordable is perhaps even less persuasive to most academics. That “proprietary institutions are likely to enter the market by contracting with the best professors to provide video-based courses with exclusive rights to their distribution and use” was a vision of higher education in the 1990s (Hooker, 1997, p. 8). Obviously, the proponents of such models have missed the significance of interactive technologies. Providing more efficient delivery of “lectures by famous faculty” would recreate in cyberspace the “world of the passive listener and single speaker that has marked much of what passes for higher education” (Lairson, 1999, p. 188). Despite the growing popularity of pod-casting on campuses, making the doubtful system of mass lectures more efficient does not appear to be much of an advancement over the correspondence school’s traditional course-in-a-box. Another tension emanates from the fact that the bulk of what is delivered in the online environment consists of discrete training modules directed to particular job skills or competencies. While there seems to be slippage between what is articulated in the realm of learning outcomes (the skills we expect graduates to demonstrate) and our expectations around the values associated with the liberal arts, it is fair to say that higher education aims should be broader than the goals of the corporate training sector.

    Critics such as David Noble (2001) present almost apocalyptic views on the incursion of educational technologies into the classroom. The Web’s “dark side” is depicted as the “rapidly growing trend of university corporatism” and the exploitation of knowledge workers (Kompf, 2001). Challenges from the for-profit sector, the influence of corporate training agendas, and “the ‘rush to serve’ different clienteles” are described as jeopardizing the position of the post-secondary sector as the “source of objective analysis of the society in which it exists” (Crow, 2000, p. 2). Acting as the conscience of civil society speaks to a much broader purpose than meeting the immediate training needs of corporations. If this ideal is taken seriously, then one should expect that faculty would lead the debate from a perspective broader than their own protectionist instincts.

    An alternative vision of democratic ideals in the digital age would have education enabling “people to learn about, with, and beyond technology” to open the “doors of economic, educational, and personal empowerment” (Milliron & Miles, 2000, p. 61). However, the reconceptualization of higher education should be done by – not to – the academy. Establishing the terms through which to assess online education should not be left either to the marketplace or to self-perpetuating bureaucracies. Taking back some of the momentum will be a challenge, since the articulation of regulatory standards and consumer-focused best practices is already well underway. Attempts to transform codes of practice into benchmarking tools, which may provide frameworks more compatible with academic traditions of self-reflection and collegial review, have inherited many elements from these early efforts but are, as yet, largely unproven.

    • Standards from Four Jurisdictions

    The formulation of quality assurance systems for online education, while most frequently regulated at a regional or national level, has in recent years been driven by international developments. The global reach of the Internet and the lack of ways to regulate transnational commercial activities allow fraudulent operators to spring up. One possible approach is to promote consumer education through online directories or consortiums. Another possibility is free-lance course reviews from former students, similar to the book reviews found on the sites of online booksellers such as Amazon.com (Carnevale, 2000). This possibility was echoed in the findings of the symposium sponsored by the Pew Charitable Trust, which observed deficits in consumer-focused information, especially at the course level (Twigg, 2001). Student dialogues in facebook.com and the growth of sites like ratemyprofessor.com, along with the development of “viral marketing” campaigns, all point to the demand for information. Not surprisingly, the appetite is not large for allowing the marketplace to determine outcomes in a wide-open, for-profit model. Simply stated, it does not seem either ethical or efficient to leave students to bear the full risks for product testing various online-education ventures.

    In the past two decades, there has been a marked increase in the size and influence of the cadre of higher-education, quality-assurance technologists working directly for government or in semi-autonomous agencies. Various quality assurance agencies are engaging in international discussions aimed toward at least equitable, if not reciprocal, recognition of accreditation processes. For example, the potential of harmonizing systems of higher education in Europe under the Bologna Declaration (European Ministers of Education, 1999) provided impetus for commission-supported projects sponsored by the European Quality Observatory (see http://www.eqo.info) and its parent organization, the European Foundation for Quality in e-Learning (EFQUEL). These projects include advocating for a federated approach to establish a European Quality Mark, to address an obvious “lack of credibility” with potential consumers of e-learning (EFQUEL, 2007, p.1). The UNESCO/OECD joint statement on cross-border delivery is another example of the intentions for international cooperation that would reduce the potential for abuses left open by regulatory gaps (UNESCO/OECD, 2005). Even with these international aspirations, however, the regulation of higher education, like the selection processes of most potential students, is a much more localized matter.

    Responses from national and local quality assurance interests have varied. Some of the differences rest in the degree to which state-sponsored quality-auditing procedures have become entrenched in the past decade; others reflect the suspicions or traditions associated with distance education in general. The elaborate state licensing approach has been depicted as excessive and a sign of the erosion of the autonomy of higher education. To some, these measures demonstrate the drive to “harness the universities to perceived economic priorities” (Greatrix, 2001, p.12). In that light, it is interesting that the criteria of the state licensing agency have largely subsumed standards first developed for a peer review model of accreditation. In other locales, it appears that efforts have been made to use quality assurance standards to inform “buyer’s guides.”

    The legislative and accountability frameworks for universities in Australia are confounded by the federal governance structure and the changes in funding sources. Under the Australian constitution, education is a matter within the jurisdiction of the states/territories, but the universities established through their own state’s enabling legislation are directly funded by the Commonwealth (DEST, 2002a, p. 5). The split in legislative authority and oversight, and the increase in non-governmental sources of revenues provided an impetus for the joint Ministerial Council on Education, Employment, Training and Youth Affairs to endorse protocols for state approval processes and to establish the Australian Universities Quality Agency, with the power to audit universities over a five-year cycle, using institutional self-assessment and visits from expert panels. The rationale for the development of the national system was explicitly framed in terms of competitive challenges, domestic and international, and of policies that have encouraged the universities to “align themselves more closely with industry needs” (DETYA, 2000, p. 1). Under the revised regime, creditable quality assurance systems, providing evidence of the quality of service and skills of graduates, were explicitly intended to make the universities more attractive to business investors. The systems include national qualification frameworks to communicate expected standards for each level of post-secondary achievement.

    The use of the term “university” in Australia is restricted by state or territorial legislation, and in order to be “self-accrediting,” universities must demonstrate that they have appropriate quality assurance procedures in place. Within this framework, “universities are expected to engage in a pro-active, rigorous and ongoing process of planning and self-assessment which will enable them to ensure the quality outcomes expected by their students and the wider community” (DETYA, 2000, p.17). The Australian government policy framework has been presented as a marketing tool to address the advantages that global competitors enjoy by having “centralised, separate, and highly visible” bodies responsible for quality assurance (Vidovich, 2001, p. 258). Yet only two years after the Australian Universities Quality Agency was established, a more broadly framed review of the higher education system was initiated. Concerns expressed about the quality assurance system included “too much emphasis on institutional quality assurance and not enough on learning outcomes,” and deficits in both the presentation and form of data (DEST, 2002b, p. ix-x). Concerns raised about e-learning initiatives included the introduction of a new range of costs, along with what appear to be the standard questions of “equity of access, cost-effectiveness, the quality of courses, the impact on learning outcomes and the impact on academic work” (DEST 2002b, p. 6). The results of these consultations and the intentions to simultaneously increase diversity in the range of recognized providers and improve the clarity and effectiveness of standards have met with mixed reviews (King, 2006; Nunan, 2005).

    The selected examples of quality assurance frameworks from the United Kingdom centre on open and distance learning, with e-learning issues as acknowledged variables within a spectrum of delivery mechanisms. Three different external approaches to assessing the offerings by individual institutions include licensing procedures under the auspices of a government agency, a voluntary accreditation association, and a scheme for certification through quality marks. Again, much of the drive to enhance quality assurance schemes has been presented in the context of potential regional and global competition. Each of these examples also demonstrates ongoing tensions between external regulatory approaches and internal aspirations for improvement. It should be noted that the full network of subject-based auditing includes benchmark information linked to the national frameworks for higher education qualifications.

    It has been suggested that the Quality Assurance Framework in the United Kingdom is not just comprehensive; it is “the most complex anywhere in the world” (Brown, 2000, p.335). The Quality Assurance Agency for Higher Education (QAA) was incorporated in 1997, with the aim of reducing some of the reporting burdens created by a combination of external assessments by funding agencies and quality assurance processes driven by peer review. Its mission is to “promote public confidence that the quality of provision and standards of awards in higher education are being safeguarded and enhanced” (QAA, 2000, p.1). While the purpose of reviews has remained the same, the 2004 revision of the handbook describes the features of academic review as

    • a focus on the students’ learning experience;
    • peer review;
    • flexibility of process to minimise disruption to the college;
    • a process conducted in an atmosphere of mutual trust; the reviewers do not normally expect to find areas for improvement that the college has not identified in the self-evaluation;
    • an emphasis on the maintenance and enhancement of academic standards and the engagement with the academic infrastructure;
    • use of self-evaluation as the key document; this should have a reflective and evaluative focus;
    • an onus on the college to provide all relevant information; any material identified in the self-evaluation should be readily available to reviewers; and
    • evidence-based judgements. (QAA, 2004b, p. 3)

    While the less prescriptive tone of these statements would seem to signal more recognition for the expertise of academic institutions, it may not appease the vocal critics of the “audit culture” (Shore & Wright, 2000).

    Between 1998 and 2001, with revisions starting in 2004, the QAA also developed Codes of Practice for ten areas: post-graduate research programs; collaborative provision and flexible and distributed learning (including e-learning); students with disabilities; external examining; assessment of students; program approval, monitoring and review; career education, information and guidance; placement learning; recruitment; and admissions (QAA, n.d.a). The first iteration of the guidelines for distance learning included five elements on system design; six on academic standards, program design, and approval; three on the management of delivery; one on student development and support; three on student communication and representation; and five on student assessment. The main thrust of the original guidelines for distance learning was the integration between distance delivery and the general quality standards for teaching and learning activities expressed in the other codes of practice. The 2004 revision to the code of practice encompasses what were deemed to be good practices for a wide variety of delivery options which “in general do not require the student to attend particular classes or events at particular times and particular locations” (QAA, 2004a, p. 3).

    The QAA distance learning guidelines reference the work of the voluntary accreditation association in the United Kingdom’s distance education sector, citing the Open and Distance Learning Quality Council (ODLQC) standards. The ODLQC (2005) accreditation standards, first established in 1999, revised in 2000, and again in 2005, are organized in six sections: outcomes (9 standards); resources (4 standards); support (7 standards); selling (9 standards); providers (10 standards); and collaborative provision (5 standards). While the detailed accreditation standards tilt toward institutional and process issues, the quality council also produced a succinct Buyers Guide to Distance Learning, listing questions that prospective students should ask of providers and of themselves. The list of questions on courses begins, “Can you look at the course first? Is the course right for you? How much support does it offer? Is there face-to-face training? Can you talk to former students? Have previous learners been successful? Can you compare courses?” The outcomes questions are, “What do you want to achieve? Is this the right qualification? Will there be an exam at the end? Are there restrictions?” The cost questions are, “How much will it cost? Is financial support available? When can you get your money back?” Finally, for quality, “Is the provider independently inspected/accredited?” This last element carries a warning about other quality marks or schemes like ISO which “may suggest that the distance learning provision is of good quality, but do not guarantee it” (ODLQC, 2003, p. 1). The statement points to the competitive nature of the quality assurance agencies and the presence of alternate quality markers, like those advocated by the British Association for Open Learning (BAOL), that explicitly reference the European Foundation for Quality Management (BAOL, 2002). The momentum behind such projects appears to be shifting, however, with the amalgamation of BAOL and the Forum for Technology in Training into the British Learning Association (BLA, 2005).

    With such an array of quality assurance prospects, it is noteworthy that in their study of “borderless education,” higher education agencies in the UK have acknowledged that public accountability arrangements and elements of the credentialing or qualification schemes have been challenged by developments in for-profit, virtual, and corporate providers in the domestic and international higher education market. They propose that the quality frameworks addressing these developments would include

    currency and security of qualifications; audit of the system for design and approval of curricula or appropriate learning contracts; an internationally recognized system of educational credit; licensing of staff; security of assessment; adequate and accurate public information about learning opportunities; approved guidance and complaints systems for learners; transparent quality management processes for each agent in the educational supply chain; access to learning resources assured by the provider; and publication of guidance relevant to different modes of provision. (CVCP, 2000, p. 30)

    It has also been suggested that the thinking on quality assurance will have to shift dramatically, from external compliance-based approaches toward comparative benchmarking and mutual recognition arrangements for international quality standards. Attempts to integrate an array of international standards have been made in other jurisdictions.

    In Canada, the responsibility for education rests at the provincial, not the national, level. Each province has its own quality assurance framework or approach to determining whether post-secondary programs are eligible for student funding or to receive public money. The degree to which a province might regulate, or even provide, subsidies to private or for-profit educational institutions varies widely. It is fitting, then, that the Canadian example of quality guidelines originates with a private corporation sponsored by community and government-funded agencies (Barker, 2002a).

    The Canadian Recommended e-Learning Guidelines (Barker, 2002a) bill themselves as “consumer-oriented, consensus-based, comprehensive, futuristic, distinctively Canadian, adaptable, and flexible.” The latter feature admits that “not all guidelines will apply to all circumstances” (p. 2). This qualification is only realistic, as the list is exhaustive. The 138 recommendations are organized into three distinct sections: Quality Outcomes from e-Learning Products and Services, which includes 15 items related to how students acquire content skills, knowledge, and learning skills; Quality Processes and Practices, which includes 20 items on the management of students and the delivery and management of learning, using appropriate technologies; and Quality Inputs and Resources, which includes the remaining 103 items, ranging through intended learning outcomes, curriculum content, teaching and learning materials, product and service information, learning technologies, technical design, personnel, learning resources, comprehensive course packages, routine evaluation, program plans and budgets, and advertising, recruitment, and admissions information. A more succinct adaptation issued under the same initiative is the Consumer’s Guide to e-Learning (Barker, 2002b), which structures 34 questions into basic, discerning, and detailed levels. These questions are paraphrased in Tables 1 to 3 of Appendix A to allow for comparison with the other frameworks, but the instructions to consumers provided with the Consumer’s Guide are more telling.

    Before you sign up for an e-learning course or program, you are to ask yourself:
    • What is my purpose for taking this course? Do I know what I want or need to learn?
    • Do I need a credit or certificate when I finish . . . or do I just want to know more?
    • How much can I afford to spend? How much time can I invest?
    • What hardware and software do I have, and is it enough?
    • Where will I access the Internet, what will it cost, and how convenient will it be?
    • Are my computer and Internet skills good enough for the course I have in mind? Will I need technical help? (Barker, 2002b)

    Institutions intending to adapt their offerings to the online teaching and learning environment would be well advised to rephrase these questions along the following lines:
    • What is our purpose for offering this course?
    • Do we know what we expect students to learn?
    • Do we have the technological infrastructure to support our students? Is it up-to-date?
    • How skilled are our course developers and instructors in the online environment?
    • What technical assistance do we have available?

    Such questions are at the heart of the two models proposed in the United States.

    In an analysis of the impact of electronically delivered distance education, undertaken for the American Council of Education, Judith Eaton (2002) suggests that the emergence of electronically delivered degrees, programs, courses, and services has the potential to undo the delicate balance between “accreditation to assure quality in higher education, the self-regulation of higher education institutions, and the availability of federal money to colleges and universities” (p. 1). Although U.S. higher education institutions are subject to state funding and regulatory bodies, and although the systems of accountability may vary from state to state, the federal government relies on accredited status to signal that institutions and programs are of sufficient quality to allow the release of federal funds in the forms of student grants and loans, research grants, and other federal program funds. Under traditional approaches to accreditation, the focus was on the verification of site-based resources contributing to a learning environment (e.g., the number of volumes in the library). To address some of the concerns raised by electronic delivery, the eight regional accrediting commissions in the United States developed the “Statement of Commitment for the Evaluation of Electronically Offered Degree and Certificate Programs,” which declares the resolve of the commissions to sustain the following values:

    • That education is best experienced within a community of learning where competent professionals are actively and cooperatively involved with creating, providing, and improving the instructional program;
    • That learning is dynamic and interactive, regardless of the setting in which it occurs;
    • That instructional programs leading to degrees having integrity are organized around substantive and coherent curricula which define expected learning outcomes;
    • That institutions accept the obligation to address student needs related to, and to provide the resources necessary for, their academic success;
    • That institutions are responsible for the education provided in their name;
    • That institutions undertake the assessment and improvement of their quality, giving particular emphasis to student learning;
    • That institutions voluntarily subject themselves to peer review. (reprinted in Eaton, 2002, p. 26)

    The regional commissions also committed themselves to a common statement, “Best Practices for Electronically Offered Degree and Certificate Programs,” which was developed by the Western Cooperative for Educational Telecommunications (Howell and Baker, 2006). The statement is organized into five discrete sections: institutional context and commitment; curriculum and instruction; faculty support; student support; and evaluation and assessment (WCET, n.d.). Taken together, the Statement of Commitment and the Best Practices propose a consistent framework for developing quality standards. How those standards might translate into benchmarks was the subject of a study prepared by the Institute for Higher Education Policy (Phipps & Merisotis, 2000).

    For “Quality on the Line,” Phipps and Merisotis (2000) surveyed the literature to compile a list of 45 possible benchmarks. They then determined whether those benchmarks were recognized at various institutions delivering online courses, and examined the importance of each benchmark to administrators, staff, faculty, and students at those institutions. The result is a list of 24 benchmarks that should be considered “essential to ensure the quality in Internet-based distance education” (p. 2). The elements (see Table 4 in Appendix A) include institutional support, course development, teaching and learning, course structure, student support, faculty support, and evaluation and assessment benchmarks. The similarities between these benchmarks and the proposals from the accrediting agencies clearly demonstrate a common conceptualization of distance education in the United States. Where they diverge is in the degree to which the actual curriculum elements are prescribed, and in the relative weights given to institutional structures.

     

    Both sets of standards are designed more for traditional face-to-face institutions introducing distance education programs than for distance education providers updating their mode of delivery. The currency of these standards within the accreditation community was confirmed by a U.S. Department of Education study, which observed that despite differences in their standards and means of assessment, “there was remarkable consistency” in how reviewers “evaluated distance education programs, and in what they considered to be most important indicators” (U.S. Department of Education, 2006, p. 2). The provider focus remains a strong orientation under both schemes and, unlike the accreditation standard for open and distance learning in the UK, neither U.S. scheme speaks to the importance of encouraging learners to take responsibility for their own learning.

    • Process versus Outcomes

    One of the first principles in all of the quality assurance schemes considered here is guaranteeing consistency in the product’s results. In the view of Total Quality Management advocates, “many quality management initiatives, especially in service industries, die because we fail in measurement of the outcomes” (Widrick et al., 2002, p. 130). The dangers of presenting higher education outcomes as strictly utilitarian competencies are familiar features in the debate about quality assurance activities (Gerard, 2002). Even if outcomes could be framed in wider terms, however, there is also a hazard of sliding into what has been aptly described as a variation on the “naming fallacy” – that is, assuming that “explicitness about standards” somehow provides assurance that the standards have been or can be achieved (Greatrix, 2001).

    Major efforts have been directed to identifying “quality in undergraduate education,” but according to Ernest Pascarella (2001), some of these efforts are “based on a naive understanding of just how difficult it is to accomplish in a valid manner” (p. 19). Most notably, he argues that institutional reputation and resources, and student or alumni outcomes are “potentially quite misleading,” and that results based on either of these common approaches are more likely to be driven by inputs than by effective educational practices (pp. 19-21). The solution to this problem should rest in careful measures that address the integrity of the teaching and learning processes within institutions. The seemingly insatiable appetite for comparable measures, regardless of their validity, is a dimension of the operating environments of most postsecondary institutions. While it is clear that the rhetoric of accountability and the bureaucratic systems it has spawned are not likely to disappear, it may be possible to present a framework for quality online teaching and learning that attends to more than short-term transactional or monetary values.

    Externally defined and inspected standards can lead to compliance-oriented responses in institutions. Benchmarking frameworks have been proposed in some jurisdictions as an antidote to mechanistic audit cultures. In keeping with the self-accrediting and international focus of Australian universities, the Australasian Council on Open, Distance and e-Learning (ACODE), constituted in 2002, is open to accredited universities in Australasia – those in Australia, New Zealand, Papua New Guinea, and the South West Pacific are entitled to be members (see www.acode.edu.au). Beginning with a survey in 2002-03, the initiative followed up with a collaborative pilot project between the Universities of Melbourne and Tasmania to develop a trial framework with the following components: institutional context, purpose, scope, principles of service delivery, benchmarking priorities, indicators for priority areas, self-assessment/ranking, comparative matrix of strengths and weaknesses against indicators, and finally, an action plan for self-improvement (Bridgland & Goodacre, 2005). The full articulation of benchmarks includes scoping statements, a good-practice statement, performance indicators, and performance measures (ratings) in eight areas:

    1) Institutional policy and governance for technology supported learning and teaching;
    2) Planning for, and quality improvement of the integration of technologies for learning and teaching;
    3) Information technology infrastructure to support learning and teaching;
    4) Pedagogical application of information and communication technology;
    5) Professional/staff development for the effective use of technologies for learning and teaching;
    6) Staff support for the use of technologies for learning and teaching;
    7) Student training for the effective use of technologies for learning;
    8) Student support for the use of technologies for learning. (ACODE, 2007)

    The process of benchmarking in this instance involved scoring the stages of planning, development, and implementation across different elements within each of the eight areas listed above. Other approaches to benchmarking reflect interoperability questions, applying software development principles and suggesting finer levels of granularity. The areas investigated in a New Zealand project included scoring five process levels (delivery, planning, definition, management, and optimization) against 35 standards. The domains under investigation included ten factors for processes with impacts on learning; seven factors on the development of e-learning resources; six factors related to student and operational support; three factors related to evaluation and quality controls; and nine factors related to institutional planning and management (Marshall, 2006). An example which bridges two jurisdictions (The Open University in the UK and the University of Sydney in Australia) emphasizes the importance of the relationship between institutions and the prospective approaches which benchmarking might provide (Ellis & Moore, 2006).
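    As a purely illustrative sketch – not an instrument drawn from any of the frameworks above – the following Python fragment shows how a scoring exercise of the kind described by Marshall (2006), in which standards are rated against a set of process levels, might be recorded and summarized. All domain names, standards, ratings, and the 0–2 scale are hypothetical.

        # Illustrative only: a hypothetical scoring grid loosely modelled on the
        # "process levels versus standards" structure described by Marshall (2006).
        # Every domain, standard, and score below is invented for demonstration.
        from collections import defaultdict

        PROCESS_LEVELS = ["delivery", "planning", "definition", "management", "optimization"]

        # (domain, standard) -> hypothetical rating (0-2) at each process level
        scores = {
            ("Learning", "Interaction between staff and students is designed into courses"):
                {"delivery": 2, "planning": 1, "definition": 1, "management": 0, "optimization": 0},
            ("Support", "Students receive timely technical assistance"):
                {"delivery": 2, "planning": 2, "definition": 1, "management": 1, "optimization": 0},
            ("Evaluation", "Courses are reviewed against stated learning outcomes"):
                {"delivery": 1, "planning": 1, "definition": 0, "management": 0, "optimization": 0},
        }

        def domain_profile(scores):
            """Average the ratings per domain at each process level."""
            collected = defaultdict(lambda: defaultdict(list))
            for (domain, _standard), levels in scores.items():
                for level, value in levels.items():
                    collected[domain][level].append(value)
            return {domain: {level: sum(vals) / len(vals) for level, vals in levels.items()}
                    for domain, levels in collected.items()}

        for domain, profile in domain_profile(scores).items():
            print(domain, {level: round(profile[level], 1) for level in PROCESS_LEVELS})

    The point of such a profile is not the arithmetic but the conversation it structures: low averages at the management and optimization levels, for instance, would flag practices that have not yet been embedded institutionally.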

    • Reshaping the Debate

    Whether or not the demands of stakeholder groups (however ill-defined), the threat of fraud, or the intensification of competition from local or international providers are behind the current impulses for elaborating quality assurance mechanisms, a dual challenge is being presented to the providers of online teaching and learning. The common thread across quality assurance schemes in the four jurisdictions is the need to address the concerns from both inside and outside the academy. Even if online and distance delivery institutions have been made scapegoats for a wide range of changes, not the least of which is the erosion of higher education institutions’ power to regulate themselves, there may still be an opportunity to address some of the concerns presented by colleagues in more traditional institutions. It follows that an overarching principle of any proposal to address quality assurance in online teaching and learning environments must recognize the integrity of higher education – no matter how it is delivered. The rhetoric of both the Australian and British qualification frameworks suggests just such an integrated approach, but the regulatory burdens they have spawned do little to reassure those who value the independence of higher education.

    While institutional and regulatory sectors have debated appropriate consumer input tools, the Web has offered an array of solutions. Various directories, with revenues tied to referrals or “pay-for-click,” proffer advice on how to select an institution. One of the more explicit rating schemes from the United States can be found at the Online Education Database (OEDb, n.d.), which ranks institutions that are accredited by the Distance Education and Training Council and that appear in other listings such as eLearners.com or the U.S. News & World Report E-learning Guide. Many of the institution descriptions and rating factors used by OEDb rely upon the U.S. Department of Education’s College Opportunities Online Locator (COOL, n.d.). For institutions that offer at least 50% of their degree programs online, OEDb digests the available Department of Education data on acceptance rates, financial aid, and retention and graduation rates. Other consumer-focused metrics selected by OEDb are peer web citations based on Yahoo’s link domain search, scholarly citations based on Google Scholar, and the student-faculty ratio and years accredited as reported in Peterson’s College Search (OEDb, n.d.). Established online programs are also making their way into discipline-based league tables like the business school ratings from the Financial Times. Similarly, for all the professional debates over the degree of asynchronous or blended experiences, students on ratemyprofessors.com do not seem inclined to distinguish between face-to-face and online courses.
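    To make concrete how a consumer-facing directory might fold such disparate indicators into a single ranking, the following sketch combines a handful of normalized metrics with arbitrary weights. The institutions, figures, and weights are invented; this is not OEDb’s actual methodology.

        # Hypothetical composite ranking built from normalized indicators.
        # Institutions, figures, and weights are invented for illustration and
        # do not reproduce the method used by OEDb or any other directory.
        institutions = {
            "College A": {"graduation_rate": 0.55, "retention_rate": 0.78,
                          "scholarly_citations": 120, "years_accredited": 15},
            "College B": {"graduation_rate": 0.40, "retention_rate": 0.65,
                          "scholarly_citations": 300, "years_accredited": 8},
            "College C": {"graduation_rate": 0.62, "retention_rate": 0.70,
                          "scholarly_citations": 45, "years_accredited": 22},
        }

        WEIGHTS = {"graduation_rate": 0.35, "retention_rate": 0.25,
                   "scholarly_citations": 0.25, "years_accredited": 0.15}

        def normalize(values):
            """Rescale raw values to the 0-1 range (min-max)."""
            lo, hi = min(values), max(values)
            return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

        def composite_scores(data, weights):
            """Weighted sum of normalized indicators, highest score first."""
            names = list(data)
            totals = {name: 0.0 for name in names}
            for metric, weight in weights.items():
                normed = normalize([data[name][metric] for name in names])
                for name, value in zip(names, normed):
                    totals[name] += weight * value
            return sorted(totals.items(), key=lambda item: item[1], reverse=True)

        for name, score in composite_scores(institutions, WEIGHTS):
            print(f"{name}: {score:.2f}")

    Even this toy example makes the underlying issue visible: the resulting order is driven as much by the choice and weighting of indicators as by anything the institutions actually do, which is precisely the concern raised earlier about input-driven measures.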

    In the process of taking back some of the momentum in the debate, the academy must provide clear statements of educational goals. Such goals need not be restricted to technical mastery in specific subjects. Moreover, the opportunity to pursue ideas beyond the needs of corporate sponsors should not be ignored. The effectiveness with which educational goals are articulated should be measured by how well course, program, and institutional goals align with one another. Demonstrating a consistency of purpose should be persuasive to internal and external stakeholders alike, but should not presuppose that students are responsible for seeking their own learning outcomes. This suggestion returns to the essential need for quality to be constructed through consensus building among a range of institutional stakeholders, who must, at the same time, not promise, or be promised, more than can be delivered.

    A second theme running through all of the frameworks presented here is the need for sustained institutional commitment to support distance learners. The precise nature of that support would be determined by the nature of the programs and by what students need in order to have a reasonable chance of attaining their aspirations in a given program. All too often, online delivery of courses and programs has been presented in an experimental mode, without long-term, planned infrastructure development. Whether it involves investing in technical systems or in training for support and instructional staff, the process of developing robust online teaching and learning environments should not be attempted as a series of “one-offs.” Some observers have gone so far as to suggest that digital technology may hamper rather than promote educational change, because investment focuses on short life-cycle technologies rather than the longer view needed for effective education (Ehrmann, n.d.). An institutional commitment to supporting learners will go a long way to satisfying other stakeholders without displacing the fundamental project of scholarship. This commitment can only hold true, however, if students and educators are engaging in a collaborative process of discovery; that is, if academics are not simply dispensers or interpreters of content for passive students.

    Learning technologies can promote powerful connections to content, context, and community. Unfortunately, they can also offer broad access to poorly designed and executed courseware. There are deliberate choices to be made in how to accommodate a generation of students who expect independent investigation, collaboration, and peer contacts to be facilitated in an online environment.

    The threats to traditional delivery, and most especially the disaggregation of tasks associated with teaching in higher education, are providing new opportunities for exploring the constructions of community and knowledge, teaching and learning. In generating documented evidence of interactions with content and with others, the structure of the online environment lends itself to new kinds of exploration. Eventually, the goal of such inquiries should be to point to ways to improve the teaching and learning environment. Ultimately, online programs should be able to mobilize recent theory and research into how people learn, and enhance learning by “enabling the identified characteristics of effective learning environments and ensuring that they are present and accessible” (Herrington, Herrington, Oliver, Stoney, & Willis, 2001, p. 266). From that perspective, the pursuit of quality online teaching and learning environments may become as much an exercise in scholarship as it has been in market positioning or state control.

    • Share your understanding of quality assessment in online education
    Activity type:
    Discussion and exchange
    Activity description:
    1. Briefly summarize the viewpoints in the literature (typical viewpoints may be selected); 2. Synthesize the approaches and strategies for quality assessment in online education; 3. Give your own understanding and views. Reply to at least 2 posts within or across groups.
    • Taking a connectivist MOOC as an example, discuss how the quality of online education can be assured
    Activity type:
    Discussion and exchange
    Activity description:
    1. Log in to the connectivist learning website for hands-on learning, and observe learners’ participation, cognitive engagement, and learning engagement; 2. Analyze the main problems of connectivist learning at present and the main factors affecting its quality; 3. Drawing on what you have learned, discuss how it could be improved. PS: See additional reference resources.