Introduction to the special issue on distributed learning environments

On the Horizon

ISSN: 1074-8121

Article publication date: 14 August 2009


Citation

Feldstein, M. (2009), "Introduction to the special issue on distributed learning environments", On the Horizon, Vol. 17 No. 3. https://doi.org/10.1108/oth.2009.27417caa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited


Introduction to the special issue on distributed learning environments

Article Type: Guest editorial. From: On the Horizon, Volume 17, Issue 3

This issue is about an idea that does not yet have a name. The term “distributed learning environment” is not part of common parlance in the educational technology community for higher education. For that matter, neither is the term “distributed learning”. In the training world, “distributed learning” has a somewhat more circumscribed usage thanks in large part to the efforts of the United States government’s Advanced Distributed Learning Initiative, which has historically been responsible for the development of the Sharable Content Object Reference Model (SCORM). The SCORM standard is used for delivering primarily self-paced, re-usable content modules and tracking students’ progress through those modules. Consequently, “distributed learning” in the training world usually refers to self-paced self-study online. In the world of higher education, the terms “distributed learning” and “distributed learning environments” are rarely used, and when they are used they are generally interchangeable with “distance learning (environments)” or “online learning (environments)”. There is no common usage of these terms, and to the degree that they are used at all, “distributed” is intended to be a modifier of “learning” or, more accurately, of the learners. The learners are distributed; they are not all in one place, and not all interacting with the material and each other synchronously. In this context, a “distributed learning environment” simply means a learning environment in which the learners are distributed.

In this issue, however, the expression “distributed learning environments” means something entirely different. Here we mean that the learning environments themselves are distributed. The discussion board may live as part of a traditional virtual learning environment (VLE) on one server run by the university, student-created content may live on a second server run by Google (or Yahoo!, WordPress, etc.), and the virtual science lab for the same class may run on yet another server located at a different university. These tools may be developed separately, maintained separately under different terms of service, have different security and privacy models, and often have very different user interfaces; and yet they can still collectively comprise the students’ online experience of a class. For our purposes, then, we refer to such a collection of disparate capabilities collectively as a “distributed learning environment”. We find ourselves in the awkward position of having to rely on this non-standard usage of the term because there is currently no widely used alternative term of art.

The lack of a common term for this phenomenon is an odd state of affairs, given the facts on the ground. The explosion of Web 2.0 tools used in education, ranging from YouTube to del.icio.us to various hosted weblogs and wikis, is widespread and well known. The increasing pressure to integrate these tools into VLEs, or to replace the VLE with a different flavor of integrated environment such as a Personal Learning Environment (PLE), is equally well known. The development of collaborative, cross-university resources, such as virtual laboratories, is somewhat less widespread but certainly not obscure.

Why is it that we still do not have language to talk about these trends, much less identify potential drivers, obstacles, and strategies? I have no clear answer to this question, although I do have some guesses. To begin with, many different disciplines and perspectives are required to tackle the distributed learning environment problem, from pedagogy experts to technologists to usability experts to entrepreneurs, and these groups do not always communicate with each other. Perhaps more importantly, it is possible that the motivations for the development of distributed learning environments are connected to more fundamental tectonic shifts in the evolution of universities – shifts that we do not yet fully understand. Several recurring themes in this issue suggest that deep shifts in the economics of collaboration and of education brought on by new technology will force changes to university business processes and even business models, and that distributed learning environments are just one manifestation of this larger trend. In a world where it is easier than ever for loosely organized groups of people to work together on complex research projects or even to educate themselves, what role will the tradition-bound, hierarchical institution of the university play going forward? These forces of change, which are most visible in the fast-moving world of the consumer web, may be the biggest drivers for the creation of distributed learning environments. And until the academy comes to grips with these forces and their broader implications for its future, it may not see the need to name the distributed learning environments that they largely motivate and enable.

Regardless of the reasons for the current situation, the purpose of this issue is to spark the broader conversation that has been missing until now. Not surprisingly, the articles here represent a much broader range of professions, philosophies, and even discursive styles than is typical for an academic journal. What I hope will emerge out of this collection of work is a mosaic that tells the story of the forces that are pushing academia toward distributed learning environments, the challenges that this new path presents, and some of the strategies and solution elements that are emerging as possible answers to these challenges.

The forces at work

We begin with Martin Weller, who immediately challenges some bedrock assumptions that undergird today’s institutions of higher learning. Weller’s critique of the university in the face of Web 2.0 draws upon the work of Clay Shirky, but Weller could just as easily have referenced economist Yochai Benkler’s critique of the software firm in the face of open source software in his seminal paper, “Coase’s Penguin” (Benkler, 2002). Benkler builds upon and updates the work of Nobel laureate Ronald Coase, who argued that firms are created and sustained when the cost of coordination among the members of the firm is lower than the cost of coordinating the same people and activities in an open market. Benkler argues that as technology lowers the effort and cost required to coordinate group work outside of a company, companies begin to lose their competitive advantage. Similarly, Weller argues that traditional educational models begin to lose their value as it becomes easier for learners to find the knowledge sources they need and to coordinate educational conversations and activities informally using modern technology. This opens the possibility of an “open source” education unmoored from any particular accrediting institution. Universities will have to adapt to this new model if they wish to stay in business, just as technology companies have had to adapt to open source software production. To respond to this challenge, Weller’s university has created a distributed learning environment that embraces the use of consumer-oriented Web 2.0 tools to foster informal collaborative learning as a way of exploring new models. In doing so, they have had to confront the technical challenge of how to knit these disparate tools into a coherent educational environment via some kind of “EduGlu”. This challenge is one to which we will return several times in this issue.

Alex Reid provides a complementary picture to Weller’s from a faculty member’s perspective. Drawing from the same theoretical wellspring on de-centralized collaboration but adding Actor Network Theory to the mix, Reid points out that a shift to distributed learning environments not only accommodates but also accelerates a shift in the balance of power from the institution to the students and faculty. Given the combinatorial explosion of possible tool combinations in a distributed learning environment, it is impossible for the institution to keep up with them all. Tools cannot be centrally evaluated and supported. Faculty cannot be trained or even properly incentivized to keep up with the torrent of new possibilities. Even the taxonomy of disciplinary knowledge itself dissolves under the acid of the networked economy, where content can be organized by the tags that people give it, meaning can be negotiated collaboratively in a wiki, and curricular programs become easier for individuals to construct on their own, based on their personal interests and needs. Like Weller, Reid foresees the possibility of a revitalized university where facilitated independent study is the norm rather than the exception, but only if these institutions undertake fundamental reforms of processes ranging from degree program definition to faculty tenure evaluation. Ironically, the new university that Reid envisions bears a striking resemblance in some respects to the very old model of the tutorial system still employed by Oxford and Cambridge universities today.

Putting the learner at the center

In contrast to the big-picture thinking of Weller and Reid, Nathan Garrett, Brian Thoms, Nimer Alrushiedat, and Terry Ryan have more tactical interests. They are not trying to remake the fundamental structure of academia, nor are they interested in distributed learning environments writ large. They simply want to design a better e-Portfolio for reflective learning. But in the process of doing so, they provide three insights that are central to the larger discussion. First, the notion that the kind of informal collaboration fostered by social software could lead to better learning outcomes is well supported in the learning theory literature as well as in lived practice. We have good reason to believe that social learning can be a powerful educational force, and our ability to amplify that force with technology raises questions about the trade-off between the pace at which we can expand that value and the cost of supervision by experts within the time-tested structure of the university disciplines. Second, Garrett et al. do not assume that the student-generated content to be aggregated and reflected upon in the e-Portfolio, or the ancillary content providing context for the student’s reflection, lives entirely within one unitary application, nor even that the content will stay in that application once aggregated. Sources like Wikipedia, YouTube, and IMDb, as well as destinations such as Facebook, are requirements that Garrett et al. had to accommodate in their system design. The facts on the ground indicate that distributed learning environments are already here. The question is not whether we should pursue them but how to deal with them and the larger changes that they engender. And finally, Garrett et al. remind us that social software must maintain a high level of ease of use. If the point of social software is to lower transaction costs for collaboration, then a tool that is hard to learn or hard to use decreases the value of social software by increasing those transaction costs.

This is where Jutta Treviranus enters the picture. Like Garrett et al., Treviranus did not undertake her research because she was explicitly interested in the challenges of distributed learning environments. Rather, she and her colleagues at the Fluid project were focused on the problems of usability and accessibility in software. However, the client software projects for Fluid are open source educational software projects such as Sakai, uPortal, and Moodle, where the tools that live within the larger software environment are likely to have been developed by entirely different teams at different institutions. While these applications are not necessarily distributed learning environments in our sense, since all the tools often run on the same server as the host learning environment or portal, they share the characteristic of being developed independently, which means that their user interfaces are rarely consistent with one another. Just as Weller and Reid rightly point to the power issues involved in distributed learning, Treviranus calls our attention to the fact that there is a power struggle among the individual tools, and even the institutional branding applied to the environment itself, for control over the user’s screen. Achieving an interface that serves the user and helps to lower those transaction costs amid all of this visual cacophony seems unlikely without some kind of coordinated strategy. The Fluid project addresses this challenge in part by radically devolving control of the interface to the user. Equally importantly for the purposes of this issue, Treviranus describes Fluid’s methods for encouraging uptake of their approach by diverse and loosely coordinated tool development teams, including (1) the creation of common standards that can be adopted with minimal inter-team coordination, and (2) drawing on the power of social learning by using Agile development methods to create a tight feedback loop between the Fluid developers and the communities of users. Both of these methods will also be taken up by other authors in this issue.

Finding the right architecture

With Treviranus proposing a method for knitting together the user experience in distributed learning environments, we turn our attention to the aforementioned EduGlu that would integrate the various educational tools on a functional level. Like “distributed learning environments”, “EduGlu” is a term that has periodically drifted in and out of fashion in educational technology circles for a number of years without acquiring a precise and commonly understood meaning. Sometimes it is intended to mean little more than the aggregation of RSS feeds for educational purposes. Many of the authors in this issue assert that more than RSS aggregation is needed. Weller, in describing the Open University’s Social Learn project, speaks generally of an API for the central system with which any third-party application can be integrated. Ian Boston, in his piece about Sakai 3, focuses more specifically on Google’s OpenSocial and Gadgets specifications as the mechanism. These specifications are widely adopted; they specifically support social interactions that are useful in educational settings (whether a traditional class or a non-traditional peer-to-peer open education class); and because they use client-side technologies such as HTML and JavaScript, they enable people with a wide range of technical skills to create new tools quickly without requiring heavy supervision by the people running the servers. Interestingly, one of the benefits to come out of this architectural shift for Sakai is a very significant increase in the pace of development and innovation because of the reduction in required coordination with highly skilled Java programmers. This is Benkler’s argument, as well as Weller’s and Reid’s, in action: if the transaction costs of the coordination required to complete a project can be lowered sufficiently, then loosely organized networks can suddenly become much more efficient than tightly organized project teams. At the same time, as Treviranus and the Fluid project have discovered, project teams can take advantage of those lowered transaction costs by using the newly available bandwidth to invest in a tighter feedback loop through Agile methods.

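Since “EduGlu” is sometimes understood to mean no more than feed aggregation, a small illustration may help make that baseline concrete before we move on to richer approaches. The following sketch is not drawn from any system described in this issue; it assumes the Python feedparser library and invents placeholder feed URLs for the course tools involved.

```python
# A minimal sketch of the "RSS aggregation" reading of EduGlu, assuming the
# feedparser library is installed and that each course tool exposes a feed.
# All URLs below are hypothetical placeholders, not real course tools.
import feedparser

COURSE_FEEDS = [
    "https://blogs.example.edu/bio101/feed",         # hypothetical student blog
    "https://wiki.example.edu/bio101/recent.atom",   # hypothetical course wiki
    "https://lab.partner.example.edu/bio101/rss",    # hypothetical virtual lab
]

def aggregate(feed_urls):
    """Merge entries from several course tools into one reverse-chronological stream."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for entry in parsed.entries:
            entries.append({
                "source": source,
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published_parsed"),
            })
    # Newest first; undated entries sort to the end
    entries.sort(
        key=lambda e: tuple(e["published"]) if e["published"] else (),
        reverse=True,
    )
    return entries

if __name__ == "__main__":
    for item in aggregate(COURSE_FEEDS)[:10]:
        print(f"{item['source']}: {item['title']} ({item['link']})")
```

What such a sketch leaves out is the point of the rest of this section: a flat stream of links says nothing about identity, permissions, social interaction, or how activity flows back into institutional systems, which is why Boston and others reach for OpenSocial-style mechanisms instead.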
Glen Moriarty agrees with Boston that OpenSocial provides a good foundation for the EduGlu that distributed learning environments need, but he adds OpenID and standards for unpacking OpenCourseWare into the mix for his NIXTY project. He argues that these shift the locus of power from academic institutions to students in order to accommodate a wider range of educational applications, ranging from traditional university degrees to completely informal and self-organized courses of study. By doing so, Moriarty proposes both a technical architecture and a business model that accommodate an orderly evolution in the shifting educational models forecast by both Weller and Reid. In addition, by explicitly invoking Clayton Christensen, he provides a complementary theory of change to the networked economy model Weller and Reid invoke. Christensen (2003) argues that disruptive technologies begin by capturing less profitable market niches that successful current-generation competitors believe are not profitable enough to be worth pursuing. They then grow in capability and take the market from the bottom up. It is possible that this pattern could hold true with open education as the disruptive force against traditional university-bound education. If so, then providing an architecture in which non-traditional educational models can be supported in a continuum with more traditional models may facilitate the change.

Andrew Booth and Brian Clark pick up on this theme of creating an architecture that accommodates changes in the locus of control. Specifically, they discuss an institutionally controlled VLE that acts as a container for the class resources and activities, teacher-controlled learning tools that populate the learning environment, and many student-controlled PLEs that aggregate students’ content and interactions from the variety of courses in which they participate. Booth and Clark also propose a Service-Oriented VLE (SOVLE), which utilizes Service-Oriented Architecture and a variety of technical standards to create a structure that accommodates these different levels of control. Notably, they raise the same concern about consistency of user interface that Treviranus raises and arrive at the same vision of what a solution to the problem would look like: users must be given more control over how all of the tools in the learning environment are presented. They also point out the need for an ability to orchestrate academic workflows that cross boundaries between these distributed tools. This is a particularly thorny technical problem with several solution candidates but no definitive solution at this time.

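Booth and Clark describe the SOVLE only at the level of architecture, so the following is a deliberately generic, hypothetical sketch of the service-oriented idea rather than their design: one VLE capability is exposed as a small web service that a teacher-controlled tool or a student-controlled PLE could call from another server. It assumes Python with Flask, and the endpoint, course identifiers, and data are invented for illustration.

```python
# A hypothetical illustration (not Booth and Clark's SOVLE design) of exposing
# one VLE capability as a web service that an external PLE could consume.
# Requires Flask; the course data here is an invented in-memory stand-in.
from flask import Flask, jsonify

app = Flask(__name__)

# Institutionally controlled course data (placeholder)
COURSES = {
    "bio101": {
        "title": "Introductory Biology",
        "resources": [
            {"type": "discussion", "url": "https://vle.example.edu/bio101/forum"},
            {"type": "virtual-lab", "url": "https://lab.partner.example.edu/bio101"},
        ],
    }
}

@app.route("/courses/<course_id>/resources")
def list_resources(course_id):
    """Let an external, student-controlled PLE pull this course's resource list."""
    course = COURSES.get(course_id)
    if course is None:
        return jsonify({"error": "unknown course"}), 404
    return jsonify(course["resources"])

if __name__ == "__main__":
    app.run(port=5000)
```

The interesting problems begin where the sketch stops: authenticating the caller, deciding which layer of control (institution, teacher, or student) governs each resource, and orchestrating the cross-boundary academic workflows that Booth and Clark identify as an open problem.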
Scott Wilson and Kamala Velayutham tie together several of the threads that run through the architectural portion of this issue. Like Ian Boston, they are concerned with the pace of innovation being hampered by traditional enterprise architecture and the high degree of coordination it requires for new software to be developed. Like Glen Moriarty, they want to accommodate a wide spectrum of educational models, ranging from highly instructor-facilitated to highly student-driven. And like Andrew Booth and Brian Clark, they see a need to segment control at the institutional, faculty, and student levels. In fact, they go even further by proposing that governments could realize benefits in requiring some level of management control at a national infrastructure level. Each level has its own requirements that are often in tension with the requirements of the other levels. For example, faculty, institutional, and national needs to track student progress for support and accountability purposes may be in tension with student needs for privacy. Wilson and Velayutham apply the metaphor of shearing layers from the world of physical architecture to look at how management control should be segmented across these different levels, through both technology architecture and university policy.

Aligning the university

This brings us back to the question of whether universities are ready for the institutional changes necessary to manage in this changing environment. Patrick Masson and Ken Udas address the question of how universities must retool themselves from a process perspective. Noting the growing tension between university needs for greater institutional control in an increasingly competitive business landscape and student demands for more individual control based on their experiences on the consumer web, Masson and Udas try to identify processes that will help universities balance these competing demands. They ask us to consider what general changes need to be made to university decision-making processes in order to accommodate these new challenges. Unsurprisingly, Masson and Udas come up with answers similar to those of the other authors. They suggest that the same Agile methods and openness that Treviranus and Boston find so successful for their design and development teams can also work for university management. In other words, the management methods developed in the software industry in response to the shifting economics of cooperation can and should also be adopted by universities across a wide range of management practices in response to the same market forces.

Appropriately enough, the last word in the issue goes to a Millennial. Michael Staton represents the generation that built applications like Facebook for its own use and then turned them into multi-billion dollar properties. Staton believes that his generation is eager to do the same for the world of educational software but is impeded by specific problems with the architectures of both educational software and educational bureaucracies, as well as with university cultures. On the software side, he argues that today’s VLE is simply not designed with the kinds of real-time and pervasive integration channels that social software like Facebook requires. On the bureaucratic side, ponderous and opaque business processes make it very difficult for new companies that cannot afford to invest in a large sales force to shepherd proposals through the long and arduous university procurement process. And on the cultural side, an aversion to the profit motive and a preference for in-house development encourage behaviors that are not economically rational. Staton concludes by recommending nine policy changes universities can make to improve this situation significantly, which can be summed up as follows:

1. revise procurement practices to encourage innovation;

2. require adopted software to make its data available to other systems in a standard format; and

3. encourage bottom-up, Agile adoption practices within the university.

In other words, he arrives at architectural principles that are consistent with those of Boston, Garrett, and Wilson and Velayutham, as well as management principles that echo those of Masson and Udas.

The beginning

Finally, then, the most important reason why the phenomenon this issue addresses has no name becomes clear. The advent of the distributed learning environment appears to be just one manifestation of much deeper changes happening in (and to) academia, driven by macro-economic forces that the academy does not yet understand very well. This blind spot is not surprising. Benkler (2006) writes:

It is easy to miss these changes. They run against the grain of some of our most basic Economics 101 intuitions, intuitions honed in the industrial economy at a time when the only serious alternative seen was state Communism – an alternative almost universally considered unattractive today. The undeniable economic success of free software has prompted some leading-edge economists to try to understand why many thousands of loosely networked free software developers can compete with Microsoft at its own game and produce a massive operating system – GNU/Linux. That growing literature, consistent with its own goals, has focused on software and the particulars of the free and open-source software development communities, although Eric von Hippel’s notion of “user-driven innovation” has begun to expand that focus to thinking about how individual need and creativity drive innovation at the individual level, and its diffusion through networks of like-minded individuals. The political implications of free software have been central to the free software movement and its founder, Richard Stallman, and were developed provocatively and with great insight by Eben Moglen. Free software is but one salient example of a much broader phenomenon. Why can fifty thousand volunteers successfully coauthor Wikipedia, the most serious online alternative to the Encyclopedia Britannica, and then turn around and give it away for free? Why do 4.5 million volunteers contribute their leftover computer cycles to create the most powerful supercomputer on Earth, SETI@Home? Without a broadly accepted analytic model to explain these phenomena, we tend to treat them as curiosities, perhaps transient fads, possibly of significance in one market segment or another. We should try instead to see them for what they are: a new mode of production emerging in the middle of the most advanced economies in the world – those that are the most fully computer networked and for which information goods and services have come to occupy the highest-valued roles (Benkler, 2006).

Ten years ago, the casual industry observer would probably not have predicted that some of the largest proprietary software companies in the world would also be involved with open source software in significant ways. Taking my own employer, Oracle, as an example, the section of the corporate web site describing the company’s involvement with open source lists several dozen open source projects hosted at Oracle, a number of externally hosted projects to which Oracle contributes code, and several Oracle-supported distributions of open source products (http://oss.oracle.com/). Higher education likely faces a transformation that is similar in scope over the coming decade, with equally far-reaching and hard-to-predict consequences for the institution. Distributed learning environments represent just one facet of many in the coming trend toward distributed education, a phenomenon that we also have not yet identified clearly enough to have given it a name. My hope is that this journal issue will make some small contribution toward finally starting that long and challenging conversation.

Michael Feldstein is based at Oracle Corporation, Redwood Shores, California, USA.

References

Benkler, Y. (2002), “Coase’s penguin”, The Yale Law Journal, Vol. 112 No. 3, pp. 369–446

Benkler, Y. (2006), The Wealth of Networks: How Social Production Transforms Markets and Freedom, Yale University Press, New Haven, CT

Christensen, C. (2003), The Innovator’s Dilemma: The Revolutionary Book that Will Change the Way You Do Business, Collins Business, New York, NY
