Sunday, November 25, 2012

(#oped12) A need for more research related to MOOCs, and for more dynamic publication mechanisms in online education.


A few facts:

  1. A recent call for papers by JOLT (deadline November 15th, 2012; guest editors George Siemens, Valerie Irvine, and Jillianne Code) states: “A special issue of the MERLOT Journal of Online Learning and Teaching (JOLT) is planned for Summer 2013 that will address the weak MOOC research base”. In another paragraph it states: “While MOOCs are beginning to burgeon in the higher education space, research in the area is still very limited. For educators, learning designers, and university administrators, making decisions around MOOC design and deployment can be difficult given the lack of published research.”
Observation #1: Note that the JOLT call speaks of a “weak research base”, “limited” research, and a “lack of published research”.
On the other hand, a paper accepted by JOLT by November 30th, 2012 will not appear until June 2013, seven months later.
  2. In a recent comprehensive paper by Daniel (2012), mostly about x-MOOCs, only 3 of the 66 references cited are from peer-reviewed journals and 6 from books; the great majority (nearly 90%) are newspaper articles and blog posts.

Observation #2: I cannot imagine a physics or medicine paper announcing a major breakthrough based on newspaper articles or individuals’ blog posts.
  3. The figure shows the cover of TIME magazine (October 29th, 2012), which states: “160,000 students, 1 professor.”

Observation #3: If one takes the 160,000 figure from TIME magazine’s cover together with the fact that Coursera has some 200 announced courses, one could infer that 32 million people are simultaneously engaged in its courses!
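To make the arithmetic behind this observation explicit, here is a minimal sketch; the cover figure and the approximate 200-course catalogue are the only inputs, and treating the cover figure as typical of every course is exactly the flawed assumption being highlighted.

```python
# Naive extrapolation behind Observation #3: treat the TIME cover figure
# as typical of every announced Coursera course.
students_per_course_headline = 160_000   # "160,000 students, 1 professor"
announced_courses = 200                  # approximate Coursera catalogue size

naive_total = students_per_course_headline * announced_courses
print(f"Naive simultaneous enrolment: {naive_total:,}")   # 32,000,000
```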



  4. Figure 1 shows the daily unique visitors to the web pages that Coursera, EdX, and Udacity learners must visit each day to access their courses. Alexa.com (the source) gives only relative numbers between sites; khanacademy.org allows us to calibrate them: 0.04 on this scale corresponds to around 100,000 unique visitors.
 


In Table 1, “participants” is an estimate of the number of learners accessing the Coursera, EdX, and Udacity courses daily. The number of courses each offers was obtained from its web site: 198, 8, and 14 respectively. The last column gives an average estimate of the number of participants per course if all of the courses offered were running simultaneously.

            Participants   Number of courses   Simultaneously running   Average participants per course
Coursera    100,000        198                 unknown                  500
EdX         50,000         8                   4                        6,250
Udacity     25,000         14                  unknown                  1,875
Table 1
The case of EdX allows a more precise estimate: during the period analyzed it had 4 courses running simultaneously, which gives an average of 12,500 participants in each. It is assumed that all students need to access the web page daily. The peaks in the curves indicate the start of a new course or the beginning of a week.
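As a rough illustration of how the Table 1 figures were derived, the sketch below calibrates Alexa’s relative daily reach against khanacademy.org (0.04 ≈ 100,000 daily unique visitors) and divides by the course counts taken from each provider’s web site. The reach values used for Coursera, EdX, and Udacity are illustrative round numbers consistent with the estimates above, not exact readings from the Alexa charts, and the Udacity result differs slightly from the 1,875 in Table 1 because of rounding.

```python
# A minimal sketch of the Table 1 estimation. Alexa gives only relative
# "daily reach", so khanacademy.org is used as the calibration point.
KHAN_REACH = 0.04
KHAN_DAILY_VISITORS = 100_000
visitors_per_reach_point = KHAN_DAILY_VISITORS / KHAN_REACH

# Illustrative reach values (assumptions) and course counts from the sites.
sites = {
    "Coursera": {"reach": 0.040, "courses": 198},
    "EdX":      {"reach": 0.020, "courses": 8},
    "Udacity":  {"reach": 0.010, "courses": 14},
}

for name, data in sites.items():
    daily = data["reach"] * visitors_per_reach_point
    # Average per course if every listed course were running simultaneously.
    per_course = daily / data["courses"]
    print(f"{name:9s} ~{daily:9,.0f} daily participants, "
          f"{data['courses']:3d} courses, ~{per_course:7,.0f} per course")

# EdX had only 4 courses actually running during the period analyzed,
# which raises the per-course estimate to 50,000 / 4 = 12,500.
print(f"EdX per simultaneously running course: {50_000 / 4:,.0f}")
```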

Observation #4: More precise information from learning analytics and more published research are needed.
Some conclusions:

These are just four illustrations of some important issues in online education today.
  • Academic papers whose references are based on blogs and newspaper articles.
  • Numbers quoted by newspapers and blogs that can be totally misleading.
  • More precise information from learning analytics and more published research are needed. 
  • Since the average time it takes for a research paper to get published in a peer-reviewed journal is around 4 months, papers are most probably already obsolete when they do appear. Research in online education needs to move to a more dynamic publishing format. (If you are interested in this particular point, I have discussed it in more detail in a previous blog post; see Rodriguez, 2012.)
tags: #cfhe12, #oped12, CFHE12

References
  1. Daniel, J. (2012). Making Sense of MOOCs: Musings in a Maze of Myth, Paradox and Possibility. Paper submitted to the Korea National Open University (KNOU), September 25th, 2012.
  2. Rodriguez, O. (2012). Too many blog posts and media articles and few research articles related to MOOCs. Retrieved from http://cor-ar.blogspot.com.ar/2012/09/too-many-blog-posts-and-media-articles.html
  3. TIME Magazine, October 29, 2012.
 

Friday, September 14, 2012

Too many blog posts and media articles and few research articles related to MOOCs (c-MOOCs and x-MOOCs)

Reading a recent post by Audrey Watters, along with the corresponding comments, I started wondering: why are there so few research articles related to MOOCs (c-MOOCs and x-MOOCs [1]), such that no one can quote rigorous studies, facts, and numbers?
In his post “No such thing as a free MOOC”, about Edinburgh University and Coursera, Jeff Haywood states: “Currently we know little about MOOC learners, about how to design and deliver successfully in a range of subjects, and most importantly at a range of levels (eg final year undergrad). Is the experience helpful to learners, and do they get value from their certificates of completion? Much more research is needed, and perhaps JISC might find this a useful area in which to support the UK HE community”.

So why does this happen? Simple:
  • The average time for publication in educational research journals is around 2-4 months
  • There is too much biased editorial filtering
  • Articles are dispersed into too many journals
  • In this rapidly changing topic, much of what is published is already (nearly) obsolete when it appears.
The basic science research community faced exactly this same problem many years ago.

From my perspective:

Perhaps, as happened in the world of physics and basic science, educational research should start implementing e-prints and an archiving system. See, for example, arXiv (an archive for electronic preprints) and INSPIRE (a retrieval system).

If you are interested in the answers to the following questions please see the added references:
  •  Is there an advantage for scientists to make their work available through repositories, often in preliminary form?
  •  Is there an advantage to publishing in Open Access journals?
  •  Do scientists still read journals or do they use digital repositories?
References:
[1] The division of MOOCs into c-MOOCs and x-MOOCs is similar in spirit to the expressions “connectivist MOOCs and the others” (George Siemens, 2012) or “the real MOOCs, not the x-MOOCs” (Stephen Downes, 2012).

Sunday, July 8, 2012

Produsage. Are participants in connectivist MOOCs producers AND/OR consumers?


A recent post on Bonnie Stewart’s blog raised an interesting topic: in the new digital culture, practically all digital users are in some way both producers AND consumers of digital media. While clarifying my ideas for a comment I made there, I analyzed whether this statement holds true for participants in connectivist courses (c-MOOCs).

Connectivist MOOCs are supposed to be “the place” for networking, where everyone is sharing, producing AND consuming. But if we look at the evidence from the research literature related to c-MOOCs, we realize that early in the courses participants already polarize into either an active role or a lurker role. In other words, they become either producers OR consumers.

One of the most surprising facts about participants’ behavior in c-MOOCs is that at most 10% are producers while 90% consume. In some weeks of Change11, fewer than 2% of the 2,435 registered were active participants (producers). Where were the other 2,376 (98%)? During those weeks only about 35 active participants and a facilitator were producing, while some 2,400 were consuming (lurkers) (see figure).
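As a back-of-the-envelope check (the figures quoted above are approximate and do not line up exactly, so this only illustrates the order of magnitude):

```python
# Rough producer/consumer split for the weakest weeks of Change11,
# using the approximate figures quoted above.
registered = 2435
producers = 35 + 1                 # ~35 active participants plus one facilitator

lurkers = registered - producers
print(f"Producers: {producers} ({producers / registered:.1%})")   # ~1.5%
print(f"Lurkers:   {lurkers} ({lurkers / registered:.1%})")       # ~98.5%
```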


From this we can extract two conclusions:

  • we need to redefine a c-MOOC as a course with an enlarged number of tutors (the ~10% of active participants) and a remainder who retreat to lurker status; I am not sure this is connectivism,
  • or we need to improve the way we deliver c-MOOCs, finding ways to get the 90% who lurk to participate.

I personally believe in the need for the second option.

I always associate the value of c-MOOCs with the concept of “eventedness”, which was defined by Cormier and Siemens in their Educause paper: “The course members resemble the people in a corner having an in-depth discussion that others can choose to enter. Enough structure is provided by the course that if a learner is interested in the topic, he or she can build sufficient language and expertise to participate peripherally or directly. The more people who walk over to talk, the better the chance will be that people will contribute to the conversation”.

Tuesday, April 3, 2012

Time to call MOOCs something else?


A recent comment by Stephen Downes in OLDaily (April 3rd, 2012) on a post by John Mak made me realize that the word MOOC should be taken to mean any Massive Open Online Course (something that seems natural and obvious).
But then, what everybody (i.e., in the context of Change11 and the like) understands as a MOOC (see the excellent Educause paper [1] by Cormier and Siemens) should be named something else.
Simply naming them “connectivist MOOCs” does not give credit to the ideas behind a MOOC. They should stand on their own.
MOOCs and the AI-Stanford-like (AI) courses definitely represent very distinct course formats.
Both types bear some common features:
  • Geographical spread of participants
  • A high dropout rate, although for AI courses it is much higher than for MOOCs (85% vs. approximately 40%, respectively).
  • Massiveness, although AI courses have orders of magnitude more registered learners.
But they clearly differ in so many fundamental aspects (especially in their pedagogical approach) as to constitute two very different course formats:
  • The AI courses fall into the cognitive-behaviorist pedagogy category and the MOOCs into the connectivist one.
  • AI participants have totally different learning goals and preparation from those in MOOCs.
  • The nature of the subjects studied is very different: engineering in the AI courses, educational theory in the MOOCs.
  • MOOCs have a vast number of lurker participants, while the AI courses have practically no lurkers.
  • Tutors and facilitators play very different roles.
  • Openness also has a different meaning in each format. In the AI courses it mostly means that the courses are open for anyone to take. In MOOCs it refers to openness to the personalization of learning; to dialogue, debate, and conversation; to novel, divergent, and creative thinking; and to participation based on connection, collaboration, and sharing.

Open online courses represent an important development in open education, but MOOCs (our MOOCs) in particular represent a major change.

So,  it’s time to call MOOCs something else.

[1] Cormier, D., and Siemens, G. (2010). Through the open door: Open courses as research, learning, and engagement. Educause, 45 (4), 30-39. Retrieved March 2012 from: http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume45/ThroughtheOpenDoorOpenCoursesa/209320
Tag for #change11

Friday, March 9, 2012

Two distinct course formats in the delivery of MOOCs.


MOOCs have been carried out with great success during the last few years. Examples are CCK08, PLENK2010, MobiMOOC (2011), EduMOOC (2011), Change11, and LAK12. Their implementation requires conceptual changes in perspective from both “facilitators” (tutors) and learners. They are so novel that much research is still needed to understand them.

Basically, two very distinct delivery formats have been used:

  • Those that use what is called an aggregator: a newsletter called “The Daily”.
  • Those where all events go through a “centralizing” web page and discussions happen on a mailing list, in most cases a Google Group.

Each of these will, of course, have a different impact on learners’ behavior and experience and on the outcome of the course.

In this post I present some ideas on how to study this problem.

How many participate in MOOCs

For our discussion it is important to understand how many people participate in MOOCs and, of these, how many are active and how many take a passive role behind the scenes. “Lurker” is the term used for the latter.

In a recent post, George Siemens (see reference 1) gave the following numbers and pattern of participant behavior for Change11, but these could be considered representative of all MOOCs.

“The Change MOOC has about 2400 participants, yet we typically get about 40 participants per live sessions, 5-10 blog posts a day, and 20+ daily tweets related to the course. Some are active throughout the course (though when I did an analysis on CCK08, only a few of the most active participants in week 1 were still in the top ten by week 12), some have spurts of activity, and others subscribe to the daily but don’t engage in ways that are visible to us as facilitators. Consistently, as the course progresses, active participation declines.”

Delivery formats

Format 1:

Many MOOCs have utilized a daily newsletter named “The Daily”, which basically aggregates contributions from all blogs (or other resources) of participants, tagged in a certain way. Examples are CCK08, PLENK2010, Change11, and LAK12.
In these cases it is nearly impossible to track learners’ behavior except for those who are active participants. The work of Rita Kop in PLENK2010 was an exception, since an effort was made to track lurkers’ behavior through surveys and other strategies.

Those who participate actively have a certain degree of expertise in the course domain and the confidence to expose their writing.
Lurkers seem to refrain from making any kind of appearance but will burst into a blog if a post (announced through The Daily) is of interest to them. The number of lurkers at any time can be as high as 50% of registered participants.
These MOOCs follow in part a pattern similar to that described in Reference 2: “the MOOC mirrors a discussion at a conference, in a research lab, or in a workshop”.
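For readers unfamiliar with how this format works mechanically, here is a minimal sketch in Python (using the feedparser library) of the aggregation idea behind “The Daily”: harvest participants’ blog feeds and keep only the posts carrying the course tag. The feed URLs and the tag below are illustrative placeholders, and this is not the software actually used to produce the newsletter.

```python
# A minimal feed-aggregation sketch: collect posts tagged with the course
# hashtag from a list of participants' blog feeds.
import feedparser  # pip install feedparser

COURSE_TAG = "change11"  # hypothetical course tag
FEEDS = [
    "https://example-participant-1.blogspot.com/feeds/posts/default",
    "https://example-participant-2.wordpress.com/feed/",
]

def harvest(feeds, tag):
    """Return (title, link) pairs for posts labelled with the course tag."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            labels = {t.get("term", "").lower() for t in entry.get("tags", [])}
            if tag.lower() in labels:
                items.append((entry.get("title", "untitled"), entry.get("link", "")))
    return items

if __name__ == "__main__":
    for title, link in harvest(FEEDS, COURSE_TAG):
        print(f"- {title}: {link}")
```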

Format 2:

Some MOOCs employ a “centralizing” web page used by the facilitators to announce all activities, plus a mailing list open to contributions from participants (mostly Google Groups). Threads on different subjects are opened, and participants continuously receive the new contributions to the different threads.
Examples are MobiMOOC (2011) (556 registered) and EduMOOC (2011) (2,700 registered).
Since adding an opinion or just a comment to a discussion thread does not require showing expertise, a dormant lurker becomes active on just those occasions. Participants get to know each other better, since these occasional appearances make lurkers visible.
This second format is closer to the idea of “eventedness” described in Reference 2: “The course members resemble the people in a corner having an in-depth discussion that others can choose to enter. Enough structure is provided by the course that if a learner is interested in the topic, he or she can build sufficient language and expertise to participate peripherally or directly. The more people who walk over to talk, the better the chance will be that people will contribute to the conversation”.

Dropout rate.

Finally, let me add a small comment on the dropout rate.
In reference 2, Cormier and Siemens write:
 “The most disconcerting issue for many educators running an open course is the drop-out rate”. 
And in reference 1:
“While active participation in our courses declines as the course progresses, subscribers to the Daily increase. I’m not sure what to make of that. If I was getting five emails a week on something I wasn’t interested in, I would unsubscribe. Does that mean we can view Daily subscribers as a) people are still engaged, b) people can’t find the unsubscribe link, or c) that we’ve subjected over 15,000 people to guilt about not being active in MOOCs?”

The answer to the last question is (a): people are still engaged. In reality, the most disconcerting issue for those running a course comes from not realizing that lurkers may make up a high percentage (difficult to quantify precisely) of those registered.

Tag for #change11 and  #lak12

  1. http://www.elearnspace.org/blog/2012/02/28/the-best-learning-of-my-life/
  2. Cormier, D., & Siemens, G. (2010). Through the open door: Open courses as research, learning, and engagement. Educause, 45 (4), 30-39. Retrieved October 20th, 2010 from: http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume45/ThroughtheOpenDoorOpenCoursesa/209320

Wednesday, March 7, 2012

MOOCs, the Cynefin framework, and understanding the “basics”.


In a recent post, Dave Cormier proposed that the Cynefin framework, as developed by Dave Snowden, could help describe rhizomatic learning (MOOCs). Broadly, Cynefin offers five “categories” for separating the kinds of decisions that can be made: simple issues, complicated issues, complex issues, chaotic issues, and disorder.

In this post I would like to contribute some thoughts on the subject and see if I can get some feedback from those in LAK12 and Change11, so as to better understand the ideas proposed.

In the post Dave states:

  1. “If you are looking for ‘best practices’ in a given domain, the MOOC is a fantastically inefficient way of acquiring them”. 
  2.  “If you are looking for ‘good practices’ a MOOC is probably a better option than for simple practices, but it’s still not exactly designed for that.”
  3. “If you are looking for a ‘chaotic experience’ MOOCs are probably a little tied tight for you.”
  4. “The complex domain is where the MOOC really shines. If you want to try things, see how it goes, and build from that response, a MOOC is just the ecosystem you need”.
As I understand it, points 1 and 2 refer to “what we learn” (best/good practices in a certain domain), while 3 and 4 refer to “how we learn”.

All four statements are correct when applied, for example, to MOOCs like CCK12, EduMOOC, Change11, and LAK12. No learner in these MOOCs was looking for best/good practices.

Counterexample:

MobiMOOC (2011) was a very successful MOOC on mobile learning, definitely rhizomatic, that met all the requirements spelled out in the literature for being a MOOC (the Educause (2010) paper by Cormier and Siemens). But:
  • If you were looking to learn about “best practices” in mobile learning, this was the place.
  • “Good practices” were also part of the learning space: not located in one expert, but in many distributed through the network. Mentorship was spread through the web.

MobiMOOC, being a MOOC, satisfied points 3 and 4.

A few thoughts on the question of understanding the “basics”

In his post Dave states: “By basic here i mean ‘turn on the computer’ rather than define a computer”.

I agree, and here are some of my thoughts:
  • MOOCs are not suited to teaching/learning “the basics” efficiently in any domain.
  • It is fundamental for learners in a MOOC to have a certain degree of preparation. Unprepared participants who lurk will not understand, and if they want to be active participants they have nothing to contribute. The learners are nodes of the network and must contribute part of their knowledge.
  • The introductory (CS101) Python programming course running at udacity.com would be impossible to carry out in the MOOC format.
 Tag for #lak12 and #change11


Saturday, March 3, 2012

Vast Lurker and No-lurker Participation in Open Online Courses: MOOCs and the AI-Stanford-like courses, respectively.



Open online courses with a massive number of students have represented an interesting development for online education in the past few years.
They have basically followed two very different formats: MOOCs and courses similar in spirit to the AI-Stanford course.
In this post I analyze, for both types of massive course, the behavior (in both numbers and pattern) of what the research literature describes as lurker participants (see Rita Kop, 2011).
MOOCs
MOOCs represent an emerging methodology of online teaching with a structure inspired by the philosophy of connectivism. During the last few years they have been carried out with great success. Examples are CCK08, PLENK2010, MobiMOOC (2011), EduMOOC (2011), Change11, and LAK12. Their implementation requires conceptual changes in perspective from both “facilitators” (tutors) and learners.
These courses can be classified within the connectivist pedagogy (Dron and Anderson 2011, see also a previous post).

Figure 1 (extracted by applying Google Analytics to the home page of EduMOOC 2011) represents a typical behavior pattern of those participating in a MOOC. A large number register (2,700 in this case), but after a few weeks the active participants drop to fewer than 100. Activities like online meetings draw no more than a few tens of people. Participation in surveys is also small.
Figure 1. Number of visits from new visitors (dots) and returning visitors (squares), as defined in Google Analytics, for EduMOOC’s main web site over the period from one week before the start until one week after.

Then an important question emerges: have more than 90% of registered participants dropped the course?
“Lurker” is the term used for a participant who just follows the course, watches the recordings, and browses the available course resources. Such participants stay mostly behind the scenes waiting for some interesting event, as can be seen in Figure 1 and quantified in Table 1. A successful blog post or a particular debate posted to the Google Groups mailing list may draw responses from as many as 50% of those registered.

Table 1. Number of new and returning daily visitors to EduMOOC’s main web page. W0 is the week before the start and W8 the final week; Thursday was chosen as the sampling day. The total number of unique visitors during the 8 weeks was around 10,000.
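Under the assumption that the round numbers above are representative (2,700 registered, roughly 100 active after the first weeks, and a popular post reaching about half of those registered), the lurker share can be sketched as follows:

```python
# Hedged estimate of the EduMOOC lurker share, using round numbers
# taken from the text above.
registered = 2700
steady_active = 100                 # approximate active participants after the first weeks
peak_responders = 0.5 * registered  # a popular post could draw ~50% of registrants

lurkers = registered - steady_active
print(f"Apparent lurkers: {lurkers} ({lurkers / registered:.0%} of registrants)")
print(f"Potential responders to a popular post: ~{peak_responders:.0f}")
```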

The AI-Stanford-like courses and udacity.com
The 2011 AI-Stanford class on Artificial Intelligence, taught by Sebastian Thrun and Peter Norvig, was also a massive open online course, with 160,000 registered enrollees of which 20,000 completed all coursework. It was offered free and online to students worldwide from October 10th to December 18th, 2011. A very similar pattern is followed by the courses released, and still in progress, at udacity.com.
The AI-Stanford course included feedback on progress and a statement of accomplishment. The curriculum drew on that used in Stanford’s introductory Artificial Intelligence course, and the instructors offered similar materials, assignments, and exams.

These courses can be classified within the cognitive-behaviorist pedagogy.

Figure 2 shows the number of participants over the duration of the AI course, expressed as daily reach (analytics extracted using alexa.com). A huge peak surges to nearly 100,000 (the daily reach of khanacademy.org) around October 10th (the beginning of the course). It then stabilized very rapidly at around 25,000 active participants. The smaller peaks are linked to the weekly obligatory exams. Practically no lurkers participate, and the drop from 160,000 to 25,000 simply represents dropouts.


Figure 2. Number of active participants in the Stanford AI-class.
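The registration, steady-activity, and completion figures quoted above imply the dropout rates used elsewhere in this blog; a quick check:

```python
# Dropout arithmetic for the AI-Stanford class, from the figures above.
registered = 160_000
steady_active = 25_000      # stabilized daily participation (Alexa estimate)
completed = 20_000          # finished all coursework

print(f"Drop from registration to steady activity: {1 - steady_active / registered:.0%}")
print(f"Completion: {completed / registered:.1%} (dropout ~{1 - completed / registered:.0%})")
```

Depending on whether one counts steady activity or completion, the implied dropout rate is roughly 84-88%, consistent with the ~85% figure cited elsewhere in this blog.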


Two very different course formats.

From previous studies it has become evident (George Siemens, 2012) that we are in the presence of two different formats:

  • the AI-Stanford participants have totally different learning goals and preparation from those in MOOCs;
  • the nature of the subjects studied is very different: engineering versus educational theory;
  • the AI-Stanford course falls into the cognitive-behaviorist pedagogy category and the MOOCs into the connectivist one.
The retention and lurker behavior described above adds another distinction to this list.


tag for: #lak12 and #change11






Thursday, February 23, 2012

MOOCs and the AI-Stanford Course: Examples of Connectivism and Cognitive-Behaviorism, Respectively.


In a recent paper Anderson and Dron (2011) describe three generations of distance education (DE) pedagogy: cognitive-behaviorist, social constructivist, and connectivist.
They conclude:
  • that all three current and future generations of DE pedagogy have an important place in a well-rounded educational experience;
  • that, to a large extent, the generations have evolved in tandem with the technologies;
  • that all three models are very much in existence today.

For each of these models of DE pedagogy they outline a set of conditions that characterizes it.

During the last few years, MOOCs (Massive Open Online Courses) have been carried out with great success. Examples are CCK08, PLENK2010, MobiMOOC (2011), EduMOOC (2011), Change11, and LAK12.

They represent an emerging methodology of online teaching. Their structure was inspired by the philosophy of connectivism, and their implementation requires conceptual changes in perspective from both “facilitators” (tutors) and learners.
Their subjects have ranged from connectivism and connective knowledge (CCK), personal learning environments, networks and knowledge (PLENK), and online learning for today and tomorrow (EduMOOC) to the more technically oriented topic of mobile learning (MobiMOOC).
They share many common features, but two different delivery formats can be distinguished. MobiMOOC and EduMOOC, for example, used a centralized forum in the form of a Google Group, which made all debates and discussions happen nearly instantly. The others utilized “The Daily”, a daily newsletter that harvests new posts from participants’ blogs and other online contributions. From the perspective of learning analytics, the former represents a much simpler research environment for an external (non-organizer) researcher.


The 2011 AI-Stanford class on Artificial Intelligence, taught by Sebastian Thrun and Peter Norvig, was also a massive open online course, with 160,000 registered enrollees of which 20,000 completed all coursework. In the words of its creators it was “A bold experiment in distributed education”. It was offered free and online to students worldwide from October 10th to December 18th, 2011. The course included feedback on progress and a statement of accomplishment. The curriculum drew on that used in Stanford’s introductory Artificial Intelligence course, and the instructors offered similar materials, assignments, and exams.

If one uses Anderson and Dron’s framework to classify the DE pedagogy of these online formats, it becomes clear that the AI-Stanford course falls predominantly into the cognitive-behaviorist category (with some small components of social constructivism) and the MOOCs into the connectivist one. This analysis is similar to that recently posted on her blog by Anne Zelenka (http://annezelenka.com/2012/02/16/getting-ready-for-connected-learning/) for the Stanford machine learning class by Andrew Ng and the LAK12 MOOC.

Picture 1 shows the number of visitors to the main page of EduMOOC. Although it had nearly 2,000 registered participants, only 100-200 on average participated after the first weeks (from Google Analytics).
Picture 2 shows the daily number of participants in the AI-Stanford class. It translates to around 25,000 daily visitors (between one third and one quarter of those who visit khanacademy.org daily). Observe that a pattern similar to ai-class.com is starting to emerge for udacity.com (from Alexa.com analytics).




Picture 1. Number of daily unique visitors to EduMOOC.

Picture 2. Comparison of number of daily visitors to ai-class.com, khanacademy.org and udacity.com

It is clear that we are in the presence of very different course formats:
  • the AI-Stanford participants have totally different learning goals and preparation from those in MOOCs;
  • the nature of the subjects studied is very different: educational theory versus engineering;
  • the retention behavior (number and pattern) in the two types of massive course is totally different and worth researching further;
  • the AI-Stanford course falls into the cognitive-behaviorist pedagogy category and the MOOCs into the connectivist one.
I have actively participated, mostly out of research interest, in 4 different MOOCs (2 from each format described above) and successfully completed the AI-Stanford course. I have now joined the 7-week “CS101: Building a Search Engine” course that starts on the 20th of February, this time offered by a spin-off of the AI-Stanford class: udacity.com.

In August 2011, during EduMOOC, an active discussion emerged as to whether the AI-Stanford class was a MOOC (see my previous posts). I would definitely conclude that the AI class was not a MOOC.

These are very interesting times for applying learning analytics concepts to research on these successful massive online courses.

tag for: #lak12 and #change11