Comments on economics, mystery fiction, drama, and art.

Sunday, September 22, 2013

Another MOOC experience, and a not-so-positive one

I spent some time thinking about MOOCs in the spring, and taking one so I would have the basis for a presentation at a conference.  That experience was generally positive, although I did find the peer assessment process problematic.

I've been taking another MOOC, which is far outside my area of professional competence--it's a course on Beethoven's piano sonatas--and it's also using peer-assessed assignments as a mechanism for awarding a certificate.  So far (we've completed only the first assignment), this has not only not gone well, it's been something of a disaster.

Let me be clear.  I am enjoying the course, and the lectures, and listening to the sonatas.  The discussion forums that I'm keeping up with have been interesting and the conversations have been almost without exception pleasant.  But as an example of a well-designed and well-operated alternative learning environment, it's not adequate.

We're going to have three assignments during the course, and "scores" on those will be aggregated to determine who does, and who does not, receive a certificate of completion (which is available as an on-line record and is worth the paper it's printed on, but that's OK.  People are mostly taking this for personal growth).  Here's how this was initially described (this has changed):
  • Three assignments: Assignments 1 and 2 were each worth 25% of the course assessment, and Assignment 3 was worth 50%.
  • To receive a certificate, a student had to score 70% or higher on the three assignments and participate in the discussion forums.

For one thing, the first assignment was not well constructed.  One of the sonatas was the basis for a fairly in-depth discussion in the video lecture, and in the assignment we were asked to listen to that sonata and to another one from about the same period, and do a comparison.  Which is OK, but here's what we were specifically asked to do:

1. Identify the sonata that we were using for the comparison, and, if we listened to it on-line, provide a web address.
2. List three ways in which the sonata we selected "conforms to" the one used in the lecture.
3. List three ways in which the sonata we selected "does not conform to" the one used in the lecture.
(Incidentally, my take on "conforms to" and "does not conform to" was to identify similarities and differences, as "conforming" was not defined.)  Each component of this was to be assessed (when we got to that) on this scale:
0   Does not meet expectations
1   Meets expectations
2   Exceeds expectations 
One difficulty is that these terms were also not defined.  Some people--actually a lot of people--did precisely what the assignment asked for, so their submissions were maybe seven lines long (one for part 1 and three each for parts 2 & 3).  (I'm odd; I wrote a 1,200-word essay.)  What's clear here is that we were given no grading rubric at all, and no models of how to apply even the very sketchy framework we had.  (For example, does providing four examples of similarities and four of differences "exceed expectations"?)

Another difficulty is how a rating on this assignment is supposed to slot into the overall course assessment.

Suppose a submission is rated "1" for each component.  If these are simply treated as scores, then "meets expectations" gets you 3 out of 6 possible points.  That's 50%, and, given the 70% requirement, you wouldn't get a certificate even though you met expectations.  This led to a huge discussion on two separate discussion threads (which, to date, have 100 posts).  I participated in those discussions, arguing that this couldn't possibly be what was intended.  I also kept expecting that the course staff (or the instructor) would intervene--which did not happen.  So I went outside the course, found an email address, made contact, and described the situation.  And, as it turns out, the system was changed.
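To see the problem concretely, here's a quick sketch of the arithmetic.  The 0-2 scale and the 70% threshold are from the course; the score-summing is the naive interpretation that caused the forum uproar, not anything the course ever formally specified:

```python
# Naive interpretation: sum the 0-2 ratings and divide by the maximum.
ratings = [1, 1, 1]            # "meets expectations" on all three components
max_points = 2 * len(ratings)  # 6 possible points
score = sum(ratings) / max_points

print(f"Score: {score:.0%}")               # 50%
print(f"Certificate: {score >= 0.70}")     # False -- below the 70% threshold
```

So a student who literally "meets expectations" on every component fails to qualify, which is exactly why the threads blew up.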

My problem is I'm not sure what now exists resolves the issues.  Here's the new policy:
Assignments should be graded on their persuasiveness and clarity, and on drawing upon the lecture materials and personal response.
UPDATED MONDAY, SEPTEMBER 9, 11:30 A.M. EDT: To receive a verified certificate, students will need to view all five lectures, complete all three assignments, and assess a minimum of nine of their peers' assignments (three per assignment). For each assessment, the lowest peer score will be discarded in calculating the grade.
UPDATED THURSDAY, SEPTEMBER 19, 4 P.M. EDT: We've made the following changes to the grading policy/criteria in response to the comments and concerns in the forums. This is the first collaboration between [the sponsoring institution] and [the MOOC aggregator] so it's a learning experience for all of us!

  • Students whose work is evaluated as meeting the criteria will receive a statement of accomplishment. Your first 10 comments and posts to the forum discussions count slightly toward the grade, but are not necessary to earn the statement.
  • Students whose work is evaluated as exceeding the criteria (above 80%) will receive a statement of accomplishment with distinction. Your first 10 comments and posts to the forum discussions count slightly toward the distinction grade, but are not necessary to earn distinction.
  • Explanations of “Meets the criterion” and “Exceeds the criterion” will be included in the peer assessment evaluations and spelled out at the top of the Peer Assessments page, as well as on the Course Format and Grading page.
I'm not clear, exactly, and I don't think anyone is, on how that 80% number for completion with distinction gets determined.

The next issue is how a fairly large number of students have chosen to approach the assessment phase, which is basically to give everyone high marks because, after all, nothing really counts here.  If MOOCs are going to transition to a for-credit model, this simply won't work.  Either students take peer assessment seriously, and are given useful guidelines for performing peer assessments, or the system will fail.

ADDENDUM (9/23/2013):  Now the three assignments have equal weight, but there's still no explanation of how the "scores" from the individual assignments will be aggregated.  I'd hate to see my course-teacher evals if I had ever used a system like this.


Friday, September 06, 2013

One of the things that bothers me

Tyler Cowen posts this:
From Greg Mankiw:
John Lott points out the following: “So far this year there have been 848,000 new jobs. Of those, 813,000 are part time jobs…. To put it differently, an incredible 96% of the jobs added this year were part-time jobs.”

I have already commented (briefly, in the preceding post) on the relevance of this for current employment numbers.  But it's worth expanding on.  Consider, if you will, one of the major, consistent differences between January and August in the U.S. economy: the presence in the labor force of a huge number of seasonal workers--students, out of school for the summer and looking for work.  And why does this matter?  Well, since 1997, the average growth in the teenage labor force between January and August has been +16.8%, while the average growth in the age-20-and-over labor force over the same January-to-August period has been -0.4%.  So where do we think employment growth is going to be?  It almost has to be concentrated among teenagers, doesn't it?  And, since many teens seek (or can find) only part-time employment, what would you expect to be the case about the change in part-time employment?  (Incidentally, over the 1948-2013 period, the teen labor force has grown by an average of 28% between January and August, and the adult labor force has declined by an average of 0.6%.)
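A back-of-the-envelope sketch shows why those growth rates matter.  The January levels below are hypothetical round numbers chosen only for illustration (not BLS data); the growth rates are the post's 1997-2013 January-to-August averages:

```python
# Illustrative January levels (hypothetical round numbers, not actual BLS data)
teen_lf_jan = 6_000_000      # assumed teenage labor force
adult_lf_jan = 150_000_000   # assumed age-20-and-over labor force

teen_lf_aug = teen_lf_jan * (1 + 0.168)    # teens grow +16.8% Jan-to-Aug
adult_lf_aug = adult_lf_jan * (1 - 0.004)  # adults shrink -0.4% Jan-to-Aug

teen_change = teen_lf_aug - teen_lf_jan    # roughly +1.0 million
adult_change = adult_lf_aug - adult_lf_jan # roughly -0.6 million
print(f"Teen change:  {teen_change:+,.0f}")
print(f"Adult change: {adult_change:+,.0f}")
```

Even though teens are a tiny slice of the labor force, under these assumptions the entire net January-to-August labor-force gain comes from teenagers, which is why January-to-August job counts skew toward part-time work.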

So how would we describe someone who presumably knows (or ought to know) about these seasonal fluctuations in the labor force, but presents January-to-August numbers anyway?  As a neutral economic analyst?  Or as a hack?

(All data

Has employment growth been disproportionately in part-time employment?

Tyler Cowen posts this:
From Greg Mankiw:
John Lott points out the following: “So far this year there have been 848,000 new jobs. Of those, 813,000 are part time jobs…. To put it differently, an incredible 96% of the jobs added this year were part-time jobs.”  
But we usually look at year-over-year changes, not January-to-August.  In which case, unless I'm reading the tables incorrectly, comparing (seasonally adjusted) August 2012 with August 2013 (in thousands):

PT Emp 8/12: 26,899
PT Emp 8/13: 27,250
% Change in PT: +1.34%

FT Emp 8/12: 115,275
FT emp 8/13: 116,920
% Change in FT: +1.43%

Tot Emp 8/12: 142,164
Tot Emp 8/13: 144,170
% Change in Tot: +1.41%

PT Emp as a % of Tot Emp, 8/12: 18.9%
PT Emp as a % of Tot Emp, 8/13: 18.9%
PT employment growth as a % of Total employment growth: 17.5%
Looks like a big nothingburger to me.
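The arithmetic above can be checked with a short script (levels as quoted in the post, in thousands; the computed percentages may differ from the post's by small rounding amounts):

```python
# Year-over-year comparison, seasonally adjusted, August 2012 vs. August 2013,
# in thousands (levels as quoted in the post)
pt_2012, pt_2013 = 26_899, 27_250      # part-time employment
ft_2012, ft_2013 = 115_275, 116_920    # full-time employment
tot_2012, tot_2013 = 142_164, 144_170  # total employment

def pct_change(old, new):
    """Percentage change from old to new."""
    return 100 * (new - old) / old

print(f"PT:  {pct_change(pt_2012, pt_2013):+.2f}%")    # about +1.3%
print(f"FT:  {pct_change(ft_2012, ft_2013):+.2f}%")    # about +1.4%
print(f"Tot: {pct_change(tot_2012, tot_2013):+.2f}%")  # about +1.4%

pt_share = (pt_2013 - pt_2012) / (tot_2013 - tot_2012)
print(f"PT share of total growth: {pt_share:.1%}")     # about 17.5%
```

On a year-over-year basis, part-time employment grew no faster than full-time, and accounted for well under a fifth of the job gains--nothing like Lott's 96%.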