Monday, April 23, 2012
The newest education fiasco
It had to happen, I suppose.
Someone has invented a computer to grade essay tests. Not content with merely developing mindless standardized multiple-choice tests where many of the answers make no sense (read this article about a test question which eighth graders knew was incomprehensible gibberish but which educators defended), educators have
developed an electronic grading machine which can grade 16,000 essays in 20
seconds. This will save a bunch of money
and have the nice benefit of putting a lot of teachers out of business. There is one significant downside to this grader: it can’t actually read the essays, according to an article in the New York Times.
Apparently this minor problem constitutes no impediment to the education establishment, which has endorsed this machine in an article entitled “A Win for the Robo-readers” in the blog “Inside Higher Ed.” No less distinguished an authority than the dean of the College of Education at the University of Akron glowingly endorsed the technology, proclaiming that computer scoring produced “virtually identical levels of accuracy, with the software in some cases proving to be more reliable,” according to a University of Akron news release.
Really? This would cause me to wonder about the state of education if I had not already come to the conclusion that educators in this country are insufficiently successful. Robo-readers, you see, can only count words; they cannot actually understand the substance of those words. A professor at M.I.T. (a college not subject to the ineptitude rampant in higher ed) has effectively demolished the case for using these computers by pointing out how they work.
The e-Rater’s biggest problem, he says, is
that it can’t identify truth. He tells students not to waste time worrying
about whether their facts are accurate, since pretty much any fact will do as
long as it is incorporated into a well-structured sentence. “E-Rater doesn’t
care if you say the War of 1812 started in 1945,” he said.
I would therefore doubt that the machine can do as good a job as human graders, except I suspect most human graders would not catch that mistake either. But, I
mean, seriously? For an essay test you
want a machine to grade based merely on algorithms evaluating how many words
are included and how they are placed in sentences, but no one should actually
read the words themselves? I know
teachers are pretty lazy (I mean they complain about making $80,000 a year when
they only work about two-thirds of the time) but are they so intellectually slothful
that they don’t even want to read the tests they give out? Again, I would find that hard to believe,
except my daughter had homework which included coloring in a coloring book when
she was a senior in high school.
But it gets worse. Not
only can the robo-reader not actually read, it has been set up by morons who
seem to understand little about good writing.
Again, no surprise, as writing instruction in most college classes is done by English majors who revere verbosity at the expense of clarity. According to the M.I.T. professor, robo-readers prefer long essays, with long sentences, long paragraphs, and long words. They have been set up to give extra points to sentences using the word “however,” treated as a sign of complex sentence structure and therefore a proxy for complex thinking. Robo-readers don’t like sentences starting with “and” or “or,” but they do like sentences containing words like “moreover.” In other words, bombastic blowhards will do well, but Ernest Hemingway would fall short. As would I.
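To see just how shallow these proxies are, consider a minimal sketch of such a grader in Python. To be clear, this is purely illustrative: the weights and word lists below are my own assumptions, not e-Rater’s actual (proprietary) model; the sketch merely encodes the preferences the M.I.T. professor describes.

# A toy "robo-grader" built on the proxies described above. The weights
# and word lists are invented for illustration; e-Rater's real scoring
# model is proprietary and certainly more elaborate.
import re

TRANSITION_BONUS = {"however", "moreover"}  # rewarded as "complex thinking"
BAD_OPENERS = {"and", "or"}                 # penalized sentence starters

def robo_grade(essay: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[\w']+", essay.lower())
    score = 0.1 * len(words)                       # longer essays score higher
    score += 0.5 * sum(len(w) > 7 for w in words)  # long words score higher
    for sentence in sentences:
        tokens = re.findall(r"[\w']+", sentence.lower())
        score += 0.05 * len(tokens)                # long sentences score higher
        if tokens and tokens[0] in BAD_OPENERS:
            score -= 2.0                           # "And ..." or "Or ..." loses points
        if TRANSITION_BONUS & set(tokens):
            score += 3.0                           # "however"/"moreover" earns a bonus
    return score  # nothing anywhere checks whether a statement is true

# The long-winded falsehood outscores the short, accurate sentences.
print(robo_grade("The War of 1812, however, commenced in 1945; moreover, "
                 "notwithstanding subsequent developments, it continued."))
print(robo_grade("The war began in 1812. It was short. Men died."))

Feed it one long-winded falsehood and a few short, true sentences, and the falsehood wins handily, because length and transition words are all the function can see.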
I was trained as a journalist, which means writing in short,
punchy sentences, short paragraphs and direct language. Even in my legal writing I stuck to these
tenets. And (oh shoot, there is a bad
word to start a sentence with) while I cannot claim my legal writing was superior to that of those who drop “moreover” into sentences containing “however,” I do
believe my writing made sense and got to the point. I think readers, even appellate judges,
appreciate such writing, even if automated graders programmed by arrogant
know-it-alls (like, for example, the Dean of Education at the University of
Akron) don’t.
When I was at CDAC, we annually published a summary of
legislation passed in the most recent session.
These updates were presented in direct fashion using bullet points
instead of paragraphs. This style suited
my writing. Where others would write: “The
bill raised the maximum allowed speed on roads outside metropolitan districts
to seventy-five miles per hour,” I would put: “Increased speed limit to 75.” The robo-reader would grade me down.
The developers of this product demonstrate how being a little
smart is dangerous when you are trying to be real smart:
As for good writing
being long writing, Mr. Deane said there was a correlation. Good writers have
internalized the skills that give them better fluency, he said, enabling them
to write more in a limited time.
Read that last sentence again and see if it makes sense to
you. If it does you should pursue a
career in education. To me it is
complete garbage. Good writers can
write more in a limited time? That would
be laudable if writing were like making cars on an assembly line. The more Corollas per hour, the more money Toyota makes. However, if your goal is
quality, then perhaps writing more in a limited time is not a virtue but a
detriment. As a journalist who had to
learn to write to space limitations, I was taught to deliver the most
information in the shortest space. I
suggest any writing should seek the same goal for the sake of the reader. Certainly with space limitations on appellate
briefs, lawyers should not write for top robo-reader grades, but should aspire
to achieve the journalists’ objective. I
have read a lot of legal writing over the years. Most of it would get high marks from a
robo-reader, but I doubt you would enjoy reading it.
Let’s hope real human essay grading survives. And let’s hope those doing it are not like the
Dean of Education of the University of Akron.