“Learning from Mistakes” vs. “Learning from Explanations”
Andrew Watson

As I wrote last week, thinkers in edu-world often make strong claims at the expense of nuanced ones.

For example:

  • “A growth mindset undergirds all learning” vs. “growth mindset is an obvious boondoggle.”
  • “AI will transform education for the better” vs. “AI will make people dumber and schools worse.”
  • “Be the sage on that stage!” vs. “get off the stage to guide from the side!”

The list goes on (and gets angrier).

[Image: a closeup of a young student leaning his face against a chalkboard, his eyes closed in frustration.]

When researchers start digging into specifics, however, the daily experience of teaching and learning gets mightily complicated, and mighty fast.

All those strong claims start to look…well…too strong for their own good.

One extraordinary example of “digging into the specifics” can be found in Graham Nuthall’s The Hidden Lives of Learners. Nuthall put cameras and mics on students in New Zealand classrooms, and arrived at all sorts of astonishing conclusions.

Another recent study looks quite specifically — no, really specifically — at 4 teachers. The goal: to understand what part of their work helped students learn.

Here’s the story.

Time to Review

A group of scholars, led by Dr. Janet Metcalfe, wondered if students learned more from teachers’ responses to their mistakes than from teachers’ direct instruction. (You can learn more about the study here.)

A few important points merit attention right away.

First: the classroom sessions I’m about to describe are REVIEW sessions. The students have ALREADY learned the math covered in these lessons; the teachers are helping them review in preparation for a high-stakes exam.

In other words: this study does not focus on initial instruction. It focuses on subsequent review.

Second: the students involved VOLUNTEERED to take part. They are, presumably, atypically motivated to learn math.

Keep these points in mind as you think about applying the ideas described below.

In this study, 4 teachers helped 175 8th-grade students prepare for upcoming state math exams.

For half of the students, the teachers taught 8 lessons (“explicit instruction”) reviewing core math concepts that would be on that exam.

For the other half, the teachers responded to the mistakes that students made on practice tests. That is: during 4 sessions, students took 45-minute math tests. And after each of those sessions, the teachers

“were instructed […] to focus on the students’ errors and to do whatever they deemed appropriate to ensure that the issues underlying the errors would not reoccur and that the students would learn from their errors.”

So, which review approach proved more helpful — the explicit instruction, or the learn-from-mistakes instruction? And, why?

An Envelope, and LOTS of Questions…

That first question — which kind of review proved more helpful? — is easy to answer.

Students in both groups learned math; they did better on the post-test than the pre-test.

The students in the “learn-from-mistakes” group learned more.

This straightforward finding leads to obvious questions. And — alas — those obvious questions are VERY tricky to answer.

For instance, “how much more did the students in the learn-from-mistakes group learn?” That’s a reasonable question. The answer takes some careful parsing.

Roughly speaking, students in the explicit instruction group increased their scores about 2% per hour of instruction.

For those in the learn-from-mistakes group, the answer depended on the teacher.

The least successful teacher helped students in this group improve 2% per hour of instruction. The most successful teacher helped students improve 5% per hour of instruction.
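To make that “per hour” arithmetic concrete, here is a minimal sketch of the calculation. The pre/post scores and hours below are invented for illustration; the study’s actual analysis is far more involved.

```python
# A toy version of "percentage points gained per hour of instruction."
# The scores and hours are hypothetical, not the study's data.
def improvement_per_hour(pre_score, post_score, hours):
    """Average percentage-point gain per hour of instruction."""
    return (post_score - pre_score) / hours

# A hypothetical explicit-instruction student: 60% -> 76% over 8 hours.
print(improvement_per_hour(60, 76, 8))  # 2.0, like the explicit instruction group
# A hypothetical learn-from-mistakes student: 60% -> 80% over 4 hours.
print(improvement_per_hour(60, 80, 4))  # 5.0, like the most successful teacher
```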

Of course, those numbers prompt another reasonable question: what was different about those two teachers? Why did one teacher benefit her students more than twice as much as her colleague did?

Let the Sleuthing Commence…

The study’s authors spend a great deal of time — and crunch a great many equations — to answer that question.

For instance:

Maybe the teacher whose students learned more (let’s call her Teacher M) is just a better teacher than the one whose students learned less (Teacher L).

As the researchers point out, that explanation doesn’t make much sense. After all, in their explicit instruction sessions, both Teacher M and Teacher L helped their students equally.

(By the way: to simplify this blog post, I’m leaving out the two other teachers for now.)

Okay, maybe Teacher M did a better job of focusing on students’ mistakes, whereas Teacher L spent too much time focusing on questions that students got right.

Nope. This study includes quite an eye-watering graph to show that they both focused about the same on students’ mistakes.

As the researchers write: “all of the teachers taught to the errors of their students, and … the extent to which they did so did not predict student learning.”

So, what was the secret sauce?

The Perfect Combination

After a few more false leads, the study focuses on two moment-by-moment variables: the teachers’ focus, and the kind of interaction with the student.

Focus: did the teachers

“[dwell] upon how to solve the problem correctly,” or

“[delve] into the nature of the errors – why the students had made them, what the difficulty in the logic was, and/or how to recognize and circumvent such mistakes in the future”?

Kind of interaction: did the teachers explain/lecture, or did they discuss/interact?

With this pair of questions, at last, the study struck gold.

Teacher L — whose students learned relatively little — focused almost all her time on “how to solve the problem correctly.” While pursuing that goal, she divided her time equally between lecture and discussion.

Teacher M — whose students improved more quickly — spent almost all her time in discussion, with almost no time in lecture. While in this interactive mode, she divided her time more-or-less equally between solving problems and understanding the nature of the mistake.

This final insight allows us to make this claim:

Highly motivated 8th grade math students,

reviewing in preparation for a high-stakes exam,

learn less from explicit instruction and more from making and reviewing their mistakes,

as long as the teacher keeps those review sessions interactive,

and equally focused on “getting the answer right” and “understanding the nature of the mistake.”

Notice, by the way, all the nuance in this statement.

To emphasize just one point here: this study does NOT argue that “learning from mistakes” is better than “direct instruction” in all circumstances.

It argues that students learn more from mistakes when reviewing, as long as the teacher follows a very particular formula.

A Final Note

Heated battles in this field often get hung up on specific labels.

As I’ve written before, we do a LOT of arguing about benefits of “desirable difficulty” vs. “productive struggle” — an odd set of arguments, given that both phrases seem to mean the same thing.

This study was co-authored by (among other scholars) Robert Bjork — who helped coin the phrase “desirable difficulty.” For that reason, you might be surprised to learn that this study touts the benefits of “productive struggle.”

That is: the students took a test, they made mistakes, they wrestled with those mistakes, and they learned more. Their struggle (trying to understand what they did wrong) was productive (they improved on their test scores — and probably their understanding of math).

Of course, I could just as easily describe that process as “desirable difficulty.” The difficulties these students faced here — the test, the mistakes, the analysis — turned out to be beneficial — that is, “desirable.”

My own view is: don’t get hung up on the label. The question is: are the students both thinking harder and ultimately succeeding? If “yes” and “yes,” then this teaching approach will benefit students.


Metcalfe, J., Xu, J., Vuorre, M., Siegler, R., Wiliam, D., & Bjork, R. A. (2024). Learning from errors versus explicit instruction in preparation for a test that counts. British Journal of Educational Psychology.

“All People Learn the Same Way”: Exploring a Debate
Andrew Watson

Over on eX/Twitter, a debate has been raging — with all the subtlety and nuance of your typical Twitter debate. The opening salvo was something like:

“Despite what you’ve heard, all people learn the same way.”

You can imagine what happened next. (Free advice: look away.)

Despite all the Twitter mishegas, the underlying question is useful and important — so I’ll do my best to find the greys among the black-vs-white thinking.

Here goes.

Useful…

I suspect that this claim — “all people learn the same way” — got started as a rebuttal to various myths about “meaningful sub-categories of learners.” Alas, most of those proposed sub-categories turn out not to be true or useful.

  • No, learning styles theory has not held up well.
  • No, the theory of “multiple intelligences” has no useful teaching implications. (And Howard Gardner didn’t claim that it did.)
  • No, “left-brain, right-brain” dichotomies don’t give us insights into teaching and learning.
  • No, the Myers-Briggs Type Indicator doesn’t tell us how to manage classrooms or lesson plans. *
  • My British friends tell me about some system to sort students according to different colored hats. (I do not think I’m making this up.)
  • (I’ve written about these claims so many times that I’m not going to rehash the evidence here.)

Whenever anyone says “we can usefully divide students into THIS kind of learner and THAT kind of learner,” we should be highly suspicious and ask to see lots of research. (If you want to evaluate that research critically, I can recommend a good book.)

[Image: a graphic of two heads facing each other in conversation: one with a lightbulb inside, the other with a question mark.]

Well, the shortest rebuttal to this sort of claim is: “Those sub-categories don’t exist. ALL PEOPLE LEARN THE SAME WAY.”

Now, any time someone makes an absolute claim about teaching and learning in six words and seven syllables, you know that claim is oversimplified.

But you can understand the temptation to cut off all those untrue claims with a brusque rejoinder. That temptation pulses all the stronger because those untrue claims persist so stubbornly. (In 2025, schools of education are STILL teaching learning styles.)

…and (substantially) True

This claim (“all people…”) isn’t simply useful; it’s also largely accurate.

For example:

At the neuro-biological level — neurons, neurotransmitters, synapses, myelin, etc. — long-term memories form the same way for everyone.

As far as we know…

  • men and women
  • tall people and short people
  • introverts and extroverts
  • people who think cilantro tastes like soap, and the rest of us

… everyone forms new neural networks (that is: “learns”) the same way. (I should emphasize that our understanding of this neural process is still VERY basic. We’ve still got SO MUCH to learn.)

When we switch our analysis from neuroscience to psychology, the claim still holds up well.

For instance:

  • Everyone uses working memory to combine new information from the environment with concepts and facts stored in long-term memory.
  • Everyone depends on a complex of systems that we call “attention” to control the flow of all that information.
  • Everyone responds simultaneously with emotion and cognition to any given set of circumstances. (These two systems overlap so much that distinguishing between them creates lots o’ challenges.)

And so forth.

Given all these similarities, cognitive science research really can offer up advice that applies to almost everyone in almost all circumstances.

Yes: we really must manage working memory load so that students can build concepts effectively.

Yes: retrieval practice helps almost all learners consolidate and transfer almost all school learning. (Yes, “retrieval-induced forgetting” is a concern, but it can be managed if we strategize effectively.)

Yes: spacing and interleaving enhance learning in most circumstances.

And so on…

Given the broad usefulness and truth of the “we-all-learn-the-same” claim, I certainly understand why it’s tempting to make it — and to defend it.

Exceptions Matter

I’ve written that the claim is “broadly” useful and true; but I don’t think it’s ALWAYS true.

For example:

Students with diagnosable learning differences really might learn differently.

For instance: dyslexic readers combine distinctive neural networks to get their reading done. Those readers almost certainly benefit from distinct teaching strategies. In other words: by any reasonable definition, they “learn differently.”

Another example:

All learning depends on prior knowledge.

That claim — which sounds like “all people learn the same way” — also suggests that people learn differently.

Let’s imagine that you know A LOT more about opera than I do. (This assumption is almost certainly true.) If you and I both attend an advanced lecture about an obscure opera — “Der Häusliche Krieg” —  your learning will function quite differently from mine. Because you’re an expert and I’m a novice, we will learn differently.

Lots of individual differences will bring teachers to this same point.

Because I teach English, I teach grammar — and MANY of my students simply hate grammar. Their prior experience tells them it’s boring, useless, and impossible to understand.

On the one hand, those enduring cognitive principles listed above (working memory, retrieval practice, etc.) do apply to them. But their emotional response to the content will in fact shape the way they go about learning it.

Core principles of learning apply, and my students’ prior experience means that their learning process might well be different.

Beyond Twitter Rage

Twitter generates lots of extreme debates because complex ideas can’t be boiled down into its trivializing format.

So it’s not surprising that a nuanced understanding of “individual differences within important, broad, and meaningful similarities” doesn’t work in Twitter-ville.

At the same time, I do think our discussions of learning should be able to manage — and to focus on — that nuance.

Our students will learn more when we recognize BOTH the broad cognitive principles that shape instruction, AND the individual variation that will be essential within those principles.


Back in 2019, Paul Kirschner wrote a blog post on this same point. His “digestive system” analogy is VERY helpful.


* A few years back, I emailed the MBTI people to ask for research supporting their claims. They did not send me any. They did, however, sign me up for their newsletter.

Difference Maker: Enacting Systems Theory in Biology Teaching, by Christian Moore-Anderson
Guest Post

Today’s book review is by Beth Hawks.


Teaching Science is a Challenge

Science classes cover a massive amount of content knowledge, and finding the best approach to teaching it can feel overwhelming; without care, students seem merely to acquire a set of disjointed facts.

In the introduction to his book, Difference Maker: Enacting Systems Theory in Biology Teaching, Christian Moore-Anderson sums up the challenge well, when he says, “I’m sure you’ve felt – at some point – that to grasp biology was to master an encyclopedia.”

For some time, he had taught in most of the typical ways, but he felt he was tied to creating resources and activities for students and that students still weren’t seeing the deeper connecting threads of biology.

Time for a Change

As with many things, the move to online teaching during the pandemic motivated him to make a change…because what he had been doing was no longer working.

This concern led him to the world of cybernetics and systems theory; and moved him from a sense of mass knowledge transfer to one of teaching biology from a set of unifying principles.

[Image: book cover of Difference Maker, by Christian Moore-Anderson.]

As he dug even more deeply, he found that he wasn’t just teaching about systems; he was enacting systems theory as a method of instruction.  He co-created diagrams with students and engaged them in dialogue to reveal their understanding.

By doing so, he created an interactive feedback loop that allowed him to respond flexibly to student needs.

Model Found in Cybernetics

The book begins with a few chapters of explanation of cybernetics. (Don’t let the terminology of “cybernetics” frighten you.  It is not necessary to have a deep understanding of all of these terms.)

After I set aside my mental images from Star Trek of Dr. Noonien Soong creating Data’s positronic brain (my first exposure to the word cybernetics), I was able to see his blending of two aspects of the discipline.

Conversation theory posits that – since meaning is made in the mind of the listener rather than being transmitted by the speaker – we can have a shared understanding of meaning only through dialogue. The teacher explains, but then he discovers what the student heard through conversation.

Moore-Anderson describes doing this through multiple choice questions or open-ended questions; he also acknowledges that it can be done with other methods (e.g. mini-whiteboards, written answers on paper).

The law of requisite variety posits that a complex system can survive only if its ability to adapt is equally complex. In other words, there must be a variety of responses to a variety of changes. If a teacher has only a small set of responses when something happens in her classroom, she won’t be able to adapt to the needs of students during a lesson.

He combines these theories into a model of instruction he calls “the recursive teaching model.”

The teacher explains, while the student interprets. Then the student explains what they understand while the teacher interprets. This cycle keeps looping back on itself until they agree on their understanding.
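For readers who think in flowcharts, here is a playful, minimal sketch of that loop in code. Everything in it (the function names, the equality check standing in for “shared understanding”) is my own invention, not Moore-Anderson’s notation.

```python
# A toy sketch of the recursive teaching model's control flow.
def recursive_teaching(explain, restate, max_rounds=10):
    """Cycle explanation and restatement until the two converge."""
    message = explain(None)              # teacher explains; student interprets
    for round_num in range(max_rounds):
        echo = restate(message)          # student explains back; teacher interprets
        if echo == message:              # stand-in for agreed understanding
            return round_num + 1
        message = explain(echo)          # teacher re-explains in light of the gap
    return None

# Toy usage: the "student" restates more completely each round.
attempts = iter(["water moves",
                 "water moves down a gradient",
                 "water moves across a membrane down its gradient"])
rounds_needed = recursive_teaching(
    explain=lambda _: "water moves across a membrane down its gradient",
    restate=lambda message: next(attempts),
)
print(rounds_needed)  # 3
```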

Moore-Anderson provides guidance by opening each section with a key idea and walking through the process of implementation in the classroom. He includes the conversations he has with his students as well as the diagrams he creates with them during those conversations.

Have Students Notice Differences by Predicting Outcomes

After setting up his foundational theory, Moore-Anderson gets to the heart of his new practice: having students perceive distinctions in the concept being taught.

He defines distinctions as “differences that make a difference to the observer.”

As teachers, we often begin with sameness – giving multiple examples of a new concept to solidify students’ recognition of the standard. This strategy, however, shows only the idea itself and not its interaction with a conceptual whole.

Having students repeat similarities in their own words might not give them a full grasp of the influence the concept has on the biological system overall.

Moore-Anderson argues that we should begin with variations of the concepts so that students can see what difference a change would make.  He prompts students to notice these differences (and the difference they make) by posing “what if” questions.

  • What if someone drinks sea water rather than fresh water?
  • What if the predator in this ecosystem suddenly disappears?
  • What if this heart valve were missing?
  • What if the sugar concentration were increased in this solution?

When students first predict the outcome of a change, and then add those changes to diagrams they create together, they arrive at a shared understanding of each concept. This approach lets them understand in a deeper way than simply explaining how something works and having students paraphrase that explanation.

Moore-Anderson restricts the responses to keep things from getting out of hand by giving choices like, “Will a change in X make Y increase, decrease, or stay the same?” and having students defend their answers.

Practical Examples Inspire Teachers

The true strength of this book for me as a classroom teacher comes from his descriptions of using this method in his lessons.

When Moore-Anderson moves from summaries of cybernetic theories into examples of actual classroom conversations with students, he allows me to imagine implementing his method with my own students.

As a teacher, my favorite education books are those that inspire ideas outside of those mentioned in the writing, and Moore-Anderson does exactly that throughout each chapter.  As I read his stories, I was able to picture myself having similar conversations with my students and thought of other topics to which I could apply his method.

Difference Maker gives me a way to think about content delivery rather than prescribing an exact method for me to copy.

Is It for Everybody?

The Difference Maker method might not be equally appropriate in all settings.

I imagined my middle schoolers might find this approach frustrating because they lack the foundational knowledge to make reasonable predictions. On the other hand, I thought my juniors and seniors would thrive with these sorts of classroom conversations.

I trust Moore-Anderson when he says he applies the method in class with eleven year old students, but I’m not sure I would. As with all techniques, success relies on adapting them to your context.

As the title makes clear, this book is intended for biology teachers. Since all biological processes have noticeable cause and effect relationships within systems, that makes sense.

I had a harder time recognizing topics where I might apply it in chemistry and physics. So, I will definitely recommend this book to my biology teacher friend and suggest that he loan it to the environmental science teacher across the hall.

As a chemistry and physics teacher, I might want to have it in the back of my mind as I planned some lessons, because it would provide a way of thinking about how to explain cause and effect. However, I wouldn’t make it a regular practice as Moore-Anderson does with biology.  (Did I mention earlier that it is good to adapt to context?)

Can I Be in This Class?

My biggest takeaway from reading Difference Maker is that I would have loved to be in this biology class when I was a student. I would have absorbed more, seen deeper threads, and remembered more.  I would have walked away with a better understanding of myself and my relationship with my environment.


Beth Hawks taught middle and high school science for 25 years, serving as the science department chair at GRACE Christian School in Raleigh, North Carolina for 17 years. A graduate of Oral Roberts University, Beth has taught 8th grade Physical Science, Physics, Chemistry, Algebra IB, Health, Photography, and Yearbook. She frequently provided professional development to colleagues in her role as resident brain enthusiast and has now moved into consulting full time under the name The Learning Hawk.

You can hear Beth speak at our Science of Learning conference in NYC in April.

“AHA!”: A Working Memory Story…
Andrew Watson

Teachers, students, people: we spend lots of our time figuring stuff out.

Sometimes, we do that figuring out by sorting through options, considering similar situations in the past, trying out logical possibilities, and so forth.

And other times, the figuring out just happens: “AHA!”

If we’re going to think about these different mental experiences in a scientific way, we need technical terminology; so, let’s go ahead and call that first process “analysis” and the second one “insight.”

Analysis (I’m paraphrasing from this study here)

  • involves searching long-term memory for potential algorithms, schemas, or factual knowledge,
  • feels effortful, and
  • happens consciously;

Insight, on the other hand,

  • happens more-or-less automatically,
  • feels effortless, and
  • happens unconsciously.

The two questions I’ll explore below are:

  1.  how does working memory load influence the Aha! experience? and
  2.  how does the answer to that question shape the way we plan teaching?

Brace yourself for a radical answer to question #2.

AHA + Working Memory

Obviously, analysis loads working memory. All that comparing options and combing through long-term memory takes up scarce working memory resources.

[Image: a drawing of a small bird being freed from a cage, against a bright orange and yellow background.]

But what about insight? Do those Aha! moments require working memory?

To answer this question, a group of Dutch researchers asked 100+ college students to solve fun mental puzzles.

Here’s the game:

I’m going to list 3 words, and you’re going to tell me another word that “goes with” all three.

So, if I say “artist, hatch, route,” you might come up with the word “______.”

Perhaps you came up with a solution by working your way through various familiar phrases: “con artist? makeup artist?” That would be an analysis solution.

Or perhaps the answer — “escape” — just came to you without any deliberate thought process. That would be an insight solution.

These problems have a splendidly cumbersome name: “compound remote association tests.” Happily, they allow for a handy acronym: CRA.

In their study, the Dutch researchers had students solve CRA problems.

One group of students had no additional working memory load.

A second group had a small WM load; they had to remember a two-digit string while solving problems.

A third group had a larger WM load; they had to remember a four-digit string.

So, here’s the research question: did the WM load have an effect on analysis solutions or insight solutions as students undertook CRA tests?

Answers, Plus

“Yes, and no.”

In other words:

“Yes”: as WM load increased, the number of correct analysis solutions decreased.

“No”: as WM load increased, the number of correct insight solutions stayed the same.

Now, the first half of that answer was easy to predict. When researchers increased the WM load, the students’ WM “headroom” decreased. Because analysis requires WM capacity, students’ reduced headroom made CRA solutions harder.

The second half of that answer is really interesting.

Students were equally good at insight solutions no matter the WM load. The logical implication: insight solutions do not require WM. (At least, not in a way that is detected in this research paradigm.)
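To see the shape of the overall finding at a glance, here is a toy tabulation. The accuracy numbers are invented; only the pattern (analysis falls as load rises, insight stays flat) comes from the study.

```python
# Hypothetical accuracy by WM-load condition and solution type.
# The numbers are made up for illustration; the trend mirrors the finding.
hypothetical_accuracy = {
    "no load":      {"analysis": 0.60, "insight": 0.40},
    "2-digit load": {"analysis": 0.50, "insight": 0.40},
    "4-digit load": {"analysis": 0.40, "insight": 0.40},
}

for condition, accuracy in hypothetical_accuracy.items():
    print(f"{condition:>12}: analysis {accuracy['analysis']:.0%}, "
          f"insight {accuracy['insight']:.0%}")

# Correct analysis solutions fall as WM load rises;
# correct insight solutions hold steady.
```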

Now that we know the answer to that question, what do we teachers do with that information? How does it help us plan our teaching?

Thinking Aloud

I should say at this moment that I’m switching from research to speculation. That is: the blog post up to now has been a summary of a research study. I’m now leaving that study to consider what we might do with this information.

First off, I suspect that a very large percentage of the school work students do requires analysis, not insight (as defined in this study).

That is: my students have to think their way through grammar solutions. They have to ponder the meaning of that symbol — or that sentence — right there.

They rarely say: “it just came to me — that’s a participle!”

If I’m right that MOST school work relies on analysis, then MOST of the time we teachers must focus on working memory load.

If we place too much stress on working memory, we will hamper our students’ ability to accomplish those analytical tasks.

But…drum roll please…I can imagine niche-y circumstances where we WANT students to prefer insight to analysis. In those circumstances, I hope my students say, “Aha!” rather than “let me think about that.”

For instance: improv theater.

When actors try improv, we want them to “get out of their heads” and let instincts take over. (For the record: I’m bad at improv myself, but I founded and coached an improv troupe at the high school where I taught.)

This thought process leads to an even more surprising idea…

There’s a First Time for Everything

I spend much of my professional life explaining working memory to teachers and coaching them to avoid working memory overload. After all: “no academic information gets into long-term memory except through working memory.”

If, however, WM load hampers analysis, it might thereby indirectly promote insight.

Perhaps then I should deliberately ramp up WM load during improv rehearsals. This approach would make analytical solutions less likely, and in that way make insight solutions more likely.

This improv-coaching idea leads to other, equally radical possibilities. Are there other times during a student’s academic career when we prefer insight to analysis? Should we, during those lesson plans, keep working memory demands unusually high?

I can hardly believe that I’m seriously talking about deliberately stressing working memory. My professional identity is wobbling.

TL;DR

A recent study by Dutch researchers suggests that analytical problem-solving requires WM, but insight problem solving doesn’t.

This finding has prompted me to wonder if we should — in rare circumstances — increase WM load to make students’ insight solutions likelier.

That possibility is entirely new to me — but quite fun to ponder. I hope that my WM friends — and my improv friends — will join the conversation.

 


Stuyck, H., Cleeremans, A., & Van den Bussche, E. (2022). Aha! under pressure: The Aha! experience is not constrained by cognitive load. Cognition, 219, 104946.

Nerd Alert: Focusing on Definitions
Andrew Watson

You come to Learning and the Brain conferences — and to this blog — because you want research-based insight into teaching and learning.

We sincerely hope that you get lots of those insights, and feel inspired by them.

At the same time, all sorts of work has to go on behind the scenes to make sure such advice has merit. Much of that work seems tedious, but all of it is important.

For instance: definitions.

When researchers explore a particular topic — say, “learning” — they have to measure something — say, “how much someone learned.”

To undertake that measurement, they rely on a definition of the thing to be measured — for example: “getting correct answers on a subsequent test = learning.”

[Image: a close-up photograph of a dictionary lying open.]

Of course, skeptics might reject that definition: “tests don’t reveal learning. Only real world application reveals learning.”

Because these skeptics have a different definition, they need to measure in a different way. And, of course, they might come to a different conclusion about the value of the teaching practice being measured.

In other words:

If I define learning as “getting answers right on a test,” I might conclude that the Great Watson Teaching Method works.

If you define learning as “using new concepts spontaneously in the real world,” you might conclude that the Great Watson Teaching Method is a bust.

The DEFINITION tells researchers what to MEASURE; it thereby guides our ultimate CONCLUSIONS.

A Case in Point

I recently read an article, by Hambrick, Macnamara, and Oswald, about deliberate practice.

Now, if you’ve spent time at a Learning and the Brain conference in the last decade, you’ve heard researcher K. Anders Ericsson and others present on this topic. It means, basically, “practicing with the specific intention of getting better.”

According to Ericsson and others, deliberate practice is THE key to developing expertise in almost any field: sports, music, chess, academics, professional life.

Notice, however, that I included the slippery word ‘basically’ in my definition two sentences ago. I wrote: “it means, basically, ‘practicing with the specific intention of getting better.’ ”

That “basically” means I’m giving a rough definition, not a precise one.

But, for the reasons explained above, we shouldn’t use research to give advice without precise definitions.

As Hambrick, Macnamara, and Oswald detail, deliberate practice has a frustratingly flexible definition. For instance:

  • Can students create their own deliberate practice regimens? Or do they need professionals/teachers to create them and give feedback?
  • Does group/team practice count, or must deliberate practice be individual?

As the authors detail, the answers to those questions change over time.

Even more alarmingly, they seem to change depending on the context. In some cases, Ericsson and his research partners hold up studies as examples of deliberate practice, but say that Hambrick’s team should not include them in meta-analyses evaluating the effectiveness of deliberate practice.

(The back-n-forth here gets very technical.)

Although the specifics of this debate quickly turn mind-numbing, the debate itself points to a troubling conclusion: because we can’t define deliberate practice with much confidence, we should hesitate to make strong research claims about the benefits of deliberate practice.

Because — again — research depends on precise definitions.

Curiouser and Curiouser

The argument above reminded me of another study that I read several years ago. Because that study uses lots of niche-y technical language, I’m going to simplify it a fair bit. But its headlines were clear:

Project-based learning helps students learn; direct instruction does not.

Because the “constructivist” vs. “direct instruction” debate rages so passionately, I was intrigued to find a study making such a strong claim.

One of my first questions will sound familiar: “how, precisely, did the researchers define ‘project-based learning’ and ‘direct instruction’?”

This study started with these definitions:

Direct instruction: “lecturing with passive listening.”

Constructivism: “problem-solving opportunities … that provide meaning. Students learn by collaboratively solving authentic, real-life problems, developing explanations and communicating ideas.”

To confirm their hypothesis, the researchers had one group of biology students (the “constructivism” group) do an experiment where they soaked chicken bones in vinegar to see how flexible the bones became.

The “direct instruction” students copied the names of 206 bones from the chalkboard into their notebooks.

After even this brief description, you might have some strong reactions to this study.

First: OF COURSE students don’t learn much from copying the names of 206 bones. Who seriously thinks that they do? No cognitive scientist I’ve ever met.

Second: no one — and I mean NO ONE — who champions direct instruction would accept the definition as “lecturing with passive listening.”

In other words: we might be excited (or alarmed) to discover research championing PBL over direct instruction. But we shouldn’t use this research to make decisions about that choice, because it relies on obviously inaccurate definitions.

(If you’re interested in this example — or this study — I’ve written about it extensively in my book, The Goldilocks Map.)

In Brief:

It might seem nerdy to focus so stubbornly on research definitions. But if we’re serious about following research-informed guidance for our teaching, we really must.


Hambrick, D. Z., Macnamara, B. N., & Oswald, F. L. (2020). Is the deliberate practice view defensible? A review of evidence and discussion of issues. Frontiers in Psychology, 11, 1134.

Finding a Framework for Trauma
Andrew Watson

Although education itself encourages detailed and nuanced understandings of complex ideas, the field of education often rushes to extremes.

According to the loudest voices:

  • Artificial intelligence will either transform education for the better, or make us all dumber.
  • Memorization is either an essential foundation for all learning, or “drill and kill.”
  • A growth mindset will either motivate students to new successes, or delude teachers into this outdated fad (“yet” schmet).

And so forth.

This tendency to extremes seems especially powerful at the intersection of education and trauma.

Depending on your source and your decade, trauma is

  • either a problem so rare that it doesn’t merit discussion, or
  • a problem so pervasive and debilitating that we need to redesign education.

How can we find a steady, helpful, realistic path without rushing to extremes?

A Useful Start

If we’re going to think about trauma, we should start with a definition of it.

A thousand-word blog post can’t get into the subtleties, but here’s a useful starting place:

“Trauma is a response to an event or series of events that overwhelms an individual’s capacity to cope.”

In that sentence, “overwhelmed” means a serious and ongoing response — not short-term unhappiness (even if intense).

Symptoms of being “overwhelmed” might include dissociation, flashbacks, night terrors, drug addiction, or major depression.

Note: unlike trauma, stress puts pressure on — but does not inherently overwhelm — coping capacity.

Thoughtful people might not agree with the sentences above, but I think most people will agree that they’re an honest attempt to describe a complex mental state.

The First Pendulum

Discussions of trauma — especially the extreme versions — begin with its sources.

When I started teaching, in the 1980s, our school — quite literally — NEVER discussed trauma. (To be fair, I should say: “I don’t remember ever discussing trauma.”)

[Image: a closeup of a man sitting with his forearms resting on his legs; his hands are tensely knotted.]

The implied message: “trauma probably happens somewhere to some people. But it’s so rare, and so unlikely to be a part of our students’ lives, we’re not going to use precious faculty time to focus on it.”

In brief: “the causes of trauma aren’t relevant to teachers.”

Since those days, our profession has rightly recognized that trauma DOES happen. It does happen to our students and in their families and communities. The causes of trauma are absolutely relevant to teachers.

And yet, because our profession tends to extremes, I now hear the flipside of that earlier casual dismissal. Instead of being rare and almost irrelevant, trauma is common and pervasive.

One sign of this trend: a lengthening list of common occurrences that cause trauma. Perfectly typical stressors — being cut from a sports team, getting a bad grade — are reframed as traumatic.

I’ve even seen the claim that “things that we don’t get to experience can be traumatic.” While missed chances can be disappointing, even stressful, it’s just hard to see how they fit the definition of trauma.

The list of symptoms has also grown. E.g.: “procrastination is a sign of trauma.”

Now, I don’t doubt that some people who have experienced trauma procrastinate; I also don’t doubt that almost everyone procrastinates. Traumatized people might procrastinate, but not all people who procrastinate have experienced trauma.

To avoid being caught up in this race to the extremes, I think it helps to keep the definition in mind: a response to an event or series of events that overwhelms an individual’s capacity to cope.

Such events do happen to our students — but not frequently, and not to all of them.

The Second Pendulum

While we negotiate this first pendulum (“trauma doesn’t happen/is universal”), we also watch a second one swing back and forth.

Old school: “least said, soonest mended. On those infrequent occasions when trauma really happens, we should all just keep going and not make a big deal about it.”

Pendulum swing: “a traumatized student is literally incapable of paying attention or learning. Schooling as we know it should come to a halt.”

This second statement is usually accompanied by neuroscience terminology, starting with “amygdala.”

I was reminded of this pendulum swing at the most recent Learning and the Brain conference in Boston — specifically in a keynote address by George A. Bonanno.

Dr. Bonanno has been studying trauma for decades; in his talk, he focused on the symptoms that follow trauma.

He and his team have been running studies and aggregating data, and he showed graphs representing conclusions based on more than 60 trajectory analyses.

To present his complex findings as simply as possible:

  • Roughly 10% of people who experience trauma have enduring symptoms;
  • Less than 10% start without symptoms, but symptoms develop over time and persist;
  • Roughly 20% initially experience symptoms, but recover over two years;
  • The rest never respond with serious symptoms.

In other words: in Bonanno’s research, two years after trauma, roughly 80% of people do not experience troubling symptoms (the roughly 60% who never respond with serious symptoms, plus the roughly 20% who recover).

For this reason, by the way, Bonanno does not speak of “traumatic events” but of “potentially traumatic events.”

That is: an event has the potential to create trauma symptoms in a person. But something like two-thirds of people do not experience trauma in response to that potentially traumatic event. (And another 20% recover from those symptoms within two years.)

Towards a Balanced Framework

How, then, should teachers think about trauma in schools?

First: we can avoid the extremes.

Yes, trauma does happen.

No, it isn’t common. (Bad grades aren’t traumatic.)

Yes, schools and teachers should respond appropriately to the trauma that students experience.

No, not everyone responds to trauma the same way. Most people react to potentially traumatic events without trauma symptoms (or recover over time).

Second: within this nuanced perspective, we should acknowledge the importance of responding to trauma appropriately.

That is: events that potentially create trauma might be rare; most people might not respond to them with trauma symptoms.

And: our students who do experience trauma symptoms deserve an informed and sympathetic response.

By way of analogy: something like 3% of K-12 students are on the autism spectrum. That’s a relatively small number. And: those students deserve the best education we can provide.

If 3% of our students experience trauma symptoms (I have no idea what the actual percentage is), they too deserve our professional best.

Attempting a Summary

In our profession, we have all too frequently overlooked and downplayed the trauma that some of our students experience. As we try to correct that serious error, we should not commit another error by seeing trauma everywhere, and by assuming it debilitates everyone.


 

A Final Note:

To keep this post a readable length, I have not discussed ACE scores. Depending on the response this post gets, I may return to that topic in a future post.

How to Reduce Mind-Wandering During Class
Andrew Watson

I recently wrote a series of posts about research into asking questions. As noted in the first part of that series, we have lots of research that points to a surprising conclusion.

Let’s say I begin class by asking students questions about the material they’re about to learn. More specifically: because the students haven’t learned this material yet, they almost certainly get the answers wrong.

[Image: a college-age student smiling and raising her hand to ask a question.]

Even more specifically — and more strangely — I’m actually trying to ask them questions that they won’t answer correctly.

In most circumstances, this way of starting class would sound…well…mean. Why start class by making students feel foolish?

Here’s why: we’ve got a good chunk of research showing that these questions — questions that students will almost certainly get wrong — ultimately help them learn the correct answers during class.

(To distinguish this particular category of introductory-questions-that-students-will-get-wrong, I’m going to call them “prequestions.”)

Now, from one perspective, it doesn’t really matter why prequestions help. If asking prequestions promotes learning, we should probably ask them!

From another perspective, we’d really like to know why these questions benefit students.

Here’s one possibility: maybe they help students focus. That is: if students realize that they don’t know the answer to a question, they’ll be alert to the relevant upcoming information.

Let’s check it out!

Strike That, Reverse That, Thank You

I started by exploring prequestions; but we could think about the research I’m about to describe from the perspective of mind-wandering.

If you’ve ever taught, and ESPECIALLY if you’ve ever taught online, you know that students’ thoughts often drift away from the teacher’s topic to…well…cat memes, or a recent sports upset, or some romantic turmoil.

For obvious reasons, we teachers would LOVE to be able to reduce mind-wandering. (Check out this blog post for one approach.)

Here’s one idea: perhaps prequestions could reduce mind-wandering. That is: students might have their curiosity piqued — or their sense of duty highlighted — if they see how much stuff they don’t know.

Worth investigating, no?

Questions Answered

A research team — including some real heavy hitters! — explored these questions in a recent study.

Across two experiments, they had students watch a 26-minute video on a psychology topic (“signal detection theory”).

  • Some students answered “prequestions” at the beginning of the video.
  • Others answered those questions sprinkled throughout the video.
  • And some (the control group) solved unrelated algebra problems.

Once the researchers crunched all the numbers, they arrived at some helpful findings.

First: yes, prequestions reduced mind-wandering. More precisely, students who answered prequestions reported that they had given more of their attention to the video than those who solved the algebra problems.

Second: yes, prequestions promoted learning. Students who answered prequestions were likelier to get the answer correct on a final test after the lecture than those who didn’t.

Important note: this benefit applied ONLY to the questions that students had seen before. The researchers also asked students new questions — ones that hadn’t appeared as prequestions. The prequestion group didn’t score any higher on those new questions than the control group did.

Third: no, the timing of the questions didn’t matter. Students benefitted from prequestions asked at the beginning as much as those sprinkled throughout.

From Lab to Classroom

So, what should teachers DO with this information?

I think the conclusions are mostly straightforward.

A: The evidence pool supporting prequestions is growing. We should use them strategically.

B: This study highlights their benefit in reducing mind-wandering, especially for online classes or videos.

C: We don’t need to worry about the timing. If we want to ask all prequestions up front or jumble them throughout the class, either strategy (according to this study) gets the job done.

D: If you’re interested in specific suggestions on using and understanding prequestions, check out this blog post.

A Final Note

Research is, of course, a highly technical business. For that reason, most psychology studies make for turgid reading.

While this one certainly has its share of jargon-heavy, data-laden sentences, its explanatory sections are unusually easy to read.

If you’d like to get a sense of how researchers think, check it out!


Pan, S. C., Sana, F., Schmitt, A. G., & Bjork, E. L. (2020). Pretesting reduces mind wandering and enhances learning during online lectures. Journal of Applied Research in Memory and Cognition, 9(4), 542-554.

Executive Functions “Debunked”?
Andrew Watson

As long as I’ve been in this field – heck, as long as I’ve been a teacher – the concept of executive function has floated around as a core way to discuss students’ academic development.

Although the concept has a technical definition – in fact, more than one — it tends to be presented as a list of cognitive moves: “prioritizing, switching, planning, evaluating, focusing, deliberately ignoring…”

[Image: a head made up of multiple colored puzzle pieces, open at the top and back.]

I myself have tended to think of executive functions this way: all the cognitive skills that don’t include academic content, but matter in every discipline. So, if I’m trying to execute a lab in science class, I need to …

… focus on this thing, not that thing,

… decide where to begin,

… decide when to switch to the next step,

… realize that I’ve made a mistake,

… evaluate options to fix my mistake,

And so forth.

Crucially, that list applies to almost any academic task: writing an essay, or evaluating the reliability of a historical source, or composing a sentence in Spanish using a new verb tense…

So: these executive functions help students in school – no matter the class that they are in.

To say all this another way: EFs resist easy definition but are mightily important in schools and classrooms. (Truthfully, they’re important in life, but that broader range lies outside of this blog’s focus.)

Today’s News

I recently saw an enthusiastic response to a newly-published study that explores, reconceptualizes — and debunks? — EFs. Because EFs “are mightily important,” such reconceptualization & debunkage merits our thoughtful attention.

Here’s the story.

A research team led by Andreas Demetriou wanted to see if they could translate that long list (“prioritizing, switching, evaluating,” etc.) into a core set of mental processes.

So: a carbon atom might look different from an iron atom, but both are different ways of putting protons, neutrons, and electrons together. Likewise, “prioritizing” and “switching” might seem like two different processes, but they could instead be different arrangements of the same mental elements.

Demetriou’s team focuses on two core mental processes – their “protons and electrons.” Roughly, those mental processes are:

  • Forming and holding a mental model of the goal, and
  • Mapping that mental model onto the situation or problem.

For complicated reasons, Team D combines these two processes under one label: the AACog mechanism. They then run a lengthy series of studies using a GREAT variety of different tests (Stroop this, Raven’s that) across a wide range of ages.

When they run all the calculations, sure enough: the AACog mechanism underlies all those other EFs we’ve been taught about over the years.

As they write: “AACog is the common core running through all executive functions.” (That’s an extraordinary claim, no?)

And, development of the AACog mechanisms explains all sorts of increasing mental capacities: symbolic exploration, drawing inferences, using deductive reasoning, and so forth. (The concentric circles representing this argument challenge all of my AACog mechanisms!)

In other words, this model explains an ENORMOUS amount of human cognitive processing by focusing on two elements.

What It All Means

I wrote above that this study received an “enthusiastic response” when it came out.

In my twitter feed at least, it was packaged with basically this message:

“All those people who were nattering on about EF were having you on. Look: we can boil it down to basically one thing. No need to make it so complicated!”

I can understand why Twitter responded this way: the title of the Demetriou et al. study is: “Executive function: Debunking an overprized construct.” No wonder readers think that the idea of EFs has been debunked!

At the same time, I’m not so sure. I have three reasons to hesitate:

First:

Quoth Dan Willingham: “One study is just one study, folks.” Until MANY more people test out this idea in MANY more ways, we shouldn’t suddenly stop thinking one thing (“EFs exist!”) and start thinking another (“EFs are the AACog mechanism in disguise!”).

We need more research — LOTS — before we get all debunky.

Second:

Let’s assume for a moment that the AACog mechanism hypothesis is true. What effect will that have on discussions in schools?

Honestly, I doubt very much.

The “AACog mechanism” is itself so abstract — as are the “modeling” and “mapping” functions that go into it — that I doubt they’ll usefully replace “executive functions” in daily conversations.

Imagine that a learning specialist says to me: “This student has a diagnosed problem with her AACog mechanism.”

I’ll probably respond: “I don’t understand. What does that mean?”

The learning specialist will almost certainly respond: “Well, she has difficulty with prioritizing, task switching, initiating, and so forth.”

We’re back to EF language in seconds.

Third:

I’m not sure I buy the argument that the “AACog mechanism” DEBUNKS “executive function.”

Imagine this logical flow:

  • Carbon and iron are made up of the same sub-elements: protons, neutrons, and electrons.
  • Therefore, carbon and iron don’t really exist.
  • Voila: we’ve debunked the idea of carbon and iron.

Well, that logic just doesn’t hold up. Carbon and iron DO exist, even as meaningfully different arrangements of sub-particles.

So too:

  • EFs all boil down to the AACog mechanism, which is itself just “mental modelling” and “mapping of models onto reality.”
  • Therefore, EFs don’t really exist.
  • Mission Debunk Accomplished!

I just don’t track that logic.

We understand human cognitive complexity better, but the complexity hasn’t gone away. (We understand carbon and iron better now that we know about protons and neutrons, but the periodic table is still complicated.)

This model helps us think differently about mental functions across academic disciplines. Those new thought patterns might indeed be helpful — especially to people who create conceptual diagrams of cognition.

But I don’t think it will radically change the way teachers think and talk about our students.

TL;DR

A group of thoughtful scholars have put together a new model of cognition explaining executive functions (and a whole lot more).

What does this mean for us?

  1. In ten or fifteen years, EF experts might be talking to us differently about understanding and promoting these cognitive moves.
  2. In the meantime, don’t let oversimplifications on the interwebs distract you. Yes: “executive function” is a mushy and complicated category — and yes, people do go too far with this label. But EFs, or something like them, do exist, and we do need to understand their complexity.

Demetriou, A., Kazali, E., Spanoudis, G., Makris, N., & Kazi, S. (2024). Executive function: Debunking an overprized construct. Developmental Review, 74, 101168.

Early Thoughts on A.I. Research in Schools
Andrew Watson

I hope that one of my strengths as a blogger is: I know what I don’t know — and I don’t write about those topics.

While I DO know a lot about cognitive science — working memory, self-determination theory, retrieval practice — I DON’T know a lot about technology. And: I’m only a few miles into my own A.I. journey; no doubt there will be thousands of miles to go. (My first foray along the ChatGPT path, back in February of this year, did not go well…)

[Image: a young child types on a laptop; a small robot points out answers on a see-through screen that hovers between them.]

Recently I came across research that looks at A.I.’s potential benefits for studying. Because I know studying research quite well, I feel confident enough to describe this particular experiment and consider its implications for our work.

But before I describe that study…

Guiding Principles

Although I’m not a student of A.I., I AM a student of thinking. Few cognitive principles have proven more enduring than Dan Willingham’s immortal sentence: “memory is the residue of thought.”

In other words, if teachers want students to remember something, we must ensure that they think about it.

More specifically:

  • they should think about it successfully (so we don’t want to overload working memory)
  • they should think about it many times (so spacing and interleaving will be important cognitive principles)
  • they should think hard about it (so desirable difficulty is a thing)

And so forth.

This core principle — “memory is the residue of thought” — prompts an obvious concern about A.I. in education.

In theory, A.I. simplifies complex tasks. In other words, it reduces the amount of time I think about that complexity.

If artificial intelligence reduces the amount of time that I’m required to think about doing the thing, it necessarily reduces the amount of learning I’ll do about the thing.

If “memory is the residue of thought,” then less thinking means less memory, and less learning…

Who Did What?

Although discussions of generative A.I. often sound impenetrable to me, this study followed a clear and sensible design.

Researchers from the University of Pennsylvania worked with almost 1000 students at a high school in Turkey. (In this kind of research, 1000 is an unusually high number.)

These students spent time REVIEWING math concepts they had already learned. This review happened in three phases:

Phase 1: the teacher re-explained math concepts.

Phase 2: the students practiced independently.

Phase 3: the students took a test on those math concepts. (No book; no notes; nada.)

For all students, phases 1 and 3 were identical. Phase 2, however, gave researchers a chance to explore their question.

Some students (let’s call them Group A) practiced in the usual way: the textbook, their notes, paper and pencil.

Group B, on the other hand, practiced with ChatGPT at hand. They could ask it questions to assist with their review.

Group C practiced with a specially designed ChatGPT tutor. This tutor was programmed not to give answers to students’ questions, but to provide hints. (There were other differences between ChatGPT and the ChatGPT tutor, but this difference strikes me as most pertinent.)
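For readers who wonder what “hints, not answers” looks like under the hood: the study doesn’t publish its prompt or code, so here is a minimal sketch of how such a tutor might be configured, assuming the standard openai Python client. The model name and the prompt wording are my own illustrative guesses, not the researchers’ actual setup.

```python
# Hypothetical sketch of a "hints, not answers" tutor.
# The study's real prompt and configuration are not public; this is illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The key design choice: the system prompt forbids final answers.
TUTOR_PROMPT = (
    "You are a math tutor helping a student review for an exam. "
    "Never state the final answer, even if the student asks for it directly. "
    "Give one small hint at a time, then ask the student to attempt "
    "the next step before you say anything more."
)

def ask_tutor(student_question: str) -> str:
    """Send the student's question to the hint-only tutor and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("How do I solve 3x + 7 = 22?"))
```

Notice that, in a setup like this one, nothing about the underlying model changes between plain ChatGPT and the tutor; the difference is a few sentences of instruction that force students to keep thinking for themselves.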

So: did ChatGPT help?

Did the students in Groups B and C have greater success on the practice problems, compared to Group A?

Did they do better on the test?

Intriguing Results

The students who used A.I. did better on the practice problems.

Those who used ChatGPT scored 48% higher than their peers in Group A.

Those who used the ChatGPT tutor scored (are you sitting down?) 127% higher than their peers in Group A.

Numbers like these really get our attention!

And yet…we’re more interested in knowing how they did on the test; that is, how well they did when they couldn’t look at their books, or ask Chatty questions.

In brief: had they LEARNED the math concepts?

The students who used regular ChatGPT scored 17% lower than their notes-n-textbook peers.

Those who used the ChatGPT tutor scored the same as those peers.

In brief:

A.I. helped students succeed during practice.

But, because it reduced the amount of time they had to THINK about the problems, it didn’t help them learn.

Case closed.

Case Closed?

In education, we all too easily rush to extremes. In this case, we might easily summarize this study in two sentences:

“A.I. certainly didn’t help students learn; in some cases it harmed their learning. Banish A.I.!”

While I understand that summary, I don’t think it captures the full message that this study gives us.

Yes: if we let students ask ChatGPT questions, they think less and therefore learn less. (Why do they think less? Probably they simply ask for the answer to the question.)

But: if we design a tutor that offers hints, not answers, we reduce that problem … and eliminate the difference in learning. (Yes: the researchers have data showing that the students spent more time asking the tutor questions; presumably they had to think harder while doing so.)

As a non-expert in this field, I suspect that — sooner or later — wise people somewhere will be able to design A.I. tutors that are better still at offering thought-provoking hints. That is: perhaps an A.I. tutor might cause students to think even MORE than other students practicing the old-fashioned way.

That two-sentence summary above might hold true today. But we’ve learned this year that A.I. evolves VERY rapidly. Who knows what next month will bring?

TL;DR

Although THIS study suggests that A.I. doesn’t help (and might harm) learning, it also suggests that more beneficial A.I. tutors might exist in the future.

If — and this is the ESSENTIAL “if” — if A.I. can prompt students to THINK MORE than they currently do while practicing, then well-established cog-sci principles suggest that our students will learn more.


* A note about the publication status of this study. It has not yet been peer reviewed and published, although it is “under review” at a well-known journal. So, it’s technically a “working paper.” If you want to get your research geek on, you can check out the link above.


Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, O., & Mariman, R. (2024). Generative AI can harm learning. Available at SSRN 4895486.

Teachers’ Professionalism: Are We Pilots or Architects?
Andrew Watson
Andrew Watson

I recently attended a (non-Learning-and-the-Brain) conference, and saw a thoughtful presentation that included a discussion of teachers’ professional standing.

In this blog post, I want to …

  1. summarize this speaker’s thoughtful argument,
  2. explain my own reasons for doubting it, and
  3. consider some weaknesses in my own argument.

Teachers as Pilots

The speaker started by summarizing a common claim about teachers’ professional independence:

“Because teachers are highly-trained professionals, we should have the same freedom for independent action and creativity that other professionals enjoy. Rather than scripting and constraining teachers, schools should allow them the leeway to think, act, and teach with meaningful independence.”

I should be clear, by the way, that this speaker’s summary is NOT a straw man. I know that people make (roughly) this argument because a) I’ve heard other people make it, and b) I’m going to make a similar argument in just a few paragraphs.

To interrogate this pro-independence argument, the speaker asked us to think about other highly esteemed professionals: doctors, airline pilots, and engineers.

In every case, these professionals work in highly constrained conditions. In fact, we would be rightly shocked were they even to want to escape those constraints. Imagine:

  • If a pilot were to say: “today I think it would be fun to go through this pre-flight check list in reverse order. And, heck, I think I’ll skip step #27; I’ve never understood what it was for!”
  • If an ER doctor were to say: “I understand that you’re experiencing chest pains, and the protocols suggest several tests. But honestly: you look healthy to me. And, I’m feeling lucky today. So let’s assume it’s a touch of nerves and get you some nice chamomile tea.”

We would be horrified!

Pilots and doctors work within well-established constraints, and have a strict ethical obligation to follow them.

For this reason, society rightly condemns instances where these professionals go outside those constraints.

A woman seated in a small airplane cockpit with her hands on the yoke

Another example: when engineers get sloppy and bridges fall down mid-construction, we feel both horror and surprise to learn that they didn’t follow the professional codes that govern their work.

These examples — the speaker said — show that teachers’ demands for professional freedom are misplaced.

Tight constraints do not violate our professional standing; they embody our professional standing.

Pushing Back: Reason #1

Although I understand these arguments as far as they go, I disagree with the speaker’s conclusion. Let me see if I can persuade you.

I think that doctors, pilots, and engineers are not good analogues for teachers, because the systems on which doctors, pilots, and engineers operate are unified, coherent, and designed to function as a whole.

Here’s what I mean:

  • An airplane’s ailerons and flaps have been scrupulously designed to act upon the wings in a specific way. So too the engines and the rudders. And, frankly, almost everything else about the airplane.

Because airplane parts have been structured to function together, we can have specific, precise, and absolute rules about the operation of planes. When the flaps do this, the airflow over the wing changes in entirely predictable ways. The plane will, in these circumstances, always turn, or ascend, or land, or whatever.

Yes, special circumstances exist: turbulence, wind shear, or thunderstorms. But even these special circumstances call for predictable and consistent responses: responses that can be trained, and should be executed precisely.

  • A bridge has been designed to balance the forces that act upon it against the materials used to build it. Steel does not wake up one day and decide to have the strength of aluminum. Gravity does not vary unpredictably from day to day or mood to mood.

Because bridges have been structured to function in a particular way, engineers can have specific, precise, and absolute rules about their construction. Engineers don’t insist on moment-by-moment freedom because the bridges they build have entirely predictable constraints.

If, however, you have worked in a classroom, you know that such absolute predictability – based on the unity of the system being operated on – has almost nothing to do with a teacher’s daily function.

  • Gravity doesn’t work differently before and after lunch, but students do.
  • An airplane’s rudder doesn’t have a different response to the pilot’s input, but this student might have a very different response than that student to a teacher’s input.
  • An EKG (I assume) shows a particular kind of result for a healthy heart and a distinct one for an unhealthy heart. A student’s test results might mean all sorts of things depending on all sorts of variables.
  • By the way: all of these examples so far focus on one student at a time. They don’t begin to explore the infinite, often-unpredictable interactions among students…
  • …or the differences among the topics that students learn…
  • …or the cultures within which the students learn.

We shouldn’t treat a classroom as a straightforward stimulus-response system (like an airplane, like a bridge), because classrooms include an unpredictable vortex of possibilities between stimulus and response.

The best way – in many cases, the ONLY way – to manage that vortex: give teachers professional leeway to act, decide, change, and improvise.

The Continuum of Professionalism

Let’s pause for a moment to consider other kinds of professionals — say, ARCHITECTS, or THERAPISTS.

We allow — in fact, we expect — substantial freedom and creativity and variety as they do their work.

Of course, these professionals work within some constraints, and follow a well-defined code of ethics.

But those modest constraints allow for great freedom because…

… this client wants her house to look like a Tudor castle, while that client wants his to look like Fallingwater, or

… this client struggles with PTSD and doesn’t want to take meds, while that client is managing bipolar disorder.

In other words: some professions do permit — in fact, require — strict limitations. But not all professions do. And, in my view, ours doesn’t. We’re more like architects than engineers.

Pushing Back: Reason #2

I promised two reasons that I resist the call for doctor-like-narrow-constraints. Here’s the second.

The analogies provided in this case all focus on people dying. The plane crashed. The heart-attack patient perished in agony. The bridge crushed workers and passersby.

In these professions, constraints literally save lives.

Now, I (like all teachers) think that education is important, and can transform lives and societies for the better. Bad educational practices do have damaging results for individuals and communities.

But: no one ever died from a bad English class. (I know; I’ve taught bad English classes. My students didn’t learn much, but they survived.)

If, in fact, teachers should work within tight constraints — checklists, scripts, step-by-step codes — the argument in favor of that position should be persuasive without the threat of death to energize it.

I’m Right, but I Might be Wrong

I promised up top that I’d include the weaknesses in my argument. I see at least four.

One:

Obviously, novice teachers require lots of support, and should work within tighter constraints than experienced teachers.

And, some teachers aren’t very good at their jobs. We might reasonably decline to trust their professional judgments.

Also: some people LIKE scripted curricula. I don’t think we should take them away from people who want them.

Two:

Teachers shouldn’t be scripted or managed detail by detail, but we should operate within well-established cognitive science principles. For instance:

  • Retrieval practice is, in almost all cases, better than simple review.
  • Working memory overload is, in ALL cases, a detriment to learning.
  • Myths like “learning styles” or “right/left brain learning” should not be a guide for instruction.

In other words: my call for independence isn’t absolute. We should know our subject and our craft — and then work flexibly and appropriately with those knowledge bases.

Three:

I suspect that a few specific disciplines might allow for precise scripting.

The teaching of reading or early math, for instance, might really benefit from doing EXACTLY step A and then EXACTLY step B, and so forth.

However:

Even in this case, I suspect we would need LOTS of different scripts:

… “typical” readers,

… students with dyslexia, or diagnosably low working memory,

… students with unusually low background knowledge,

… students whose home language isn’t the instructional language,

and so forth.

In other words: even a scriptable subject matter requires teacherly expertise and inventiveness in moving from script to script.

Four:

My own biases no doubt shape my argument.

I myself am a VERY independent person, and I have a penchant for holding teachers in great esteem.

For these reasons, I probably react more strongly than others do to the suggestion that teachers should be tightly constrained to meet our professional obligations.

In other words: I do have a logical argument in support of my position.

a) Flying an airplane is an inappropriate analogue for teaching a class;

b) Tight constraints are almost certainly impossible in the work we do.

But: that logical argument almost certainly gets much of its passion from a realm beyond logic.

In Sum

Ideas about pedagogy often rest on assumptions about teacher independence. For that reason, I’m glad that the speaker raised this point specifically, and made a strong argument to support one point of view.

I hope I’ve persuaded you that teachers — like architects — need informed independence (within some constraints) to do our work.

Even if not, I’m hopeful that thinking through these questions has proven helpful to you.



In this tweet thread, the invaluable Peps Mccrea gives an excellent example of a situation where teachers’ communal adherence to the same norms benefits students and schools.