Blog

Success and Failure, Part 2: My CV+

Hi, everyone.

I hope last week’s Failure CV was helpful to those who perused it. My hope was that it would show that failure is a common and normal part of any academic’s life. I thought this might help other people feel better about their overall records.

I am aware, though, that understanding the role of failure in general doesn’t make any individual rejection feel any better. The sheer number of failures I have experienced has not inured me to disappointment. Shortly after I posted last week, I recalled two other rejected grant proposals, each of which I poured a lot of work into. In the intervening week, I received two desk rejections of papers. None of that made me feel good, and none of it just rolled off my back. Every paper and proposal represents hours of my time and gallons of my blood, sweat, and tears. Each one has pieces of me in it. It hurts when other people judge it as unworthy of funding or publication.

Still, over the years, I think I have developed some strategies for managing disappointment and frustration. I am sharing three of those strategies today.

First, when reading reviews, I suggest making a conscious effort to separate commentary on the ideas from any implicit value judgements. This is not easy. Most of the time, commentary on the content feels like thinly veiled value judgements, and it’s tempting to draw inferences. For example, I recently got a review on an empirical research paper that suggested it would be more suitable for a practitioner-facing journal. The paper was an analysis of a huge data set from a super high-level perspective and full of talk of statistical significance and effect sizes. The suggestion to publish it for a teacher audience did not make sense to me, and so I immediately went to a value-judgement place, and started thinking that the underlying meaning of the reviewer’s comment was that the paper didn’t present real or useful research.

I don’t know if that was the reviewer’s intended implication or not, but my point is this: Regardless of whether the value judgement was really there, considering it does not help move the paper forward. Considering whether the paper is actually appropriate for a practitioner journal is a tractable step: I can take that advice or leave it. (In this case, my co-author and I left it.) Considering whether or not the paper was real or useful research is neither tractable nor productive. If I really believed the paper was not worth sharing, I would not have spent so much time analyzing the data and writing it. I also have a co-author I respect, who felt the same way about the value of the work. Those two opinions matter; the inferred, unstated (and therefore potentially nonexistent) opinion of this reviewer does not.

It’s hard to stop yourself from translating content commentary into value judgements — or to ignore the value judgements that are sometimes explicitly written into reviews. But it’s a vital skill to develop. Only by focusing on the concrete things reviews say about the ideas in a paper can you use those reviews to advance it. Read the reviews and feel your feelings, but don’t make any decisions on what to do next until you’re able to separate commentary from value judgements. Early in my career, getting to that place took me a long time. Now, I get there faster. I think you will, too, with practice.

Second, use the interminable time you spend waiting for reviews to come back to advance additional lines of work. I firmly believe that rejections are easier to take when they relate to just one of several lines of inquiry you are pursuing. We’re pressured to specialize, especially in graduate school, but this does not necessarily mean we must pin all of our hopes to a few papers and opportunities. A rejection of a paper — even a paper I particularly liked — has always been easier when I have other papers that I’m excited about in development or under review. Spreading out your interests a bit can help harsh criticisms of your papers feel directed at one piece of your work — not at you as a researcher, student, or professional.

Third, and probably most importantly, keep a list of accomplishments you are proud of that go beyond what’s listed on your CV. For whatever reason, we hold up funding and publications as the accomplishments most worthy of mention on a CV. Even when we have diverse lines of inquiry represented there (as mentioned in my second point above), these represent only a very narrow window onto what we accomplish in academic work, especially in education. So, even though these things won’t get listed on your official documents, keep a list for yourself and read it over from time to time to remind yourself that you are more than your papers.

I’m sharing my list of accomplishments that you won’t see on any official version of my CV below. I call it my CV+. I hope it helps you think about things you’re proud of, too.

Kathryn Rich CV+

Faithfully writes a blog post every week that classes are in session, and uses that habit to advance thinking and writing skills.

Continually improves at developing relationships with partner teachers.

Has lots of wonderful memories and stories about the interactions I’ve had with students during field work.

Successfully navigates advancing work on three distinct research projects and three courses at the same time.

Steadily gains Twitter followers.

Stands up for equitable treatment of fellow graduate students.

Provides substantive and thoughtful feedback to peers through coursework and formal review processes for conferences and journals.

Recently wrote a paper for which one of my advisors complimented my “economy of expression” (HUGE for a verbose writer like me!).

Remains intellectually curious throughout the trials and tribulations of academic life.


I encourage you to write your own CV+, friends, and more generally to keep your chins up in the face of rejections and failures. I’ll be back next week with some tips on how to get papers out the door to those feisty, fickle reviewers.

Success and Failure, Part 1: My Failure CV

This three-part blog series is going to have a bit of a different flavor than most of what I write. I typically use this blog to share thoughts about the various research topics and educational issues I’m interested in. Recently, though, I’ve been thinking about more general issues related to academia, research, and writing. I’d like to spend some time sharing my thinking about a couple of things.

One of my advisors tells me from time to time that I should take care in moments when I “rattle off” accomplishments, lest I make my peers feel less accomplished in the process. I can see both sides of this issue. On one hand, I don’t think my CV should really have any effect on anyone else. I am me and they are them, and we come from different backgrounds. In particular, I know (and they know) that I have the benefit of 10+ more years in an academic space than most of my peers.

On the other hand, I understand his point and sometimes think about how I might actually use those years of experience to support my peers, rather than having the unintentional (but nonetheless real) effect of contributing to others’ imposter syndrome. (Which I really struggle with, too, by the way.)

I figure one way to start is to be transparent about how when I talk about successes, I am sharing a really narrow highlight reel of my academic journey. So, today I’m sharing my Failure CV. What’s below is a partial list of the rejections I’ve received since leaving high school almost 20 years ago. (These are just the ones I remember or have a record of — which I guess is less than 50%.)

I masked my collaborators’ names and the project and article names for the sake of not implicating anyone else in this list. The point isn’t really what the failures were — just the sheer number of them, especially in comparison to the successes listed on my actual CV.

Kathryn Rich — Failure CV

EDUCATION

  • Brown University, REJECTED for undergraduate admission, 2001
  • University of California at Berkeley, REJECTED for PhD admission, 2005
  • University of Cambridge, REJECTED for PhD funding, 2017

PROFESSIONAL EXPERIENCE

  • Pearson Education, REJECTED application for content developer, 2007
  • McGraw-Hill Education, REJECTED application for educational research position (after I had previously been an employee there!), 2007
  • Approximately 50 other job applications REJECTED, 2005 and 2007, after leaving school (The above are just the ones I remember because they stung the most.)

HONORS AND AWARDS

  • While I can’t think of particular ones right now, I’m sure there are about a billion of these.

GRANTS

  • IES proposal to develop a digital curriculum tool that gives feedback to teachers on adaptation, REJECTED, 2016
  • IES proposal to develop integrated mathematics + CT curriculum, REJECTED, 2016
  • NSF proposal to develop learning trajectories for the mathematical practices, REJECTED, 2016
  • NSF proposal to develop a phone app to support early childhood mathematics education through augmented reality “math walks,” REJECTED, 2016
  • NSF proposal to support scaling of an afterschool science program already shown to support girls’ science identities, REJECTED, 2016
  • NSF proposal to develop a digital curriculum tool that gives feedback to teachers on adaptation (yep, another try at the above), REJECTED, 2016
  • NSF proposal to develop interactive problems that allow underrepresented minorities to explore STEM careers, REJECTED, 2015 (This one really hurt!)
  • IES proposal to develop integrated mathematics + CT curriculum (yep, another try at the above), REJECTED, 2015
  • William T. Grant Foundation proposal to develop interactive problems that allow underrepresented minorities to explore STEM careers (different spin on the above), REJECTED, 2015
  • NSF proposal to study the use of innovative kinds of feedback in tech-enhanced math content, REJECTED, 2014

PAPERS AND PRESENTATIONS

  • Theoretical analysis of relationship between computational abstraction and mathematical modeling in elementary school, REJECTED, 2018
  • Exploration of theories of fact fluency development using a huge dataset from an online game, DESK REJECTED (not even sent out for review!), 2017
  • Study of how elementary school teachers integrated CT into mathematics, REJECTED (after a round of major revisions), 2016
  • Discussion of how databases of authentic problems could expand mathematical modeling opportunities in K-12, REJECTED, 2015
  • Document analysis of the CCSS for CT content, MAJOR REVISIONS requested twice, fate still TBD
  • AERA paper on fact fluency development using the online game dataset, REJECTED, 2017
  • SIGCSE paper on the development of a particular computational thinking trajectory, REJECTED, 2017
  • SIGCSE paper describing an analysis of the mathematical understandings required to use various Scratch blocks and modifications that would align with math standards, REJECTED, 2016
  • NCTM proposal for session on comparing affordances and costs of physical and digital manipulatives, REJECTED, 2015
  • NCTM proposal for session on open response and re-engagement in K-2, REJECTED, 2014

Again, this is only the stuff I remember.

I’ve been lucky to have been accepted to three schools at which I’ve learned a great deal — but two Ivy League schools rejected me, and Cambridge would take me only if I was willing to pay almost $100K for the privilege of being there. I’ve worked on at least 11 grant proposals — only 1 has been funded. I’ve worked on more than 10 journal articles, and not one has been published so far. I’ve had better luck at conferences, but even there I’m batting about .500.

My successes are my highlight reel. So are everyone else’s. Don’t ever forget that.

Over the next two weeks I’ll talk first about some of the tips I have for dealing with all the rejections that come with this territory. Then in week three, I’ll talk about how I manage to get so much stuff out the door — if there’s one thing I’m good at, it’s hitting submission buttons.

 

Educative Curriculum Materials, Part 3: Teachers as Creators

Happy Friday, everyone. Welcome back to Part 3 of my series on educative curriculum materials.

Two weeks ago I wrote about how I worried that curriculum materials were overspecified in ways that could be suppressing pedagogical design capacity.

Last week I wrote about how I thought translations of some of the information in curriculum materials into different forms of media might help in efforts to invite the development of pedagogical design capacity while also supporting it.

This week, I’m thinking about the potential of digital curriculum materials from a different angle. One of my lab mates, who works as a technology coach at a middle school, mentioned last week that one of the broad goals at her school this year is to help kids become not mere consumers of technology, but creators. This got me thinking: The translations of information I discussed last week, powerful as they might be, still position teachers as consumers of curriculum materials and not creators. On this particular dimension, nothing I’ve said so far really moves the needle.

Is there a way to use a digital platform to place teachers in a creative role as they work with curriculum materials? Even if it’s possible, is it something to strive for?

I’ve written before about why I think taking teachers’ roles as designers of curriculum is important, and why a digital platform is a place to do it. The short version is this: Making a digital curriculum manipulable — maybe even “programmable” (Hoyles & Noss, 2003) — has the potential to make curriculum developers’ intentions visible in a new way. Allowing teachers to make adaptations within the system and see effects of their adaptations could communicate the rationale for some of the design decisions made by the curriculum developers at a point in time when that information is useful to teachers — that is, when they are considering an adaptation that may or may not be aligned with the original intent of the activity, lesson, or unit.

So, that’s why positioning teachers as creators of some sort is something worth exploring. A question that remains, though, is what parts of curriculum materials creation are best left to teachers, and what parts curriculum developers are better positioned to do. Asking teachers to design curriculum from scratch is, after all, a really tall order with everything else they are expected to do. What can I, as a curriculum developer outside of the classroom, do to support teachers while still respecting their roles in the design and creation process?

As I’ve reflected on this issue this week, I’ve realized that I’m not the only person to have thought about this. To close out this series, I thought I’d highlight a couple of examples of research that discusses teachers as creators of curriculum and think about what those studies suggest about the most fruitful divisions of labor between curriculum developers and teachers.

First, de Araujo, Otten, and Birisci (2017) shared a case study of a teacher implementing a flipped classroom model for the first time. The teacher followed a textbook as she created lecture videos she expected students to watch before they came to class. Then students spent class time working on problems by themselves or in groups. The authors found that with these videos available as a resource, students did not reference the textbooks much, even while working on problems. They also noted that while the teacher used much of the content from the textbook, she did make some significant changes in terms of adding or skipping content. As such, the videos could be considered a curriculum material in and of themselves — one created by the teacher. This provides one model of the line between curriculum developer and teacher in terms of materials creation. Perhaps curriculum developers simply keep doing what they’re doing, and teachers use video or other digital creation tools to create a new version of the materials that suit their particular needs.

Relatedly, Bates (2017) described an effort to build a tool that allowed for direct adaptation of a pre-created curriculum resource. She and her collaborators created a digital mini-version of a curriculum as described in a textbook and built in tools that allowed teachers to re-order, “snooze,” or skip activities. When teachers made these changes, they received feedback about the potential implications. For example, if they deleted an activity that was intended as an important prerequisite for a later activity, they were advised that additional adjustments may be needed to compensate. Thus, like the teacher in the above flipped classroom study, teachers using Bates’ tool were also creating new versions of an existing textbook. The difference, in this case, is that teachers received feedback on their changes. This additional feature changes the roles a bit: curriculum developers create the first version of the curriculum and also serve as advisors (of sorts) on the process of adaptation, through the provision of automated feedback.

Finally, a third research team (Confrey, Gianopulos, McGowan, Shah, & Belcher, 2017) built a different kind of tool. Called the Math Mapper, their tool allowed teachers to select and sequence resources pulled from other sources (often the Internet), then receive feedback on the coherence of their chosen sequence of activities. This model changes the roles yet again. Here, teachers create the first version of their curriculum, and the curriculum developers serve as advisors (through feedback) on how to make improvements.

It’s not clear which one of these is the most productive approach. Most likely, different approaches will be best suited to different contexts. The first version, where teachers create their own versions of curriculum, is more or less what happens with any implementation, although the videos allow the textbook to be eliminated altogether. Such an approach works well for many teachers, but not always. That’s what makes me believe that the two kinds of feedback systems, discussed in Bates (2017) and Confrey et al. (2017), are worthy of further investigation. Effective feedback systems for teachers creating curriculum will be a huge challenge to create, but I have a feeling it will be a worthwhile effort.

References

Bates, M. S. (2017). Leveraging digital tools to build educative curricula for teachers: two promising approaches. ZDM, 49(5), 675–686.

Confrey, J., Gianopulos, G., McGowan, W., Shah, M., & Belcher, M. (2017). Scaffolding learner-centered curricular coherence using learning maps and diagnostic assessments designed around mathematics learning trajectories. ZDM – Mathematics Education, 49(5), 717–734. https://doi.org/10.1007/s11858-017-0869-1

de Araujo, Z., Otten, S., & Birisci, S. (2017). Teacher-created videos in a flipped mathematics class: digital curriculum materials or lesson enactments? ZDM – Mathematics Education, 49(5), 687–699. https://doi.org/10.1007/s11858-017-0872-6

Hoyles, C., & Noss, R. (2003). What can digital technologies take from and bring to research in mathematics education? In Second international handbook of mathematics education (pp. 323–349). Springer Netherlands.

 

Educative Curriculum Materials, Part 2: Supporting and Inviting Adaptation

Welcome back for part 2 of our series on educative curriculum materials!

To briefly recap last week’s post: I argued that creating educative curriculum materials requires a delicate balance between providing lots of highly specific information and leaving space for teachers to exercise and develop pedagogical design capacity. And I shared a worry and a hope: a worry that we have leaned too far toward the over-specified end of the continuum between specification and flexibility, and a hope that perhaps digital delivery can help us find a better balance. But how?

Davis & Krajcik (2005) suggested that one way might be through use of multiple forms of media, rather than just one: “By delivering educative curriculum materials online, we have the opportunity to provide more information along the lines of the design heuristics presented here, using many different media. For example, because complementing text with other media can promote more effective learning and because teachers learn from realistic descriptions of practice, online educative curriculum materials could incorporate audio and visual records of teachers’ enactment of lessons” (p. 9).

Their text seems to suggest they were thinking of video as a way to provide additional information that was not already present. In the case of a lot of curriculum materials, though — in mathematics, as well as other subjects — I think that video could instead be a translation of information we already attempt to provide.

Let me explain.

When I first started thinking about how the educative curriculum materials I’m familiar with could be made to better support pedagogical design capacity, I began looking into research on teacher agency. Several researchers have explored how teachers make agentic choices in the context of various reforms. A common theme in these papers is that one reason reforms don’t succeed is that they fail to allow teachers to contribute their expertise toward the reform effort. Instead, they are given directives rather than autonomy. I think this quotation sums it up well: “Historically and continuing today, two aspects of teachers’ agency are limited within educational reform efforts, namely, their capacity to shape and define the course of the reform effort and their level of control or volition” (Severance, Penuel, Sumner, & Leary, 2016, p. 532).

When translated to apply specifically to the context of using educative curriculum materials, I think Severance et al.’s (2016) quotation above would say something like this: Historically and continuing today, one or both of the following aspects of teachers’ agency are limited with regard to using reform-oriented curriculum materials, namely, their pedagogical design capacity and their opportunities to exercise it. That is, educative curriculum materials don’t always support the development of pedagogical design capacity while also inviting it.*

As noted last week, I think materials developers (myself included) make valiant attempts at the support piece. That’s what all the information we pack into educative curriculum materials is for. When we script student dialogues or discussion questions with sample answers, the intent (most of the time, at least) is to provide a model of how a lesson might go, to allow teachers to imagine what might happen in their classrooms. But there’s research that suggests this isn’t always the effect that the scripting has. I know there is a study somewhere — comment if you know it! — that discusses a teacher who had her students read aloud sample student dialogues rather than having an open discussion. Parks and Bridges-Rhoads (2012) found that a preschool teacher using a highly scripted literacy curriculum applied those scripts in her mathematics teaching, which limited opportunities for students to explain their mathematical thinking. Grossman and Thompson (2008) noted that highly structured materials were very useful for new teachers, but further found that the teachers found it difficult to abandon the structured activities and practices even as they grew aware of their instructional limitations.

So, sometimes the script-like elements aren’t functioning as a support for developing pedagogical design capacity. Instead, they’re being interpreted in ways that limit how and when teachers make design decisions for instruction. In short, in an effort to support pedagogical design capacity, we are doing exactly the opposite of inviting it. We may actually be suppressing it.

On the other hand, simply removing the support and not providing any model of implementation does not seem right either. Only giving an overview of a lesson, with many details to be filled in, would certainly invite pedagogical design, but it wouldn’t support it.

So how do we support and invite use of pedagogical design capacity at the same time? I think one way is to translate all those sample questions and dialogues into classroom video, for the following reasons:

  • We know teachers learn a lot from watching classroom video.
  • The videos would serve the function of providing a model of implementation — a specific kind of support for developing design capacity.
  • It would be very challenging to reproduce the dialogue from a video while actively teaching. Videos, more than text-based scripts, communicate the idea that this is a sample implementation, not something to be followed rigidly. Thus, it better invites pedagogical design from teachers.
  • It’s also possible to provide more than one video of the same lesson, which can also clearly communicate the flexible versus critical elements of a lesson. This could invite teachers to make changes to the flexible parts while supporting them in understanding the overall intent of the lesson.

In short, I think translation of sample questions and dialogues into classroom video is a concrete and widely applicable way to take up Hoyles, Noss, Vahey, and Roschelle’s (2013) call to support teacher adaptation of materials by shifting from scripting to steering.

Of course, I’m making this change sound easy, but it’d be difficult to accomplish in practice. It raises questions of how to obtain and choose among video resources, what text goes on the page instead of the scripted elements, and so on. As with many things, the devil is in the details. Still, it’s been a productive line of thinking for me. It’s made me wonder what other elements of educative curriculum materials could be translated (not eliminated!) in ways that help us support pedagogical design capacity while also inviting it.

*Note: My sincere thanks to Dr. Corey Drake for helping me articulate this idea in a very productive and helpful conversation I had with her today!

References

Davis, E. A., & Krajcik, J. S. (2005). Designing educative curriculum materials to promote teacher learning. Educational Researcher, 34(3), 3–14.

Grossman, P., & Thompson, C. (2008). Learning from curriculum materials: Scaffolds for new teachers? Teaching and Teacher Education, 24, 2014–2026.

Hoyles, C., Noss, R., Vahey, P., & Roschelle, J. (2013). Cornerstone Mathematics: Designing digital technology for teacher adaptation and scaling. ZDM – International Journal on Mathematics Education, 45(7), 1057–1070.

Parks, A. N., & Bridges-Rhoads, S. (2012). Overly scripted: Exploring the impact of a scripted literacy curriculum on a preschool teacher’s instructional practices in mathematics. Journal of Research in Childhood Education, 26(3), 308–324.

Severance, S., Penuel, W. R., Sumner, T., & Leary, H. (2016). Organizing for teacher agency in curricular co-design. Journal of the Learning Sciences, 25(4), 531–564.

 

Educative Curriculum Materials, Part 1

For more than 20 years now, educative curriculum materials have been a popular focus for mathematics education research, development, and theorizing. When the 1989 NCTM Standards laid out new and ambitious ways of teaching mathematics, curriculum materials were seen as a potentially powerful mechanism for supporting teachers in making the needed changes to their practice (Ball & Cohen, 1996). Because they were (and still are) so widely used, and fit into the day-to-day work of teachers, they seemed (and still do seem!) like a fruitful avenue for teacher learning. Several development groups received NSF funds to create educative curriculum materials, so called because they were intended to support both student and teacher learning.

The logic in this argument makes sense to me, and I spent a large chunk of my career working on a couple of sets of these educative curriculum materials. I truly, sincerely loved my work. I never believed I had it in me to be a teacher. I was too anxiety-ridden to handle all the on-demand thinking and decision making, and I was never very good at developing rapport with kids. But I did (do) admire teachers and think their work is both challenging and important. When I stumbled into a career in educative curriculum materials development, I was really happy to have found a way to support educators.

I’m still proud of that work, and I respect the colleagues I worked with. But graduate school has given me some time for thoughtful reflection on our efforts, and it has me wondering if we’ve gone a bit astray in our efforts to make curriculum materials educative.

The problem is this: For the most part, our method for making materials educative is simply to provide more and more information. To help teachers anticipate student thinking, we supply sample student work. To deepen teachers’ mathematical content knowledge, we provide mathematical background that goes into greater depth than we expect students to reach. To give teachers a model of how a discussion might play out, we script questions and sample student dialogue. To help teachers discern the intended takeaways from each activity, we provide the rationale for the activities and the reasons they are sequenced as they are.

It’s not that I think all this information should be concealed or ignored. I understand the purpose of providing it. Davis and Krajcik (2005) laid out five high-level guidelines for creating educative curriculum materials, and the first four have to do with providing the kinds of information described above. (It’s a great piece to read if you’re interested in this topic.)

My concern is that in the course of providing all this information, we’ve created materials that are wildly over-specified. With lessons scripted out and detailed rationales for why the lessons are as they are, the materials end up reading like an argument for why they should be implemented exactly as written — even if that’s not what was intended. And worse, oftentimes it’s generous to even call what we write an argument. Several researchers have analyzed existing educative materials and found that their voice and structure tend to talk through teachers, telling them exactly what to say to students, rather than to teachers (Stein & Kim, 2009; Herbel-Eisenmann, 2007). As educators ourselves, we know that telling a student what to do step by step isn’t particularly educative. Why do we think materials that tell teachers what to do step by step will be educative?

There are other issues with overspecification, too. For one thing, we know that teachers don’t read curriculum materials in their entirety, and not all teachers read them the same way (Remillard, 2012; Sherin & Drake, 2009). Overspecification also seems to translate to lack of flexibility in the opinion of some teachers and districts, who are abandoning structured curriculum materials to create their own from online resources (Choppin & Borys, 2017).

The biggest problem I see with overspecification, though, is that it doesn’t leave much room for teacher decision-making. In our eagerness to use educative curriculum materials to support teacher learning, I think we have lost sight of the fact that implementation is, in a sense, a second act of curriculum design. Adapting teaching practices to particular students and contexts is a critical part of education that lies solely in the hands of teachers, and curriculum materials that are overly specified won’t support the development of this skill. In their fifth guideline for the development of educative curriculum materials, Davis and Krajcik (2005) described this skill as pedagogical design capacity and argued that educative curriculum materials should promote it: “Promoting a teacher’s pedagogical design capacity can help him participate in the discourse and practice of teaching; rather than merely implementing a given set of curriculum materials, the teacher becomes an agent in its design and enactment” (p. 6).

I just don’t think our overspecified materials are supporting the development of pedagogical design capacity. So what can we do better? If educative materials need to provide so much information, and providing that information tends to lead to overspecification, which in turn limits the development of pedagogical design capacity… are we stuck?

In a curriculum delivered in print, perhaps. In a curriculum delivered digitally, perhaps not. Several research teams have identified ways in which a digital medium could ease this tension between providing needed information and overspecifying. For example:

  • In a digital space, information can be delivered via multiple media (Davis & Krajcik, 2005), which could help developers provide information in ways that don’t prescribe what teachers do.
  • Provision of hyperlinks between related materials can increase teacher agency and metacognition while reviewing the materials (Shapiro & Niederhauser, 2004), and a digital medium allows developers to define multiple links between materials that provide multiple pathways through the curriculum.
  • Integration of student-facing technology into the materials can stimulate teacher thinking and allow developers to shift from scripting to steering their planned activities (Hoyles, Noss, Vahey, & Roschelle, 2013).

Over the next two weeks, I’ll be exploring these ideas about how a digital medium could help us develop curriculum materials that are educative without being overspecified.

References

Ball, D. L., & Cohen, D. K. (1996). Reform by the book: What is — or might be — the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25(9), 6–8, 14.

Choppin, J., & Borys, Z. (2017). Trends in the design, development, and use of digital curriculum materials. ZDM – International Journal on Mathematics Education, 49(5), 663–674.

Davis, E. A., & Krajcik, J. S. (2005). Designing educative curriculum materials to promote teacher learning. Educational Researcher, 34(3), 3–14.

Herbel-Eisenmann, B. A. (2007). From intended curriculum to written curriculum: Examining the “voice” of a mathematics textbook. Journal for Research in Mathematics Education, 38(4), 344–369.

Hoyles, C., Noss, R., Vahey, P., & Roschelle, J. (2013). Cornerstone Mathematics: Designing digital technology for teacher adaptation and scaling. ZDM – International Journal on Mathematics Education, 45(7), 1057–1070.

Remillard, J. T. (2012). Modes of engagement: Understanding teachers’ transactions with mathematics curriculum resources. In G. Gueudet, B. Pepin, & L. Trouche (Eds.), From Text to “Lived” Resources: Mathematics Curriculum Materials and Teacher Development (pp. 105–122). Springer Netherlands.

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In Handbook of Research on Educational Communications and Technology (pp. 605–620).

Sherin, M. G., & Drake, C. (2009). Curriculum strategy framework: Investigating patterns in teachers’ use of a reform-based elementary mathematics curriculum. Journal of Curriculum Studies, 41(4), 467–500.

Stein, M. K., & Kim, G. (2009). The role of mathematics curriculum materials in large-scale urban reform: An analysis of demands and opportunities for teacher learning. In J. T. Remillard, B. A. Herbel-Eisenmann, & G. M. Lloyd (Eds.), Mathematics Teachers at Work: Connecting Curriculum Materials and Classroom Instruction (pp. 37–55). New York: Routledge.

 

Virtual and Physical Manipulatives, Part 3

Hello, again. Welcome to the third installment of my musings on virtual manipulatives.

To briefly recap:

Two weeks ago, I shared a piece of a course paper that used ideas from Seymour Papert and James Gee to explain why virtual manipulatives seem to have an edge over their physical counterparts for promoting mathematical learning.

Last week, I discussed research (DeLoache, 2000) showing that when a representation is perceptually rich, young children have a harder time thinking of it as a representation of something else, rather than just an interesting object in its own right. I discussed the implications of this for use of manipulatives to model word problems, and put an idea on the table about how virtual manipulatives might be designed to overcome some of the issues associated with physical manipulatives. In short, perceptually rich manipulatives seem to help kids better make sense of problems, but bland manipulatives seem to help kids focus on the mathematics (McNeil, Uttal, Jarvin, & Sternberg, 2009). I suggested the development of a virtual manipulative that fades and restores its perceptual richness during the appropriate stages of the mathematical modeling process.

But what about when manipulatives are used outside of contextualized problem solving? What about the cases where manipulatives are used as a model of mathematical concepts but not of real-world contexts? Consider, for example, when we ask students to use base-10 blocks to add whole numbers, or to use fraction circle pieces to add fractions. We are, in essence, hoping that the blocks and pieces function as representations of mathematical ideas, and nothing else. Today, I’m going to discuss the implications of the research on perceptual richness for that.

In my admittedly brief literature search on this topic, I did not find a study that removed context entirely, but I did find one with a problem context that I don’t think rises to the use of mathematical modeling. Petersen and McNeil (2013) asked preschoolers to complete a counting task using manipulatives with varying perceptual richness. Children were told they needed to count the objects to help out a puppet character—to either check whether the puppet’s counting was correct, or count out the number of objects the puppet asked for. So, there was an element of context, but only to set up the mathematical task.

In addition to varying the perceptual richness of the manipulatives, Petersen and McNeil (2013) also varied the familiarity that the children had with the objects. So, some of the perceptually rich manipulatives were objects found in their classroom, like miniature animals. Others were objects that they had not seen before, like glitter pom-poms from a craft store. Similarly, the bland manipulatives varied according to student familiarity; craft sticks were considered bland but highly familiar, and wooden pegs were considered bland but unfamiliar.

The results of the study were, in short, that perceptual richness depressed students’ performance on the counting task when the manipulatives were familiar, but enhanced their performance when the manipulatives were new (Petersen & McNeil, 2013). I don’t think I would have predicted this, but the results do make intuitive sense to me. If children have used perceptually rich manipulatives for a different purpose—for example, if they have used toy animals in imaginative play—they have a hard time seeing those manipulatives as representations of the mathematics. This jibes with the results of the DeLoache (2000) study, particularly the experiment where DeLoache asked students to play with the 3D model of the room before asking them to use the model as a representation of the real room. The play led students to think of the model as a toy in its own right, and then they had a harder time using it as a representation (DeLoache, 2000).

The more interesting result, though, is that when students’ first use of a perceptually rich manipulative is as a mathematics manipulative, the perceptual richness might actually be beneficial (Petersen & McNeil, 2013). This at first seems to counter the DeLoache (2000) study, as the 3D model was new to the children in that study and they still had trouble using it as a representation. However, even though the specific 3D model was new to the children, it’s quite possible (maybe even likely) that they had seen a dollhouse or other miniature scene like it before. In that case, the children might have been “familiar” with the model in a way that depressed their ability to use it as a representation.

So, returning to the original question about students using base-10 blocks and fraction circle pieces to add and subtract, what do the results of the Petersen and McNeil (2013) study suggest? Overall, the results made me feel a little better about the brightly colored fraction circle pieces I see used in many classrooms. After first reading the DeLoache (2000) piece, I wondered if we were doing kids a disservice by making them so colorful (i.e., perceptually rich). Because kids are not likely to have seen fraction circle pieces before using them to think about fractions, maybe the color actually helps. I do wonder, though, whether giving students a period of open exploration of the materials before using them to work with fractions—a common practice among teachers, in my experience—might harm kids’ ability to use them as fraction representations. It’s an interesting empirical question.

When it comes to virtual manipulatives, I wonder what the implications of this line of research are for manipulatives that are embedded within apps. Watts et al. (2016) studied students’ use of a variety of virtual manipulative apps, some of which had little in the way of context or familiar elements, and others that involved a game-like setting with lots of familiar elements, like fish, fruit, and dogs. Watts et al. took into account many aspects of the apps, such as whether the tasks were open- or closed-ended. However, perceptual richness wasn’t treated as a variable, and I wonder what kind of influence it had on students’ use of the apps. That’s another interesting empirical question.

Putting together the thoughts from last week’s post and this one, then, here is where I end up:

  • The potential of virtual manipulatives to vary their perceptual richness based on the point in the modeling process is exciting, but…
  • That also means we might need to be more careful, both when designing and studying virtual manipulatives, to think about the effects that perceptual richness and students’ familiarity with the elements of richness might have on the ability of the manipulative to serve as a mathematical representation for kids.

In short, perceptual richness of manipulatives is more easily controlled in a virtual environment than a physical one, and we should use that control to our advantage to support students’ use of manipulatives as mathematical representations.

Thus ends our series on virtual manipulatives. Hope you found it interesting! Next week will start a new 3-part series. Stay tuned for the topic. 🙂

References

DeLoache, J. S. (2000). Dual representation and young children’s use of scale models. Child Development, 71(2), 329–338.

McNeil, N. M., Uttal, D. H., Jarvin, L., & Sternberg, R. J. (2009). Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction, 19(2), 171–184.

Petersen, L. A., & McNeil, N. M. (2013). Effects of perceptually rich manipulatives on preschoolers’ counting performance: Established knowledge counts. Child Development, 84(3), 1020–1033.

Watts, C. M., Moyer-Packenham, P. S., Tucker, S. I., Bullock, E. P., Shumway, J. F., Westenskow, A., … Jordan, K. (2016). An examination of children’s learning progression shifts while using touch screen virtual manipulative mathematics apps. Computers in Human Behavior, 64, 814–828. https://doi.org/10.1016/j.chb.2016.07.029

 

Physical v. Virtual Manipulatives, Part 2

Hello again, all.

I’m trying something a bit different with my blog this year. As I noted last week, I’m hoping to do a better job this year of staying with ideas a bit longer instead of dabbling in lots of different things all the time. I think a good way to facilitate that might be to have sets of themed blog posts. Sets of three related posts seems a reasonable place to start.

So, welcome to virtual vs. physical manipulatives, part deux.

Last week I shared a piece of one of my course papers that discussed a few reasons why virtual manipulatives might have an edge over their physical counterparts in terms of instructional effectiveness (at least in certain contexts). I read an article for a course this week that led me to another possible reason. It’s going to take me a little while to get to the physical versus digital comparison, but stay with me — I promise there are some really interesting tidbits along the way.

As an assignment for my cognitive development class, I read a piece by Judy DeLoache (2000) about young children’s ability to use scale models to reason about real objects. Researchers showed 2.5- to 3-year-old children where a stuffed toy was hidden in some sort of model of a real room (more on the models in a minute). After kids were shown where the toy was hidden via the model, they were let into the real room and asked to find the stuffed toy. Researchers recorded whether the kids found the stuffed toy in the first place they looked (in which case it was likely they understood how the model of the room mapped onto the real thing) or searched more randomly.

The main independent variable of interest was the physical or perceptual salience of the model. Sometimes, the model kids saw was a 3D physical model similar to a dollhouse. Sometimes, it was a photo of the room. Still other times, it was a 3D model, but a piece of glass was in front of it so children could not interact with the model. Consistently, in this study and several related studies, children were more likely to find the stuffed toy immediately when they saw a less perceptually salient model. That is, kids were more successful when they saw the flat photo of the room or the 3D model behind glass than when they saw the 3D model with no glass.

This result was very counterintuitive to me. Why would it be easier for kids to map a photo of a room onto the real room than to map a 3D model? The 3D model is a closer match! DeLoache (2000) theorized that the 3D model was more interesting to children as an object in itself, and so they had a harder time using it as a representation of something else. To further support this theory, the researchers ran another experiment, asking some children to play with the 3D model before showing them where the stuffed toy was hidden. Children who physically manipulated the model performed worse on the task than those who did not. Assuming that playing with the model made children more interested in the model in its own right — a reasonable assumption — this result supports the theory that increased perceptual salience of an object makes it more difficult for children to use the object as a representation for something else. Once children think of the model as a toy, for example, they have a harder time thinking of it as a representation of the real room.

This theory got me thinking more about manipulatives. We give young kids manipulatives all the time in early mathematics classes, and sometimes those manipulatives are quite interesting in and of themselves. Counters are often shaped like animals, and fraction pieces come in bright colors. By making the manipulatives visually appealing, are we actually making it harder for kids to think of them as representations of mathematical objects?

I couldn’t help but recall an episode in a first-grade classroom several years ago, when I was working with a child to create combinations of 10 using red and green counters. The goal of the task was for children to generate as many different combinations as they could — 1 and 9, 4 and 6, and so on. This particular child spent most of the time I was with him (maybe 10 minutes?) making the same combination of 5 and 5, but arranging the colors differently. First he made a row of 5 red and a row of 5 green. Then he alternated them in a single line: red, green, red, green, and so on to 10. I tried a couple of tactics to get him to see that these were all the same combination, but I didn’t get there. He insisted they were different. After reading the DeLoache (2000) piece, I realized that he couldn’t see the 5 and 5 combinations as the same because he wasn’t seeing them as representing 10. He was seeing red and green counters — just that! — and those counters were in arrangements that were meaningfully different to him.

What might this mean instructionally? Should we make manipulatives as bland as possible? Would that help kids see them as representations of mathematical concepts? Unsurprisingly, I am not the first person to think about this, and also unsurprisingly, the answer is not simple. In one study, researchers compared fourth-grade students’ performance on money-based word problems according to whether they used coins and bills that closely resembled real money or paper rectangles and circles simply labeled as dollars and coins (McNeil, Uttal, Jarvin, & Sternberg, 2009). Students who used the simpler manipulatives answered more problems correctly, suggesting (at first glance) that less perceptually rich manipulatives are better. However, the authors also analyzed the errors made by students in each group and found that students using the realistic bills and coins made fewer conceptual errors. That is, students who used the realistic bills and coins were less likely to completely misinterpret the problems — their errors tended to be arithmetic mistakes. Students using the bland manipulatives were more likely to misinterpret the problem conceptually (e.g., use the wrong arithmetic operation), but overall had better performance.

As a potential explanation for this finding, McNeil et al. (2009) theorized that the two kinds of manipulatives are good for different purposes. The realistic bills and coins helped kids make sense of the word problems, perhaps because the students more readily see them as representations of real money. The bland versions helped students carry out the mathematics once the problems had been correctly interpreted, perhaps because students more easily see them as representations of decimal numbers. So, each manipulative might be better, depending on what you’re hoping that they’ll help kids model — the connection to the real world, or the mathematics itself.

The trouble is, when we ask kids to solve word problems, we’re asking them to connect all the way from the real-world context to the mathematics. And neither version of the manipulative is getting them all the way there.

Oy. Is anyone else feeling like education is impossible? We can’t ask kids to use two different manipulatives in the course of solving one word problem. I can feel all the elementary school teachers in the world laughing derisively at that idea right now.

But… What if one manipulative could start out perceptually rich and then have its perceptual details temporarily fade away?

We could do that with a digital medium. And now I’m super curious whether it would help kids better understand mathematical modeling.

In one of the books I used to work on as an editor and curriculum developer, there is a diagram of mathematical modeling that looks something like this*:

[Figure: mathematical modeling diagram]
What if there were a virtual manipulative where kids could choose when to toggle between the real world and the world of mathematics? They could mess around with realistic coins and bills until they decide how to proceed mathematically. They could click an arrow or button to enter the world of mathematics, and the details of the manipulatives could fade away, making it easier for kids to focus on the mathematics. Then they could come back to the real world, and the perceptual details would come back, helping them recall the context and interpret the answer.
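To make the design concrete, here is a minimal sketch of such a manipulative in plain Python. Everything in it is hypothetical: the class name, the image file, and the text-based “rendering” all stand in for whatever a real app would do.

```python
# Hypothetical sketch of a manipulative that fades its perceptual richness
# when the student toggles from the real-world view into the "world of
# mathematics." Not an existing tool; names and rendering are invented.

class CoinManipulative:
    def __init__(self, value, image="realistic_quarter.png"):
        self.value = value        # mathematical value, e.g. 0.25 for a quarter
        self.image = image        # perceptually rich rendering
        self.math_mode = False    # start in the real-world view

    def toggle_world(self):
        """Student clicks the arrow: switch between the two views."""
        self.math_mode = not self.math_mode

    def render(self):
        # In math mode, perceptual details fade to a plain labeled disc;
        # toggling back restores the realistic image and the context.
        if self.math_mode:
            return f"plain disc labeled {self.value}"
        return self.image

coin = CoinManipulative(0.25)
coin.render()        # realistic image while interpreting the problem
coin.toggle_world()
coin.render()        # faded, math-focused view while computing
```

The one design commitment here is that the toggle is student-controlled: kids decide when to enter the world of mathematics and when to return to the context, which is the decision-making the modeling diagram asks of them.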

I have no idea if this would work. Part of me thinks it might be just as complicated as asking students to use two different manipulatives. But for the common contexts for word problems — money for decimals, sharing pizzas for fractions, counting toys or animals for whole numbers — I don’t think the manipulative itself would be hard to make.

Anyone want to make it for me so I can study it? Think about it and get back to me.

Believe it or not, I have even more to say about this. Kids use manipulatives for more than just contextualized problems, after all — they also use them strictly for modeling mathematical concepts without the context issue. What do these issues of perceptual salience suggest about physical and digital manipulatives used for that purpose?

I’ll be writing about that next week. Come back for manipulatives, part 3, and let me know what you think.

References

DeLoache, J. S. (2000). Dual representation and young children’s use of scale models. Child Development, 71(2), 329–338.

McNeil, N. M., Uttal, D. H., Jarvin, L., & Sternberg, R. J. (2009). Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction, 19(2), 171–184.

Uttal, D. H., O’Doherty, K., Newland, R., Hand, L. L., & DeLoache, J. (2009). Dual representation and the linking of concrete and symbolic representations. Child Development Perspectives, 3(3), 156–159.

*Figure adapted from:

University of Chicago School Mathematics Project. (2007). Everyday Mathematics 3: Teacher’s Reference Manual. New York: McGraw-Hill Education.

In with the new, but not out with the old

Hello, everyone. Happy back-to-school!

The Fall semester of my second year as a doc student started Wednesday. This semester, I’ll be taking courses on cognitive development, intellectual history of educational psychology, and research design. I’ll continue to work on the CT4EDU project, helping our awesome cohort of teachers to integrate computational thinking into their mathematics and science instruction. In the spring, I will likely teach a course about integrating technology into mathematics instruction. Already, my brain is becoming awash with new ideas to explore and share on this blog.

One of my goals for this year is to make efforts to draw connections between the work I did last year and what I will do this year. When it comes to educational, intellectual, and even recreational pursuits, I have always been a bit of a dabbler. I like exploring lots of different things and struggle with feeling boxed in when I stay with one idea or activity for too long. I’m trying to find a happy medium this year where I don’t feel the need to perseverate on individual things, but also don’t entirely abandon ideas after playing with them for a while.

So as a step in that direction, I’m going to cheat just a little bit on the blogging this week, and share a piece of a course paper I wrote last year. The course focused on exploring educational technology as a field, and a piece of the final project was to make some connections between ed tech theorists and recent research in our particular domain of interest. The text below is my effort to connect the work of Seymour Papert and James Gee to recent research findings about virtual manipulatives. I think it contains some interesting ideas I could pursue this year and beyond.

It was useful for me to revisit it, and I hope you enjoy it too. Thanks for reading, and stay tuned for new blog posts each week this semester!

Virtual Manipulatives as Transitional Objects with Embedded Knowledge

Physical manipulatives are concrete objects designed to help children learn mathematical concepts (Uttal, Scudder, & DeLoache, 1997). Common examples of mathematics manipulatives include linking cubes, fraction circle pieces, and tangrams. Physical manipulatives have been used in mathematics education since at least the 1960s, but only in the last two decades have virtual versions of manipulatives had an increasing presence in classrooms. Moyer-Packenham and Bolyard (2016) recently defined a virtual manipulative (VM) as, “an interactive, technology-enabled visual representation of a dynamic mathematical object, including all of the programmable features that allow it to be manipulated, that presents opportunities for constructing mathematical knowledge” (p. 13).

Arguments for the use of manipulatives in elementary mathematics build on the arguments of theorists such as Piaget (1972) and Bruner (1960), who said that young children think concretely and thus have difficulty operating on abstract concepts. Researchers have touted manipulatives as tools that allow elementary children to think concretely about abstract mathematical concepts (Sarama & Clements, 2009; Uttal, Scudder, & DeLoache, 1997). Recent research suggests that VMs may have an edge over their physical counterparts when it comes to promoting learning. Moyer-Packenham and Westenskow (2013) conducted a meta-analysis of 66 studies comparing the use of VMs to other instructional treatments. They found an overall moderate positive effect on student achievement for the use of VMs in mathematics instruction — including in comparison to the use of physical manipulatives in similar instruction.

In an effort to make sense of this finding, Moyer-Packenham and Westenskow (2013) conducted a conceptual analysis of the papers included in the meta-analysis. The conceptual analysis focused on identifying “specific researcher-reported affordances of the virtual manipulatives that promoted student learning in mathematics” (p. 39). Two of the affordances they identified were focused constraint, or limitations placed on the actions that can be carried out on VMs, and simultaneous linking, or the ability to see how changes in one representation of a mathematical idea affect a related representation.

The above-mentioned arguments for the use of manipulatives, and for the particular utility of VMs, echo arguments by Papert (1980) for the utility of the LOGO environment for learning. Consider, for example, this excerpt from Mindstorms:

[I have] an interest in intellectual structures that could develop as opposed to those that actually at present do develop in the child, and the design of learning environments that are resonant with them. The Turtle can be used to illustrate both of these interests: first, the identification of a powerful set of mathematical ideas that we do not presume to be represented, at least not in developed form, in children; second, the creation of a transitional object, the Turtle, that can exist in the child’s environment and make contact with the ideas. (Papert, 1980, p. 161)

Papert’s discussion of the Turtle as a transitional object is well-aligned with the aforementioned claim that manipulatives help children connect their concrete knowledge to abstract mathematical concepts. In fact, Papert (1980) explicitly noted that the quotation above reflects his suggested extension of Piaget’s ideas about children’s concrete ways of thinking, just as advocates for manipulatives claim that manipulatives are an answer to Piaget’s research.

Still, Papert (1980) argued that computers, in particular, offer excellent opportunities for children to explore and get to know mathematical ideas:

Working in Turtle microworlds is a model for what it is to get to know an idea the way you get to know a person. Students who work in these environments certainly do discover facts, make propositional generalizations, and learn skills. But the primary learning experience is not one of memorizing facts or of practicing skills. Rather, it is getting to know the Turtle, exploring what a Turtle can and cannot do. (Papert, 1980, p. 136)

Recent studies of students’ use of virtual manipulatives – particularly those examining the focused constraints built into VMs – also contain arguments that technology provides opportunities for students to get to know ideas and explore what tools can and cannot do. Evans and Wilkins (2011), for example, compared students’ conversations when working with physical and virtual tangram pieces. Students could move the physical pieces any way that they wished, but when they worked with the virtual tangrams, flips, turns, and slides had to be handled separately via different controls. These focused constraints on movements in the virtual tangrams led to more explicit experimentation with how the pieces could be rearranged and more mathematical terminology being used in student conversations.

Relatedly, Hansen, Mavrikis, and Geraniou (2016) studied one teacher’s use of a VM that showed a numerical answer when students attempted to add two fractions with like denominators, but did not show a numerical answer when students attempted to add two fractions with unlike denominators. The teacher felt strongly that this constraint helped students grasp the reason and need for common denominators.
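The constraint Hansen et al. describe can be sketched in a few lines. This is my own reconstruction of the behavior, not the actual tool’s code; the function name and the “?” display convention are invented for illustration.

```python
from fractions import Fraction

# Sketch of a focused constraint: the VM displays a numeric sum only when
# the two fractions already share a denominator. (My reconstruction of the
# behavior described by Hansen, Mavrikis, & Geraniou, 2016 -- not their code.)

def add_fractions_vm(a_num, a_den, b_num, b_den):
    """Return what the VM displays for the student's addition attempt."""
    if a_den != b_den:
        # Constraint: withhold the numeric answer for unlike denominators,
        # nudging students toward finding a common denominator first.
        return "?"
    return str(Fraction(a_num + b_num, a_den))

add_fractions_vm(1, 4, 2, 4)   # like denominators: a numeric answer appears
add_fractions_vm(1, 2, 1, 3)   # unlike denominators: no answer is shown
```

The design choice worth noticing is that the tool withholds feedback rather than computing a wrong answer, so the missing result itself communicates the need for common denominators.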

Researchers exploring the role of focused constraint in student learning with VMs do not typically reference Papert. However, connecting Papert’s ideas to the focused constraint affordance attributed to VMs gave me a new way to think about why focused constraint might be beneficial for learning.

Virtual manipulative research focused on the affordance of simultaneous linking echoes arguments made by Gee (2013) in support of games as a context for learning. Consider, for example, Gee’s description of how video games distribute intelligence:

In the game, that experience – the skills and knowledge of professional military expertise – is distributed between the virtual soldiers and the real-world player. The soldiers in the player’s squads have been trained in movement formations; the role of the player is to select the best position for them on the field. The virtual characters (the soldiers) know part of the task (various movement formations), and the player must come to know another part (when and where to engage in such formations). (Gee, 2013, p. 19)

I find this idea of knowledge distribution to be closely related to the idea of simultaneous linking in VMs. Moyer-Packenham and Westenskow (2013) give examples of forms of simultaneous linking as “linking two different dynamic pictorial objects” and “linking dynamic pictorial objects with symbols” (p. 43). Students may, for example, shade in one of four parts on a fraction VM’s area model and see the numerical representation of the fraction change to ¼. When considering the VM as a “virtual character” and a student using the VM as a “player,” this automatic updating of linked representations can be likened to the knowledge and skills offloaded to the VM. The VM takes care of updating the linked representation; the student is responsible for changing the manipulable representation so that the linked representation shows what the student wants it to.
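To make that distribution of knowledge concrete, here is a minimal sketch of simultaneous linking, entirely my own illustration rather than code from any published VM. The student’s actions change the pictorial model; keeping the symbolic representation in sync is the part the tool “knows.”

```python
from fractions import Fraction

# Illustrative sketch of simultaneous linking (my own example, not any
# published VM's code): the student manipulates the area model, and the
# VM automatically updates the linked symbolic representation.

class FractionAreaModel:
    def __init__(self, total_parts):
        self.total_parts = total_parts
        self.shaded = 0

    def shade(self, n=1):
        """Student action: shade n more parts of the area model."""
        self.shaded = min(self.total_parts, self.shaded + n)

    @property
    def symbol(self):
        # Offloaded to the VM: the symbolic representation recomputes
        # automatically whenever the pictorial one changes.
        return Fraction(self.shaded, self.total_parts)

model = FractionAreaModel(total_parts=4)
model.shade()        # student shades one of four parts
model.symbol         # → Fraction(1, 4); the display would read 1/4
```

In Gee’s terms, the `symbol` property is the virtual character’s knowledge; deciding what to shade, and noticing how the symbol responds, is the player’s.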

A recent study of students’ use of VMs with and without simultaneous linking found that students using the linked representations made more generalizations than students not using the linked representations (Anderson-Pence & Moyer-Packenham, 2016). Thus, it seems that through use of a tool that distributes knowledge between the student and a kind of virtual character, students do “come to know” (Gee, 2013, p. 19) particular mathematical ideas and relationships. Gee’s (2013) explanation of the learning potential of games helps to make sense of the learning potential of VMs with simultaneous linking.

In all, through my close reading of Papert (1980) and Gee (2013) this semester, I have come to a more nuanced theoretical understanding of why VMs might have advantages over physical manipulatives in supporting mathematics learning. VMs function as transitional objects that allow students to get to know mathematics ideas (Papert, 1980). They distribute knowledge between the tool and the student, allowing students to come to know particular mathematical ideas through simulation and experimentation (Gee, 2013).

References

Anderson-Pence, K. L., & Moyer-Packenham, P. S. (2016). The influence of different virtual manipulative types on student-led techno-mathematical discourse. Journal of Computers in Mathematics and Science Teaching, 35(1), 5–31.

Bruner, J. S. (1960). The process of education. Cambridge, MA: Harvard University Press.

Evans, M. A., & Wilkins, J. L. M. (2011). Social interactions and instructional artifacts: Emergent socio-technical affordances and constraints for children’s geometric thinking. Journal of Educational Computing Research, 44(2), 141–171.

Gee, J. (2013). Good video games + good learning (2nd ed.). New York: Peter Lang Publishing.

Hansen, A., Mavrikis, M., & Geraniou, E. (2016). Supporting teachers’ technological pedagogical content knowledge of fractions through co-designing a virtual manipulative. Journal of Mathematics Teacher Education, 19(2–3), 205–226.

Moyer-Packenham, P. S., & Bolyard, J. J. (2016). Revisiting the definition of a virtual manipulative. In P. S. Moyer-Packenham (Ed.), International Perspectives on Teaching and Learning Mathematics with Virtual Manipulatives (pp. 3–23). Switzerland: Springer International Publishing.

Moyer-Packenham, P. S., & Westenskow, A. (2013). Effects of virtual manipulatives on student achievement and mathematics learning. International Journal of Virtual and Personal Learning Environments, 4(3), 35–50.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books, Inc.

Piaget, J. (1972). Intellectual evolution from adolescence to adulthood. Human Development, 15, 1–12.

Sarama, J., & Clements, D. H. (2009). “Concrete” computer manipulatives in mathematics education. Child Development Perspectives, 3(3), 145–150.

Uttal, D. H., Scudder, K. V., & DeLoache, J. S. (1997). Manipulatives as symbols: A new perspective on the use of concrete objects to teach mathematics. Journal of Applied Developmental Psychology, 18(1), 37–54.

New Paper: Decomposition LT

Hello, everyone. I hope you all had a great summer!

I’m using my first blog post of the year for a little self-promotion: I have a new paper out!

In this year’s International Computing Education Research (ICER) conference proceedings, Carla Strickland, Diana Franklin, Andrew Binkowski, and I share our recently developed learning trajectory intended to guide instruction on the computational thinking (CT) concept of decomposition for students in K-8 (Rich, Binkowski, Strickland, & Franklin, 2018).

That was a lot of ideas in one sentence. Here’s a brief introduction to several of the ideas I just mentioned.

First, a learning trajectory (LT) is a possible pathway from a student’s existing knowledge to a desired learning goal. Martin Simon (1995) first used the term while describing the ways teachers must negotiate between knowledge of their students’ thinking and knowledge of the mathematics they are intending to teach. One purpose of an LT is to manage the tension between the need for advance instructional planning and the need for spontaneous, responsive instructional decision-making in the classroom (Simon, 1995). LTs have since become a popular theoretical construct among curriculum developers and professional development providers seeking to base their materials on research on student thinking (Clements & Sarama, 2004; Sztajn, Confrey, Wilson, & Edgington, 2012). We — meaning a group of colleagues at UChicago STEM Education — were interested in developing instructional materials for K-8 students to learn computational thinking concepts, and so we first set out to develop some LTs for CT through the LTEC project.

Second, faithful readers of my blog will by now certainly know that computational thinking is loosely defined as the thinking processes used by computer scientists (Wing, 2006). CT is quickly becoming a new kind of literacy all students will need to be productive and engaged citizens in our technology-oriented world.

Lastly, decomposition, or more specifically, problem decomposition, is a process of breaking down problems, objects, or phenomena into smaller, more manageable parts. We think of it as a computational thinking practice. Just as modeling, pattern-finding, and generalization, for example, are mathematical practices, decomposition is a computational thinking practice.

So, to return to the paper: It shares our work in developing a decomposition LT intended to guide CT instruction in K-8. Check out the full paper to read about the processes of synthesis and theoretical frameworks we used to guide the development of the LT. Our aim was to make the best possible use of existing research evidence about students’ learning of decomposition to form a starting point for curriculum development.

Spoiler alert: There is a lot more research out there on students’ use and creation of procedures and functions than there is research about students’ overall processes of problem solving through decomposition. Our LT-building efforts made a start at connecting use of procedures to broader decomposition ideas. For example, the LT suggests that a productive intermediate learning goal might be to fluently and flexibly connect existing functions to decomposed parts of complex problems. It may be that such ideas are taught in CS courses, but according to our review, they have seldom been mentioned or studied in the K-12 CS education literature. Future research may well reveal other core but tacit ideas that would be productive to explicitly address in decomposition instruction.

Our LTEC research team has a lot of experience in the K-5 space, and so many of us are particularly interested in how decomposition ideas could be addressed with young students before they begin programming. Two particular ideas seem worthy of mention here.

First, the LTEC team is curious about the relationship between early work with decomposition and early work with another CT idea for which we already developed an LT: sequence. We previously used research evidence about students’ abilities to parse stories into steps to support our sequence LT. Through the development of the decomposition LT, we also came to see this as a kind of decomposition. We are not bothered by the duality in principle, but the double-use of this idea made us wonder whether the difference between sequencing and decomposition will feel meaningful to young students — and what the implications of the answer to that question might be for K-5 CT curriculum development.

Second, I have been thinking a lot about how decomposition in CS/CT relates to decomposition in mathematics. In both the LTEC project and my work at MSU with the CT4EDU project, one of the goals is to develop integrated mathematics and CT instruction for students in K-5. A big part of this work is to identify key ways that ideas are used similarly in the two disciplines and figure out how to leverage the similarities in instruction. Decomposition seemed at first quite similar in CT and math, but close scrutiny has led me to examine an interesting divergence.

In mathematics, the thing being decomposed is usually some kind of mathematical object — a number or shape, for example — and not the problem itself. Students decompose 25 into 20 + 5, or decompose a rectilinear figure into rectangles. This decomposition is often done in service of solving a complex problem, like multiplying 25 by another number or finding the area of a rectilinear figure. However, the connection between the decomposition of the mathematical object and the decomposition of the problem is not always made clear. In the Common Core State Standards for Mathematics (CCSS-M), standard 3.MD.7d makes the connection to the problem clear: “Find areas of rectilinear figures by decomposing them into non-overlapping rectangles and adding the areas of the non-overlapping parts.” In CCSS-M 3.NF.3b, however, students decompose fractions with no purpose stated: “Decompose a fraction into a sum of fractions with the same denominator in more than one way.”
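The two strategies above can be sketched in a few lines of code, which also shows why CT folks see both as decomposition: each breaks a problem into subproblems whose answers are recombined. The function names and the specific figure are my own illustrative choices, not anything from the standards.

```python
def multiply_by_decomposition(a, b):
    """Multiply a two-digit number by decomposing it into tens and ones.

    Mirrors the classroom strategy: 25 x 7 = (20 x 7) + (5 x 7).
    A sketch for illustration only.
    """
    tens, ones = (a // 10) * 10, a % 10
    return tens * b + ones * b

def rectilinear_area(rectangles):
    """Find the area of a rectilinear figure by decomposing it into
    non-overlapping rectangles (given as (width, height) pairs) and
    adding the areas of the parts, in the spirit of CCSS-M 3.MD.7d."""
    return sum(w * h for w, h in rectangles)

print(multiply_by_decomposition(25, 7))    # 175, via 140 + 35
print(rectilinear_area([(3, 4), (2, 5)]))  # 22, via 12 + 10
```

In both cases the decomposition of the object (the number, the figure) only pays off because it induces a decomposition of the problem (the multiplication, the area computation) — which is exactly the connection the prose above argues is sometimes left implicit.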

So, decomposition is not always discussed in CT-friendly terms in mathematics. Oddly enough, I think this divergence makes decomposition one of the strongest candidates for integration of CT ideas into elementary mathematics. In this case, adopting the CT practice of focusing the decomposition on the problem has the potential to be a subtle, achievable instructional change for teachers that:

  • Makes certain mathematical tasks more meaningful to kids. (You practice decomposing fractions because you can use those decompositions to help you add later!)
  • Gives kids an introduction to a basic CT idea in a context that fits easily into core instruction in elementary school.

Cool, right?

This and other math-CT connections will be the focus of my contribution to the ICER doctoral consortium. You can check out my abstract for that here.

Thanks for reading, and hope to see some of you in Helsinki!

References

Clements, D. H., & Sarama, J. (2004). Learning trajectories in mathematics education. Mathematical Thinking and Learning, 6(2), 81–89.

Rich, K. M., Binkowski, T. A., Strickland, C., & Franklin, D. (2018). Decomposition: A K-8 computational thinking learning trajectory. In Proceedings of the 2018 ACM Conference on International Computing Education Research (pp. 124–132). ACM.

Simon, M. A. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26(2), 114–145.

Sztajn, P., Confrey, J., Wilson, P. H., & Edgington, C. (2012). Learning trajectory based instruction: Toward a theory of teaching. Educational Researcher, 41(5), 147–156.

Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.

The end of year 1

This week marks the end of my first year as a PhD student! First academic year, at least — summer courses and a continued half-time assistantship await me in just a week’s time. But, still, three years from about right now, I hope to be graduating. Time has a weird way of dragging and flying by at the same time.

As at the end of last semester, I thought I’d share a few more things I learned.

  1. The MSU Dairy Store has really, really good ice cream. I know that sounds like a weird thing to lead with, but I think it will be key to my survival over the next three years.
  2. Courses are investments. Every course you take — especially as a graduate student — will consume a significant amount of your time and attention during a semester. And unless you’re on the 10-year-PhD plan (don’t be that student), you can only take so many courses during your graduate career. Because graduate school is a place for specialization, there are a huge number of courses available that delve into a huge number of niches. You can’t take them all. Courses need to be chosen carefully and thoughtfully, and with advice from lots of people.
  3. Course instructors matter. One of my areas of research interest is understanding the role of the teacher in tech-infused teaching and learning. I know a thing or two about how important teachers are to the learning process. Given that, you would think that I would have realized much sooner how much impact an instructor has on a course. But it wasn’t until this year that I realized the importance of considering the instructor when making course choices. Advice about graduate school often includes talk about finding a good intellectual and personality match when choosing an advisor. Relationships with course instructors are much shorter in duration, but it’s still worth considering that fit when you have options about when and with whom to take a course.
  4. Conducting good interviews takes a special kind of listening. I did 18 interviews this year — something I’m rather proud of, as the idea of conducting interviews really freaked my socially-anxious self out. They all had their rough patches and awkward moments, but I did feel like I got better at it over time. Before this year, I imagined that the key to a good interview is asking the right question. That is true in a sense, but what makes it really challenging is that the “right question” is different for every person. The key isn’t having really well-thought-out questions beforehand (although, of course, it’s important to have that, too). The key is to listen closely during interviews and use follow-up questions to help participants elaborate on the things that are most interesting or confusing. There’s no way to know all the “right questions” ahead of time.
  5. The best research is conducted by people who care about what they are studying. This seems like a cliche, I know. But this semester I’ve come to realize that I underestimated how much that matters. Skilled and ethical researchers can, of course, complete a valid and sound study on anything. I’m not denying that. But without real interest in the research — without understanding exactly what problem the research is trying to solve or what issue it is addressing — researchers won’t dig as deep as they would otherwise. It’s genuine interest in a topic that serves as the impetus to really push on a problem, question and press on the findings, and use results to make meaningful decisions about what to do next. That is why my biggest and most important goal in my time here is to complete a practicum and dissertation project that really matter to me. It seems like it should be simple enough to do so, but I’m finding it more challenging than I anticipated. It’s so easy to grab at the low-hanging fruit or to choose the things that other people suggest. And sometimes those things can be worthwhile. But sometimes they end up just filling my time instead of really piquing my interest. Figuring out what matters to me is really hard and tiring work — but so, so important.

Thanks so much to all of you who have read any or all of my posts this year. This blog has been a really important tool for me to organize my thinking and try out new ideas. I’m taking next week off, but I do plan to keep writing this summer, even if it is at a bit of a slower cadence than every week.