Vocabulary, Part 1: Why precision might matter in written resources

I’ve been thinking a lot lately about the role of vocabulary in learning, teaching, and research.

Until recently, I didn’t have a very well-developed opinion on whether or not we need to worry about kids’ use of mathematical vocabulary, at least in the elementary grades. There were a few ill-articulated assumptions underlying my lesson writing style, though. Generally, I believed that:

  • Helping kids to learn definitions should never be the main point of a lesson. An idea is bigger than its definition.
  • Similarly, perfect use of the vocabulary shouldn’t be the main learning goal of a lesson. I don’t think imperfect expression should invalidate an idea, especially if it is coming from a young child.
  • On the other hand, if an idea is key to a lesson, there’s no reason not to introduce a term for it. I don’t believe in withholding words from kids because they are long or “difficult”. Words can be powerful tools.
  • Even though precise use of vocabulary isn’t an appropriate expectation for kids, I do think teachers should try to be precise in their use of the terms. And that means that curriculum materials should be precise in their mathematical language, too.

I was kind of a stickler about that last point in my curriculum writing days. I think it sometimes annoyed my coworkers, who believed that no teacher was going to notice or change her practice if we wrote “The sides are equal in length” instead of “The sides are equal” or said “The angle measures 42°” instead of “The angle is 42°.”

I had to admit at the time that they were probably right about that. Teachers have limited time to read curriculum materials and plan, and I’m doubtful they spend that time paying close attention to subtle differences in language. Still, when I caught language that was imprecise, I was stubborn about changing it. I justified this mostly by arguing that if any mathematicians reviewed our materials, precise language would give them one less thing to pick at.

I read an article this semester, though, that made me wonder whether there was a bigger reason for precise language than that. Gentner (2010) published a lengthy argument for the reciprocal relationship between language and analogical reasoning. The first half of the paper summarized research suggesting that making comparisons facilitates learning. The second half was a more specific argument about the role of language in learning and cognitive development. Gentner argued that:

  • Having common labels invites comparison of examples and abstraction of shared elements, and
  • Naming promotes relational encoding that helps us connect ideas that are learned in different contexts.

To illustrate her argument, Gentner cited research about learning to apply natural number labels to quantities. Studies of cultures whose languages do not contain specific number labels showed that people of those cultures were able to estimate magnitudes, but were not very accurate at assigning specific number names to quantities, especially as the quantities got larger (Gordon, 2004 and Frank et al., 2008, as cited in Gentner, 2010). Other studies showed that children who speak languages with number names (like English) first learn the count sequence by rote, but by comparing sets of objects (e.g. two trains and two dogs) that have a common number label attached, they gradually bind the number names to the quantities (Gentner, 2010).

This explanation makes perfect sense to me. This is why words are powerful — they are a means of connecting examples at their most fundamental, definitional level. They prompt looking for sameness in contexts where things feel very different.

This got me wondering whether my stubbornness was better justified than I originally thought. Abstract mathematical terms like equal (and its symbol) are known to be poorly understood (e.g., Knuth, Stephens, McNeil, & Alibali, 2006; Matthews, Rittle-Johnson, McEldoon, & Taylor, 2012; McNeil et al., 2006). At least one study has concluded that the contexts in which the equal sign is used impact students’ understanding of its meaning (McNeil et al., 2006). I would not be surprised if a similar study examining how the word equal is used in sentences showed that these uses impact understanding of the word. According to Gentner (2010), we both consciously and unconsciously use words as labels, which invite comparisons, which invite conclusions about meanings. It seems reasonable to suggest that if we use the word equal with counts and measures, but use the term congruent with geometric figures, teachers could abstract more sophisticated and precise meanings of those terms than if we use the term equal sides.

I don’t know for sure, of course. But I don’t think precision in language, especially in resources that teachers can continually reference, can hurt the educative power of those resources.


Gentner, D. (2010). Bootstrapping the mind: Analogical processes and symbol systems. Cognitive Science, 34, 752–775.

Knuth, E. J., Stephens, A. C., McNeil, N. M., & Alibali, M. W. (2006). Does understanding the equal sign matter? Evidence from solving equations. Journal for Research in Mathematics Education, 37(4), 297-312.

Matthews, P., Rittle-Johnson, B., McEldoon, K., & Taylor, R. (2012). Measure for measure: What combining diverse measures reveals about children’s understanding of the equal sign as an indicator of mathematical equality. Journal for Research in Mathematics Education, 43(3), 316-350.

McNeil, N. M., Grandau, L., Knuth, E. J., Alibali, M. W., Stephens, A. C., Hattikudur, S., & Krill, D. E. (2006). Middle-school students’ understanding of the equal sign: The books they read can’t help. Cognition and Instruction, 24(3), 367-385.


Integration, Part 3: Authenticity

The end of the semester is approaching, and I’m a bit crunched for time this week. But I do have one more brief thought about integration to share.

A few years before I left my full-time work as a curriculum developer, a freelance journalist interviewed me via email about mathematics word problems and my theories on why students often say they hate them. I told her that my years of curriculum development work really opened my eyes to just how inauthentic word problems can be. Is it possible to write word problems that target a particular mathematics concept and also are meaningful to children? Yes, definitely. Is it possible to write 50+ such problems targeting that same mathematics concept? I can tell you from experience that it gets really tough, really fast.

And that’s before applying the list of constraints that comes along with the task in large-scale curriculum development. During our latest round of development, we were not allowed to reference any junk food in our problems. We also had to stick to round objects when we talked about fractions, because our chosen manipulatives for fractions were circles. How many items can you think of that aren’t considered junk food, but are round and make sense to divide into parts? It starts out easy: oranges, tortillas, cucumber slices. But then come the descriptors that help make semi-junky food sound ok: veggie pizzas, whole-wheat pita. By the tenth problem or so, I promise you’ll be grasping at straws. I’m pretty sure we wrote some fraction problems about cans of cat food.

My point is simply this: Starting with some form of disciplinary content and back-tracking to a reasonably authentic task is difficult after the first few times. And when tasks start to lose authenticity, kids notice. The activities they complete start to feel like busy-work (because they are).

The issue I’ve been thinking about this week is whether the task of contextualizing content becomes easier or harder when you’re thinking about two disciplines, as in integrated curricula. On one hand, it seems like finding a task authentic to both disciplines might be more difficult. But on the other hand, I think part of the difficulty of generating authentic tasks is that authentic tasks usually require multiple kinds of component skills. Finding one that gives kids exposure to or practice with one particular thing, but does not require any other skills they don’t yet have, is a challenge. So I think it is possible that considering two disciplines might actually open up some space to move in task development.

Take the number grid activity I discussed last week. I’ve written activities before that ask kids to map out paths on a number grid. And I’ve asked them to limit their movements to moving in rows or columns — adding or subtracting 10s or 1s. But I never had a great reason for that restriction, other than a desire to focus on place value, so often the task felt inauthentic. But when I added the element of programming a robot, suddenly the restriction in movements had new meaning: Programming languages are made up of a limited set of commands. So a very similar activity became more authentic — along one dimension, at least — through integration.

I’m hoping to find more of these happy compatibilities as I continue to think about integrated curricula.

Integration, Part 2: Translation

Here we are at the end of the week, and that means that it’s time for Integration, Part 2!

Last week I wrote about shifting my views on the development of integrated curriculum. Rather than framing my efforts as trying to understand and consistently achieve a fully integrated curriculum, I started thinking about how a long view of curriculum development might enable a different model: helping kids to walk across Kiray’s (2012) balance, rather than stay in the middle. This view doesn’t eliminate the need to find fully integrated activities, but it shifts their role. Rather than being the activity form that supports all kinds of conceptual development within an integrated curriculum, the fully integrated activities serve as a meeting point, or a gateway between the two disciplines. At other points in the curriculum, activities might make use of convenient connections between disciplines. In those key, fully integrated activities, though, I think kids probably need to look both disciplines straight in the face, note the similarities, and also wrestle with and make sense of some of the differences.

So. What might that look like? In my case specifically, the question is, what might that look like for elementary school kids working in mathematics and computer science?

I’ve spent a good bit of time thinking about this business of synergies and differences between mathematics and computer science and what they might mean for integrated curriculum. (I’m hopeful — please oh please oh please — that soon I’ll be able to point you to a published paper or two about this.) It’s a complex issue full of nuance and I always have a hard time coming up with a clear description or list of what’s the same and what’s different. For a while I thought I had to have a handle on that before I could think about writing an activity that asks kids to wrestle with the ideas.

But then I remembered a key idea that underlies my whole educational philosophy:

Don’t do the thinking for the kids. Let the kids do the thinking.

What does that mean for a fully integrated math + CS activity? First of all, it means I don’t have to have some definitive list of synergies and differences. Kids will notice things, and talk about them, and they may or may not map directly onto my ideas. And that’s ok, because there isn’t a perfect list.

It also means that I don’t have to generate a problem context that maximizes synergies and reduces differences as much as possible. I saw that as a goal at first, worried that too many differences might either distract from the big ideas the lesson is meant to address or lead to confusion in one discipline or the other later on.

But I no longer think that’s true. The point isn’t to minimize differences, but rather to have kids think about them.

Based on this thinking, here’s my new idea for fully integrated activities: Instead of figuring out the best way to address both disciplines at once, we ask kids to translate the language of one discipline into another.

For example, take a common activity that happens throughout elementary mathematics: Counting, adding, and subtracting on a number grid. I’ve written plenty of activities in my career that have kids think about making “jumps” on a number grid. To add 23 to 37, for example, they might jump down two rows to add 2 tens, and jump to the right three spaces to add 3 ones. They land on 60, the sum.


They could think about this as a missing addend problem, too. How can I get from 37 to 60? Add 2 tens, then add 3 ones.
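Both moves can be captured in a tiny sketch. This is purely illustrative (the function names are mine, not from any curriculum, and I’m ignoring jumps that wrap past the end of a row): jumping down a row adds a ten, jumping right one space adds a one, and the missing-addend question just reverses the process.

```python
# A purely illustrative sketch of the number-grid jump strategy.
# Jumping down a row adds 10; jumping right one space adds 1.

def jump_add(start, tens, ones):
    """Add to `start` by jumping down `tens` rows and right `ones` spaces."""
    return start + 10 * tens + ones

def missing_addend(start, target):
    """How many tens-jumps and ones-jumps get from `start` to `target`?"""
    diff = target - start
    return diff // 10, diff % 10

print(jump_add(37, 2, 3))      # 60
print(missing_addend(37, 60))  # (2, 3)
```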

This activity, particularly the visuals associated with it, reminds me a lot of the kinds of activities kids do in programming curricula aimed at elementary school. Kids direct on-screen characters and physical robots through pathways by programming them in terms of distance and direction of travel. When I first started thinking about this, it seemed like a superficial connection based on nothing but a common grid-like structure. But more recently I’ve been wondering if the similarities go deeper. In both cases, kids are giving directions to move from one point to another. The difference is in the way of communicating that information.

Is mapping the two kinds of language about directions onto each other something kids could do? I’m not sure, but I think at least upper elementary school students could. Not only that, but I think the kinds of thinking work it would take to translate directions like this:

Add 2 tens

Add 3 ones

… into directions like this:

Face toward increasing 10s

Move forward 2 units

Turn left

Move forward 3 units

… could be beneficial to kids. Assuming they start with a printed number grid and illustrate the mathematical directions with their fingers, they’d need to engage in some perspective-taking to change those directions into spatial ones: orienting themselves to “think like the robot,” much as Papert (1980) advocated using the Logo turtle as an object to think with.
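As a thought experiment, the translation itself can be sketched in a few lines of code. Everything here is hypothetical: the command strings and the starting assumptions (the mover begins facing increasing 10s, and a single left turn switches it to facing increasing 1s) are mine, not from any published curriculum or robot API. The point is just that the mapping from place-value jumps to movement commands is systematic.

```python
# A hypothetical sketch of translating place-value jumps into
# robot-style movement commands. Command strings and orientation
# assumptions are illustrative only.

def translate(jumps, facing="tens"):
    """Turn place-value jumps like ("tens", 2) into movement commands."""
    commands = ["Face toward increasing 10s" if facing == "tens"
                else "Face toward increasing 1s"]
    for unit, count in jumps:
        if unit != facing:
            commands.append("Turn left")  # switch between rows and columns
            facing = unit
        commands.append(f"Move forward {count} units")
    return commands

# Adding 23 to 37: down 2 rows (tens), then right 3 spaces (ones).
for command in translate([("tens", 2), ("ones", 3)]):
    print(command)
```

Run on the 37 + 23 example, this produces exactly the four spatial directions listed above, which suggests the translation task has a consistent structure kids could discover for themselves.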

I think kids might come out of that translation experience with a different way of thinking about the structure of the number grid, and also a foundation for thinking about algorithms that would support other programming activities.

Maybe the “full integration” balance point isn’t about pointing out synergies and differences between disciplines to kids. Maybe it’s about allowing kids to translate their thinking from one to the other.


Kiray, S. A. (2012). A new model for the integration of science and mathematics: The balance model. Energy Education Science and Technology Part B: Social and Educational Studies, 4, 1181-1196.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.


Integration, Part 1: Across the Balance

I’ve spent a significant amount of time over the last four or five years thinking about integration of mathematics and computer science. My efforts have been driven, in particular, by a general dissatisfaction with a lot of the available “integrated” materials for elementary school. Without calling out any programs by name (because criticism of any program is not at all my point), I’ll say that I see an awful lot of what I’d call “slapping a math standard on a CS activity.” The integration was superficial at best — in many cases, kids could (and probably would) complete the programming activities without really engaging with the mathematics. And that makes sense, because the math is not what the activities were designed to teach, whatever their claims.

I just felt like there had to be a better way.

I looked at several models describing different ways to think about integrating mathematics and science to help push my thinking about what that better way might be.

While working on a paper with some colleagues, I came across something called the balance model for integration (Kiray, 2012). The article was about integration of math and science, so I’ve swapped CS in for science, but otherwise the diagram of the model looked something like this:

[Figure: Kiray’s (2012) balance model, adapted for mathematics and CS]

Kiray (2012) doesn’t use exactly those words (“Math with opportunistic CS” is “Math-centred science-assisted integration” and “Math with coherent CS” is “Math-intensive science-connected integration”), but I think my version captures the differences he is trying to get at. Integration is opportunistic when activities take advantage of any available connection, without worrying about whether the content that is connected is a part of the expected curriculum or related to anything else students have explored in the second subject. This is what a lot of the math connections in elementary CS materials look like to me. Yes, kids could count the number of blocks in a program that moves a robot around a maze, and technically that’s a math connection. But it’s opportunistic, and therefore likely not all that meaningful in terms of developing math learning.

And it works the other way, too. Full-time CS ed folks (I’m only a part-timer, really) feel the same way when I say kids could think about drawing a mathematical model in terms of CS abstraction. It took me a while to understand the skepticism coming from some corners of the CS ed world when I would talk about such things, but now I think I get it. It’s as hard for them to see how abstracting numbers from a word problem is connected to meaningful CS learning as it is for me to see how counting blocks is connected to meaningful math learning.

The types of integration that are closer to the middle, math with coherent CS and CS with coherent math, still have one discipline or the other in the driver’s seat. However, the connections to the other discipline are made with careful attention to the development of both disciplines: how each discipline builds on itself, and how the two might be made to build on each other. Still, whenever a choice has to be made for better development and opportunities for learning, the driving discipline wins. So, one discipline progresses at a typical, grade-level-appropriate rate, and the other lags behind what might be possible otherwise.

Finding a way to plan for coherent progressions through CS/CT content as it gets integrated into mathematics was a big part of the goal of the LTEC project. By developing learning trajectories for CT concepts to use as a guide while working on a math+CT curriculum, we hoped to help make the learning in both disciplines meaningful.

It seems to be working (results coming soon!). Still, I think one of the biggest lessons I’ve learned as I’ve watched the work progress (I want to be clear that my colleagues are doing most of the work) is that the delay in one discipline is unsatisfying, even for the people (like me) who claim to care more about one discipline than the other. My inner dialogues about these issues tend to go like this:

Well, we could introduce the idea of looping here. The connection seems really solid.

Yes, but the kids haven’t really been examining repetition anywhere else, so it’s going to take some build up to that.

But… the fit is so good! We can’t give up that opportunity! Maybe we do just a straight-up CT activity in here to get them ready?

We said we weren’t going to do that.

I know. Blargh.

We either choose to give up the good connection opportunity and let the CS lag behind, or put in a CS-only or CS-with-opportunistic-math activity. Then the curriculum feels like it’s got “extra” stuff in it, and it feels kind of jerky. So even though math with coherent CS is better than math with opportunistic CS, it still just feels like there’s got to be a better way.

That leaves us with the balance point: total integration. Here’s Kiray’s description of that:

In total integration, the target is to devote an equal share of the integrated curriculum to science and mathematics. Neither of the courses is regarded as the core of the curriculum. This course can be called either science-mathematics or mathematics-science. Separate science or mathematics outcomes do not exist. An independent person observing the course cannot recognise the course as a purely mathematical or a purely scientific course. One of the aims of the course is to help the students to acquire all of the outcomes of the science course as well as the mathematics course. (p. 1186)

Sounds great, right? I’ve conceptualized much of my work over the last five years as a pursuit of what total integration looks like. Unsurprisingly, I have not figured it out. I can think of one or two example activities, maybe, but not a whole curriculum. Plus, no school is going to be happy with no recognizable math or CS outcomes. Total integration was supposed to be the solution I was searching for, yet sometimes it felt unachievable or inappropriate for real schools, teachers, and kids.

A week or two ago, though, I was doing revisions on another paper. We got some thoughtful comments that prompted me to step back and try to articulate some of the assumptions underlying the analysis in that paper. One of them that I included in my response to the editor was this:

We believe that curriculum development has to take a longer view than one activity at a time.

It’s true. This belief has underpinned all of my mathematics curriculum development work for the last 10 years. (My colleague — shout out to Carla — left a comment on that sentence saying, “Actual snaps for this!”) Yet I had not really said it to myself for a while. And when I did, I realized I’d been thinking about this balance framework in the wrong way.

What if it isn’t about picking a kind of integration and staying there?

What if, instead, development of integrated curricula is about moving across the balance?

We start with math. (I mean, it’s already happening anyway!) We take some opportunities to dabble with CS in math (which mostly involve use of CT practices, I think). We find places to build reasonable, coherent activity sequences, still letting the math drive. We work on identifying those few, golden opportunities for total integration.

And here’s the part I thought I’d never say:

Then we keep developing the CS, and start being ok with letting the strong connections to math fade and maybe even disappear.

Integration is hard to achieve even one activity at a time, and it’s really, really hard to maintain. Maybe a continual push for fuller integration isn’t really what we want. Maybe we just want to drive toward an example or two of really rich, total integration activities and make sure they’re functioning as good transitions into learning CS without the mathematics.

How do we find and develop those golden, fully integrated, strong transitionary activities?

I’ll talk about one idea I have — next week.


Kiray, S. A. (2012). A new model for the integration of science and mathematics: The balance model. Energy Education Science and Technology Part B: Social and Educational Studies, 4, 1181-1196.

Success and Failure, Part 3: How to Press the Submit Button

Welcome back for Part 3 of my series on success and failure.

To recap: I know from experience how discouraging it can be to see rejections and failures pile up. But it’s important to keep in mind that just about everyone in academia has a Failure CV as long as yours. It’s also important to keep your CV+ in mind, and know you are more than a list of published articles and funded grants.

I do not claim to have figured out how to crack the publication code yet, so I can’t provide any advice on increasing your chances of getting articles accepted. Instead, in this last post, I’m going to share some thoughts about a very necessary (but not sufficient) step in getting there: getting manuscripts out the door and submitted.

I have come to the realization over the past six months that I submit a lot more manuscripts than your average graduate student, both to conferences and to journals. This is not a strategy I adopted to increase my chances. Rather, I think it’s a reflection of a particular set of opportunities I’ve had as well as some of my personality traits. I’ll talk about those a bit below. But first I want to make clear that I’m not claiming that acceptances are random or quality doesn’t matter. The value of a higher number of submissions, in my view, is twofold: (1) more submissions mean more writing, which means more practice; and (2) more submissions also mean more feedback, both on your papers and on the way your work suits or doesn’t suit different venues. A lot of my work sits at the intersection of disciplines (math & CS; ed psych & learning sciences; student thinking & teacher thinking, etc.), and if nothing else, the past year of submissions and feedback has helped me learn about where different kinds of pieces might receive the best reception.

So, anyway… I think getting stuff out there is good for a lot of reasons as long as quantity is not pushed to the detriment of quality. Here are three things that I think contribute to my relatively high rate of submissions.

First, I’m involved in three different lines of work with three different sets of collaborators. There is my assistantship work, the work I’m continuing from my previous job, and a line of inquiry I’m pursuing with a favorite collaborator that is outside of either of those projects. This fact alone explains a lot of why I’m able to contribute to so many papers: I have an abundance of opportunity. I know I am lucky this way, and I’m not suggesting everyone should seek to be involved in three projects as if three is some magic number. But I do think having at least a couple of parallel lines of work can be helpful. More data and more collaborators always lead to more ideas, and it’s nice to have a writing project to work on when a project is at a stage when writing isn’t possible (e.g., during long periods of data collection).

Second, I write a lot and in a lot of different contexts. I write papers, yes. But I also write this blog every week. I write instructional activities for kids and educative materials for teachers as part of my assistantship work. I take a lot of notes as I read academic articles, trying to summarize important parts in my citation manager so I can locate and remember the details later. All of this serves as practice that I think contributes to my ability to produce a workable manuscript in limited time when the need and opportunity arises. As I noted above, more manuscripts mean more practice — but my point here is that more writing of any kind is also more practice, and that matters.

Lastly, and probably most importantly, I judge my own work in absolute, rather than relative, terms. I do not consider myself a competitive person. My upbringing almost certainly contributes to this — in my family, we played games, but we never kept score. My default mindset is therefore to think about whether something is good — not whether it’s good enough for x, better than y, or still not as good as z. I judge my own work by whether or not I feel I’ve met my goal of writing a good paper worthy of someone else’s time to read. I don’t tend to think about whether it’s good enough for a particular journal or as significant as something I read last week. Admittedly, I think this focus on my own standards for worth and clarity does me harm sometimes. More attention to cultural and methodological norms in my discipline, for example, is something I should work on. But my modes of judgement also help me let go of papers and move on to something else while I wait for feedback.

Different styles of research, collaboration, and writing will certainly call for different ways of operating, and so this list won’t work for everyone. But I hope it provides some food for thought.

As always, thanks for reading.

Success and Failure, Part 2: My CV+

Hi, everyone.

I hope last week’s Failure CV was helpful to those who perused it. My hope was that it would show that failure is a common and normal part of any academic’s life. I thought this might help other people feel better about their overall records.

I am aware, though, that understanding the role of failure in general doesn’t make any individual rejection feel any better. The number of failures I have experienced has not inured me to disappointment. Shortly after I posted last week, I recalled two other rejected grant proposals, each of which I had poured a lot of work into. In the intervening week, I received two desk rejections of papers. None of that made me feel good, and none of it just rolled off my back. Every paper and proposal represents hours of my time and gallons of my blood, sweat, and tears. It’s got pieces of me in it. It hurts when other people judge it as unworthy of funding or publication.

Still, over the years, I think I have developed some strategies for managing disappointment and frustration. I am sharing three of those strategies today.

First, when reading reviews, I suggest making a conscious effort to separate commentary on the ideas from any implicit value judgements. This is not easy. Most of the time, commentary on the content feels like thinly veiled value judgements, and it’s tempting to draw inferences. For example, I recently got a review on an empirical research paper that suggested it would be more suitable for a practitioner-facing journal. The paper was an analysis of a huge data set from a super high-level perspective and full of talk of statistical significance and effect sizes. The suggestion to publish it for a teacher audience did not make sense to me, and so I immediately went to a value-judgement place, and started thinking that the underlying meaning of the reviewer’s comment was that the paper didn’t present real or useful research.

I don’t know if that was the reviewer’s intended implication or not, but my point is this: Regardless of whether the value judgement was really there, considering it does not help move the paper forward. Considering whether the paper is actually appropriate for a practitioner journal is a tractable step: I can take that advice or leave it. (In this case, my co-author and I left it.) Considering whether or not the paper was real or useful research is neither useful nor tractable. If I really believed the paper was not worth sharing, I would not have spent so much time analyzing the data and writing it up. I also have a coauthor I respect, who felt the same way about the value of the work. Those two opinions matter; the inferred, unstated (and therefore potentially nonexistent) opinion of this reviewer does not.

It’s hard to stop yourself from translating content commentary into value judgements — or to ignore the value judgements that are explicitly written into reviews sometimes. But it’s a vital skill to develop. Only by focusing on the concrete things a review says about the ideas in a paper can you use that review to advance the paper. Read the reviews and feel your feelings, but don’t make any decisions on what to do next until you’re able to separate commentary from value judgements. Early in my career, getting to that place took me a long time. Now, I get there faster. I think you will, too, with practice.

Second, use the interminable time you spend waiting for reviews to come back to advance additional lines of work. I firmly believe that rejections are easier to take when they relate to one particular line of inquiry of which you have several. We’re pressured to specialize, especially in graduate school, but this does not necessarily mean we must pin all of our hopes to a few papers and opportunities. A rejection of a paper — even a paper I particularly liked — has always been easier when I have other papers that I’m excited about in development or under review. Spreading out your interests a bit can help harsh criticisms of your papers feel directed at one piece of your work — not at you as a researcher, student, or professional.

Third, and probably most importantly, keep a list of accomplishments you are proud of that go beyond what’s listed on your CV. For whatever reason, we hold up funding and publications as the accomplishments most worthy of mention on a CV. Even when we have diverse lines of inquiry represented there (as mentioned in my second point above), they offer only a very narrow window into what we accomplish in academic work, especially in education. So, even though these things won’t get listed on your official documents, keep a list for yourself and read it over from time to time to remind yourself that you are more than your papers.

I’m sharing my list of accomplishments that you won’t see on any official version of my CV below. I call it my CV+. I hope it helps you think about things you’re proud of, too.

Kathryn Rich CV+

Faithfully writes a blog post every week that classes are in session, and uses that habit to advance thinking and writing skills.

Continues to improve at developing relationships with partner teachers.

Has lots of wonderful memories and stories about interactions with students during field work.

Successfully navigates advancing work on three distinct research projects and three courses at the same time.

Steadily gains Twitter followers.

Stands up for equitable treatment of fellow graduate students.

Provides substantive and thoughtful feedback to peers through coursework and formal review processes for conferences and journals.

Recently wrote a paper that one of my advisors complimented for its “economy of expression” (HUGE for a verbose writer like me!).

Remains intellectually curious throughout the trials and tribulations of academic life.

I encourage you to write your own CV+, friends, and more generally to keep your chins up in the face of rejections and failures. I’ll be back next week with some tips on how to get papers out the door to those feisty, fickle reviewers.

Success and Failure, Part 1: My Failure CV

This three-part blog series is going to have a bit of a different flavor than most of what I write. I typically use this blog to share thoughts about the various research topics and educational issues I’m interested in. Recently, though, I’ve been thinking about more general issues related to academia, research, and writing. I’d like to spend some time sharing my thinking about a couple of things.

One of my advisors tells me from time to time that I should take care in moments when I “rattle off” accomplishments, lest I make my peers feel less accomplished in the process. I can see both sides of this issue. On one hand, I don’t think my CV should really have any effect on anyone else. I am me and they are them, and we come from different backgrounds. In particular, I know (and they know) that I have the benefit of 10+ more years in an academic space than most of my peers.

On the other hand, I understand his point and sometimes think about how I might actually use those years of experience to support my peers, rather than having the unintentional (but nonetheless real) effect of contributing to others’ imposter syndrome. (Which I really struggle with, too, by the way.)

I figure one way to start is to be transparent about the fact that when I talk about successes, I am sharing a really narrow highlight reel of my academic journey. So, today I’m sharing my Failure CV. What’s below is a partial list of the rejections I’ve received since leaving high school almost 20 years ago. (These are just the ones I remember or have a record of — which I guess is less than 50%.)

I masked my collaborators’ names and the project and article names so as not to implicate anyone else in this list. The point isn’t really what the failures were — just the sheer number of them, in particular in comparison to the successes listed on my actual CV.

Kathryn Rich — Failure CV


  • Brown University, REJECTED for undergraduate admission, 2001
  • University of California at Berkeley, REJECTED for PhD admission, 2005
  • University of Cambridge, REJECTED for PhD funding, 2017


  • Pearson Education, REJECTED application for content developer, 2007
  • McGraw-Hill Education, REJECTED application for educational research position (after I had been an employee there before!), 2007
  • Approximately 50 other job applications REJECTED in 2005 and 2007, after leaving school (the above are just the ones I remember because they stung the most)


  • While I can’t think of particular ones right now, I’m sure there are about a billion of these.


  • IES proposal to develop a digital curriculum tool that gives feedback to teachers on adaptation, REJECTED, 2016
  • IES proposal to develop integrated mathematics + CT curriculum, REJECTED, 2016
  • NSF proposal to develop learning trajectories for the mathematical practices, REJECTED, 2016
  • NSF proposal to develop a phone app to support early childhood mathematics education through augmented reality “math walks,” REJECTED, 2016
  • NSF proposal to support scaling of an afterschool science program already shown to support girls’ science identities, REJECTED, 2016
  • NSF proposal to develop a digital curriculum tool that gives feedback to teachers on adaptation (yep, another try at the above), REJECTED, 2016
  • NSF proposal to develop interactive problems that allow underrepresented minorities to explore STEM careers, REJECTED, 2015 (This one really hurt!)
  • IES proposal to develop integrated mathematics + CT curriculum (yep, another try at the above), REJECTED, 2015
  • William T. Grant Foundation proposal to develop interactive problems that allow underrepresented minorities to explore STEM careers (different spin on the above), REJECTED, 2015
  • NSF proposal to study the use of innovative kinds of feedback in tech-enhanced math content, REJECTED, 2014


  • Theoretical analysis of relationship between computational abstraction and mathematical modeling in elementary school, REJECTED, 2018
  • Exploration of theories of fact fluency development using a huge dataset from an online game, DESK REJECTED (not even sent out for review!), 2017
  • Study of how elementary school teachers integrated CT into mathematics, REJECTED (after a round of major revisions), 2016
  • Discussion of how databases of authentic problems could expand mathematical modeling opportunities in K-12, REJECTED, 2015
  • Document analysis of the CCSS for CT content, MAJOR REVISIONS requested twice, fate still TBD
  • AERA paper on fact fluency development using the online game dataset, REJECTED, 2017
  • SIGCSE paper on the development of a particular computational thinking trajectory, REJECTED, 2017
  • SIGCSE paper describing an analysis of the mathematical understandings required to use various Scratch blocks and modifications that would align with math standards, REJECTED, 2016
  • NCTM proposal for session on comparing affordances and costs of physical and digital manipulatives, REJECTED, 2015
  • NCTM proposal for session on open response and re-engagement in K-2, REJECTED, 2014

Again, this is only the stuff I remember.

I’ve been lucky to have been accepted to three schools at which I’ve learned a great deal — but two Ivy League schools rejected me, and Cambridge took me only if I was willing to pay almost $100K for the privilege of being there. I’ve worked on at least 11 grant proposals — only one has been funded. I’ve worked on more than 10 journal articles, and not one has been published so far. I’ve had better luck at conferences, but even there I’m batting about .500.

My successes are my highlight reel. So are everyone else’s. Don’t ever forget that.

Over the next two weeks I’ll talk first about some of the tips I have for dealing with all the rejections that come with this territory. Then in week three, I’ll talk about how I manage to get so much stuff out the door — if there’s one thing I’m good at, it’s hitting submission buttons.


Educative Curriculum Materials, Part 3: Teachers as Creators

Happy Friday, everyone. Welcome back to Part 3 of my series on educative curriculum materials.

Two weeks ago I wrote about how I worried that curriculum materials were overspecified in ways that could be suppressing pedagogical design capacity.

Last week I wrote about how I thought translations of some of the information in curriculum materials into different forms of media might help in efforts to invite the development of pedagogical design capacity while also supporting it.

This week, I’m thinking about the potential of digital curriculum materials from a different angle. One of my lab mates, who works as a technology coach at a middle school, mentioned last week that one of the broad goals at her school this year is to help kids become not mere consumers of technology, but creators. This got me thinking: The translations of information I discussed last week, powerful as they might be, still position teachers as consumers of curriculum materials and not creators. On this particular dimension, nothing I’ve said so far really moves the needle.

Is there a way to use a digital platform to place teachers in a creative role as they work with curriculum materials? Even if it’s possible, is it something to strive for?

I’ve written before about why I think teachers’ roles as designers of curriculum are important, and why a digital platform is a promising place to support those roles. The short version is this: Making a digital curriculum manipulable — maybe even “programmable” (Hoyles & Noss, 2003) — has the potential to make curriculum developers’ intentions visible in a new way. Allowing teachers to make adaptations within the system and see effects of their adaptations could communicate the rationale for some of the design decisions made by the curriculum developers at a point in time when that information is useful to teachers — that is, when they are considering an adaptation that may or may not be aligned with the original intent of the activity, lesson, or unit.

So, that’s why positioning teachers as creators of some sort is something worth exploring. A question that remains, though, is what parts of curriculum materials creation are best left to teachers, and what parts curriculum developers are better positioned to do. Asking teachers to design curriculum from scratch is, after all, a really tall order with everything else they are expected to do. What can I, as a curriculum developer outside of the classroom, do to support teachers while still respecting their roles in the design and creation process?

As I’ve reflected on this issue this week, I’ve realized that I’m not the only person to have thought about this. To close out this series, I thought I’d highlight a couple of examples of research that discusses teachers as creators of curriculum and think about what those studies suggest about the most fruitful divisions of labor between curriculum developers and teachers.

First, de Araujo, Otten, and Birisci (2017) shared a case study of a teacher implementing a flipped classroom model for the first time. The teacher followed a textbook as she created lecture videos she expected students to watch before they came to class. Then students spent class time working on problems by themselves or in groups. The authors found that with these videos available as a resource, students did not reference the textbooks much, even while working on problems. They also noted that while the teacher used much of the content from the textbook, she did make some significant changes in terms of adding or skipping content. As such, the videos could be considered a curriculum material in and of themselves — one created by the teacher. This provides one model of the line between curriculum developer and teacher in terms of materials creation. Perhaps curriculum developers simply keep doing what they’re doing, and teachers use video or other digital creation tools to create a new version of the materials that suit their particular needs.

Relatedly, Bates (2017) described an effort to build a tool that allowed for direct adaptation of a pre-created curriculum resource. She and her collaborators created a digital mini-version of a curriculum as described in a textbook and built in tools that allowed teachers to re-order, “snooze,” or skip activities. When teachers made these changes, they received feedback about the potential implications. For example, if they deleted an activity that was intended as an important prerequisite for a later activity, they were advised that additional adjustments may be needed to compensate. Thus, like the teacher in the above flipped classroom study, teachers using Bates’ tool were also creating new versions of an existing textbook. The difference, in this case, is that teachers received feedback on their changes. This additional feature changes the roles a bit: curriculum developers create the first version of the curriculum and also serve as advisors (of sorts) on the process of adaptation, through the provision of automated feedback.

Finally, a third research team (Confrey, Gianopulos, McGowan, Shah, & Belcher, 2017) built a different kind of tool. Called the Math Mapper, their tool allowed teachers to select and sequence resources pulled from other sources (often the Internet), then receive feedback on the coherence of their chosen sequence of activities. This model changes the roles yet again. Here, teachers create the first version of their curriculum, and the curriculum developers serve as advisors (through feedback) on how to make improvements.

It’s not clear which one of these is the most productive approach. Most likely, different approaches will be best suited to different contexts. The first version, where teachers create their own versions of curriculum, is more or less what happens with any implementation, although the videos allow the textbook to be eliminated altogether. Such an approach works well for many teachers, but not always. That’s what makes me believe that the two kinds of feedback systems, discussed in Bates (2017) and Confrey et al. (2017), are worthy of further investigation. Effective feedback systems for teachers creating curriculum will be a huge challenge to create, but I have a feeling it will be a worthwhile effort.


Bates, M. S. (2017). Leveraging digital tools to build educative curricula for teachers: Two promising approaches. ZDM – Mathematics Education, 49(5), 675–686.

Confrey, J., Gianopulos, G., McGowan, W., Shah, M., & Belcher, M. (2017). Scaffolding learner-centered curricular coherence using learning maps and diagnostic assessments designed around mathematics learning trajectories. ZDM – Mathematics Education, 49(5), 717–734.

de Araujo, Z., Otten, S., & Birisci, S. (2017). Teacher-created videos in a flipped mathematics class: Digital curriculum materials or lesson enactments? ZDM – Mathematics Education, 49(5), 687–699.

Hoyles, C., & Noss, R. (2003). What can digital technologies take from and bring to research in mathematics education? In Second International Handbook of Mathematics Education (pp. 323–349). Springer Netherlands.


Educative Curriculum Materials, Part 2: Supporting and Inviting Adaptation

Welcome back for part 2 of our series on educative curriculum materials!

To briefly recap last week’s post: I argued that creating educative curriculum materials requires a delicate balance between the provision of lots of highly specific information and leaving space for teachers to exercise and develop pedagogical design capacity. And I shared a worry and a hope: A worry that we have leaned too far toward the over-specified end of the continuum between specification and flexibility, and a hope that perhaps digital delivery can help us find a better balance. But how?

Davis & Krajcik (2005) suggested that one way might be through use of multiple forms of media, rather than just one: “By delivering educative curriculum materials online, we have the opportunity to provide more information along the lines of the design heuristics presented here, using many different media. For example, because complementing text with other media can promote more effective learning and because teachers learn from realistic descriptions of practice, online educative curriculum materials could incorporate audio and visual records of teachers’ enactment of lessons” (p. 9).

Their text seems to suggest they were thinking of video as a way to provide additional information that was not already present. In the case of a lot of curriculum materials, though — in mathematics, as well as other subjects — I think that video could instead be a translation of information we already attempt to provide.

Let me explain.

When I first started thinking about how the educative curriculum materials I’m familiar with could be made to better support pedagogical design capacity, I began looking into research on teacher agency. Several researchers have explored how teachers make agentic choices in the context of various reforms. A common theme in these papers is that one reason reforms don’t succeed is that they fail to allow teachers to contribute their expertise toward the reform effort. Instead, they are given directives rather than autonomy. I think this quotation sums it up well: “Historically and continuing today, two aspects of teachers’ agency are limited within educational reform efforts, namely, their capacity to shape and define the course of the reform effort and their level of control or volition” (Severance, Penuel, Sumner, & Leary, 2016, p. 532).

When translated to apply specifically to the context of use of educative curriculum materials, I think Severance et al.’s (2016) above quotation would say something like this: Historically and continuing today, one or both of the following aspects of teachers’ agency are limited with regards to using reform-oriented curriculum materials, namely, their pedagogical design capacity and their opportunities to exercise it. That is, educative curriculum materials don’t always support the development of pedagogical design capacity while also inviting it.*

As noted last week, I think materials developers (myself included) make valiant attempts at the support piece. That’s what all the information we pack into educative curriculum materials is for. When we script student dialogues or discussion questions with sample answers, the intent (most of the time, at least) is to provide a model of how a lesson might go, to allow teachers to imagine what might happen in their classrooms. But there’s research that suggests this isn’t always the effect that the scripting has. I know there is a study somewhere — comment if you know it! — that discusses a teacher who had her students read aloud sample student dialogues rather than having an open discussion. Parks and Bridges-Rhoads (2012) found that a preschool teacher using a highly scripted literacy curriculum applied those scripts in her mathematics teaching, which limited opportunities for students to explain their mathematical thinking. Grossman and Thompson (2008) noted that highly structured materials were very useful for new teachers, but also found that teachers had difficulty abandoning the structured activities and practices even as they grew aware of their instructional limitations.

So, sometimes the script-like elements aren’t functioning as a support for developing pedagogical design capacity. Instead, they’re being interpreted in ways that limit how and when teachers make design decisions for instruction. In short, in an effort to support pedagogical design capacity, we are doing exactly the opposite of inviting it. We may actually be suppressing it.

On the other hand, simply removing the support and not providing any model of implementation does not seem right either. Only giving an overview of a lesson, with many details to be filled in, would certainly invite pedagogical design, but it wouldn’t support it.

So how do we support and invite use of pedagogical design capacity at the same time? I think one way is to translate all those sample questions and dialogues into classroom video, for the following reasons:

  • We know teachers learn a lot from watching classroom video.
  • The videos would serve the function of providing a model of implementation — a specific kind of support for developing design capacity.
  • It would be very challenging to reproduce the dialogue from a video while actively teaching. Videos, more than text-based scripts, communicate the idea that this is a sample implementation, not something to be followed rigidly. Thus, it better invites pedagogical design from teachers.
  • It’s also possible to provide more than one video of the same lesson, which can clearly communicate which elements of a lesson are flexible and which are critical. This could invite teachers to make changes to the flexible parts while supporting them in understanding the overall intent of the lesson.

In short, I think translation of sample questions and dialogues into classroom video is a concrete and widely applicable way to take up Hoyles, Noss, Vahey, and Roschelle’s (2013) call to support teacher adaptation of materials by shifting from scripting to steering.

Of course, I’m making this change sound easy, but it’d be difficult to accomplish in practice. It raises questions of how to obtain and choose among video resources, what text goes on the page instead of the scripted elements, and so on. As for many things, the devil is in the details. Still, it’s been a productive line of thinking for me. It’s made me wonder what other elements of educative curriculum materials could be translated (not eliminated!) in ways that help us support pedagogical design capacity while also inviting it.

*Note: My sincere thanks to Dr. Corey Drake for helping me articulate this idea in a very productive and helpful conversation I had with her today!


Davis, E. A., & Krajcik, J. S. (2005). Designing educative curriculum materials to promote teacher learning. Educational Researcher, 34(3), 3–14.

Grossman, P., & Thompson, C. (2008). Learning from curriculum materials: Scaffolds for new teachers? Teaching and Teacher Education, 24, 2014–2026.

Hoyles, C., Noss, R., Vahey, P., & Roschelle, J. (2013). Cornerstone Mathematics: Designing digital technology for teacher adaptation and scaling. ZDM – International Journal on Mathematics Education, 45(7), 1057–1070.

Parks, A. N., & Bridges-Rhoads, S. (2012). Overly scripted: Exploring the impact of a scripted literacy curriculum on a preschool teacher’s instructional practices in mathematics. Journal of Research in Childhood Education, 26(3), 308–324.

Severance, S., Penuel, W. R., Sumner, T., & Leary, H. (2016). Organizing for teacher agency in curricular co-design. Journal of the Learning Sciences, 25(4), 531–564.


Educative Curriculum Materials, Part 1

For more than 20 years now, educative curriculum materials have been a popular focus for mathematics education research, development, and theorizing. When the 1989 NCTM Standards laid out new and ambitious ways of teaching mathematics, curriculum materials were seen as a potentially powerful mechanism for supporting teachers in making the needed changes to their practice (Ball & Cohen, 1996). Because they were (and still are) so widely used, and fit into the day-to-day work of teachers, they seemed (and still do seem!) like a fruitful avenue for teacher learning. Several development groups received NSF funds to create educative curriculum materials, so called because they were intended to support both student and teacher learning.

The logic in this argument makes sense to me, and I spent a large chunk of my career working on a couple of sets of these educative curriculum materials. I truly, sincerely loved my work. I never believed I had it in me to be a teacher. I was too anxiety-ridden to handle all the on-demand thinking and decision making, and I was never very good at developing rapport with kids. But I did (do) admire teachers and think their work is both challenging and important. When I stumbled into a career in educative curriculum materials development, I was really happy to have found a way to support educators.

I’m still proud of that work, and I respect the colleagues I worked with. But graduate school has given me some time for thoughtful reflection, and it has me wondering if we’ve gone a bit astray in our efforts to make curriculum materials educative.

The problem is this: For the most part, our method for making materials educative is simply to provide more and more information. To help teachers anticipate student thinking, we supply sample student work. To deepen teachers’ mathematical content knowledge, we provide mathematical background that goes into greater depth than we expect students to reach. To give teachers a model of how a discussion might play out, we script questions and sample student dialogue. To help teachers discern the intended takeaways from each activity, we provide the rationale for the activities and the reasons they are sequenced as they are.

It’s not that I think all this information should be concealed or ignored. I understand the purpose of providing it. Davis and Krajcik (2005) laid out five high-level guidelines for creating educative curriculum materials, and the first four have to do with providing the kinds of information described above. (It’s a great piece to read if you’re interested in this topic.)

My concern is that in the course of providing all this information, we’ve created materials that are wildly over-specified. With lessons scripted out and detailed rationales for why the lessons are as they are, the materials end up reading like an argument for why they should be implemented exactly as they are written — even if that’s not what was intended. And worse, oftentimes it’s even generous to call what we write an argument. Several researchers have analyzed existing educative materials and found that their voice and structure tend to talk through teachers, telling them exactly what to say to students, rather than talking to teachers (Stein & Kim, 2009; Herbel-Eisenmann, 2007). As educators ourselves, we know that telling a student what to do step by step isn’t particularly educative. Why do we think materials that tell teachers what to do step by step will be?

There are other issues with overspecification, too. For one thing, we know that teachers don’t read curriculum materials in their entirety, and not all teachers read them the same way (Remillard, 2012; Sherin & Drake, 2009). Overspecification also seems to translate to a lack of flexibility in the eyes of some teachers and districts, who are abandoning structured curriculum materials to create their own from online resources (Choppin & Borys, 2017).

The biggest problem I see with overspecification, though, is that it doesn’t leave much room for teacher decision-making. In our eagerness to use educative curriculum materials to support teacher learning, I think we have lost sight of the fact that implementation is, in a sense, a second act of curriculum design. Adapting teaching practices to particular students and contexts is a critical part of education that lies solely in the hands of teachers, and curriculum materials that are overly specified won’t support the development of this skill. In their fifth guideline for the development of educative curriculum materials, Davis and Krajcik (2005) described this skill as pedagogical design capacity and argued that educative curriculum materials should promote it: “Promoting a teacher’s pedagogical design capacity can help him participate in the discourse and practice of teaching; rather than merely implementing a given set of curriculum materials, the teacher becomes an agent in its design and enactment” (p. 6).

I just don’t think our overspecified materials are supporting the development of pedagogical design capacity. So what can we do better? If educative materials need to provide so much information, and providing that information tends to lead to overspecification, which in turn limits the development of pedagogical design capacity … are we stuck?

In a curriculum delivered in print, perhaps. In a curriculum delivered digitally, perhaps not. Several research teams have identified ways in which a digital medium could ease this tension between providing needed information and overspecifying. For example:

  • In a digital space, information can be delivered via multiple media (Davis & Krajcik, 2005), which could help developers provide information in ways that don’t prescribe what teachers do.
  • Provision of hyperlinks between related materials can increase teachers’ agency and metacognition as they review the materials (Shapiro & Niederhauser, 2004), and a digital medium allows developers to define multiple links between materials that provide multiple pathways through the curriculum.
  • Integration of student-facing technology into the materials can stimulate teacher thinking and allow developers to shift from scripting to steering their planned activities (Hoyles, Noss, Vahey, & Roschelle, 2013).

Over the next two weeks, I’ll be exploring these ideas about how a digital medium could help us develop curriculum materials that are educative without being overspecified.


Ball, D. L., & Cohen, D. K. (1996). Reform by the book: What is — or might be — the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25(9), 6–8, 14.

Choppin, J., & Borys, Z. (2017). Trends in the design, development, and use of digital curriculum materials. ZDM – International Journal on Mathematics Education, 49(5), 663–674.

Davis, E. A., & Krajcik, J. S. (2005). Designing educative curriculum materials to promote teacher learning. Educational Researcher, 34(3), 3–14.

Herbel-Eisenmann, B. A. (2007). From intended curriculum to written curriculum: Examining the “voice” of a mathematics textbook. Journal for Research in Mathematics Education, 38(4), 344–369.

Hoyles, C., Noss, R., Vahey, P., & Roschelle, J. (2013). Cornerstone Mathematics: Designing digital technology for teacher adaptation and scaling. ZDM – International Journal on Mathematics Education, 45(7), 1057–1070.

Remillard, J. T. (2012). Modes of engagement: Understanding teachers’ transactions with mathematics curriculum resources. In G. Gueudet, B. Pepin, & L. Trouche (Eds.), From Text to “Lived” Resources: Mathematics Curriculum Materials and Teacher Development (pp. 105–122). Springer Netherlands.

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In Handbook of Research on Educational Communications and Technology (pp. 605–620).

Sherin, M. G., & Drake, C. (2009). Curriculum strategy framework: Investigating patterns in teachers’ use of a reform-based elementary mathematics curriculum. Journal of Curriculum Studies, 41(4), 467–500.

Stein, M. K., & Kim, G. (2009). The role of mathematics curriculum materials in large-scale urban reform: An analysis of demands and opportunities for teacher learning. In J. T. Remillard, B. A. Herbel-Eisenmann, & G. M. Lloyd (Eds.), Mathematics Teachers at Work: Connecting Curriculum Materials and Classroom Instruction (pp. 37–55). New York: Routledge.