Blog

New paper: Elementary teachers and integration of CT

I’m proud to share a newly released article in the Journal of Technology and Teacher Education, written with MSU faculty Aman Yadav and Christina Schwarz, entitled “Computational thinking, mathematics, and science: Elementary school teachers’ perspectives on integration.” This is among the first journal articles we have published based on the CT4EDU project, which has been the focus of my research assistantship for two years. I’m excited that our findings are starting to roll out.

This piece focuses on what we learned from interviews we conducted with our partner teachers before the project began in earnest. The interviews focused on how teachers connected computational thinking to their existing teaching practices in mathematics and science. Our goal was to use the information we gained in the interviews to help us design professional development experiences that met teachers where they were.

In my view, one of the main contributions of this piece is that we worked hard at pushing against the view that the goal of educative experiences for teachers (and for students, for that matter) should be to identify and correct or eradicate misconceptions. Rather than focusing on identifying what teachers said that was contrary to commonly accepted views of computational thinking, we focused on the positive connections between CT and teachers’ existing practices. We viewed teachers’ current conceptions about CT as resources to build upon instead of mistakes to correct. Yes, the teachers tended to talk about algorithmic thinking as following predetermined steps. But moving from following algorithms to developing algorithms is a smaller step than moving from no knowledge of algorithms to developing algorithms. Similarly, they tended to talk about automation in terms of students automatically answering basic math facts, which is different than thinking about using a computer to automate something. Yet, they talked about students’ automaticity as lessening the cognitive burden for kids — and automation on computers can do that, too.

This paper took several rounds of review to get accepted, but that turned out to be a positive thing because it allowed us to talk specifically, in the discussion, about ways we built upon teachers’ thinking in our professional development sessions.

On a more personal note, I’ll also mention that I’m particularly proud of this piece because conducting the interviews was my first official foray into formal data collection in graduate school. So it’s nice to see that effort come to fruition both in the PD sessions and in publication. I hope others find the paper useful.

 

FIRST JOURNAL ARTICLE! Synergies and differences in mathematical and computational thinking

I have reached a milestone in my academic career: My first peer-reviewed journal article has been published!

Written with my UChicago colleagues Liesje Spaepen, Carla Strickland, and Cheryl Moran, the article is called “Synergies and differences in mathematical and computational thinking: Implications for integrated instruction.” It is currently available online (this link will get you to a free eprint) and will eventually appear in a special issue of Interactive Learning Environments on computational thinking from a disciplinary perspective.

The history of this piece, and the special issue that will contain it, is rather interesting. Back in 2016, I attended a PI meeting for the NSF Cyberlearning program. I wasn’t a PI at the time, but the program invited interested folks to apply and come for learning and discussion. I applied and got accepted. That meeting included time for working groups to meet and discuss potential collaborations. There was a working group on computational thinking, and one of the outcomes of that working group was a structured poster session at AERA 2017. In the discussion period at the end of that session, the idea of a special issue came up. Several years later, the issue is finally close to finished!

The Everyday Computing project contributed two posters to the AERA poster session. One of those posters was the debut of our first learning trajectories, on Sequence, Repetition, and Conditionals. But during the time between the poster session and the official start of work on the special issue, we published those LTs in the 2017 ICER proceedings. We were left, then, with a choice: either drop out of the special issue or come up with something else to contribute. I am reluctant to give up opportunities, so I campaigned for the latter.

Knowing the special issue was not focused just on CT, but specifically on looking at CT from disciplinary perspectives, I spent some time thinking about the discussions we were having (and continue to have) as a team about the relationship between CT and the kinds of thinking kids are already doing in elementary mathematics. Kids do decomposition and abstraction as they engage in mathematics, sure. And they put things in order, and repeat steps, and develop algorithms. But are all those things CT? When do they start being CT?

During our processes of integrated curriculum development, we explored some connections that really seemed to hold promise for leveraging mathematics to get kids ready to engage in meaningful computing. But we had also explored lots of others that fell apart under scrutiny. I was interested in seeing whether we could come up with a way to systematically look across the elementary mathematics curriculum and make sense of the opportunities for connecting mathematics to computing.

So, we ended up doing a document analysis of the K-5 Common Core State Standards for Mathematics, mining it for potential connections to CT. We used our own trajectories as a starting point for the CT ideas we used to code the standards — a choice that was self-serving but, I would argue, justified given the work and detail we put into the articulation of those LTs and the ways in which they have been readily taken up by others. We looked, thoroughly and systematically, for ways in which general ideas connected to CT — like precision, completeness, order, identifying repetition, and examining cumulative effects — appear in K-5 mathematics. Then we scrutinized each connection, asking ourselves in each case if the thinking happening in mathematics could be built upon to forge a pathway into computing.

Unsurprisingly, we found lots of variation, including both connections that surprised us in their potential — like ways third graders think about when to stop counting as they subtract, and how well those map onto different kinds of loops — and surface-level connections that got thorny when we thought through the details — like the many subtle differences among examples of conditional-like thinking in K-5 math, and how few of the examples seemed synergistic with computing (or at least the kinds of computing kids might do in elementary school or shortly after).
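To make the loop connection concrete, here is a minimal sketch in Python (my own illustration; the paper’s examples concern children’s counting strategies, not code). Two ways a child might handle 12 − 5 map onto two different kinds of loops:

```python
def subtract_take_away(total, amount):
    """'Take away 5': count down a fixed number of times (a count-controlled loop)."""
    result = total
    for _ in range(amount):   # the number of counts is known in advance
        result -= 1
    return result

def subtract_count_down_to(total, target):
    """'Count down to 5': keep counting until a landmark is reached (a condition-controlled loop)."""
    steps = 0
    current = total
    while current > target:   # the stop is a condition, not a preset count
        current -= 1
        steps += 1
    return steps

print(subtract_take_away(12, 5))      # 7: what is left after taking 5 away from 12
print(subtract_count_down_to(12, 5))  # 7: how many counts it takes to get from 12 down to 5
```

Both strategies land on 7, but the second mirrors the condition-based stopping that maps onto while-style loops.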

There are lots of both kinds of examples in the paper, although we did not have enough room to explain most of them at the level of detail I might have liked.

More than the specific examples, though, there are two bigger ideas that I took away from writing this paper.

First, the question of whether CT-like ideas that appear in mathematics are useful leverage points for starting computing instruction, or even for building readiness for later computing instruction, can’t be decided at a general level. There is no overall, general answer. Not all of mathematics is going to support computing instruction. On the other hand, not all integration of CT into mathematics is meaningless. We have lots of work to do figuring out our best avenues.

Second and even more importantly, working on this paper helped me (or, at the risk of speaking for my coauthors, us) articulate a truth I think is both fundamental and often forgotten in debates about unplugged CT: Skilled curriculum development can’t be done one activity at a time. Ideas aren’t learned through one activity. They are developed across a curriculum. Evaluations of whether what students are doing is or is not CT don’t make sense to me when they point to one activity completed in one hour of one school day. The finish line could be miles away, but that doesn’t mean kids aren’t making progress towards it. We need to think about development of ideas across time and give kids and teachers the space to think and learn.

I’m becoming less and less interested in debates about what CT is or is not. It needs to be decided, perhaps, but I’m willing to let others hash that out. I’m more interested in using the admittedly ill-articulated conceptualizations of CT I have so far, and imagining and studying how we can psychologize it for kids à la Dewey (1902) and spiral a curriculum around it à la Bruner (1960). Go ahead and keep shifting the finish line a bit. I’ll just keep trying to point my elementary school students in the right direction.

References

Bruner, J. S. (1960). The process of education. Vintage Books.

Dewey, J. (1902). The child and the curriculum (No. 5). Chicago, IL: The University of Chicago Press.

 

New Conference Paper: Time Tracking and Fact Fluency

I just returned from the 2019 AERA Annual Meeting in Toronto! It was a great meeting where I heard about lots of interesting research, met new colleagues, and best of all, got to present some of my own work.

My contribution to AERA this year was a paper I wrote with my friend and colleague, Dr. Meg Bates of UChicago STEM Education. A few years ago, we were given access to de-identified data from kids using an online fact practice game associated with the Everyday Mathematics curriculum. One of the most interesting features of the game is that students self-select one of three timing modes:

  • They can play in a mode with no time limit or tracking, where they take as much time as they need to answer each question and no information about their speed is reported to them.
  • They can play in a mode called Beat Your Time, where they still take as much time as they need to answer each question, but their total time is reported at the end of the round and compared to their best previous time. So, time is tracked but not limited.
  • Lastly, they can play in a mode with a 6-second time limit on each question.

When we noticed this feature of the game (and its user data), we started digging into research on timed fact drills. It’s a much-discussed and controversial issue in elementary mathematics education. On one hand, several prominent researchers argue that the potential connections to mathematics anxiety and inhibition of flexible thinking outweigh any benefits (e.g., Boaler, 2014; Kling & Bay-Williams, 2014). On the other hand, it’s well established that efficient production of basic facts is connected to later mathematics achievement (e.g., Baroody, Eiland, Purpura, & Reid, 2013; Geary, 2010). And arguably, even if it does not have to be discussed directly with kids, efficiency involves some amount of speed.

Overall, we were surprised at the inconclusive nature of the research when taken as a whole. There may be connections between timed fact drills and outcomes we don’t want (like math anxiety), but there has not been much unpacking of what features of timed testing are problematic. The game data — in particular, the contrast between the time limit mode and the Beat Your Time mode — gave us an opportunity to look at one particular issue: Does a focus on time always lead to detriments in fact performance, or is it specifically when time is limited?

Our analysis suggests that time limits may be the culprit. We compared students’ overall levels of accuracy and speed across modes, and found that students playing in the time limit mode had significantly (and practically) lower accuracy than when the same students played in the other two modes — but there was no practical difference in accuracy between the no time mode and Beat Your Time mode. So, in short, time limits were associated with lower accuracy, but time tracking was not.

When it came to speed, students were fastest in the time limit mode, but were still faster in the Beat Your Time mode than in the no time mode. So, Beat Your Time mode seemed to promote speed, without the detriment to accuracy associated with the time limit mode.
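For readers who like to see the shape of an analysis in code, here is a minimal sketch of the mode comparison (entirely my illustration; the column names and numbers are hypothetical, not the study’s data):

```python
import pandas as pd

# Hypothetical per-round records; "mode" is the timing mode the student chose.
rounds = pd.DataFrame({
    "student": ["a", "a", "a", "b", "b", "b"],
    "mode": ["no_time", "beat_your_time", "time_limit"] * 2,
    "accuracy": [0.95, 0.94, 0.85, 0.90, 0.91, 0.80],    # proportion of facts answered correctly
    "seconds_per_fact": [4.2, 3.1, 2.5, 5.0, 3.8, 2.9],  # average response time
})

# Compare mean accuracy and speed across the three timing modes.
print(rounds.groupby("mode")[["accuracy", "seconds_per_fact"]].mean())
```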

We were excited by this result. Although we can make no causal claims, the results do suggest that challenging kids to monitor their own speed when practicing facts could support the development of speed without promoting anxiety or other negative outcomes that can lead to lower accuracy (and general bad feelings about math). Although we did not see this result coming, it does make sense to us, upon reflection, that self-monitoring could be helpful. In the future, we hope to do (or inspire others to do) more research on how metacognitive strategies could be applied to fact learning.

You can read the conference paper here and check out my slides here.

References

Baroody, A. J., Eiland, M. D., Purpura, D. J., & Reid, E. E. (2013). Can computer-assisted discovery learning foster first graders’ fluency with the most basic addition combinations? American Educational Research Journal, 50(3), 533-573.

Boaler, J. (2014). Research suggests that timed tests cause math anxiety. Teaching Children Mathematics, 20(8), 469-474.

Geary, D. C. (2010). Mathematical disabilities: Reflections on cognitive, neuropsychological, and genetic components. Learning and Individual Differences, 20, 130-133.

Kling, G., & Bay-Williams, J. M. (2014). Assessing basic fact fluency. Teaching Children Mathematics, 20(8), 488-497.

New Conference Paper: CT Implementation Profiles

Hi, all.

I just returned from the 2019 Annual Meeting of the Society for Information Technology and Teacher Education (SITE), where I made my first official presentation for the CT4EDU project. I wanted to share a bit about the paper and presentation for any interested folks who were not there.

We’re just finishing up our pilot year of the CT4EDU project. The project is an NSF-funded research-practice partnership (RPP). Michigan State University (PI Aman Yadav and co-PIs Christina Schwarz, Niral Shah, and Emily Bouck) is working with the American Institutes for Research and the Oakland Intermediate School District to partner with elementary classroom teachers to integrate computational thinking into their math and science instruction. Over the spring and summer of 2018, we introduced our partner teachers to four computational thinking ideas: Abstraction, Decomposition, Patterns, and Debugging. Then we worked with them to screen their existing mathematics and science lessons for places to enhance or add opportunities for students to engage in these ideas. In the fall, the teachers implemented their planned lessons and we collected classroom video. (Note that in this first round of implementation, all of the lessons were in unplugged contexts.)

One of the first things we noticed was that there were some clear differences among teachers’ implementations of CT. In this work-in-progress paper, we share three patterns of implementation that we identified:

Pattern A: Using CT to Guide Teacher Planning
Some teachers were explicit within their plans about where they saw the CT ideas in their lessons, but did not make the CT ideas explicit to students during implementation.

Pattern B: Using CT to Structure Lessons
Among the teachers who did make CT explicit to students, some focused a lesson strongly on one particular CT idea. We described this pattern as structuring the lesson around a CT idea.

Pattern C: Using CT as Problem-Solving Strategies
Other teachers who made CT explicit in implementation seemed to reference the CT ideas more opportunistically. Rather than structuring opportunities to engage with one CT idea, they pointed out connections to one or more CT ideas as they worked through problems.

We’re looking forward to exploring how these different patterns of implementation relate to student thinking about CT as we go into our last year of the project — particularly as we begin considering ways to bridge students’ work in unplugged contexts to plugged activities.

You can find the conference paper here.

There is a version of the slides here.
(Sadly, the slides are missing the classroom video, which is clearly the best part of the presentation!)

Many thanks to everyone who came to my presentation.

New paper: Debugging LT

The LTEC project has a new paper out in the SIGCSE 2019 proceedings, which I wrote with my colleagues Carla Strickland, Andrew Binkowski, and Diana Franklin. It’s a new addition to our series of SIGCSE and ICER papers detailing learning trajectories we developed through review of CS education literature. This time, the trajectory is about debugging (Rich, Strickland, Binkowski, & Franklin, 2019).

(If it’s helpful, you can read my description of what a learning trajectory is here.)

Although the overall approach we used to develop all of our trajectories was basically the same, we’ve tried to make a unique contribution in each publication by making a different part of our process transparent. In our paper from SIGCSE 2017, we talked about the overall literature review and what we noticed as we examined the learning goals embedded in the pieces we read. In our paper from ICER 2017, we shared how we adapted our overall process from other work in mathematics education and focused on our synthesis of learning goals into consensus goals. In our paper from ICER 2018, we focused on one trajectory to give us room to discuss every decision we made in ordering the consensus goals.

This time, in addition to sharing a new trajectory, we also highlighted how we used the theoretical construct of dimensions of practice (Schwarz et al., 2009) to help us organize our consensus goals. We’re also really excited to be able to share more about the role that our learning trajectories played in the curriculum development we’ve been working on for two years now. We’re dedicating a significant piece of our presentation at SIGCSE to sharing an activity we are really proud of and how the trajectory shaped its development.

If you’ll be at SIGCSE, we hope you’ll come and check us out on Friday at 2:10 in Millennium: Grand North! (If you don’t come for me, come for Carla! She’s a great speaker whose PD facilitation is famous on Twitter.)

If not, please check out the paper if you are interested. Right now, the link above and the one on my CV page take you to the normal ACM digital library page. I’ll be switching the link in my CV to a paywall-free version as soon as the Author-izer tool links this paper to my author page. At that time, we’ll also be sure to add a paywall-free link to the LTEC project page.

Although we have one more learning trajectory (on variables) that has been developed but not yet published, I suspect this might be the last conference paper from this work that I first-author. The project is continuing to do wonderful work and you’ll be hearing more from us, but I’m in the thick of graduate school and not nearly as involved in the work anymore. So, I just want to say that working with my colleagues at UChicago STEM Education on this line of work has been among my proudest and most gratifying professional experiences. I want to thank all of my collaborators, and also say a particular thank you to Andy Isaacs and George Reese, as without their graciousness I never would have had the opportunity to co-PI the project.

I’d also like to say thanks to all the folks in the CS education community who have been so receptive to our work and offered us such wonderful and helpful feedback. We’re particularly grateful for the shoutout that Mark Guzdial is giving us in his SIGCSE keynote this year.

From the bottom of my heart, thanks to all of you for making this longtime math educator who wandered into the CS education space feel welcome and like her contributions are worthwhile.

References

Rich, K. M., Strickland, C., Binkowski, T. A., & Franklin, D. (2019). A K–8 debugging learning trajectory derived from research literature. In Proceedings of the 2019 ACM SIGCSE Technical Symposium on Computer Science Education (pp. 745–751). New York: ACM.

Schwarz, C. V., Reiser, B. J., Davis, E. A., Kenyon, L., Acher, A., Fortus, D., Shwartz, Y., Hug, B., & Krajcik, J. (2009). Developing a learning progression for scientific modeling: Making scientific modeling accessible and meaningful for learners. Journal of Research in Science Teaching, 46(6), 632–645.

 

Vocabulary, Part 3: What I Learned (3rd edition)

This is my last post for the semester! At the end of my last two semesters, I wrote posts listing five things I had learned. I figured since this post is also serving as Part 3 of my vocabulary series, I’d try highlighting five things I caught myself saying recently that illustrate some significant learning over the last year and a half.

 

  1. At our last research team meeting, in a discussion about a study my lab-mate is planning, I said: Oh, so if that’s when you’re doing the measure, that works. Then it’s a delayed treatment design. I entered my research design class rather skeptical at the beginning of this semester, but it’s clear I picked up some useful vocabulary for talking about research. Even if I understood the concept of delayed treatment before, I couldn’t articulate it.
  2. It has been similar with specific processes of data analysis. A few days ago, my office-mate asked me something to the effect of: You have three research questions but you’re using content analysis for all three, right? My response made very specific use of the terms content analysis, thematic analysis, linguistic analysis, and discourse analysis with a particular meaning behind each one. I can’t write perfect definitions, but I understand the difference. A year ago those would all have had the same connotation to me. (Essentially, they all meant to look at text and try to find patterns. Which is not entirely wrong, but overgeneralizes.)
  3. In a class recently, while we were talking about an assignment, my professor said, You always seem to anticipate me disagreeing more than I actually do. My response? Well, I mean, I’m just participating in discourse as I’m thinking. This was a joke — one you probably won’t get unless you have read some of Sfard’s work recently. I was joking that as I’m writing papers, I have a hypothetical discussion with my mentors in my head, trying to anticipate how they’d respond. This fits with Sfard’s (2008) notion of thinking as communicating with oneself, a theory we had discussed that day in class. The joke isn’t all that funny, even if you do know Sfard, but I did think it was interesting the way I was able to spontaneously use her definition of participation in context.
  4. This semester, I wrote and rewrote my practicum proposal justification at least three times from scratch. This was a rough experience in a lot of ways — wildly frustrating and anxiety-provoking. But I have to admit that when I finally landed on an approach that was working, I thought to myself, Oh my gosh. I think I know what a concept paper is! We talked about concept papers in one of my courses last year, and even after reading examples and attempting to write one, I really had no idea what it was. In the end, I’m reasonably sure the first half of my practicum proposal became a concept paper. I could use that term correctly now in conversation. That feels like a victory given how much I struggled with it last year.
  5. And now for a bit of sappiness (it’s the holidays, after all). When I started my master’s program, I can remember needing to learn the specific meaning of cohort used in academia. I knew of the word before that, but I knew it as an old-fashioned way to refer to a friend, collaborator, or partner-in-crime. I didn’t know the collective meaning of a group of students who enter a program together. I re-learned it over the last year and a half, and it’s become even more meaningful as a PhD student. My cohort is my tribe, and I’m grateful for them.

Have a lovely break, everyone. Reflect on all you’ve learned. Try not to think too much about how much there is still to go.

Reference

Sfard, A. (2008). Thinking as communicating. Cambridge: Cambridge University Press.

 

Vocabulary, Part 2: Jingling Abstraction

In my last post, I talked about the value of using precise vocabulary in curricular resources. Today, I’m going to talk about (potentially) problematic vocabulary used in research and development.

There are twin problems of vocabulary in academia. Referring specifically to research on student engagement, Reschly and Christenson (2012) called these the “jingle, jangle” problems. The first problem is that we sometimes use the same word to refer to multiple ideas (jingle). The second problem is that we sometimes use different words to refer to the same thing (jangle).

I can think of plenty of examples of jangle — in particular, I think we use the terms real-world, relevant, contextualized, and authentic to refer to problem contexts when we’re really just interested in engaging problems. But I’ve been wrestling all year with an example of jingle. In short, I think that the word abstraction, as a learning goal, means rather different things to mathematicians versus computer scientists.

In general terms, abstraction can be used as a noun or a verb. As a noun, an abstraction is a representation that reduces complexity in order to focus attention on the core, essential elements of a situation or phenomenon. Usually an abstraction exists independently of any specific example or instance. The fraction ¾, for example, is an abstraction of three out of four equal-size pieces of pizza. The fraction can be used to represent a specific portion of any whole — it exists independently of the examples.

As a verb, abstraction is used to refer to the process of creating such representations.

For both the noun and verb meanings of abstraction, I do not think that mathematics and computer science differ too much. However, I do think there are subtle differences in the way the disciplines talk about abstraction as a learning goal. In short, I think mathematics is focused on the noun, and computer science is focused on the verb.

Admittedly, the Standards for Mathematical Practice in the Common Core State Standards for Mathematics (CCSS-M; Common Core State Standards Initiative [CCSSI], 2010) do highlight the process of abstraction. Standard for Mathematical Practice 2, Reason abstractly and quantitatively, says, “Mathematically proficient students make sense of quantities and their relationships in problem situations. They bring two complementary abilities to bear on problems involving quantitative relationships: the ability to decontextualize—to abstract a given situation and represent it symbolically …” (p. 6). (The other ability is contextualizing.)

However, when looking across grade levels within the content standards, I can’t help but notice a trend toward working with abstractions, rather than creating them.

Consider the following three standards about place value (CCSSI, 2010):

1.NBT.2     Understand that the two digits of a two-digit number represent amounts of tens and ones.

2.NBT.1    Understand that the three digits of a three-digit number represent amounts of hundreds, tens, and ones.

4.NBT.1    Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right.

The first- and second-grade standards are focused on understanding numbers written in base-10 notation as abstractions of quantities. The fourth grade standard highlights an even higher level of abstraction. It generalizes the base-10 place value system, independent of the examples of two- and three-digit numbers that are the focus in first and second grade. So there are two themes here: Understanding abstractions, and moving from less abstract to more abstract understandings and representations of mathematics. The movement is in one direction, and because that movement is separated across grades, it’s not clear that students will even be aware of the process of abstraction happening.
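To make the fourth-grade generalization concrete (my own example, not one drawn from the standards document): in 777, the rightmost 7 represents 7 ones, the middle 7 represents 70 = 10 × 7, and the leftmost 7 represents 700 = 10 × 70. The relationship holds at every place, for numbers of any size.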

Getting to the highest possible abstraction is an implicit goal in a lot of mathematics education. In addition to sequences of standards like the one above, some instructional frameworks highlight this specifically. Two examples are the concrete-representational-abstract framework (Agrawal & Morin, 2016) and concreteness fading (Fyfe, McNeil, Son, & Goldstone, 2014).

By contrast, my read of computer science education literature is that there is a greater emphasis on using multiple levels of abstraction, and that necessitates greater focus on the process of moving among those levels. For example, the Computer Science Principles (CSP) framework references the process of abstraction first (and the result last) in its description of Abstraction as a Big Idea: “In computer science, abstraction is a central problem-solving technique. It is a process, a strategy, and the result of reducing detail to focus on concepts relevant to understanding and solving problems” (College Board, 2017, p. 14). The process also is highlighted in the specific learning objectives (which I’d consider parallel in specificity to the mathematics content standards): “Develop an abstraction when writing a program or creating other computational artifacts” and “Use multiple levels of abstraction to write programs” (College Board, 2017, p. 15).

Specific instructional approaches for abstraction in computer science also emphasize moving between levels of abstraction. For example, Armoni (2013) outlined a framework for teaching abstraction to computer science novices. She emphasized being explicit about moving between levels of abstraction in order to help students learn to move freely and easily between levels as they problem solve.
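As a tiny code illustration of what working at multiple levels might look like (my sketch, not an example drawn from Armoni or the CSP framework):

```python
# High level: the problem stated in domain terms.
def average_score(scores):
    return total(scores) / count(scores)

# Lower level: the details the names above abstract away.
def total(scores):
    running_sum = 0
    for s in scores:
        running_sum += s
    return running_sum

def count(scores):
    n = 0
    for _ in scores:
        n += 1
    return n

print(average_score([80, 90, 100]))  # 90.0
```

Read at the top level, average_score is just “the total divided by the count”; dropping down a level means engaging with the loops those names hide. Learning to move deliberately between those two readings is the kind of traversal Armoni emphasizes.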

Thus, mathematics focuses on abstractions. Computer science focuses on abstracting. Both refer to abstraction, but often mean different things in terms of the goals of learning.

I’m not claiming that one focus or the other is inherently better. But I do think the difference is important to keep in mind, especially when thinking about integrated instruction.

References

Agrawal, J., & Morin, L. L. (2016). Evidence-based practices: Applications of concrete representational abstract framework across math concepts for students with mathematics disabilities. Learning Disabilities Research and Practice, 31(1), 34–44.

Armoni, M. (2013). On teaching abstraction in computer science to novices. Journal of Computers in Mathematics and Science Teaching, 32(3), 265–284.

College Board. (2017). AP Computer Science Principles Course and Exam Description. Retrieved from https://secure-media.collegeboard.org/digitalServices/pdf/ap/ap-computer-science-principles-course-and-exam-description.pdf

Common Core State Standards Initiative (CCSSI). (2010). Common Core State Standards for Mathematics. Retrieved from http://www.corestandards.org/Math/

Fyfe, E. R., McNeil, N. M., Son, J. Y., & Goldstone, R. L. (2014). Concreteness fading in mathematics and science instruction: A systematic review. Educational Psychology Review, 26(1), 9–25.

Reschly, A. L., & Christenson, S. L. (2012). Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of Research on Student Engagement (pp. 3–19). New York: Springer.

 

Vocabulary, Part 1: Why precision might matter in written resources

I’ve been thinking a lot lately about the role of vocabulary in learning, teaching, and research.

Until recently, I didn’t have a very well developed opinion on whether or not we need to worry about kids’ use of mathematical vocabulary, at least at the elementary grades. There were a few ill-articulated assumptions underlying my lesson writing style, though. Generally, I believed that:

  • Helping kids to learn definitions should never be the main point of a lesson. An idea is bigger than its definition.
  • Similarly, perfect use of the vocabulary shouldn’t be the main learning goal of a lesson. I don’t think imperfect expression should invalidate an idea, especially if it is coming from a young child.
  • On the other hand, if an idea is key to a lesson, there’s no reason not to introduce a term for it. I don’t believe in withholding words from kids because they are long or “difficult”. Words can be powerful tools.
  • Even though precise use of vocabulary isn’t an appropriate expectation for kids, I do think teachers should try to be precise in their use of the terms. And that means that curriculum materials should be precise in their mathematical language, too.

I was kind of a stickler about that last point in my curriculum writing days. I think it sometimes annoyed my coworkers, who believed that no teacher was going to notice or change her practice if we wrote The sides are equal in length instead of The sides are equal or said The angle measures 42° instead of The angle is 42°.

I had to admit at the time that they were probably right about that. Teachers have limited time to read curriculum materials and plan, and I’m doubtful they spend that time paying close attention to subtle differences in language. Still, when I caught language that was imprecise, I was stubborn about changing it. I justified this mostly by arguing that if any mathematicians reviewed our materials, this would give them one less thing to pick at.

I read an article this semester, though, that made me wonder if there was a bigger reason for precise language than that. Gentner (2010) published a lengthy argument for the reciprocal relationship between language and analogical reasoning. The first half of the paper summarized research suggesting that making comparisons facilitates learning. The second half was a more specific argument about the role of language in learning and cognitive development. Gentner argued that:

  • Having common labels invites comparison of examples and abstraction of shared elements, and
  • Naming promotes relational encoding that helps us connect ideas that are learned in different contexts.

To illustrate her argument, Gentner cited research about learning to apply natural number labels to quantities. Studies of cultures whose languages do not contain specific number labels showed that people of those cultures were able to estimate magnitudes, but were not very accurate at assigning specific number names to quantities, especially as the quantities got larger (Gordon, 2004 and Frank et al., 2008, as cited in Gentner, 2010). Other studies showed that children who speak languages with number names (like English) first learn the count sequence by rote, but by comparing sets of objects (e.g. two trains and two dogs) that have a common number label attached, they gradually bind the number names to the quantities (Gentner, 2010).

This explanation makes perfect sense to me. This is why words are powerful — they are a means of connecting examples at their most fundamental, definitional level. They prompt looking for sameness in contexts where things feel very different.

This got me wondering whether my stubbornness was better justified than I originally thought. Abstract mathematical terms like equal (and its symbol) are known to be poorly understood (e.g., Knuth, Stephens, McNeil, & Alibali, 2006; Matthews, Rittle-Johnson, McEldoon, & Taylor, 2012; McNeil et al., 2006). At least one study has concluded that the contexts in which the equal sign is used impact students’ understanding of its meaning (McNeil et al., 2006). I would not be surprised if a similar study examining how the word equal is used in sentences showed that these uses impact understanding of the word. According to Gentner (2010), we both consciously and unconsciously use words as labels, which invite comparisons, which invite conclusions about meanings. It seems reasonable to suggest that if we use the word equal with counts and measures, but use the term congruent with geometric figures, teachers could abstract more sophisticated and precise meanings of those terms than if we use the term equal sides.

I don’t know for sure, of course. But I don’t think precision in language, especially in resources that teachers can continually reference, can hurt the educative power of those resources.

References

Gentner, D. (2010). Bootstrapping the mind: Analogical processes and symbol systems. Cognitive Science, 34, 752–775.

Knuth, E. J., Stephens, A. C., McNeil, N. M., & Alibali, M. W. (2006). Does understanding the equal sign matter? Evidence from solving equations. Journal for Research in Mathematics Education, 37(4), 297–312.

Matthews, P., Rittle-Johnson, B., McEldoon, K., & Taylor, R. (2012). Measure for measure: What combining diverse measures reveals about children’s understanding of the equal sign as an indicator of mathematical equality. Journal for Research in Mathematics Education, 43(3), 316-350.

McNeil, N. M., Grandau, L., Knuth, E. J., Alibali, M. W., Stephens, A. C., Hattikudur, S., & Krill, D. E. (2006). Middle-school students’ understanding of the equal sign: The books they read can’t help. Cognition and Instruction, 24(3), 367–385.

 

Integration, Part 3: Authenticity

The end of the semester is approaching, and I’m a bit crunched for time this week. But I do have one more brief thought about integration to share.

A few years before I left my full-time work as a curriculum developer, a freelance journalist interviewed me via email about mathematics word problems and my theories on why students often say they hate them. I told her that my years of curriculum development work really opened my eyes to just how inauthentic word problems can be. Is it possible to write word problems that target a particular mathematics concept and also are meaningful to children? Yes, definitely. Is it possible to write 50+ such problems targeting that same mathematics concept? I can tell you from experience that it gets really tough, really fast.

And that’s before applying the list of constraints that comes along with the task in large-scale curriculum development. During our latest round of development, we were not allowed to reference any junk food in our problems. We also had to stick to round objects when we talked about fractions, because our chosen manipulatives for fractions were circles. How many items can you think of that aren’t considered junk food, but are round and make sense to divide into parts? It starts out easy: oranges, tortillas, cucumber slices. But then come the descriptors that help make semi-junky food sound ok: veggie pizzas, whole-wheat pita. By the tenth problem or so, I promise you’ll be grasping at straws. I’m pretty sure we wrote some fraction problems about cans of cat food.

My point is simply this: Starting with some form of disciplinary content and back-tracking to a reasonably authentic task is difficult after the first few times. And when tasks start to lose authenticity, kids notice. The activities they complete start to feel like busy-work (because they are).

The issue I’ve been thinking about this week is whether the task of contextualizing content becomes easier or harder when you’re thinking about two disciplines, as in integrated curricula. On one hand, it seems like finding a task authentic to both disciplines might be more difficult. But on the other hand, I think part of the difficulty of generating authentic tasks is that authentic tasks usually require multiple kinds of component skills. Finding one that gives kids exposure to or practice with one particular thing, but does not require any other skills they don’t yet have, is a challenge. So I think it is possible that considering two disciplines might actually open up some space to move in task development.

Take the number grid activity I discussed last week. I’ve written activities before that ask kids to map out paths on a number grid. And I’ve asked them to limit their movements to moving in rows or columns — adding or subtracting 10s or 1s. But I never had a great reason for that restriction, other than a desire to focus on place value, so the task often felt inauthentic. But when I added the element of programming a robot, suddenly the restriction in movements had new meaning: Programming languages are made up of a limited set of commands. So a very similar activity became more authentic — along one dimension, at least — through integration.

I’m hoping to find more of these happy compatibilities as I continue to think about integrated curricula.

Integration, Part 2: Translation

Here we are at the end of the week, and that means that it’s time for Integration, Part 2!

Last week I wrote about shifting my views on the development of integrated curriculum. Rather than framing my efforts as trying to understand and consistently achieve a fully integrated curriculum, I started thinking about how a long view of curriculum development might enable a different model: helping kids to walk across Kiray’s (2012) balance, rather than stay in the middle. This view doesn’t eliminate the need to find fully integrated activities, but it shifts their role. Rather than being the activity form that supports all kinds of conceptual development within an integrated curriculum, the fully integrated activities serve as a meeting point, or a gateway between the two disciplines. At other points in the curriculum, activities might make use of convenient connections between disciplines. In those key, fully integrated activities, though, I think kids probably need to look both disciplines straight in the face, note the similarities, and also wrestle with and make sense of some of the differences.

So. What might that look like? In my case specifically, the question is, what might that look like for elementary school kids working in mathematics and computer science?

I’ve spent a good bit of time thinking about this business of synergies and differences between mathematics and computer science and what they might mean for integrated curriculum. (I’m hopeful — please oh please oh please — that soon I’ll be able to point you to a published paper or two about this.) It’s a complex issue full of nuance and I always have a hard time coming up with a clear description or list of what’s the same and what’s different. For a while I thought I had to have a handle on that before I could think about writing an activity that asks kids to wrestle with the ideas.

But then I remembered a key idea that underlies my whole educational philosophy:

Don’t do the thinking for the kids. Let the kids do the thinking.

What does that mean for a fully integrated math + CS activity? First of all, it means I don’t have to have some definitive list of synergies and differences. Kids will notice things, and talk about them, and they may or may not map directly onto my ideas. And that’s ok, because there isn’t a perfect list.

It also means that I don’t have to generate a problem context that maximizes synergies and reduces differences as much as possible. I saw that as a goal, at first: too many differences might either distract from the big ideas the lesson is meant to address or lead to confusion in one discipline or the other later on.

But I no longer think that’s true. The point isn’t to minimize differences, but rather to have kids think about them.

Based on this thinking, here’s my new idea for fully integrated activities: Instead of figuring out the best way to address both disciplines at once, we ask kids to translate the language of one discipline into another.

For example, take a common activity that happens throughout elementary mathematics: Counting, adding, and subtracting on a number grid. I’ve written plenty of activities in my career that have kids think about making “jumps” on a number grid. To add 23 to 37, for example, they might jump down two rows to add 2 tens, and jump to the right three spaces to add 3 ones. They land on 60, the sum.

[Figure: a number grid showing the jumps from 37 to 60]

They could think about this as a missing addend problem, too. How can I get from 37 to 60? Add 2 tens, then add 3 ones.

This activity, particularly the visuals associated with it, reminds me a lot of the kinds of activities kids do in programming curricula aimed at elementary school. Kids direct on-screen characters and physical robots through pathways by programming them in terms of distance and direction of travel. When I first started thinking about this, it seemed like a superficial connection based on nothing but a common grid-like structure. But more recently I’ve been wondering if the similarities go deeper. In both cases, kids are giving directions to move from one point to another. The difference is in the way of communicating that information.

Is mapping the two kinds of language about directions onto each other something kids could do? I’m not sure, but I think at least upper elementary school students could. Not only that, but I think the kinds of thinking work it would take to translate directions like this:

Add 2 tens

Add 3 ones

… into directions like this:

Face toward increasing 10s

Move forward 2 units

Turn left

Move forward 3 units

… could be beneficial to kids. Assuming they start with a printed number grid and illustrate the mathematical directions with their fingers, to change that into spatial directions they’d need to engage in some perspective-taking: orienting themselves to “think like the robot,” much like Papert (1980) advocated using the Logo turtle as an object to think with.

I think kids might come out of that translation experience with a different way of thinking about the structure of the number grid, and also a foundation for thinking about algorithms that would support other programming activities.
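Here is a minimal sketch of that translation as code (my own invention, not part of any published activity; it assumes a grid where moving down a row adds 10 and moving right adds 1, and it simplifies by re-facing before each move rather than tracking left and right turns):

```python
def arithmetic_to_robot(steps):
    """Translate steps like ("tens", 2) into robot-style movement commands."""
    commands = []
    for unit, amount in steps:
        if unit == "tens":
            commands.append("Face toward increasing 10s")  # down the grid: each row adds 10
        else:
            commands.append("Face toward increasing 1s")   # right along the row: each column adds 1
        commands.append(f"Move forward {amount} units")
    return commands

# "Add 2 tens, then add 3 ones" -- the jumps from 37 to 60:
for command in arithmetic_to_robot([("tens", 2), ("ones", 3)]):
    print(command)
```

The restriction to row and column moves is what makes the translation clean: each arithmetic step maps onto exactly one facing and one move.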

Maybe the “full integration” balance point isn’t about pointing out synergies and differences between disciplines to kids. Maybe it’s about allowing kids to translate their thinking from one to the other.

References

Kiray, S. A. (2012). A new model for the integration of science and mathematics: The balance model. Energy Education Science and Technology Part B: Social and Educational Studies, 4, 1181-1196.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc.