CHILDREN'S STRATEGIES IN NUMERACY ACTIVITY

CHAPTER 1 : Introduction

The National Education Monitoring Project (NEMP) Mathematics studies in 1997 and 2001 provide information about children's achievement on a range of task items (Crooks and Flockton, 2002; Flockton and Crooks, 1998). The tasks that focus on numeracy are found mostly (but not exclusively) in the items from the 1997 Number and Money chapters and from the 2001 Number chapter. There are also number-related items in the Measurement, Algebra and Statistics chapters. The current Numeracy Projects have a focus on children's strategies, a topic of some discussion within mathematics education (Walls, 2004). This discussion highlights a tension between knowledge and strategies in the Numeracy Framework used in the current Numeracy Projects. The NEMP Mathematics Framework (Crooks and Flockton, 2002, p. 10) also includes connections between areas of Knowledge and Process and Skills.

The Commentary section in each NEMP report provides brief details about children's errors and strategies after each item. Further analysis of the task items may provide greater detail about children's mathematical and strategic reasoning, as well as errors or misconceptions, within the context of the tasks and among groups of children.

This Probe Study report sets out the method used to further analyse four task items from a small sample of the 2001 NEMP Mathematics data. The results of the data analysis are set out for each task and include the categories of strategies and errors for each item. Finally, the findings for each item and implications for research are briefly discussed.


       

CHAPTER 2 : Background

Assessment tasks in mathematics are designed to provide children with opportunities to solve mathematical problems and to demonstrate their knowledge of mathematics. Yet research suggests that assessment of children's understanding in mathematics is complex. Tasks can be presented in written form or verbally, with or without equipment. Problems can also be posed in everyday contexts, referred to as contextualised problems. These are designed to make a link with children's experiences, including cultural backgrounds, and are assumed to motivate engagement in mathematical tasks. Tasks set in a context, however, can be problematic and can pose either a distraction or a barrier for children in their attempts to solve the task (Boaler, 1993; Sullivan, Zevenbergen and Moulsey, 2002; Zevenbergen, 2002).

The language of the task also poses difficulties for children, whether in written or oral form. The description of a contextual situation usually requires more text for children to read, placing greater demands on them (Eley and Cargill, 2002). Children must be able to decode both the specialised terms of mathematics and the linguistic forms embedded in the questions (Eley and Cargill, 2003). School mathematics involves words that are not necessarily specific to mathematics but act as signifiers of the type of task required. For example, a question that asks “how many more than …?” includes the signifier 'more than', which provides a cue for the child about the type of task and possible solution strategies (Cooper and Dunne, 2000; Zevenbergen, 2000).

In New Zealand, further analysis of NEMP items has raised questions about the context and/or format of mathematical tasks. For example, a station task set in a context did not necessarily promote recognition of the mathematics involved. Although the context, a family pizza dinner, was familiar, the Year 4 children in particular relied on a prompt from the interviewer to continue with the questions in the task (Anthony and Walshaw, 2003a). The Year 4 children were also more likely to talk about the features of the pizza context than about the mathematical structure (Anthony and Walshaw, 2003b). Written tasks were found to be less popular with children than the one-to-one or multichoice tasks (Eley and Cargill, 2002). An examination of question formats found that students were less successful with 'short answer' formats, as these provided the least support to children without teacher clarification or equipment (Eley and Cargill, 2002). Children's explanations in one-to-one tasks described their strategies and/or reasoning and were more extended responses, generally resulting in higher scores than for other kinds of task formats (Eley and Cargill, 2002; Eley and Cargill, 2003). The one-to-one tasks had drawbacks, however, because some children gave as many responses as possible rather than taking time to articulate their thinking. Children may also have felt watched with an interviewer present and were less likely to check answers. The multichoice format produced somewhat contradictory findings: many children used the answer options provided to work out or guess the correct answer, yet the format was not popular with children (Eley and Cargill, 2002).

Children's verbal responses to tasks provide information about their mathematical thinking. Eliciting such responses has become a common research tool, as children's strategies or reasoning can be identified or inferred from what they say. For example, some seven- to twelve-year-old students were found to retain the use of counting strategies rather than using number facts for addition and subtraction (Gray, 1991; Gray and Pitta, 1996). The less successful students were found to over-rely on counting strategies, which was taken to indicate a tenuous knowledge of number facts.
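To make the distinction between these strategies concrete, the sketch below contrasts a counting-on approach with direct recall of a known fact. It is a hypothetical illustration only; the function names and the small recall table are invented for this example and are not drawn from the studies cited above.

    # A minimal sketch contrasting two addition approaches discussed above:
    # counting on from the larger addend versus recall of a known number fact.
    # (Hypothetical illustration; names and the recall table are invented.)

    def count_on(a, b):
        """Solve a + b as a counting child might: start at the larger addend
        and count up one at a time."""
        start, steps = max(a, b), min(a, b)
        total = start
        for _ in range(steps):
            total += 1  # each increment stands for one spoken count
        return total

    KNOWN_FACTS = {(8, 5): 13, (6, 7): 13, (9, 9): 18}  # small illustrative recall table

    def known_fact(a, b):
        """Solve a + b by recalling a stored fact, as a more fluent child might."""
        return KNOWN_FACTS.get((a, b)) or KNOWN_FACTS.get((b, a))

    print(count_on(8, 5))    # 13, reached after 5 counting steps
    print(known_fact(8, 5))  # 13, reached in a single retrieval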

Errors in written tasks can also be a source of information about children's mathematical reasoning. Errors can be based on misconceptions: plausible beliefs about numbers that are applied to inappropriate situations (Hart, 1981; Johnson, 1989; Maurer, 1987). Recurring, stable errors are also known as bugs, a metaphor borrowed from computing. Errors with decimal numbers have been a topic of much research in both primary and secondary schools (Irwin, 1999; Moloney and Stacey, 1996; Steinle and Stacey, 2001). Decimal misconceptions have been attributed to a variety of factors: application of whole number knowledge to decimal numbers (Steinle and Stacey, 2001), differing interpretations of some real-life situations such as money, emerging knowledge of place value, or insufficient emphasis on multiplicative thinking (Irwin, 1999).
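As an illustration of the first of these factors, the sketch below shows how reading the digits after the decimal point as a whole number reverses the ordering of a set of decimals. It is a hypothetical illustration of the misconception, not an example taken from the cited studies.

    # A minimal sketch of the "whole number" decimal misconception: the digits
    # after the decimal point are read as a whole number, so 0.25 is judged
    # larger than 0.5 because 25 > 5. (Hypothetical illustration only.)

    def whole_number_rule(d):
        """Key a decimal by its fraction digits read as a whole number."""
        return int(str(d).split(".")[1])

    decimals = [0.5, 0.25, 0.125]

    print(sorted(decimals, key=whole_number_rule))  # misconceived order: [0.5, 0.25, 0.125]
    print(sorted(decimals))                         # correct order:      [0.125, 0.25, 0.5]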

This probe study used four task items from the 2001 NEMP data to investigate the following research questions:

1. What strategies are Year 4 children using within a particular task?
2. What are the common errors in the written Number tasks?
3. What is the range of strategies in a one-to-one interview task?
4. How might the identified errors and/or misconceptions relate to aspects of the written task, such as language and visual presentation?
         

CHAPTER 3 : Method
NEMP Items selected
Five task items from the 2001 Year 4 NEMP were selected for analysis, in consultation with USEE. These were chosen to provide a focus on numeracy or number-related activity, a range of contextualised and de-contextualised tasks, and a mix of written and video data (all from Crooks and Flockton, 2002).
Three independent tasks were selected and provided written data. The tasks were:
  Addition Examples (p. 14)  
  Speedo (p. 17)  
  Money A (p. 37)  
Two one-to-one tasks were selected, providing video data.
  36 and 29 (p. 20)  
  Number Line Y4 (p. 23)  
Addition Examples was a Trend Task and Number Line was an Access Task. These five tasks plus the survey data were available for each child.
         
Sample
The national sample for mathematics of 1440 Year 4 children was divided into three groups of 480 (Crooks and Flockton, 2002, p. 5). Each group of 480 was given different tasks, so in order to track the same children across tasks, tasks from one of these groups were selected. The data sample provided by NEMP comprised 47 children, and the final sample analysed by the research team was 40 (approximately 8.3% of the sample group of 480). The extra data sets from seven children were used for trial analysis before the researchers each took responsibility for the data sets of eight children.
         
Changes made to the Method
On viewing the video data, it became apparent that it was not always possible to read the fraction on each card in order to identify where the card had been placed. This illustrated how the interviewer in situ was better placed than the video viewer to record the children's actions accurately. Some of the task data was also compromised by inconsistent interviewing. Consequently, the research team decided to omit the Number Line Y4 task from this study.
         
Analysis of Data

The process of data analysis centred on immersion in the data, category generation, refining of the category analysis, and peer review. These aspects were considered important course experiences for the researchers.

The research team familiarised themselves with the marking schedules supplied by NEMP for each task. They examined each task, became familiar with the marking approach, and generated further analysis categories guided by the course lecturer. These further categories focused on errors and on possible associated misconceptions or perturbations in mathematical activity. The research team refined the analysis protocol for each of the three written tasks, outlined in the next section. Each researcher used the analysis protocols for their sample of eight children, recording results in a grid for each task. This was followed by peer review of the analysis in order to check for consistency and to add further clarity. Peer review involved a member of the research team analysing another's data set and recording their results. Comparing analysis decisions and discussing similarities and differences provided opportunities for clarifying and 'sharpening' the analysis protocols, resulting in greater consistency between researchers.
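The sketch below gives a rough sense of the kind of coding grid and consistency check this describes. It is hypothetical: the category labels, child identifiers and the simple percentage-agreement measure are invented for illustration; the actual grids and peer-review comparisons were carried out by hand by the research team.

    # A rough sketch of a per-child coding grid and a simple consistency check.
    # (Hypothetical: category labels, child identifiers and the agreement
    # measure are invented for illustration; the actual grids and peer review
    # were done by hand.)

    researcher_a = {"child01": "counting-on", "child02": "known-fact",
                    "child03": "unknown", "child04": "place-value-error"}
    researcher_b = {"child01": "counting-on", "child02": "derived-fact",
                    "child03": "unknown", "child04": "place-value-error"}

    def agreement(coding_a, coding_b):
        """Proportion of children given the same category by both researchers."""
        shared = coding_a.keys() & coding_b.keys()
        matches = sum(coding_a[c] == coding_b[c] for c in shared)
        return matches / len(shared)

    print(f"Peer-review agreement: {agreement(researcher_a, researcher_b):.0%}")  # 75%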

One of the constraints of analysing children's written work is that researchers are unable to use direct observation or to ask children to self-report their strategies. Children's strategies can only be inferred from the written record and, while such inferences are informed by research, interpretations can still be somewhat speculative. When a known strategy could not be inferred, or peer review revealed multiple interpretations, the research team categorised the item as unknown.

For the video task (36 and 29), the researchers generated a transcript of what was said by the child and the interviewer, adding supplementary information observed in the video. The transcripts were analysed using the NEMP marking protocol, and the analysis was peer reviewed for one child. Finally, the research team put together a profile for each of their eight children based on the survey data and the analysis of the four task items.

