AN AFTERWORD ON FAIR TESTING

[Figure 8]

In these wider contexts fair testing becomes a tool with relevance for a range of types of investigation, rather than an end in itself. The strategies described in Section Six lend themselves well to this more inclusive approach, where teachers and their students draw on whichever of the various investigation types best suits the particular stage of their research, and where one investigation may well be an iterative, cycling series of mini-investigations. When our focus group teachers expressed a desire to play with the impact of friction and heat on the travel of the toy truck, just such a sequence could easily have begun!

AN AFTERWORD ON DEVELOPING MENTAL MODELS OF CAUSALITY

How important is it that students be given opportunities to confront this challenging aspect of scientific inquiry? Clearly it would be inappropriate for young children to be doing other than gradually building their basic CVS skills in the manner outlined in earlier sections. But should we expect students who have mastered the idea of unconfounded CVS tests to extend their skills into contexts that require them to consider interactions between variables? The literature suggests that such skills are within the grasp of even upper primary students – with appropriate teaching. But what might such teaching look like, and how would it differ from the present emphasis on the “stand-alone” fair test?

The ideas presented next are set in the context of the Ball Bounce task. They are, however, speculative. There was not sufficient time to engage the focus group teachers with the lengthier sequence outlined, and in any case our observations (of both children’s actions and teachers’ ideas) suggest that this type of approach might be seen as going well beyond anything that should be expected of most school students.

The least sophisticated approach to the Ball Bounce task is the familiar one taken in the NEMP testing context. Addressing the question “Which ball is the bounciest?” has no meaning beyond task completion, because no theory of causality is invoked or demanded. What would a test series look like if students were encouraged to clarify and explore their personal theories of causality? For example, a group who said “The ping pong ball will bounce the highest because it is the lightest” might be encouraged to arrange the balls in a continuum from lightest to heaviest and check for a corresponding pattern of heights in the bounce zones achieved.
If they used just 3 types of ball, this could result in the following 3-test series (assuming each test is repeated to detect data variations as outlined in Section Six):
Suppose the puzzled students now decided that hardness or softness could be a causal agent. Again, they could be encouraged to form a continuum of ball types for this property – a process that would in itself demand more from them than simply measuring weight, since some test of softness would need to be devised. Again, results might be confirmed – or they might not. Students would then face the challenge of deciding whether both variables were implicated in some form of interaction. This would require them to start comparing two variables at once, and the question would now become one about the interaction between the two. As suggested above, this new stage need not require all-new tests if results have been kept systematically for all previous tests; rather, evidence generated by previous tests could be reconsidered in a new light. Of course, it is entirely possible that other variables, such as the material composition of each ball type, are implicated in bounciness too.

As outlined here, such a sequence would not be inherently more difficult than planning and conducting single tests. It would certainly take much more time and might call for some interesting problem solving along the way, but the basic CVS strategy still sits at the heart of the process. Challenging teachers to work with children’s theories of causality in this way can apply equally well to the single tests devised and carried out by younger children using simpler tasks such as Truck Track. It could be modelled in the scripted responses of the teacher-facilitators who deliver the NEMP tasks, especially if they were given opportunities to explore the range of likely responses as part of their training programme.

OTHER RECOMMENDATIONS TO THE NEMP BOARD

Given the clear finding that children can identify and select fair tests well before they can actually design them, it would seem advisable to design some assessment tasks that model this type of approach. The Emptying Rate card sets (Figure 4) shared with the focus group teachers might be a good strategy for use in more formal assessment contexts. Such sets could be designed to probe understanding of a range of investigations, including those set in Living World or Planet Earth contexts that would require longer time periods if actually carried out. This might be one means of addressing the challenge of curriculum coverage in assessing investigative skills, and it would also provide interesting new “ready-made” resources for teachers.

Children are not able to demonstrate their planning skills in unfamiliar contexts. Giving children time to play with equipment seems to be essential if they are to demonstrate what they can actually anticipate and manage. Testing planning ideas at the end of a simple investigative sequence, rather than at the start as at present, would seem advisable.

The challenges posed by measurement need careful attention in planning the overall investigative context. The collection of categoric data, or of visual data patterns, may help children to recognise data trends more easily, and hence to talk more freely about their causal theories and to display the skill of relating these to the evidence generated. However, these types of responses would seem to require scripted prompts, because supervising teachers do not typically prompt children spontaneously to think in these “scientific” ways. An emphasis on repetition per se is unhelpful, especially as children seem to repeat because they think they have made a “mistake” and are actually still seeking to make one “correct” measurement. Again, different strategies for measurement and data collection need to be modelled if children are to be given opportunities to demonstrate their emergent awareness of the need to manage experimental error.

It might be helpful to de-emphasise the allocation of roles in scripted planning prompts. Such prompts seem to turn children’s attention away from the scientific aspects of the task at hand.