USING NEMP TO INFORM THE TEACHING OF SCIENTIFIC SKILLS

SECTION SEVEN: AREAS FOR FURTHER CONSIDERATION


This section is something of an “after word”. It draws several loose threads together to complete the report whilst identifying areas where further work is needed. Areas where our findings might merit further consideration by the NEMP Board are also outlined, along with some suggestions for modified strategies to assess children’s progress in developing their skills of science investigation.

AN AFTER WORD ON FAIR TESTING
Section Four summarised concerns expressed by some science educators about an exclusive focus on fair testing as “the” method of primary science investigations. It seems to us that the strategies we have reported here need not be seen as exclusively supporting such a model, if teachers are encouraged to think more critically about the role of “fair” comparisons in various types of science investigations. If thought of in broader terms, the notion of “fairness” can come to be seen as an approach that helps to eliminate alternative explanations. However, other types of investigations might be better for generating those alternatives in the first place. A mental model for thinking about such relationships could look something like this:

Figure 8
Fair testing in the context of wider investigations

In these wider contexts fair testing becomes a tool that has relevance for a range of types of investigations, rather than an end in itself. The strategies described in Section Six would lend themselves well to this more inclusive approach, where teachers and their students draw on whichever of the various investigation types best suits the particular stage of their research, and where one investigation may well be an iterative, cycling series of mini-investigations. When our focus group teachers expressed a desire to play with the friction/heat impact on the travel of the toy truck, just such a sequence could easily have begun!

AN AFTER WORD ON DEVELOPING MENTAL MODELS OF CAUSALITY
The mental models of causality outlined in Sections Four and Five present some very real challenges for teaching investigative skills in science. Typical approaches to fair testing treat each set of comparative tests as an entity complete in itself. Provided variables are managed correctly, appropriate measurements are taken with sufficient care, and the correct conclusions are drawn from the results reported, the student is considered to have demonstrated an appropriate skill set. This is as true for secondary students as for primary, as demonstrated by the requirements for Achievement Standard 1.1 – the investigation standard for Year 11 students. Arguably, only the contexts and theoretical ideas that ground the investigation differentiate its skill level from the tests done by primary school pupils. However, as outlined in Section Four, adjacent sets of tests can potentially yield conflicting results when there are interactions between variables. Such results may generate considerable uncertainty until the puzzle that they present is unravelled – and that will demand a careful separation of theory from evidence, and a planned sequence of related investigations.

How important is it that students be given opportunities to confront this challenging aspect of scientific inquiry? Clearly it would be inappropriate for young children to do anything other than gradually build their basic CVS skills in the manner outlined in earlier sections. But should we be expecting students who have mastered the idea of unconfounded CVS tests to be extending their skills in contexts that require them to consider interactions between variables? The literature suggests that such skills are within the grasp of even upper primary students – with appropriate teaching. But what might such teaching look like, and how would it differ from the present emphasis on the “stand-alone” fair test? The ideas presented next are set in the context of the Ball Bounce task. They are, however, speculative. There was not sufficient time to engage the focus group teachers with the lengthier sequence outlined. In any case our observations (both of children’s actions and teachers’ ideas) suggest that this type of approach might be seen as going well beyond anything that should be expected of most school students.

The least sophisticated possible approach to the Ball Bounce task is the familiar one taken in the NEMP testing context. Addressing the question of “Which ball is the bounciest?” has no meaning beyond task completion because no theory of causality is invoked or demanded. What would a test series look like if students were encouraged to clarify/explore their personal theories of causality? For example, a group who said “The ping pong ball will bounce the highest because it is the lightest” might be encouraged to arrange the balls to form a continuum from the lightest to the heaviest and check for a corresponding pattern of heights in the bounce zones achieved. If they used just 3 types of ball, this could result in the following 3-test series (assuming each test is repeated to detect data variations as outlined in Section Six):


Taken together these results may well confirm the personal theory expressed. In which case students could be challenged to try more types of balls, learning in the process about the dangers of generalising from a small sample. On the other hand, it is entirely possible that the combination of types of balls chosen will not yield the result expected – some small, dense, foam balls are very bouncy even though they are relatively heavy. If this should happen, students will be challenged to reconsider their theory in the light of the evidence generated. (This in itself would be a really useful skill for students to be encouraged to develop — especially given the critique that suggests teachers do not encourage students to make links between theory and evidence.)

Suppose the puzzled students now decided that hardness or softness could be a causal agent. Again, they could be encouraged to form a continuum of ball types for this property – a process that in itself would demand more from them than simply measuring weight, since some test of softness would need to be devised. Again, results might be confirmed – or they might not.

Students would then face the challenge of deciding if both variables were implicated in some form of interaction. This would require them to start comparing 2 variables at once. The question now might become:

As suggested above, this new stage need not require all-new tests, if results have been kept systematically for all previous tests. Rather, evidence generated by previous tests could be reconsidered in a new light. Of course, it is entirely possible that other variables such as the material composition of each ball type are implicated in bounciness, too.
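The reanalysis suggested here amounts to regrouping the same recorded evidence by two variables at once and checking whether the effect of one variable differs across levels of the other. As a purely illustrative sketch (the ball names, weight and hardness groupings, and bounce heights below are hypothetical, not data from the study), the logic could be expressed as:

```python
# Illustrative only: hypothetical bounce results recorded from earlier single-variable tests.
# Each record notes a ball's weight class, hardness class, and bounce height (cm).
trials = [
    {"ball": "ping pong", "weight": "light", "hardness": "hard", "bounce": 60},
    {"ball": "foam",      "weight": "light", "hardness": "soft", "bounce": 20},
    {"ball": "golf",      "weight": "heavy", "hardness": "hard", "bounce": 70},
    {"ball": "tennis",    "weight": "heavy", "hardness": "soft", "bounce": 45},
]

def mean_bounce(weight, hardness):
    """Average bounce height for all recorded trials in one weight/hardness cell."""
    heights = [t["bounce"] for t in trials
               if t["weight"] == weight and t["hardness"] == hardness]
    return sum(heights) / len(heights)

# Regroup the existing evidence into a 2 x 2 grid -- no new tests required.
for w in ("light", "heavy"):
    for h in ("hard", "soft"):
        print(w, h, mean_bounce(w, h))

# If the effect of hardness differs between light and heavy balls,
# the two variables interact.
effect_when_light = mean_bounce("light", "hard") - mean_bounce("light", "soft")
effect_when_heavy = mean_bounce("heavy", "hard") - mean_bounce("heavy", "soft")
print("possible interaction?", effect_when_light != effect_when_heavy)
```

With these invented numbers, hardness appears to matter more for light balls than for heavy ones, which is exactly the kind of pattern that would prompt students to suspect an interaction rather than a single cause.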

As outlined here, such a sequence would not be inherently more difficult than planning and conducting single tests. It would certainly take much more time and might call for some interesting problem solving along the way, but the basic CVS strategy still sits at the heart of the process.

Challenging teachers to work with children’s theories of causality in this way can apply equally well to the single tests devised and carried out by younger children using simpler tasks such as Truck Track. It could be modelled in the scripted responses of the teacher-facilitators who deliver the NEMP tasks, especially if they were given opportunities to explore the range of likely responses as part of their training programme.

OTHER RECOMMENDATIONS TO THE NEMP BOARD
It would help teachers to think more widely about types of investigations if NEMP investigation tasks modelled these. Simple, assessable investigations set in the Living World and Planet Earth strands are no doubt challenging to design, but their absence implies that investigations are most appropriately set in a narrow range of physical science contexts.

Given the clear finding that children can identify and select fair tests well before they can actually design these, it would seem advisable to design some assessment tasks that model this type of approach. The Emptying Rate card sets (Figure 4) shared with the focus group teachers might be a good strategy for use in more formal assessment contexts. Design of such sets would be possible for probing understanding of a range of investigations, including those set in Living World or Planet Earth settings that would require longer time periods if actually carried out. This might be one means of addressing the challenge of curriculum coverage in assessing investigative skills, and it would also subsequently provide interesting new “ready-made” resources for teachers.

Children are not able to demonstrate their planning skills in unfamiliar contexts. Giving children time to play with equipment seems to be essential if they are to demonstrate what they can actually anticipate and manage. Testing planning ideas at the end of a simple investigative sequence, rather than at the start as at present, would seem advisable.

The challenges provided by measurement need careful attention in planning the overall investigative context. The collection of categoric data, or visual data patterns, may assist children to more easily recognise data trends, and hence to more freely talk about their causal theories, and to display the skills of relating these to the evidence generated. However, these types of responses would seem to require scripted prompts because supervising teachers do not typically spontaneously prompt children to think in these “scientific” ways.

An emphasis on repetition per se is unhelpful, especially as children seem to repeat because they think they have made a “mistake” and they are actually still seeking to make one “correct” measurement. Again, different strategies for measurement/data collection need to be modelled if children are to be given opportunities to demonstrate their emergent awareness of the need to manage experimental error.

It might be helpful to de-emphasise the allocation of roles in scripted planning prompts. Such prompts seem to turn children’s attention away from the scientific aspects of the task at hand.

