Performance of Subgroups: Technology 2004
Although national monitoring has been designed primarily to present an overall national picture of student achievement, there is some provision for reporting on performance differences among subgroups of the sample. Eight demographic variables are available for creating subgroups, with students divided into subgroups on each variable, as detailed in Chapter 1 (p8).

Analyses of the relative performance of subgroups used the total score for each task, created as described in Chapter 1 (p8).
 
SCHOOL VARIABLES
Five of the demographic variables related to the schools the students attended. For these five variables, statistical significance testing was used to explore differences in task performance among the subgroups. Where only two subgroups were compared (for School Type), differences in task performance between the two subgroups were checked for statistical significance using t-tests. Where three subgroups were compared, one-way analysis of variance was used to check for statistically significant differences among the three subgroups.

Because the number of students included in each analysis was quite large (approximately 450), the statistical tests were quite sensitive to small differences. To reduce the likelihood of attention being drawn to unimportant differences, the critical level for statistical significance for tasks reporting results for individual students was set at p = .01 (so that differences this large or larger among the subgroups would not be expected by chance in more than one percent of cases). For tasks administered to teams or groups of students, p = .05 was used as the critical level, to compensate for the smaller numbers of cases in the subgroups.
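The significance-testing procedure described above can be sketched in a few lines. The data below are synthetic and purely illustrative (the real analyses used students' total task scores); the function names come from SciPy, not from the NEMP analyses themselves:

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(0)

# Synthetic task scores for three illustrative subgroups of roughly the
# sample sizes described above (approximately 450 students per analysis,
# split across subgroups).
small = rng.normal(10.0, 2.0, size=150)
medium = rng.normal(10.2, 2.0, size=150)
large = rng.normal(10.4, 2.0, size=150)

# Two subgroups (e.g. School Type): independent-samples t-test.
t_stat, p_two = ttest_ind(small, large)

# Three subgroups (e.g. school size): one-way analysis of variance.
f_stat, p_three = f_oneway(small, medium, large)

# Critical levels as described in the text: p = .01 for tasks reporting
# results for individual students, p = .05 for team/group tasks.
significant_individual = p_three < 0.01
significant_team = p_three < 0.05
```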

For the first four of the five school variables, statistically significant differences among the subgroups were found for less than 15 percent of the tasks at both year 4 and year 8. For the remaining variable, statistically significant differences were found on more than half of the tasks at both levels. In the detailed report below, all “differences” mentioned are statistically significant (to save space, the words “statistically significant” are omitted).
   
School Size
Results were compared for students in large, medium-sized and small schools (exact definitions were given in Chapter 1 (p8)).

For year 4 students, there were no differences among the subgroups on any of the 27 tasks, nor on questions of the year 4 Technology Survey (p45).

For year 8 students, there were no differences on any of the 32 tasks. There were, however, differences on four questions of the year 8 Technology Survey (p46), with students from large schools most positive about doing technology at school (question 1), how much they learned about technology at school (question 2), how often they made things in technology at school (question 6g), and how often they learned to use tools and equipment in technology at school (question 6h).
   
School Type
Results were compared for year 8 students attending full-primary and intermediate schools. There were no differences between these two subgroups on any of the 32 tasks, but there were differences on eight questions of the year 8 Technology Survey (p46), with students from intermediate schools more positive about doing technology at school (question 1), how much they learned about technology at school (question 2), and how good they thought they were in technology (question 5). Within question 6, they reported more frequent involvement in school in five aspects of technology: designing things (6d), changing things to improve them (6f), making things (6g), learning how to use tools and equipment (6h), and checking how good their ideas or designs were (6i). These differences appear to indicate more specialist teaching in intermediate schools.
   
Community Size
Results were compared for students living in communities containing over 100,000 people (main centres), communities containing 10,000 to 100,000 people (provincial cities), and communities containing fewer than 10,000 people (rural areas).

For year 4 students, there was a difference on one of the 27 tasks: students from main centres scored highest on Biscuits (p38). There were no differences on questions of the year 4 Technology Survey (p45).

For year 8 students, there were no differences among the three subgroups on any of the 32 tasks, but there were differences on three questions of the year 8 Technology Survey (p46). Students from main centres thought they were better at technology (question 5), and had more frequent involvement at school of trying to find out what people want, need or like (question 6e), and checking how good their ideas or designs were (question 6i).
   
Zone
Results achieved by students from Auckland, the rest of the North Island, and the South Island were compared.

For year 4 students, there were differences among the three subgroups on three of the 27 tasks. Students from the rest of the North Island (excluding Auckland) scored lowest and students from the South Island highest on Egg Beater (p15), Link Task 2 (p23) and Jar Opener (p25). There were no differences on questions of the year 4 Technology Survey (p45).

For year 8 students, there were differences among the three subgroups on five of the 32 tasks: students from the South Island scored highest on Bikes (p14), Link Task 5 (p23), Biscuits (p38), Organic Waste (p42) and Link Task 12 (p43). There were no differences on questions of the year 8 Technology Survey (p46).
   
Socio-Economic Index
Schools are categorised by the Ministry of Education based on census data for the census mesh blocks where children attending the schools live. The SES index takes into account household income levels, categories of employment and the ethnic mix in the census mesh blocks. The SES index uses 10 subdivisions, each containing 10 percent of schools (deciles 1 to 10). For our purposes, the bottom three deciles (1-3) formed the low SES group, the middle four deciles (4-7) formed the medium SES group and the top three deciles (8-10) formed the high SES group. Results were compared for students attending schools in each of these three SES groups.
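As a minimal illustration of this grouping, a hypothetical helper (not part of the NEMP analyses) might map a school's decile to the three SES groups like this:

```python
def ses_group(decile: int) -> str:
    """Map a Ministry of Education school decile (1-10) to the low/medium/high
    SES grouping used in these analyses."""
    if not 1 <= decile <= 10:
        raise ValueError("decile must be between 1 and 10")
    if decile <= 3:
        return "low"     # bottom three deciles (1-3)
    if decile <= 7:
        return "medium"  # middle four deciles (4-7)
    return "high"        # top three deciles (8-10)
```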

For year 4 students, there were differences among the three subgroups on 17 of the 27 tasks, spread evenly across the three task chapters. Because of the number of tasks showing differences, they are not listed here. Students in high decile schools performed better than students in low decile schools on all 17 tasks. There was also a difference on one question of the year 4 Technology Survey (p45): students from high decile schools reported fewer opportunities in technology at school to try to find out what people want, need or like (question 6e).

For year 8 students, there were differences among the three subgroups on 23 of the 32 tasks, spread evenly across the three task chapters. Because of the number of tasks showing differences, they are not listed here. Students in high decile schools performed better than students in low decile schools on all 23 tasks, with students in medium decile schools in between but usually much closer to those in high decile schools. There were also differences on two questions of the year 8 Technology Survey (p46), with students from low decile schools reporting more frequent opportunities in technology at school to make visits or have visitors related to technology (question 6c) and to check out how good their ideas or designs were (question 6i).
   
STUDENT VARIABLES

Three demographic variables related to the students themselves:

Gender: boys and girls
Ethnicity: Māori, Pasifika and Pakeha (this term was used for all other students)
Language used predominantly at home: English and other.

During the previous cycle of the Project (1999-2002), special supplementary samples of students from schools with at least 15 percent Pasifika students enrolled were included. These allowed the results of Pasifika students to be compared with those of Māori and Pakeha students attending these schools. By 2002, with Pasifika enrolments having increased nationally, it was decided that from 2003 onwards a better approach would be to compare the results of Pasifika students in the main NEMP samples with the corresponding results for Māori and Pakeha students. This gives a nationally representative picture, with the results more stable because the numbers of Māori and Pakeha students in the main samples are much larger than their numbers previously in the special samples.

The analyses reported compare the performances of boys and girls, Pakeha and Māori students, Pakeha and Pasifika students, and students from predominantly English-speaking and non-English-speaking homes.

For each of these comparisons, differences in task performance between the two subgroups are described using “effect sizes” and statistical significance.

For each task and each year level, the analyses began with a t-test comparing the performance of the two selected subgroups and checking for statistical significance of the differences. Then the mean score obtained by students in one subgroup was subtracted from the mean score obtained by students in the other subgroup, and the difference in means was divided by the pooled standard deviation of the scores obtained by the two groups of students. This computed effect size describes the magnitude of the difference between the two subgroups in a way that indicates the strength of the difference and is not affected by the sample size. An effect size of +.30, for instance, indicates that students in the first subgroup scored, on average, three tenths of a standard deviation higher than students in the second subgroup.

For each pair of subgroups at each year level, the effect sizes of all available tasks were averaged to produce a mean effect size for the curriculum area and year level, giving an overall indication of the typical performance difference between the two subgroups.
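The effect-size computation just described (difference in means divided by the pooled standard deviation, in the style of Cohen's d) can be sketched as follows; the function names are hypothetical:

```python
import math

def effect_size(group1, group2):
    """Difference in mean scores between two subgroups, divided by the
    pooled standard deviation of the two groups' scores."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances of each subgroup's scores.
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def mean_effect_size(task_effect_sizes):
    """Average the per-task effect sizes to summarise a curriculum area."""
    return sum(task_effect_sizes) / len(task_effect_sizes)
```

An effect size of +.30 from this function means the first subgroup scored, on average, three tenths of a pooled standard deviation higher than the second.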

   
Gender
Results achieved by male and female students were compared using the effect size procedures.

For year 4 students, the mean effect size across the 21 tasks was +.01 (boys averaged 0.01 standard deviations higher than girls). This is a negligible difference. There were statistically significant (p < .01) differences favouring boys on two of the 21 tasks: Mowers (p19) and Link Task 1 (p23). There were no differences on questions of the year 4 Technology Survey (p45).

For year 8 students, the mean effect size across the 26 tasks was -.07 (girls averaged 0.07 standard deviations higher than boys). This is a small difference. There were statistically significant differences favouring boys on one task, Bikes (p14), and favouring girls on seven tasks: Fast Food (p16), Windsock (p20), Any Good? (p35), Link Task 6 (p23), Link Task 8 (p36), Link Task 9 (p36) and Cooking (p39). There were also differences on two questions of the year 8 Technology Survey (p46), with boys reporting more frequent opportunities in technology at school to think about how technology affects people (question 6a) and to find and use information to help make decisions (question 6b).
   
Ethnicity
Results achieved by Mäori, Pasifika and Pakeha (all other) students were compared using the effect size procedures. First, the results for Pakeha students were compared to those for Mäori students. Second, the results for Pakeha students were compared to those for Pasifika students.
   
Pakeha-Māori Comparisons
For year 4 students, the mean effect size across the 21 tasks was +.31 (Pakeha students averaged 0.31 standard deviations higher than Māori students). This is a moderate difference. There were statistically significant differences (p < .01) on 11 of the 21 tasks, with Pakeha students scoring higher than Māori students on all 11 tasks: Egg Beater (p15), Fast Food (p16), Mowers (p19), Link Task 2 (p23), Jar Opener (p25), Pet Mouse (p29), Link Task 7 (p36), Link Task 8 (p36), Biscuits (p38), Link Task 12 (p43) and Link Task 13 (p43). There were no differences on any questions of the year 4 Technology Survey (p45).

For year 8 students, the picture was similar. The mean effect size across the 26 tasks was +.36 (Pakeha students averaged 0.36 standard deviations higher than Māori students). This is a moderate difference. There were statistically significant differences on 15 of the 26 tasks, with Pakeha students scoring higher than Māori students on all 15 tasks: Egg Beater (p15), Fast Food (p16), Mowers (p19), Ngā Pātaka (p21), Pack It Up (p22), Link Task 2 (p23), Link Task 4 (p23), Link Task 6 (p23), Jar Opener (p25), Computer Chair (p26), Pet Mouse (p29), Link Task 7 (p36), Biscuits (p38), Link Task 12 (p43), and Link Task 13 (p43). There was also a difference on one question of the year 8 Technology Survey (p46): Māori students reported more frequent opportunities in technology at school to make visits or have visitors to help learn about technology (question 6c).

Pakeha-Pasifika Comparisons
Readers should note that only 20 to 50 Pasifika students were included in the analysis for each task. This is lower than normally preferred for NEMP subgroup analyses, but has been judged adequate for giving a useful indication, through the overall pattern of results, of the Pasifika students’ performance. Because of the relatively small numbers of Pasifika students, p = .05 has been used here as the critical level for statistical significance.

For year 4 students, the mean effect size across the 21 tasks was +.41 (Pakeha students averaged 0.41 standard deviations higher than Pasifika students). This is a large difference. There were statistically significant differences on nine of the 21 tasks, with Pakeha students scoring higher on all nine tasks: Mowers (p19), Windsock (p20), Link Task 1 (p23), Link Task 2 (p23), Link Task 3 (p23), Computer Chair (p26), Link Task 7 (p36), Biscuits (p38) and Link Task 12 (p43). There was also a difference on one question of the year 4 Technology Survey (p45): Pasifika students reported more frequent opportunities in technology at school to make visits or have visitors to help learn about technology (question 6c).

For year 8 students, the mean effect size across the 26 tasks was +.45 (Pakeha students averaged 0.45 standard deviations higher than Pasifika students). This is a large difference. There were statistically significant differences, with Pakeha students scoring higher, on 18 of the 26 tasks: 11 of the 12 Chapter 3 tasks, four of the five Chapter 5 tasks, and just three of the nine Chapter 4 tasks. Because of the number of tasks involved, they will not be listed here. There were also differences on nine questions of the year 8 Technology Survey (p46): Pasifika students reported more frequent opportunities in technology at school to engage in all nine aspects of technology mentioned in question 6.
   
Home Language

Results achieved by students who reported that English was the predominant language spoken at home were compared, using the effect size procedures, with the results of students who reported predominant use of another language at home (most commonly an Asian or Pasifika language). Because of the relatively small numbers in the “other language” group, p = .05 has been used here as the critical level for statistical significance.

For year 4 students, the mean effect size across the 21 tasks was +.24 (students for whom English was the predominant language at home averaged 0.24 standard deviations higher than the other students). This is a moderate difference. There were statistically significant differences on 10 of the 21 tasks, with students for whom English was the predominant language spoken at home scoring higher on all 10 tasks: Mowers (p19), Ngā Pātaka (p21), Link Task 2 (p23), Link Task 3 (p23), Jar Opener (p25), Computer Chair (p26), Pet Mouse (p29), Link Task 9 (p36), Biscuits (p38) and Link Task 12 (p43). There were also differences on three questions of the year 4 Technology Survey (p45): students whose predominant language at home was not English reported that their class more frequently did really good things in technology (question 4), made visits or had visitors to help learn about technology (question 6c) and made things (question 6g).

For year 8 students, the mean effect size across the 26 tasks was +.33 (students for whom English was the predominant language at home averaged 0.33 standard deviations higher than the other students). This is a moderate difference. There were statistically significant differences favouring those whose home language was English on 17 of the 26 tasks: all of the Chapter 5 tasks, two-thirds of the Chapter 3 tasks and one-third of the Chapter 4 tasks. Because of the number of tasks involved, they will not be listed here. There were also differences on 10 questions of the year 8 Technology Survey (p46): students whose predominant language at home was not English were more enthusiastic about doing technology at school (question 1), about how much they learned in technology at school (question 2), about how often their class did really good things in technology (question 4) and about how good they were in technology (question 5). They also reported more frequent opportunities in technology at school to engage in six of the nine aspects mentioned in question 6 (6a, 6b, 6c, 6d, 6e and 6h).

   
Summary, with Comparisons to Previous Technology Assessments
School type (full-primary or intermediate), school size, community size and geographic zone did not seem to be important factors predicting achievement on the technology tasks. The same was true for the 2000 and 1996 assessments. However, there were statistically significant differences in the performance of students from low, medium and high decile schools on 63 percent of the tasks at year 4 level (compared to 86 percent in 2000 and 27 percent in 1996) and 72 percent of the tasks at year 8 level (compared to 48 percent in 2000 and 41 percent in 1996). The high percentage for year 8 in 2004 comes from the same cohort of students that produced the high percentage for year 4 in 2000.

For the comparisons of boys with girls, Pakeha with Māori, Pakeha with Pasifika students, and students for whom the predominant language at home was English with those for whom it was not, effect sizes were used. Effect size is the difference in mean (average) performance of the two groups, divided by the pooled standard deviation of the scores on the particular task. For this summary, these effect sizes were averaged across all tasks.

Year 4 boys averaged negligibly higher than girls (mean effect size 0.01), but year 8 girls averaged slightly higher than boys (mean effect size 0.07). The corresponding figures in 2000 were 0.03 (boys higher) and 0.03 (boys higher).

Pakeha students averaged moderately higher than Māori students, with mean effect sizes of 0.31 for year 4 students and 0.36 for year 8 students (the corresponding figures in 2000 were 0.38 and 0.38).

Pakeha students averaged substantially higher than Pasifika students, with mean effect sizes of 0.41 for year 4 students and 0.45 for year 8 students (the corresponding figures in 2000 were 0.56 and 0.47).

Students for whom the predominant language at home was English averaged moderately higher than students from homes where other languages predominated, with mean effect sizes of 0.24 for year 4 students and 0.33 for year 8 students. Comparative figures are not available for the assessments four years earlier.
 