RESEARCH METHOD
-Research Questions
The research questions that guided this study related to the students’ ability to plan, write and edit a piece of personal writing over three days. The main research questions were supplemented with sub-questions.
Part 1: Effective Planning
Research question: What planning strategies were used by year 4 and year 8 students for expressive writing tasks?
Sub-questions: Was there a strategy? What was it?

Part 2: Linkage to Writing
Research questions: Was the planning process reflected in and used to structure the writing exercise? How much writing was completed in the time available?
Sub-questions: Was the Day 1 planning used? Was there any evidence of editing and proof reading during the Day 2 writing? Was the ‘My Place’ topic maintained? Was the content factual?

Part 3: Editing and Proof Reading
Research questions: What evidence was there of editing and proof reading? What was the accuracy of editing and proof reading?
Sub-questions: What changes did students make in spelling, punctuation, grammar and making sense? What proportion of the editing and proof reading corrections (that should have been made) were correctly identified by students?

Part 4: Completion of the Task
Research question: To what extent were the students able to complete the planning, writing and editing tasks in the time available?
Sub-question: Was the task completed in the time available?
-Materials for analysis
The Educational Assessment Research Unit (EARU) at the University of Otago and the Unit for Studies in Educational Assessment (USEE) at the University of Canterbury supplied the following materials for this study.
• The task instructions;
• The marking schedule;
• A copy of the video depicting a collection of ‘special places’ that was shown prior to the planning session;
• Six random scripts: one from each ability grouping at each year level; and
• 171 scripts (79 year 4 and 92 year 8 students).
-Defining the ability groups
All archived student responses retained from the first cycle of NEMP (1995-1998) were available for analysis in this study. This represented a randomly selected 25% sample of the original NEMP sample. Three ability groups of students (low, medium and high) were established. The achievement status of each ‘My Place’ task response had been calculated by EARU. The content of the work had been marked on four criteria: vividness of language (description/imagery); relevance to topic; amount of detail; and communication of feeling. Each criterion was marked on a 4-point scale, for example:
Vividness (use of language, imagery)
4 - Extremely rich and vivid description
3 - Good vivid description
2 - Some elements well described
1 - No or very little description
The total mark across all criteria was calculated and formed the basis for grouping students into three ability groups (low, medium and high) at each year level. The groups were defined as follows:
Year 4: Low (0-3), Medium (4-5), High (6-11)
Year 8: Low (0-4), Medium (5-7), High (8-12)
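The grouping rule is a simple threshold classification of the total content mark. The following Python sketch is purely illustrative (it is not code used in the study) and assumes only the cut-off scores listed above:

  # Illustrative sketch only: assigns an ability group from a total content
  # mark using the cut-off scores defined above. Not code from the study.
  YEAR_4_GROUPS = {"low": range(0, 4), "medium": range(4, 6), "high": range(6, 12)}
  YEAR_8_GROUPS = {"low": range(0, 5), "medium": range(5, 8), "high": range(8, 13)}

  def ability_group(total_mark, year):
      """Return 'low', 'medium' or 'high' for a total mark at a given year level."""
      groups = YEAR_4_GROUPS if year == 4 else YEAR_8_GROUPS
      for name, marks in groups.items():
          if total_mark in marks:
              return name
      raise ValueError("total mark outside the expected range")

  # For example, a year 8 script with a total mark of 6 falls in the medium group.
  assert ability_group(6, year=8) == "medium"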
Examples of students’ writing in each of these ability groupings are given in Appendix 1.
-Characteristics of the sample
The characteristics of the 171 students used in this study are described in Tables 1 and 2 below.
Table 1: The gender of students at year 4 and year 8

            year 4    year 8
  Males       34        50
  Females     45        42
  Total       79        92

Table 2: The number of students in each ability grouping

            year 4    year 8
  Low         34        26
  Medium      21        38
  High        24        28
  Total       79        92
-Marking and coding the scripts
NEMP assessed the extent and type of editing using three options: none, some, substantial. The aspects of editing assessed were extension (continuation of the storyline); insertion (adding to the content); reorganization (re-ordering the content); deletion (removal of content); paragraphing; (non-specified) punctuation; and proof reading changes.
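As an illustration only (the values are invented and this is not the study’s own record format), one script’s NEMP editing codes could be held as a mapping from aspect to extent:

  # Illustrative only: one script's NEMP editing codes as aspect -> extent,
  # where the extent is "none", "some" or "substantial". Values are invented.
  nemp_editing = {
      "extension": "some",           # continuation of the storyline
      "insertion": "substantial",    # adding to the content
      "reorganization": "none",      # re-ordering the content
      "deletion": "some",            # removal of content
      "paragraphing": "none",
      "punctuation": "some",         # non-specified punctuation
      "proof_reading": "some",
  }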
A comprehensive coding sheet was prepared to capture the information required to answer the research questions. It included the scores for the above NEMP aspects of writing, in addition to a number of other criteria (Table 3).
-Marking criteria
The following criteria and coding categories were developed for marking students’ work.
Table 3: Marking criteria and coding categories

Part 1: Planning
  Evidence of a planning strategy: None / Some / Substantial
  Type of planning strategy employed: Brainstorm / Mind map / List / First Draft / Other

Part 2: Writing
  Use of planning from Day 1: Nil / Some / Substantial
  Number of words written: (word count recorded)
  Evidence of proofing and editing: Yes / No
  Following the instructions of the task by:
    Keeping to topic: Yes / Partially / No
    Fact and not fiction: Yes / Partially / No
  Completion of tasks: Barely started / Partially completed (began well) / Nearly completed (needed conclusion) / Completed (adequate) / Well completed (planning evident, expressive, grammatical, conclusion)

Part 3: Writing accuracy
  Spelling: All mistakes and corrections were recorded
  Punctuation: Poor (>20 mistakes, little or no use of basic punctuation) / Satisfactory (10-20 mistakes, basic understanding and moderate use) / Appropriate (<10 mistakes, understanding and use mostly evident)
  Evidence of proofing:
    Spelling: None / Some / Substantial
    Punctuation: None / Some / Substantial
    Sense: None / Some / Substantial
  Sentence structure:
    Simple sentence usage: Poor / Satisfactory / Appropriate
    Compound sentence usage: Poor / Satisfactory / Appropriate
    Use of non-sentences: Substantial / Some / Nil
    Length of sentences: Inappropriate / Satisfactory / Appropriate
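To show how the Table 3 criteria could be captured for analysis, the sketch below represents one script’s codes as a single record. It is hypothetical: the field names and example values are illustrative, not taken from the study’s actual coding sheet.

  # Hypothetical sketch of one script's codes for the Table 3 criteria.
  # Field names and example values are illustrative, not the study's
  # actual coding sheet.
  from dataclasses import dataclass

  @dataclass
  class ScriptCodes:
      # Part 1: Planning
      planning_evidence: str   # "none" / "some" / "substantial"
      planning_strategy: str   # "brainstorm" / "mind map" / "list" / "first draft" / "other"
      # Part 2: Writing
      planning_used: str       # "nil" / "some" / "substantial"
      words_written: int
      proofing_and_editing: bool
      kept_to_topic: str       # "yes" / "partially" / "no"
      fact_not_fiction: str    # "yes" / "partially" / "no"
      completion: str          # "barely started" ... "well completed"
      # Part 3: Writing accuracy
      spelling_mistakes: int   # all mistakes and corrections recorded
      punctuation: str         # "poor" / "satisfactory" / "appropriate"

  # Invented values, shown only to illustrate the shape of one record.
  example = ScriptCodes(
      planning_evidence="some",
      planning_strategy="brainstorm",
      planning_used="substantial",
      words_written=150,
      proofing_and_editing=True,
      kept_to_topic="yes",
      fact_not_fiction="partially",
      completion="nearly completed",
      spelling_mistakes=8,
      punctuation="satisfactory",
  )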
The coding categories were trialled with a sample of six scripts. A reliability check with a colleague after these initial six scripts were coded led to several changes and refinements to the coding categories before the remainder of the scripts were marked. Once coding of the full set began, further changes were made and a separate punctuation sheet was developed.
Cross-marking was also undertaken at the mid-point, with six scripts from year 4 and six from year 8. Two were selected from each ability group at each year level: one at random and one perceived as ‘difficult to code’. A colleague undertook the cross-marking, followed by discussion and agreement on a consensus coding.
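Purely as an illustration of the selection just described, the sketch below draws one random script and one script flagged as difficult to code from each ability group at each year level; the script list and its ‘difficult’ flag are assumptions for the example, not structures from the study.

  # Illustrative sketch of drawing the cross-marking sample: for each year
  # level and ability group, one script at random and one flagged as
  # 'difficult to code'. The scripts list and its fields are hypothetical.
  import random

  def cross_marking_sample(scripts):
      """scripts: list of dicts with 'year', 'group' and 'difficult' keys."""
      sample = []
      for year in (4, 8):
          for group in ("low", "medium", "high"):
              pool = [s for s in scripts if s["year"] == year and s["group"] == group]
              difficult = [s for s in pool if s["difficult"]]
              sample.append(random.choice(pool))       # one random selection
              sample.append(random.choice(difficult))  # one perceived as difficult
      return sample  # six scripts per year level, two per ability group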
-Data entry
Once the students’ writing was coded, two students from the University of Canterbury entered the data for computer analysis.