in the patient record. These criteria were
derived from the following sources:
• The Swedish law that stipulates that nursing
documentation should include the steps of
the nursing process as described above, the
signing and dating of each entry, a minimum
degree of legibility, and a nursing discharge
note
• The VIPS model, which includes the nursing
process, the use of specified keywords, the
correct classification of the keywords in
accordance with the user manual, and a
nursing discharge note
• Common hospital policies that prescribe
that each patient should have a named nurse
with the primary responsibility for the
patient’s nursing care and care plan docu-
mentation.
At this stage, 19 questions were formulated
to determine whether this information was
documented in the patient record. Each
question was constructed to reveal both the
quantity and the quality of the written content
on a rating scale. A manual was designed to
explain how to score each question.
The quality and quantity values were scored
on a rating scale from zero to three, zero indi-
cating “poor” and three indicating “very
good”. The quantity value is expected to meas-
ure whether or not there is a written note and,
if so, how much is written. For example, for the
patient’s nursing status, a certain minimum
number of nursing areas, represented by
keywords in the VIPS model and relevant to
surgical care, should be described for a patient
in a surgical ward. The quality value is used to
measure to what degree the written notes are
clear and concise, without superfluous text,
and include all relevant nursing information
with a correct use of language. If all notes fulfil
these criteria, a full score of three is given; if
more than 50% of the notes, but not all of
them, fulfil the criteria, a score of two is given;
if less than 50% fulfil the criteria, though some notes still do, a score of one is given; and if no notes fulfil the criteria, a score of zero is given. Furthermore, the instrument is
expected to measure the extent to which it is
possible to follow a patient problem through
the nursing process, that is, whether the problem is properly assessed and described in a diagnosis, with the expected outcome, planned and implemented interventions, and an evaluation. The instrument was named Cat-ch-Ing.
To test the usability of the instrument, including the comprehensibility of its questions and phrasing, five patient
records collected from one hospital ward were
independently reviewed by three nurses using
the new instrument. The instrument was
revised after each of the three audits. The revi-
sions concerned the clarification of definitions
in the manual and the deletion or rephrasing of
questions. Two questions were omitted, one
about the evaluation of nursing care, which was
already covered by other questions, and the
other about the use of keywords other than
those stipulated by the VIPS model. One ques-
tion about the discharge note was rephrased.
TESTING OF RELIABILITY AND VALIDITY
Inter-rater reliability was tested by comparing
different reviewers’ total Cat-ch-Ing scores
given to the same record. Twenty patient
records from each of three hospital wards at a
university hospital in Stockholm, Sweden were
used for this part of the development. The
records were selected from the registers of the
wards and were coded to protect patient iden-
tity. The specialty wards were surgery, neurol-
ogy, and rehabilitation. The criteria for the col-
lection of the records were that they should
concern the first 20 patients from each ward
who were admitted for five days or more during
a specific time period. The collected records
were audited three times, each time by a different reviewer. The auditors were nurses knowledgeable and experienced in nursing documentation and in the use of the VIPS model.
Before the audit, a calibration process was undertaken, in which the use of the instrument was taught to, and discussed with, the reviewers.
The inter-rater reliability was statistically
investigated by calculating the inter-rater reliability coefficient [37] between raters’ total scores
of each record. Additionally, score differences
between reviewers, on each question in the
same patient record, were compared and
calculated as percentages of agreement.
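The percentage-of-agreement calculation can be sketched as follows; the function name and data layout are illustrative assumptions, not part of the original study.

```python
def percent_agreement(scores_a, scores_b):
    """Percentage of items on which two reviewers gave identical scores.

    scores_a, scores_b: equal-length lists with one score per item,
    e.g. one entry per question in the same patient record.
    """
    if len(scores_a) != len(scores_b):
        raise ValueError("reviewers must score the same items")
    agreements = sum(a == b for a, b in zip(scores_a, scores_b))
    return 100.0 * agreements / len(scores_a)
```

For instance, two reviewers agreeing on three of four questions yields 75% agreement.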
The content-validity ratio was calculated as a
means of quantifying the degree of consensus
in a panel of 10 experts, who made judgments
about the instrument’s content validity. Each
expert was asked to judge whether or not the
10 questions in the instrument, meant to
measure the nursing process, were indeed
essential in measuring the parts of the nursing
process documented in a patient record. The
method, developed by Lawshe [38], is described by the formula:

CVR = (ne − N/2) / (N/2)

where CVR is the content-validity ratio, ne is the number of panellists indicating “essential” about a specific question and N is the total number of panellists.
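Lawshe's ratio can be computed directly from this definition; the function name below is illustrative. With a panel of 10, a question rated "essential" by 8 panellists gives CVR = (8 − 5)/5 = 0.6.

```python
def content_validity_ratio(n_essential: int, n_panellists: int) -> float:
    """Lawshe's CVR = (ne - N/2) / (N/2).

    Ranges from -1 (no panellist rates the question essential)
    to +1 (all do); 0 means exactly half the panel does.
    """
    half = n_panellists / 2
    return (n_essential - half) / half
```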
The criterion-related validity was estimated
by the degree of correlation between the score
of the Cat-ch-Ing instrument and the score of
the audit instrument developed by Ehnfors [9]
and used in previous research. The Ehnfors
instrument was constructed to measure
whether each part of the nursing process (and
thereby also the VIPS model) was documented
for each nursing problem identified in the
patient record. The nursing process was the
chosen criterion in both the Ehnfors and the
Cat-ch-Ing instrument. The Ehnfors instru-
ment has a score from zero to five, giving one
point for each documented part of the nursing
process: assessment, goal and diagnosis,
planned intervention, implemented interven-
tion, and a discharge note, concerning each
specified nursing problem. The Ehnfors instru-
ment scores mainly the quantity; the quality
aspect is only present for evaluating the flow of
Development of an audit instrument for nursing care 9