Late June 2016's Teaching Resistance is written by Brian Moss, who
teaches at a public junior high school in San Francisco. From his
personal teaching experience and careful research, he has reached
some strong conclusions about the relationships between perceptions
of student literacy, the infamous “digital divide” stemming from
unequal access to resources, and overall inequity in education that
can severely impact student success. Brian can be reached directly at
hurricanewe@gmail.com.
While it may be a philosophical foundation of the country, the
existence of true and thorough equal opportunity in the United States
is a mere myth. It is used as a fabricated defense of greed and,
ironically, of our greatest divides. Belief in it perpetuates false
hope: bread and circuses. In a society that is built around and
encourages extreme competition and economic individualism, inequality
is inherent. Children are born into our world as subjects of their
parents’ circumstance, with the odds inevitably stacked to varying
degrees for or against them.
Sadly, public schools often further exacerbate these differences.
Whether it’s a matter of resources, the quality of teachers,
technology, community support, or familial/guardian support, all too
often, the education system aids in keeping obstacles in place for
the unlucky, disadvantaged, and marginalized, while making sure the
upper hand stays with those who were born with it. As educators, we
see an array of these disparities on a daily basis and must strive to
narrow or eliminate them, no matter how difficult or insurmountable
the task may seem.
The value and power of technology in the workplace and society at
large is undeniable, and its place in schools has created new equity
issues and perpetuated preexisting polarizations. It has become
apparent that there are vast differences in both students’ and
schools’ levels of access to and uses of computers. This
phenomenon, in schools and society at large, is often referred to as
the digital divide. The following is an excerpt and summary of a
field research project I conducted during the 2014/’15 school year:
Following a growing trend toward administering tests by computer, the
newly implemented Common Core Standards involve computerized testing
of students. During the Language Arts test, students are required to
both navigate a basic operating system and type timed writing
responses. Thus far, public research on the new tests, their effects,
and the factors that influence student performance has been minimal.
For students who lack prior experience with computers, whether at
home or in their educational histories, low test scores may not
indicate a lack of comprehension of the curriculum they have been
taught. One potential factor affecting scores is that, given limited
prior experience with technology and/or limited access to it,
students simply do not have the skills required to convey their
knowledge in the medium of computerized assessment. Since the
inception of computerized testing, both educators and social
scientists have shared concerns regarding differences between
paper-and-pencil and computerized administration modes. Essentially,
a test that is designed to strictly measure content area mastery in
English Language Arts could simultaneously be unfairly measuring
students’ basic computing abilities.
As many students with limited or no access to computers come from
socioeconomically challenged backgrounds, this issue could
additionally perpetuate preexisting gaps in public education. In
Technology and Equity in Schooling: Deconstructing the Digital
Divide, Warschauer, Knobel, & Stone (2004) address this
threat. They suggest that there are “a host of complex factors that
shape technology use in ways that serve to exacerbate existing
education inequalities.” Perhaps computer-administered testing is
an example of this phenomenon. Technology in schools with
low-socioeconomic-status majorities is often not functioning
properly, not up-to-date, or not used in educationally enriching ways
(Warschauer, Knobel, & Stone, 2004). Conversely, students
attending schools with a high-socioeconomic-status majority had a
significantly higher rate of access to both computers and the
internet at home (Warschauer, Knobel, & Stone, 2004).
Furthermore, in a country with a vast population of residents for
whom English is not their native language, studies that draw
correlations between English fluency and computer literacy present
the possibility that computerized assessment could increase divides
in education along ethnic and cultural lines (Ono & Zavodny,
2008).
My research situates itself among studies that have documented the
presence of the digital divide. The digital divide, as previously
mentioned, is defined as an unequal distribution of technology based
on varying factors such as socioeconomic status, ethnicity,
geographic location, and language ability. The growing trend of
moving from paper-and-pencil assessment to computer-administered
modes presents a new set of concerns. Considering that in public
education, standardized test scores weigh heavily on the judgment and
direction of students, teachers, administrators, schools, and entire
districts, ensuring fairness and equity in terms of assessment and
analysis of data plays a crucial role in narrowing the achievement
gap.
My study sought to answer the following question: What is the
relationship, if any, between sixth grade students’ access to and
prior experience with computers and their short written response
scores on computer-administered practice test items? To date,
relevant research has given students little voice. It is my belief
that in order to truly understand the topic, they must be heard.
Thus, my study also delved into the
following subquestion: What are students’ thoughts and opinions
regarding computerized testing and its relationship to their own
experiences with technology?
The study commenced by giving sixty 6th grade English Language Arts
students a questionnaire. The survey included seven questions with
four-point answer scales regarding access to computers at home and in
school, extent and length of experience with computers, and frequency
and type of computer use. Following the survey, students were sorted
into groups of high, medium, and low prior computer experience based
on their responses. Thirty-five participants were female, and
twenty-five were male. The final count of sixty reflected the
exclusion of students who failed to provide individual and/or
parental/guardian consent. The sample groups consisted of fifteen
students in the low survey range, twenty-nine in the medium survey
range, and sixteen in the high survey range. The survey
administration process, gaining consent, and grouping students took
roughly one month. No students were made aware of their groupings.
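The study did not publish its exact survey cut-offs, so purely as an
illustration, here is a minimal sketch (in Python) of how such a
grouping step could work, assuming each student’s seven four-point
answers are summed and split at hypothetical thresholds:

    # Minimal sketch of the survey-grouping step. Seven questions on a
    # four-point scale (1-4) give totals from 7 to 28. The thresholds
    # below are hypothetical; the study does not publish its cut-offs.
    def group_student(answers):
        """Assign a student to a prior-computer-experience group."""
        assert len(answers) == 7 and all(1 <= a <= 4 for a in answers)
        total = sum(answers)
        if total <= 14:      # hypothetical low/medium boundary
            return "low"
        if total <= 21:      # hypothetical medium/high boundary
            return "medium"
        return "high"

    # Example: a student reporting mostly limited access and use.
    print(group_student([1, 2, 1, 2, 2, 1, 2]))  # -> "low"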
Once the sample had been obtained, students were assigned numbers to
ensure confidentiality and then randomly selected to complete either
a paper or computer-administered version of one question from the
Common Core 6th Grade English Language Arts Practice Test. The
question was identical on both tests and asked participants to make
an inference in paragraph form, with textual evidence, based on a
short reading sample. Both the paper and computer tests were
administered in a randomized order. After a two-week resting period,
the group that initially took the paper version of the test was given
the computer version and vice versa.
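For readers unfamiliar with this counterbalanced (crossover) design,
the assignment logic can be sketched as follows; the even split and
participant IDs here are my own illustrative assumptions, not details
from the study:

    import random

    # Sketch of a counterbalanced (crossover) assignment: half the sample
    # takes the paper version first and half the computer version first;
    # after the two-week resting period, each group switches modes.
    students = list(range(1, 61))    # sixty anonymized participant numbers
    random.shuffle(students)
    paper_first = students[:30]      # round 1: paper,    round 2: computer
    computer_first = students[30:]   # round 1: computer, round 2: paper

    schedule = {sid: ("paper", "computer") for sid in paper_first}
    schedule.update({sid: ("computer", "paper") for sid in computer_first})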
Six students were then selected for interviews. They were selected
based on the principle of ensuring an equitable sample across gender,
ethnicity, and computer grouping. Students were interviewed
individually for roughly ten minutes after school in my classroom.
The interviews sought to gather information regarding their feelings
about and experiences with prior technological use and education,
both at home and in school, as well as with the computer and
paper-and-pencil assessments administered in the study.
The numerical data gathered supported the study’s hypothesis.
Participants in the high computer skills and exposure group showed
less of a difference between testing modes than those in the low and
medium groups. Even so, a notable share of participants in the high
group (19%, as detailed below) scored higher on the paper test. Data
collected from the medium group showed a substantial increase,
relative to the high group, in the share of participants scoring
lower on the computer-administered test question. The low group, as
expected, contained a large number of participants who scored
substantially lower on the computerized version of the assessment
tool. Additionally, score differences favoring the paper assessment
showed greater point variances in the low group.
In a localized context, the findings of the study showcase a highly
problematic disparity between paper-and-pencil and computerized
assessments based on students’ prior exposure to and experience with
computers. The steady increase in higher paper scores within the
medium and low groups makes clear that, within the sample, computer
access and experience affect consistency in scores between
paper-and-pencil and computer-administered testing modes.
Furthermore, given that the high group also had 19% of its
participants scoring higher on the paper test, with none scoring
higher on the computer test, the findings raise additional questions
about computer testing in general. This is further supported by the
fact that, when the entire sample is examined without grouping, the
majority was equally split at 46% each for no score difference and
lower computer scores, with only 8% scoring higher on the computer
test. In light of the findings of existing research, and with this
study’s percentages so heavily weighted in favor of paper-and-pencil
testing, a strong case emerges that computerized testing is flawed.
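To make the arithmetic behind such whole-sample percentages concrete,
here is a small sketch; the per-student score pairs are fabricated
for illustration only, since the study’s raw scores are not
published:

    # Sketch of deriving whole-sample percentages from per-student
    # (paper_score, computer_score) pairs. The counts below are fabricated
    # purely to show the arithmetic, not the study's actual raw data.
    pairs = [(3, 3)] * 28 + [(3, 2)] * 27 + [(2, 3)] * 5  # 60 students

    n = len(pairs)
    same = sum(1 for p, c in pairs if p == c)
    paper_higher = sum(1 for p, c in pairs if p > c)
    computer_higher = sum(1 for p, c in pairs if p < c)

    print(f"no difference:   {same / n:.0%}")             # 47%
    print(f"paper higher:    {paper_higher / n:.0%}")     # 45%
    print(f"computer higher: {computer_higher / n:.0%}")  # 8%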
Additionally, based on the follow-up interviews, it is apparent that
many students in the sample who come from backgrounds lacking in
computer familiarity and education feel they have not received
adequate computer instruction; they experience increased anxiety and
lack confidence when asked to carry out academic tasks, specifically
testing, on computers. Furthermore, many of these students may go
into computer tests believing that they could score higher on a paper
version. Even students who have had a fair amount of exposure to
technology, in terms of both access and education, often feel
unprepared when it comes to computer testing. Much of this anxiety,
lack of confidence, and sense of being underprepared can be
attributed to weak typing skills and to having to type a test within
an allocated time frame. The findings showcased a broad desire among
participants to increase their computer skills, specifically typing,
regardless of prior exposure and education. The data suggest that
exposure to computers, computer education, and computer access need
to be increased, not only for students who lack experience and
education, but also for those who have had them.
Ultimately, on account of the relationship between computer
experience and access and score differences, and students’ recurring
expressions of anxiety and lack of confidence when it comes to
computerized test-taking, the validity of Common Core computerized
assessments comes into question again: Are these tests simply
measuring the content areas they claim to, or are they additionally
measuring students’ ability to effectively use technology, thereby
placing students who have had more exposure, access, and education at
an advantage? Are these tests equitable?
In order to narrow the digital divide and foster equity in
technological education, more computer access and instruction need to
be provided in schools where the student population comes from
backgrounds lacking in access and prior education. Additionally,
perhaps even for students who feel as though they have had ample
technological access and education, more is still needed. Without it,
technological disparity is inevitable and students may continue to
experience feelings of anxiety and lack confidence when asked to take
tests on computers. They may also feel and be unprepared for the
modern world’s workplaces and educational institutions that rely so
heavily on technological proficiency.
In order to meet the technological demands set by Common Core
computerized testing, typing and fundamental computer navigation
skills must be taught. If we are going to ask students to take
computer-administered tests that to some extent factor in computer
proficiency, we must provide them with the skills needed to take them
or allow for an option in the form of a paper test. If not, some
students may be facing unjust obstacles. Given the correlations of
primary language, ethnicity, and class with computer access, we must
additionally consider what populations are being marginalized by
these obstacles and what the resulting effects could be. Furthermore,
results suggest that even those with experience could also be judged
unfairly, as computerized testing may still be a rather unfamiliar
process for many students.
Educators, administrators, policy-makers, and students face countless
and immense challenges in the push to overcome the obstacles that
lead to and perpetuate polarization and imbalance. Technological
access, education, and assessment are now situated at the forefront
of these challenges. Common Core computerized assessment, a tool by
which all students, teachers, districts, and states are evaluated,
must be diligently studied and scrutinized to ensure that students
have a fair chance to express their knowledge without being judged
according to extraneous factors. --Brian Moss