While doing research for my sabbatical project I came across
an excellent summary
article on information literacy competency assessment by Lisa
O'Connor and Julie Gedeon from Kent State University. They
had presented at the last ACRL conference. Assessment guru Debra Gilchrist
had also mentioned that I should contact them. I was pleased to discover
they were conducting a working group in April for their SAILS project
and I emailed to ask if I could attend. They were happy to include
me and to have a broader geographic representation in the participant
group. There were 20 participants at the SAILS Working Group meeting,
from 7 states and 13 institutions, including a LibQUAL+ program
specialist from ARL.
The most recent activities of the SAILS project are focused
on developing a standardized, nationally normed multiple-choice exam to
assess levels of information literacy competency. Their primary need is
to demonstrate that information literacy learning occurs throughout the
college years. They hope to measure student learning to help with financial
and administrative support for information literacy instruction particularly
on their campus. Their goal is to test incoming freshman, then in 4 years
test them as seniors to measure growth. The exam may also be used to assess
General/Liberal Education, graduating seniors and for other purposes.
They believe the quality of the test will come from administering it to many
people over a long period of time. It is not intended to be used to meet a
requirement: their institution has no information literacy requirement, and
they do not expect to be able to get one.
On Friday we were introduced to the SAILS project, given
an overview of what had been done to date, and treated to a lecture on item
response theory and computerized adaptive testing by Julie Gedeon, who is
just finishing her Ph.D. in testing. In classical test theory a student's
score is simply the number of correct responses. In item response theory
the probability of a correct response is modeled as a function of the
student's ability and the item's difficulty. The SAILS exam will be
computerized and adaptive, like the SAT: once a student answers a question
of a certain difficulty, the exam immediately adjusts the difficulty of the
next question based on that response.
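To make the contrast concrete, here is a minimal sketch of how an IRT-based
adaptive test works. The Rasch (one-parameter logistic) model, the
step-halving ability update, and the toy item pool below are my own
illustrative assumptions for this sketch, not the actual SAILS model,
algorithm, or items.

    # Minimal IRT/CAT sketch (assumed Rasch model, not the SAILS instrument).
    import math
    import random

    def p_correct(ability, difficulty):
        # Rasch model: probability that a student of the given ability
        # answers an item of the given difficulty correctly.
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    def next_item(pool, estimate):
        # The most informative item under the Rasch model is the one
        # whose difficulty is closest to the current ability estimate.
        return min(pool, key=lambda d: abs(d - estimate))

    def adaptive_test(true_ability, pool, n_items=10):
        # Toy adaptive loop: raise the estimate after a correct answer,
        # lower it after an incorrect one, halving the step each time.
        # (Operational CAT systems use maximum-likelihood or Bayesian
        # ability estimation instead of this simple update.)
        pool = list(pool)
        estimate, step = 0.0, 1.0
        for _ in range(n_items):
            item = next_item(pool, estimate)
            pool.remove(item)
            correct = random.random() < p_correct(true_ability, item)
            estimate += step if correct else -step
            step = max(step / 2, 0.1)
        return estimate

    # Example: simulate a student with ability 1.2 on a pool of items
    # with difficulties from -3 to +3 in half-point steps.
    difficulties = [d / 2.0 for d in range(-6, 7)]
    print(adaptive_test(1.2, difficulties))

Each simulated answer moves the difficulty of the next question toward the
student's true ability, which is exactly the "ask questions at the next
difficulty level" behavior described above.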
Lisa and Julie shared their sample 25-question multiple-choice exam, which
has been given to hundreds of students and is being beta tested in Oregon.
One of their next goals is to build the exam question database and to
purchase expensive testing software; they will be seeking additional grant
funding for this.
On Friday afternoon the initial plan was for the group to
determine which of the ACRL performance indicators and outcomes were the
most difficult for students to gain competency in. I suggested that the
better question was which outcomes were the most important to assess. The
group agreed, and we broke into small groups to discuss which outcomes we
felt were the most important. There was remarkable consensus when we came
back together in the larger group.
On Saturday we began by discussing the various frameworks
for information-seeking behaviors and conceptions, including the ACRL
standards, Information Power from the school library media specialists,
Carol Kuhlthau's model, and Christine Bruce's "Seven Faces of Information
Literacy." I also recommended that the group take a look at the new
Australian standards. We discussed the merits and problems of each
framework as they pertained to competency testing. Some participants feel
a multiple-choice question can be developed to test every standard,
outcome, or competency. I'm still not sure it's the best method of testing
competencies, but I do think it can provide some very useful shared data.
Overall it was a very successful meeting, and everyone who
attended learned something new. Stewart Library's Information Literacy
Competency Exam is a better instrument because of this experience.