UCLA Library Assessment Guide

"Assessment is used to make decisions that guide our future, not to validate decisions of the past."

Data collection

Your assessment question drives the methodology you choose for data collection, and conducting a literature review is one of the best ways to gain insight into the benefits and drawbacks of different methods used to answer similar questions. 

Selecting a tool that is valid and reliable

If you intend to use or create a tool for data collection (survey, rubric, etc.) as part of your assessment project, you must consider whether that tool is valid and reliable. A tool is considered valid when it measures what it purports to measure. A reliable tool gives the same value when the same thing is measured multiple times, including by different users. 

Conducting an assessment with a tool that is not valid and reliable may produce misleading results. Tools created internally need to be tested for their validity and reliability. It is often preferable to find a tool that has already been shown to be valid and reliable, and conducting a literature review is probably the best way to find such a tool. 

Sampling considerations

Unless you plan to include the entire population of interest in your study (e.g., every student at your university or every book in your collection), you will need to select a sample. Some common types of sample selection are listed here.

  • Random sample: Random sampling is the gold standard for research, but it is often difficult to achieve. In a random sample, each member of the population of interest for the assessment must be identified and have an equal chance of being selected. Using a random number generator is one way to achieve this, as is drawing names out of a hat. Even if your initial sample is randomly selected, there is a good chance that some of those selected will choose not to participate, making the sample nonrandom. 
     
  • Stratified random sample: A stratified random sample is representative of the population about certain variables that have been deemed important. If you wanted to ensure that each college at your university was proportionally represented in a survey, you would divide the whole population into subgroups (colleges) and randomly sample a percentage of each subgroup for survey distribution. 
     
  • Purposive sample: In purposive sampling, the researchers handpick participants based on characteristics that make them particularly good sources for the research at hand. For example, a librarian assessing the depth of the library collection in applied linguistics might specifically select faculty members known to research in that area and interview them about their experiences. 
     
  • Convenience sample: Also known as accidental sampling, a convenience sample consists of volunteers. While very common, these samples have a bias in favor of participants who are more likely to volunteer (to take a survey, be in a focus group, etc.), and it will behoove the researcher to consider how this might affect their results. 
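The difference between simple and stratified random sampling can be sketched in a few lines of code. The example below is purely illustrative: the population, college names, and sampling fraction are all invented for the sketch, and a real project would sample from actual enrollment records.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: students tagged with their college
population = [
    {"id": i, "college": college}
    for i, college in enumerate(
        ["Engineering"] * 500 + ["Arts"] * 300 + ["Law"] * 200
    )
]

# Simple random sample: every student has an equal chance of selection,
# but the college mix in the sample can drift from the population's mix
simple = random.sample(population, k=100)

# Stratified random sample: group students by college (the strata),
# then randomly sample 10% of each group so the sample mirrors the
# population's proportions exactly
strata = {}
for student in population:
    strata.setdefault(student["college"], []).append(student)

stratified = []
for members in strata.values():
    stratified.extend(random.sample(members, k=len(members) // 10))

print(len(simple), len(stratified))  # both samples contain 100 students
```

Note that the stratified sample is guaranteed to contain 50 Engineering, 30 Arts, and 20 Law students, while the simple random sample will only approximate those proportions.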

Your sample size may be dictated by practical considerations such as time and money, but keep in mind that for quantitative assessment, it may also restrict the statistical analyses you can conduct with your results. For more information about sample sizes, see (link).
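One widely used rule of thumb for choosing a survey sample size is Cochran's formula with a finite population correction. The sketch below is a generic illustration, not a method prescribed by this guide; the population size in the usage example is invented.

```python
import math

def sample_size_for_proportion(population_size, margin_of_error=0.05,
                               z=1.96, p=0.5):
    """Cochran's sample-size formula with finite population correction.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about the population proportion, since it
    maximizes p * (1 - p) and therefore the required sample size.
    """
    # Required sample size for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Correct downward for a finite population
    n = n0 / (1 + (n0 - 1) / population_size)
    return math.ceil(n)

# e.g., surveying a hypothetical population of 2,000 students
# at a ±5% margin of error and 95% confidence
print(sample_size_for_proportion(2000))  # → 323
```

Remember that this gives the number of completed responses needed; because response rates are rarely 100%, you would typically invite a larger initial sample.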

Human Subjects

If your assessment involves human subjects, you should consult the institutional review board at your campus to determine whether you must submit a human subjects application for review. 

Data analysis

While the specifics of analyzing qualitative and quantitative data are beyond the scope of this toolkit, your campus may have valuable resources. It is recommended that you contact your campus office of institutional research when designing your assessment, so that you can develop your data analysis plan before you begin collecting data and ensure that the data you collect fits your intended analyses.

Questions to consider:

  1. What type of project will work best to gather the information you need?
  2. Is there existing data that could answer any part of your assessment questions?
  3. Do you need an IRB (Institutional Review Board) exemption? Check with your campus IRB.
  4. What is your sample size for this assessment, and how many participants will you realistically be able to recruit?
  5. How will you identify participants?
  6. Do you have a means to access student data? At UCLA, the Registrar's Service Request (RSR) allows authorized staff to request student data.
  7. What will you do with the data after it's collected?
  8. Which of your departmental outcome(s) does this program, service, or aspect relate to, and how?
  9. What must you know/learn about it to more effectively achieve its related departmental outcome(s)?