The Intercollegiate Studies Institute again commissioned Prof. Ken Dautrich and Mr. Chris Barnes of the University of Connecticut’s Department of Public Policy (UConnDPP) to conduct this field study of undergraduate civic literacy. A representative sample of 25 schools was randomly selected from all four-year colleges and universities, and an over-sample of 25 elite schools was also chosen. Barnes oversaw the sampling and data collection. Heather Mitchell assisted in analyzing the findings and drafting the report. ISI’s Senior Research Fellow, Dr. Gary Scott, independently corroborated the statistical analyses in addition to testing hypotheses using regression analysis.
Dautrich and Barnes are internationally recognized for their public opinion research with projects ranging from international studies to local community surveys.
The survey instrument was designed in 2004 by a team of specialists in each applicable field of study across the nation who were charged with identifying the top 50 themes from their fields related to American ordered liberty. These themes were converted into multiple-choice questions and then pared down to the 60 most applicable questions through student focus groups and further scholarly review. In selecting the final 60 questions, the specialists sought to capture the essential facts and concepts of history, political science, and economics that contribute most to civic knowledge.
An optimum mix of elementary and advanced questions kept the test sensitive to potential gains in knowledge, whatever a student's baseline civic knowledge, and mitigated any ceiling effect. The survey is refined every year through panel review and examination of question-validity measures.
In the pilot administration of 2004, each of the 60 questions was found to be statistically valid. Psychometricians rate a question valid insofar as students’ likelihood of correctly answering it increases as their overall score increases.
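The validity criterion described above corresponds to a standard psychometric discrimination index, the point-biserial (item-total) correlation. The sketch below illustrates the idea with a small set of hypothetical response data; it is not the study's actual dataset or analysis code.

```python
import math

def point_biserial(item_correct, total_scores):
    """Correlation between a 0/1 item score and the overall test score.

    A clearly positive value means students with higher overall scores
    are more likely to answer the item correctly -- the validity
    criterion described in the text.
    """
    n = len(item_correct)
    mean_total = sum(total_scores) / n
    p = sum(item_correct) / n  # proportion answering the item correctly
    # Mean total score among those who answered right vs. wrong
    mean_right = sum(t for c, t in zip(item_correct, total_scores) if c) / (p * n)
    mean_wrong = sum(t for c, t in zip(item_correct, total_scores) if not c) / ((1 - p) * n)
    sd_total = math.sqrt(sum((t - mean_total) ** 2 for t in total_scores) / n)
    return (mean_right - mean_wrong) / sd_total * math.sqrt(p * (1 - p))

# Hypothetical: six students; the item is answered correctly mostly by
# students with high overall scores, so it discriminates well.
item = [1, 1, 1, 0, 0, 0]
totals = [55, 50, 48, 30, 28, 25]
r = point_biserial(item, totals)  # strongly positive => valid item
```

An item whose correlation hovered near zero, or was negative, would fail the criterion and be a candidate for replacement in the annual refinement.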
In addition to testing the reliability and validity of each survey item, the pilot administration also covered:
- The most viable method of data collection, including the feasibility of implementation on all campuses.
- The optimal sample size of students at each school.
- The extent to which student answers vary by method of data collection.
The pilot survey tested four separate methodologies at a selected sample of 22 colleges and universities nationwide: phone interviewing, internet surveying, test-style surveying, and in-person surveying. In-person, pen-and-paper administration was deemed the best method: phone interviewing proved logistically impractical, and both internet and test-style surveying yielded few completed interviews.
2005 Survey Implementation
In 2005, the first full implementation of the study was undertaken using the in-person, pen-and-paper methodology. Surveys were conducted of 7,405 freshmen and 6,689 seniors, for a total of 14,094 students across the 50 colleges and universities. The robust quantity of data enabled in-depth analysis and validation procedures. Each of the 60 questions remained reliable and statistically valid.
The institutional sample for this project for the 25 representative schools was based on information from the National Center for Education Statistics’ (NCES) Integrated Postsecondary Education Data System (IPEDS). “IPEDS,” according to NCES, “is a single, comprehensive system designed to encompass all institutions and educational organizations whose primary purpose is to provide postsecondary education.” The sample was stratified to proportionally represent all four-year baccalaureate-granting public and private schools of various sizes in the continental United States, including Washington, D.C. This excludes two-year community colleges and satellite campuses not offering full bachelor’s degrees. Proportionately oversampling those colleges with higher undergraduate enrollment makes the sample self-weighting and thus representative of students at America’s baccalaureate institutions. The following is a table of the total population of students at four-year colleges and universities:
| Population | Number of Schools | Percentage |
|---|---|---|
| 5,000 to <10,000 students | 201 | 14.2% |
| 10,000 or more students | 201 | 14.2% |

Source: National Center for Education Statistics’ Integrated Postsecondary Education Data System
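Oversampling schools in proportion to enrollment, as described above, is probability-proportional-to-size (PPS) selection: large schools are drawn more often, so that each individual student has roughly the same chance of inclusion and the sample is self-weighting. A minimal sketch of the idea, using hypothetical enrollment figures rather than the study's actual frame:

```python
import random

def pps_sample(enrollments, n_draws, seed=0):
    """Draw schools with probability proportional to enrollment
    (with replacement, for simplicity).  If the same number of students
    is then interviewed at every selected school, each student in the
    population has roughly the same selection probability -- a
    self-weighting design."""
    rng = random.Random(seed)
    names = list(enrollments)
    sizes = [enrollments[s] for s in names]
    return rng.choices(names, weights=sizes, k=n_draws)

# Hypothetical enrollments: school A should be drawn far more often
schools = {"A": 20000, "B": 10000, "C": 5000, "D": 1400}
draws = pps_sample(schools, n_draws=1000)
share_A = draws.count("A") / 1000  # near A's enrollment share, 20000/36400
```

In practice a production frame would use systematic PPS without replacement, but the proportionality principle is the same.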
The additional over-sample of 25 “elite” schools reflects those highly selective colleges of special interest to the Intercollegiate Studies Institute. These elite schools were selected on a series of criteria: U.S. News & World Report rankings; highly selective enrollment, to add variance in freshman civic achievement; status as a flagship state university, for regional interest; religious affiliation; and other criteria.
Second-stage sampling and respondent selection was accomplished through the following steps. After the selection of the individual schools, operations staff collected demographics, population statistics, and geographic maps for each selected school. Dormitory and other residential student data, as well as classroom buildings and other data germane to establishing traffic-flow estimates, were assembled. A list of preliminary sites was selected based on these estimates. Regional survey supervisors verified site suitability upon reaching each campus. They established flow at selected locations, and they verified with the students recruited to work on the study that key traffic-flow areas were not inadvertently omitted. Where appropriate, off-campus sites were added to the list of intercept locations. Following the verification, a final selection of sites was determined, and staff were assigned to specific times and locations and given a target number of completes for each intercept location, based on the flow data gathered. Different times and days were used at each intercept location, based on traffic-flow counts. Data collection ranged from three to eight days per school, depending on the size and complexity of the school.
The sampling ratio at individual sites varied by traffic volume and school size to accommodate target completes. School enrollment ranged from 1,400 total students to more than 20,000. A ratio was established for respondent selection, and every “nth” person was verbally asked the screening question “Are you a student at [college]?” with a verbal follow-up of “Are you a freshman or senior?” The questionnaire repeated the freshman/senior screening question. Refusals were replaced with the next passerby. Data collection continued at each school until the total number of completes for the particular school was collected.
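The every-“nth”-passerby rule is systematic sampling, and the interval can be derived from the site's estimated traffic flow and its target number of completes. The sketch below is purely illustrative: the `expected_yield` figure, which folds together refusals and screen-outs (non-students, non-freshmen/seniors), is an assumed value, as the report does not publish the actual field ratios.

```python
def sampling_interval(est_hourly_flow, hours_on_site, target_completes,
                      expected_yield=0.25):
    """Choose 'n' for every-nth-person selection at an intercept site.

    expected_yield is the assumed fraction of approaches that end in a
    completed survey (illustrative, not the study's actual figure).
    """
    passersby = est_hourly_flow * hours_on_site
    approaches_needed = target_completes / expected_yield
    return max(1, int(passersby // approaches_needed))

# Hypothetical site: 300 people/hour for 4 hours, 30 completes wanted
n = sampling_interval(300, 4, 30)  # approach every nth passerby
```

A busier site or a smaller target yields a larger interval, which is consistent with the report's statement that ratios varied with traffic volume and school size.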
The systematic, multi-level verification process was further enhanced this year through additional Regional Manager training, added verifications, and the establishment of an in-house quality-checking procedure. Error rates were well within acceptable norms and provide assurance of high-quality data. The numerous, rigorous quality-control measures are detailed below:
1. Regional Manager Field Training
As the Regional Managers are the first stage of quality control, training was held for a longer period of time to share lessons learned from previous implementations and to stress the importance of their proactive quality-control measures. In addition to intensive classroom training, this year’s Regional Managers had an additional week of in-field training with the Project Director, Heather Mitchell.
2. Regional Manager Verification
Every survey retrieved by a Regional Manager was hand-checked so that questionnaires with issues, such as incompletes, incorrect class status, or bogus data entries, would not be included in the final submissions. The Regional Managers worked side by side with up to five students at each school to ensure proper collection methods. If the Regional Manager had any reason to doubt the validity of a student’s work, none of that student’s submissions was included in the dataset.
3. In-Person Monitoring

Each Regional Manager was visited by experienced personnel at least twice, with at least one visit being without advance notice. Although the visits involved assisting in surveying, their primary purpose was an inspection to ensure proper data-collection methods and student-employee supervisory practices. When not being monitored in person, the Regional Managers stayed in continuous contact with the main office via both cell phone calls and e-mails at all times of the day and night.
4. Staff Verification
Office staff painstakingly administered a second round of hand-checking all submitted surveys this year to further ensure that only legitimate surveys made it into the dataset. The staff concentrated on more complicated response patterns and other such issues when re-checking surveys. This process removed a few more surveys from most schools.
5. E-mail Verification
Because of their high efficacy, more e-mail verifications were performed this year than last. A minimum of 5% of the completed surveys collected at each school was randomly selected to receive an e-mail confirming the respondent’s eligibility. The verification e-mails asked whether the student was a freshman or senior at the particular school and whether he or she remembered taking the survey. The majority of schools responded at a rate of 85% or higher. E-mail blocking systems at a few schools made them particularly difficult to reach.
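The 5%-minimum random selection for e-mail verification can be sketched as follows; the respondent records are hypothetical, and rounding up so that even the smallest schools get at least one check is an assumption, not a documented detail of the study.

```python
import math
import random

def verification_sample(survey_ids, rate=0.05, seed=0):
    """Randomly select at least `rate` of a school's completed surveys
    for e-mail verification.  Rounding up (an assumption here) ensures
    small schools still get a minimum of one check."""
    k = max(1, math.ceil(len(survey_ids) * rate))
    return random.Random(seed).sample(survey_ids, k)

ids = list(range(1, 281))        # hypothetical: 280 completes at one school
to_verify = verification_sample(ids)  # 14 surveys, i.e., 5% of 280
```

Sampling without replacement (`random.sample`) guarantees no respondent is asked to verify twice.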
6. Prize-Winner Verification
The prize-winner verification rate is 100%. Depending on school size, up to two freshmen and two seniors at each school who entered the sweepstakes were selected to receive either an Apple iPod Shuffle or a Kodak EasyShare Digital Camera. Fifty iPod Shuffles and 77 Kodak EasyShare Digital Cameras were successfully shipped to eligible students. To prove eligibility, the students had to send documentation of the class and school to which they belonged during the first semester of the 2006-2007 academic year.
7. Data-Cleaning Procedures
The dataset was further scrutinized for irregularities using statistical diagnostics. Additional observations were removed from the dataset if they were incomplete, listed an incorrect class status, or exhibited other such problems.
Data for the representative schools were weighted to account for variance in enrollment and governance (public/private). Enrollment data were gathered from individual schools as well as from the NCES’s restricted Peer Analysis System (IPEDS).
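Weighting to match population enrollment and governance shares amounts to a simple post-stratification: each cell's weight is its population share divided by its sample share. A sketch with hypothetical proportions (the actual cell definitions and shares are not published in this report):

```python
def poststrat_weights(pop_share, sample_share):
    """Weight for each weighting cell (e.g., a governance/size stratum)
    = the cell's share of the student population / its share of
    completed interviews."""
    return {cell: pop_share[cell] / sample_share[cell]
            for cell in pop_share}

# Hypothetical cells: public vs. private four-year schools
pop = {"public": 0.70, "private": 0.30}   # share of all students
samp = {"public": 0.60, "private": 0.40}  # share of interviews
w = poststrat_weights(pop, samp)
# Public respondents are weighted up (7/6), private down (0.75),
# so weighted totals reproduce the population shares.
```

Applying these weights to respondent records makes weighted estimates representative of the national population of students at four-year schools.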
To further enrich the analysis, supplementary variables were created from publicly available institutional characteristics. Sources included the NCES (http://nces.ed.gov/) and The College Board (both in hard copy and at www.collegeboard.com).
University endowment figures were for fiscal 2006 and defined as the Market Value of Endowment Assets. Most were obtained from the 2006 NACUBO Endowment Study, National Association of College and University Business Officers, 2007 (publicly available NES Table at http://www.nacubo.org/x2376.xml). For universities not included in NACUBO’s study, the 2005 endowment value was obtained from www.usnews.com. Several university endowment values were reported only as part of a system-wide endowment covering several institutions: the University of California system, the Texas A&M system, the Minnesota system, and the Massachusetts system.
Presidential salaries were for academic year 2004-2005. This compensation omits benefits, expense accounts, and housing and car allowances. Most came from Dunbar, Ben, et al. “Executive Compensation,” The Chronicle of Higher Education, November 24, 2006, available at http://chronicle.com/stats/990. Several salaries unavailable from this source were obtained directly from the universities.
Each question included was intended to test important knowledge. Working with a distinguished board of professors from around the country and outside reviewers, we identified 60 themes that appear in the first column of the following table. This listing illustrates the range of ideas tested in American history (questions 1–17), American government and political thought (18–31), international affairs (32–47) and the market economy (48–60). The themes consist of basic civic knowledge or concepts, not obscure or arbitrarily selected knowledge.