
Research Partners
The American Civic Literacy survey was contracted by the Intercollegiate Studies Institute (ISI) to the University of Connecticut’s Department of Public Policy (UConnDPP) under the direction of principal investigators Dr. Kenneth Dautrich and Christopher Barnes. This study is the culmination of a multiyear process with ISI and national experts in the area of civic literacy. Christopher Barnes oversaw the sampling for and data collection by UConnDPP. Under the direction of Christopher Barnes and Kenneth Dautrich, Heather Mills assisted in analyzing the findings and drafting the report. ISI’s Senior Research Fellow, Dr. Gary Scott, independently corroborated the statistical analyses in addition to testing hypotheses using regression analyses.
UConnDPP, a nonpartisan, nonprofit organization, is recognized nationally and internationally as a leader in the field of public opinion research. The scope of UConnDPP projects ranges from studies of public opinion and public policy to local community-based surveys. During the past two years, Christopher Barnes has conducted more than 70 national, regional, and local survey projects.
Questionnaire Design
Dr. Gary Scott directed the development of the survey instrument and was assisted by UConnDPP and a team of specialists in each applicable field of study from across the country. Each was asked to identify the top 50 themes from their fields related to American ordered liberty. Four hundred themes were then converted into multiple-choice questions and edited down to the 60 most applicable questions through student focus groups, validity analyses, and further scholarly review. In selecting the final 60 questions, the specialists sought to capture the elementary facts and concepts of history, political science, and economics that contribute most to the civic knowledge needed for citizens to participate responsibly in public life. The survey will be further refined and administered each year.
Questionnaire Pilot
In the pilot survey conducted in 2004, the 60 questions were found to be statistically valid. In addition to testing the validity of each survey item, the pilot administration also assessed the most viable method of data collection, including the feasibility of implementation on all campuses, the optimal sample size of students at each school, and the extent to which student answers varied by method of data collection.
The pilot survey tested four methodologies at a selected sample of 22 colleges and universities nationwide: phone interviewing, Internet surveying, test-style surveying, and in-person surveying. Phone interviewing proved logistically impractical, while Internet and test-style surveying yielded few completed interviews. In-person, pen-and-paper administration was therefore determined to be the most viable method of data collection.
College Selection
A total of 50 colleges were surveyed. The institution sample of 25 representative schools was based on information from the National Center for Education Statistics’s Integrated Postsecondary Education Data System (IPEDS). The sample was stratified to proportionally represent all four-year, baccalaureate-granting public and private schools of various sizes in the continental United States; two-year community colleges and satellite campuses not offering bachelor’s degrees were excluded. Over-sampling colleges with higher undergraduate enrollment makes the sample self-weighting and reflective of the typical experience of students at American baccalaureate institutions.
The following is a table of the total population:
TOTAL POPULATION

| Population | Number of Colleges | Percentage |
|---|---|---|
| Total | 1,403 | 100% |
| Total Public | 510 | 36.4% |
| Total Private | 893 | 63.6% |
| <5,000 Students | 995 | 70.9% |
| 5,000 to <10,000 Students | 203 | 14.5% |
| 10,000 or More Students | 205 | 14.6% |

Source: National Center for Education Statistics’s Integrated Postsecondary Education Data System.
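The over-sampling of larger schools described above amounts to making a school’s chance of selection roughly proportional to its undergraduate enrollment. The sketch below illustrates one way such a self-weighting draw could be implemented; the institutions, enrollments, and column names are hypothetical, and the actual selection procedure used by UConnDPP may have differed in its details.

```python
# Illustrative sketch: draw a sample of institutions with selection probability
# proportional to undergraduate enrollment, so that larger schools are
# over-sampled and each sampled student carries roughly equal weight.
# All data below are hypothetical.
import numpy as np
import pandas as pd

schools = pd.DataFrame({
    "unitid":     [101, 102, 103, 104, 105, 106],
    "control":    ["public", "private", "private", "public", "private", "public"],
    "enrollment": [24000, 1800, 5200, 31000, 900, 12500],
})

rng = np.random.default_rng(2006)
probs = schools["enrollment"] / schools["enrollment"].sum()

# Sample 3 of the 6 hypothetical schools (the study sampled 25 of 1,403).
picked = rng.choice(schools.index, size=3, replace=False, p=probs)
print(schools.loc[picked, ["unitid", "control", "enrollment"]])
```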
The sample of 25 elite schools was hand-selected based on a series of criteria: U.S. News and World Report rankings, high selectivity of enrollment (to add to the variance of freshman civic achievement), flagship state universities of regional interest, religiously affiliated colleges, and a balance of geographic distribution.
Respondent Selection
A total of 14,094 students were surveyed, including an average of 148 freshmen and 134 seniors on each campus. Students were intercepted at various times of day and at several places of high student traffic on or adjacent to each campus to ensure randomness. Every student passing these sites was screened for qualification and asked to take the survey. Surveying continued at each campus for a minimum of three and a maximum of seven days, until the quota of completed interviews was met or exceeded. Respondents were guaranteed anonymity; names and contact information were not recorded in the data set. Any identifying information was kept separately, for use in verification and for a sweepstakes drawing only.
Verification Methods
A systematic multilevel verification process was in place during the implementation of the civic literacy survey study. Error rates were well within acceptable norms and provide assurance of high-quality data. The rigorous quality control measures used are detailed below:
1) Regional Manager Verification
The Regional Managers were the first stage in quality control. Beyond working side-by-side with their student employees to ensure proper collection methods, the Regional Managers hand-checked the validity of every survey submitted. The Regional Managers were trained to review surveys and remove submissions with problems such as incomplete responses, incorrect class status, or other faulty data entries.
2) Monitoring
The Project Director, Heather Mills, visited each manager twice without advance notice. Although she also personally assisted in surveying, the primary purpose of the visits was to inspect data collection methods and student employee supervisory practices. When not monitoring in person, the Project Director stayed in continuous contact with each Regional Manager via phone and e-mail.
3) E-mail Verification—5 Percent
Five percent of the respondents who completed surveys at each school were randomly selected to receive an e-mail confirming their eligibility to participate in the survey. The verification e-mails asked whether the student was a freshman or senior at the school in question and whether he or she remembered taking the survey. A majority of students responded, at a rate of 85 percent or higher; e-mail block systems at some schools made students there particularly difficult to reach.
4) Web Directory Matching Verification
Several of each Regional Manager’s schools were selected for Web directory matching verification as databases allowed. Georgetown University was verified at 100 percent of both freshmen and seniors. For other schools, only seniors were verified because they were more likely to be in the directory and more difficult to survey. The 15 schools in addition to Georgetown were as follows:
Appalachian State University
Brown University
Dartmouth College
Eastern Kentucky University
Harvard University
Massachusetts Institute of Technology
Princeton University
University of California, Berkeley
Stanford University
University of Chicago
University of Massachusetts, Boston
University of North Carolina, Chapel Hill
West Texas A&M University
Williams College
Yale University
5) Data Cleaning Procedures
The data set was further inspected for irregularities using statistical diagnostics. Additional observations were removed from the data set if they were incomplete, listed an incorrect class status, or contained other such problems.
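A minimal sketch of this kind of rule-based screening is shown below. The data, thresholds, and column names are hypothetical and stand in for the actual cleaning rules applied to the survey data set.

```python
# Illustrative sketch of rule-based screening: drop incompletes and rows with
# an incorrect class status. All data and thresholds below are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "class_status": ["freshman", "senior", "sophomore", "senior"],
    "q1": [1, 0, 1, np.nan],
    "q2": [0, 1, np.nan, np.nan],
    "q3": [1, 1, 1, np.nan],
})
question_cols = ["q1", "q2", "q3"]          # stand-ins for the 60 items

# Flag incompletes (too many unanswered items) and incorrect class status.
incomplete = df[question_cols].isna().mean(axis=1) > 0.25
wrong_class = ~df["class_status"].isin(["freshman", "senior"])

cleaned = df[~(incomplete | wrong_class)].copy()
print(f"Removed {len(df) - len(cleaned)} of {len(df)} observations")
```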
Degree of Exam Difficulty
Throughout the preparation of this examination, ISI was determined that the test be neither too easy nor too difficult. We wanted a fair measure of important knowledge that students should learn in introductory courses in American history, politics, and economics. The faculty who prepared the questions included a range of items from the basic to the more difficult in order to distinguish the average C student from the high-achieving A student (see the full list of question themes).
The test was subjected to multiple levels of review over a period of more than a year. For example, UConnDPP pretested the questions with experienced test takers it regularly uses to evaluate test instruments at the University of Connecticut, as well as with a randomly selected group of undergraduates. If these test takers could not understand the questions or the multiple-choice answers, the questions were either discarded or rewritten. We also asked groups of knowledgeable faculty and experts in survey methodology to review the questions. Following these reviews, the questions were again rewritten to enhance clarity and fairness, and those questions where even well-prepared students failed to recognize the correct response were eliminated.
We did not want a test that was so difficult that even well-prepared students would fail. In fact, a significant number of students did well on the test. More than 22 percent of the seniors tested scored a “passing” 70 percent or above, including 115 who scored above 90 percent (an A on a traditional grading scale). Several even had perfect scores. The students who answered the most questions correctly were, on average, the students who had taken the most courses in the relevant disciplines, and were therefore better prepared. Indeed, a statistical item analysis confirmed the validity of the questions: correct responses to individual questions increased as individuals’ general knowledge increased. We found, for example, that students with higher SAT scores did better on 93 percent of the questions, that students with more overall civic knowledge scored higher on 97 percent of the questions, and that completing more courses in history, political science, and economics correlated positively with the percentage of correct responses on the civic literacy exam.
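The item analysis referred to above can be illustrated with a short sketch: for each question, compare whether a respondent answered it correctly against that respondent’s score on the remaining items (an item-rest, or point-biserial, correlation). The scored responses below are hypothetical and do not reproduce the actual analysis.

```python
# Illustrative item-analysis sketch: correlate each item's correctness (0/1)
# with the respondent's total score on the remaining items. A positive
# correlation means the item discriminates in the expected direction.
# The scored responses below are hypothetical.
import pandas as pd

scored = pd.DataFrame({
    "q1": [1, 1, 0, 1, 0, 0],
    "q2": [1, 0, 0, 1, 1, 0],
    "q3": [1, 1, 1, 0, 0, 0],
})

total = scored.sum(axis=1)
for item in scored.columns:
    rest = total - scored[item]              # total score excluding this item
    r = scored[item].corr(rest)              # item-rest (point-biserial) correlation
    print(f"{item}: item-rest correlation = {r:.2f}")
```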
As an additional check to guard against an inappropriate degree of difficulty, ISI included six questions from a federal test designed to measure high school seniors’ knowledge of our history and institutions, the National Assessment of Educational Progress (NAEP). Had scores been notably higher on the NAEP questions, we would have asked whether the other questions were too difficult. Instead, the NAEP questions turned out to be even more difficult for the students we surveyed than the questions we prepared independently. On average, students did better on the questions ISI and its faculty advisors prepared than on the NAEP questions.
Interestingly, the students who took the test did not complain that the test was too hard. They commonly expressed dismay that their college education had not prepared them better. According to Heather Mills, Field Survey Supervisor for UConnDPP, students commonly said: “I felt like I should have known this. . . . I should have known it.” Not only did students express their frustration at not being better prepared, but 41 percent of the seniors said they were not satisfied with their college program. Far from being surprised that they failed this basic test of knowledge and concepts about American history, government, and economics, students thought that they should have been better educated in these disciplines.
Sampling Error and Statistical Significance
The true average score for the population of seniors is estimated to fall within ±1.25% of the sample average of 53.2% for the 6,689 seniors, or roughly between 52% and 54.5%, with 95% confidence. We also know from the total sample of 14,094 students that seniors outscored freshmen by 1.5%. The true average civic learning for the whole population of freshmen and seniors at the fifty colleges is estimated to fall within ±0.60% of this sample average learning of 1.5%, with 95% confidence. The margin of error generally increases for estimates of civic learning within smaller subgroups.
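For reference, the ±1.25% figure for seniors is close to what the standard margin-of-error formula for a proportion gives; the calculation below is a sketch under the simplifying assumption of simple random sampling.

$$
\text{MoE} = z_{0.975}\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.532 \times 0.468}{6{,}689}} \approx 0.012 \quad (\pm 1.2\%)
$$

The slightly larger reported margin would be consistent with an additional adjustment for the clustered, multi-campus design.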
Weighting
Data for the 25 representative colleges can be weighted to account for variance in enrollment and governance across all public and private colleges, even though the analyses in this report focused on the combined set of 50 colleges. Enrollment data were gathered from individual schools as well as from the National Center for Education Statistics’s (NCES) restricted Peer Analysis System for IPEDS.
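One simple way such weights could be constructed is post-stratification: weight the sampled colleges so that their public/private mix matches the population counts in the table above. The sketch below is hypothetical in its sample data and column names, and the actual weighting scheme may differ.

```python
# Illustrative post-stratification sketch: weight sampled colleges so that the
# public/private mix matches the IPEDS population counts reported above.
# The sample below is hypothetical; the actual weighting scheme may differ.
import pandas as pd

population_share = {"public": 510 / 1403, "private": 893 / 1403}   # from the table above

sample = pd.DataFrame({
    "unitid":  [101, 102, 103, 104, 105],
    "control": ["public", "public", "private", "public", "private"],
})
sample_share = sample["control"].value_counts(normalize=True)

# Weight = population share / sample share within each stratum.
sample["weight"] = sample["control"].map(lambda c: population_share[c] / sample_share[c])
print(sample)
```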