Lessons of the New Hampshire polling fiasco:
Rasmussen Reports, another firm that blew the primary, speculates that "polling models used by Rasmussen Reports and others did not account for the very high turnout." For instance, "Rasmussen Reports normally screens out people with less voting history and less interest in the race. This might have caused us to screen out some women who might not ordinarily vote in a Primary but who came out to vote due to the historic nature of Clinton's candidacy." The firm allotted 54 percent of its final weighted sample to women. In reality, women cast 57 percent of the votes.
You weren't aware that pollsters screen out respondents, or discount their stated preferences, based on sex, race, religion, and other "demographics"? You thought polls were raw data? Silly you. Read the pollsters' post-New Hampshire explanations, and you'll learn about all the formulas they use to "refine" their data before you see it. They apply "likely voter screens," "demographics," "turnout models," and "allocation of undecideds." In this case, their big mistake was underweighting responses from older women and overweighting responses from independents and young voters.
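To see how a weighting assumption moves the topline number, here is a minimal sketch. The 54 percent and 57 percent female shares come from the Rasmussen example above; the per-group support figures are invented purely for illustration.

```python
# Illustration of demographic weighting in a poll topline.
# Only the 54% (assumed) vs 57% (actual) female shares are from the
# article; the support numbers inside each group are hypothetical.

def weighted_topline(support_by_group, weights):
    """Combine per-group candidate support using demographic weights."""
    return sum(support_by_group[g] * weights[g] for g in weights)

# Hypothetical raw support for a candidate within each group:
support = {"women": 0.46, "men": 0.29}

# The pollster's assumed electorate vs the one that actually showed up:
assumed_weights = {"women": 0.54, "men": 0.46}
actual_weights = {"women": 0.57, "men": 0.43}

print(round(weighted_topline(support, assumed_weights), 3))  # 0.382
print(round(weighted_topline(support, actual_weights), 3))   # 0.387
```

Even a three-point shift in one demographic weight nudges the published number, and that is before likely-voter screens and undecided allocation are layered on top.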
Lesson: Polls aren't raw data. They're data modified by assumptions. Pollsters should publish their assumptions so we know what we're eating.