[ Information Donizete Rodrigues ]

Author announcement

Curto, José; Lanza, Fabio; Rodrigues, Donizete (eds.) (2019). Special Issue – Memory, identity and social representations in the Lusophone world. Portuguese Studies Review, vol. 27, no. 1.

[ Information Karl van Meter, Bulletin de Méthodologie Sociologique ]

Articles – BMS-RC33 & AIMSl Lists and Five Articles

For several decades, each issue of the BMS included research articles and several information sections, traditionally “Books/Livres”, “Journals/Reviews/Reports”, “Articles/Chapters”, “Computers/Ordinateurs/Internet”, “New Meetings/Nouvelles réunions”, “Past Meetings/Réunions passées” and “Calls/Appels”, as well as, of course, the RC33 Newsletter in the April and October issues of the BMS. In parallel, and often in coordination with these information sections, we created the closed, members-only BMS-RC33 email distribution list in 2005 (first message in July 2005) and the free and open AIMSl list in 2007 (first email in May 2007). Since the change in editorship of the BMS in 2018, the information sections and the RC33 Newsletter have appeared together in every other issue of the BMS, and the two distribution lists have in many ways “taken up the slack” for the journal and for the modifications of the information sections. With the increased need for space to publish research articles, it has been agreed that the distribution lists will do more of the work of publishing information on new books, reports and articles; that new policy begins with this presentation of five new research articles.

Karl M. van Meter

ARTICLES

Youri Davydov and Francesca Greselin, “Comparisons between Poorest and Richest to Measure Inequality”, Sociological Methods & Research, 2020, 49, 2, 526–561. The observed increase in economic inequality, where the major concern is the huge growth of the highest incomes, motivates us to revisit classical measures of inequality and to offer new ways to synthesize the variability of the entire income distribution. The idea is to give policy makers a way to contrast the economic position of the poorest p percent of the population with that of the richest p percent by comparing the mean incomes of the two groups. The new measure is still Lorenz-based, but the focus here is on equally sized, opposite parts of the population, whose difference is so remarkable nowadays. We then highlight the specific information given by the new inequality measure and curve by comparing them to the widely employed Lorenz curve and Gini index and to the more recent Zenga approach, and provide an application to Italian data on household income, wealth, and consumption over the years 1980–2012. The effects of estimating inequality indices and curves from grouped data are also discussed.
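As a rough numerical illustration of the poorest-versus-richest comparison described in this abstract (a minimal sketch, not the authors' estimator), the following Python snippet contrasts the mean income of the poorest p percent with that of the richest p percent and computes the classical Gini index for reference; the function names and the simulated lognormal sample are assumptions made for this example.

```python
import numpy as np

def poor_rich_ratio(incomes, p):
    """Mean income of the poorest p fraction divided by the mean
    income of the richest p fraction (hypothetical helper)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    k = max(1, int(round(p * len(x))))  # size of each tail group
    return x[:k].mean() / x[-k:].mean()

def gini(incomes):
    """Classical Gini index from the Lorenz ordinates of sorted data."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    lorenz = np.cumsum(x) / x.sum()  # Lorenz curve ordinates L_1..L_n
    return (n + 1 - 2 * lorenz.sum()) / n

# Simulated positive "incomes" (an assumption for illustration only)
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=10.0, sigma=0.8, size=10_000)
for p in (0.05, 0.10, 0.25, 0.50):
    print(f"p = {p:.2f}: poorest/richest mean ratio = {poor_rich_ratio(sample, p):.3f}")
print(f"Gini index = {gini(sample):.3f}")
```

For positive incomes the ratio lies in (0, 1], and smaller values at a given p indicate a sharper contrast between the two tails; the Lorenz curve and Gini index aggregate over the whole distribution instead.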

Jolene D. Smyth, “How Well Do Interviewers Record Responses to Numeric, Interviewer Field-code, and Open-ended Narrative Questions in Telephone Surveys?” Field Methods, 2020, 32, 1, 89-104. Telephone survey interviewers need to be able to record answers to questions accurately. While this task is straightforward for closed questions, it can be complicated for open questions. We examine interviewer recording accuracy rates from a national landline random digit dial telephone survey. We find that accuracy rates are over 90% for numeric-response and single-response interviewer-code items but are astonishingly low (49%) for a multiple-answer, nominal, interviewer-code item. Accuracy rates for narrative open questions were around 90% for themes but only about 70% for themes and elaborations. Interviewer behaviors (e.g., probing, feedback) are generally associated with lower accuracy rates. Implications for questionnaire design, interviewer training, and coding procedures are discussed.

Eric Plutzer, “Privacy, Sensitive Questions and Informed Consent – Their Impacts on Total Survey Error, and the Future of Survey Research”, Public Opinion Quarterly, 2019, 83, S1, 169-184. Survey science is driven to maximize data quality and reduce Total Survey Error (TSE). At the same time, survey methodologists have ethical and professional obligations to protect the privacy of respondents and to ensure their capacity to provide informed consent for their participation, for data linkage, for passive data collection, and for the archiving of replication data. We have learned, however, that both sensitive topics and the consent process can contribute to errors of representation and errors of measurement. These problems compound threats to data quality that arise from broader concerns about privacy, the intrusiveness of surveys, and the increasing number of participation requests directed to the same respondents. This article critically assesses the extant literature on these topics – including six original articles in this issue – by viewing these challenges through the lens of the TSE framework. This helps unify several distinct research programs and provides the foundation for new research and for practical innovations that will improve data quality.

Brady T. West and Dan Li, “Sources of Variance in the Accuracy of Interviewer Observations”, Sociological Methods & Research, 2019, 48, 3, 485-533. In face-to-face surveys, interviewer observations are a cost-effective source of paradata for nonresponse adjustment of survey estimates and responsive survey designs. Unfortunately, recent studies have suggested that the accuracy of these observations can vary substantially among interviewers, even after controlling for household-, area-, and interviewer-level characteristics, limiting their utility. No study has identified sources of this unexplained variance in observation accuracy. Motivated by theoretical expectations from the observer bias literature, this study analyzed more than 45,000 open-ended justifications provided by interviewers in the US National Survey of Family Growth (NSFG) for their observations on two key features of all sampled NSFG households: presence of children and expected probability of household response. The study finds that variability among interviewers in the cues used to record these observations (evident from the open-ended justifications) explains much of the previously unexplained variance in observation accuracy.

Caroline Roberts, Emily Gilbert, Nick Allum and Léïla Eisner, “Satisficing in Surveys – A Systematic Review of the Literature”, Public Opinion Quarterly, 2019, 83, 3, 598-626. Herbert Simon’s (1956) concept of satisficing provides an intuitive explanation of why respondents to surveys sometimes adopt response strategies that can lead to a reduction in data quality. As such, the concept rapidly gained popularity among researchers after it was first introduced to the field of survey methodology by Krosnick and Alwin (1987), and it has become a widely cited buzzword linked to different forms of response error. In this article, we present the findings of a systematic review involving a content analysis of journal articles published in English-language journals between 1987 and 2015 that have drawn on the satisficing concept to evaluate survey data quality. Based on extensive searches of online databases, and an initial screening exercise to apply the study’s inclusion criteria, 141 relevant articles were identified. Guided by the theory of survey satisficing described by Krosnick (1991), the methodological features of the shortlisted articles were coded, including the indicators of satisficing analyzed, the main predictors of satisficing, and the presence of main or interaction effects on the prevalence of satisficing involving indicators of task difficulty, respondent ability, and respondent motivation. Our analysis sheds light on potential differences in the extent to which satisficing theory holds for different types of response error, and highlights a number of avenues for future research.