Fundamentals
Alwin, D. F., & Beattie, B. A. (2016). The KISS Principle in Survey Design: Question Length and Data Quality. Sociological Methodology. https://doi.org/10.1177/0081175016641714
Batinic, B. (2003). Internetbasierte Befragungsverfahren. Österreichische Zeitschrift für Soziologie, 28(4), 6–18. https://doi.org/10.1007/s11614-003-0019-6
Bethlehem, J. G., & Biffignandi, S. (2012). Handbook of web surveys. Hoboken, N.J.: Wiley.
Bogner, K., & Landrock, U. (2014). Antworttendenzen in standardisierten Umfragen. GESIS Survey Guidelines.
Brace, I. (2018). Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research. S.l.: Kogan Page.
Callegaro, M., Lozar Manfreda, K., & Vehovar, V. (2015). Web survey methodology. Los Angeles: SAGE.
Couper, M. P. (2017). New Developments in Survey Data Collection. Annual Review of Sociology, 43(1), 121–145. https://doi.org/10.1146/annurev-soc-060116-053613
Czaja, R., Blair, J., & Blair, E. (2014). Designing surveys: a guide to decisions and procedures (Third edition). Los Angeles: SAGE.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method (4th edition). Hoboken: Wiley.
Döring, N., & Bortz, J. (2016). Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften (5. vollständig überarbeitete, aktualisierte und erweiterte Auflage). Berlin Heidelberg: Springer.
Ebel, T., & Trixa, J. (2015). Hinweise zur Aufbereitung quantitativer Daten. GESIS Papers, 2015|09.
Faulbaum, F., Prüfer, P., & Rexroth, M. (2009). Was ist eine gute Frage? Die systematische Evaluation der Fragenqualität (1. Aufl.). Wiesbaden: VS Verlag für Sozialwissenschaften.
Fowler, F. J. (1995). Improving survey questions: design and evaluation. Thousand Oaks: Sage Publications.
Fowler, F. J. (2014). Survey research methods (Fifth edition). Los Angeles: SAGE.
Friedrichs, J. (1990). Methoden empirischer Sozialforschung (14. Aufl). Opladen: Westdt. Verl.
Gabler, S., & Häder, S. (2014). Stichproben in der Theorie. GESIS Survey Guidelines.
Gräf, L. (2010). Online-Befragung: eine praktische Einführung für Anfänger. Berlin: Lit-Verl.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, N.J.: Wiley.
Gubrium, J. F., & Holstein, J. A. (Hrsg.). (2002). Handbook of interview research: context & method. Thousand Oaks, Calif: Sage Publications.
Häder, S. (2014). Stichproben in der Praxis. GESIS Survey Guidelines.
Handreichung Datenschutz. (2017). RatSWD Output Series.
Harris, D. F. (2014). The Complete guide to writing questionnaires: how to get better information for better decisions. Durham, North Carolina: I&M Press.
Hollenberg, S. (2016). Fragebögen: fundierte Konstruktion, sachgerechte Anwendung und aussagekräftige Auswertung (1. Auflage 2016). Wiesbaden: Springer VS.
Jacob, R., Heinz, A., & Décieux, J. P. (2013). Umfrage: Einführung in die Methoden der Umfrageforschung (3., überarb. Aufl). München: Oldenbourg.
Jansen, H. (2010). The Logic of Qualitative Survey Research and its Position in the Field of Social Research Methods. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 11(2). Abgerufen von http://www.qualitative-research.net/index.php/fqs/article/view/1450
Kirchhoff, S., Kuhnt, S., Lipp, P., & Schlawin, S. (2010). Der Fragebogen: Datenbasis, Konstruktion und Auswertung (5. Auflage). Wiesbaden: VS Verlag.
Kromrey, H., Roose, J., & Strübing, J. (2016). Empirische Sozialforschung (13., völlig überarbeitete Auflage). Stuttgart: UTB.
Kuckartz, U., Ebert, T., Rädiker, S., & Stefer, C. (2009). Evaluation online: Internetgestützte Befragung in der Praxis. Wiesbaden: VS Verlag für Sozialwissenschaften.
Lenzner, T., & Menold, N. (2015). Frageformulierung. GESIS Survey Guidelines. https://doi.org/10.15465/gesis-sg_017
Levy, P. S., & Lemeshow, S. (1999). Sampling of populations: methods and applications (3rd ed). New York: Wiley.
Marsden, P. V., & Wright, J. D. (Hrsg.). (2010). Handbook of survey research (Second edition). Bingley: Emerald.
Mayer, H. O. (2013). Interview und schriftliche Befragung: Grundlagen und Methoden empirischer Sozialforschung (6., überarbeitete Auflage). München: Oldenbourg Verlag.
Moosbrugger, H., & Kelava, A. (Hrsg.). (2012). Testtheorie und Fragebogenkonstruktion (2., aktualisierte und überarbeitete Auflage). Berlin Heidelberg: Springer.
Petersen, T. (2014). Der Fragebogen in der Sozialforschung. Konstanz München: UVK Verlagsgesellschaft mbH mit UVK Lucius.
Porst, R. (2000). Question Wording – Zur Formulierung von Fragebogen-Fragen. ZUMA How-to-Reihe, 2.
Porst, R. (2014). Fragebogen: Ein Arbeitsbuch (4., erweiterte Auflage). Wiesbaden: Springer VS.
Prüfer, P., & Rexroth, M. (2005). Kognitive Interviews. ZUMA How-to-Reihe, 15.
Prüfer, P., & Stiegler, A. (2002). Die Durchführung standardisierter Interviews: Ein Leitfaden. ZUMA How-to-Reihe, 11.
Qualitätsstandards zur Entwicklung, Anwendung und Bewertung von Messinstrumenten in der sozialwissenschaftlichen Umfrageforschung. (2014). RatSWD Working Paper Series, 230.
Raab-Steiner, E., & Benesch, M. (2015). Der Fragebogen: von der Forschungsidee zur SPSS-Auswertung (4., aktualisierte und überarbeitete Auflage). Wien: Facultas.
Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research: a comprehensive guide (3rd ed). San Francisco: Jossey-Bass.
Ritter, L. A. (2007). Using Online Surveys in Evaluation. San Francisco: Jossey-Bass.
Ritter, L. A., & Sue, V. M. (2007a). Glossary. New Directions for Evaluation, 2007(115), 65. https://doi.org/10.1002/ev.238
Ritter, L. A., & Sue, V. M. (2007b). Introduction to using online surveys. New Directions for Evaluation, 2007(115), 5–14. https://doi.org/10.1002/ev.230
Ritter, L. A., & Sue, V. M. (2007c). The survey questionnaire. New Directions for Evaluation, 2007(115), 37–45. https://doi.org/10.1002/ev.234
Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: Wiley.
Saris, W. E. (2014). Design, evaluation, and analysis of questionnaires for survey research (Second Edition). Hoboken, New Jersey: Wiley.
Schiek, D., & Ullrich, C. G. (2016). Qualitative Online-Erhebungen: Möglichkeiten, Herausforderungen und Grenzen. Wiesbaden: Springer Fachmedien. https://doi.org/10.1007/978-3-658-11817-4_1
Schnell, R., Hill, P. B., & Esser, E. (2013). Methoden der empirischen Sozialforschung (10. überarbeitete Auflage). München: Oldenbourg Verlag.
Stiegler, A. (2015). Nutzung von Rückrufnummern bei Meinungsumfragen. GESIS Papers, 2015/13.
Stiegler, A., & Biedinger, N. (2014). Interviewer Qualifikation und Training. GESIS Survey Guidelines.
Sudman, S., & Bradburn, N. M. (1982). Asking questions: A Practical Guide to Questionnaire Design (1st ed). San Francisco: Jossey-Bass.
Theobald, E., & Neundorfer, L. (2010). Qualitative Online-Marktforschung: Grundlagen, Methoden und Anwendungen (1. Aufl). Baden-Baden: Nomos [u.a.].
Tourangeau, R., Conrad, F. G., & Couper, M. (2013). The science of web surveys. Oxford; New York: Oxford University Press.
Witte, J. C. (2009). Introduction to the Special Issue on Web Surveys. Sociological Methods & Research, 37(3), 283–290. https://doi.org/10.1177/0049124108328896
Züll, C. (2015). Offene Fragen. GESIS Survey Guidelines. https://doi.org/10.15465/gesis-sg_002
Education
Baethge, M., & Arends, L. (2009). Measuring Vocational Competencies. RatSWD Working Paper Series, 95.
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. https://doi.org/10.1080/02602938.2012.691462
Davies, M., Hirschberg, J., Lye, J., & Johnston, C. (2010). A systematic analysis of quality of teaching surveys. Assessment & Evaluation in Higher Education, 35(1), 83–96. https://doi.org/10.1080/02602930802565362
Goodman, J., Anson, R., & Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970. https://doi.org/10.1080/02602938.2014.960364
Johnson, G. M. (2015). Record of assessment moderation practice (RAMP): survey software as a mechanism of continuous quality improvement. Assessment & Evaluation in Higher Education, 40(2), 265–278. https://doi.org/10.1080/02602938.2014.911244
Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, H. K. (2016). Peer assessment in the digital age: a meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41(2), 245–264. https://doi.org/10.1080/02602938.2014.999746
Peterson, K. D., Wahlquist, C., Brown, J. E., & Mukhopadhyay, S. (2006). Parent Surveys for Teacher Evaluation. Journal of Personnel Evaluation in Education, 17(4), 317–330. https://doi.org/10.1007/s11092-006-5740-9
Rammstedt, B. (2011). Kompetenzmessung in der Bildungsforschung: Zusammenfassung des Forums (7) der 5. Konferenz für Sozial-und Wirtschaftsdaten. RatSWD Working Paper Series, 177.
Schneider, S. L. (2016). Die Konzeptualisierung, Erhebung und Kodierung von Bildung in nationalen und internationalen Umfragen. GESIS Survey Guidelines.
Weiß, S., Schramm, S., Hillert, A., & Kiel, E. (2013). Lehrerinnen und Lehrer kommentieren Fragebögen: wie quantitative Forschung von qualitativer Forschung lernen kann. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 14(3), 30.
Methods (extended)
Alwin, D. F. (2014). Investigating Response Errors in Survey Data. Sociological Methods & Research, 43(1), 3–14. https://doi.org/10.1177/0049124113507907
Behr, D., Braun, M., & Dorer, B. (2014). Messinstrumente in internationalen Studien. GESIS Survey Guidelines.
Benfield, J. A., & Szlemko, W. J. (2006). Internet-Based Data Collection: Promises and Realities. Journal of Research Practice, 2(2), 1.
Bennett, L., & Nair, C. S. (2010). A recipe for effective participation rates for web‐based surveys. Assessment & Evaluation in Higher Education, 35(4), 357–365. https://doi.org/10.1080/02602930802687752
Bladon, T. L. (2010). The Downward Trend of Survey Response Rates: Implications and Considerations for Evaluators. Canadian Journal of Program Evaluation, 24(2), 131–156.
Blasius, J., & Thiessen, V. (2006). Assessing Data Quality and Construct Comparability in Cross-National Surveys. European Sociological Review, 22(3), 229–242. https://doi.org/10.1093/esr/jci054
Borg, I. (2000). Explorative Multidimensionale Skalierung. ZUMA How-to-Reihe, 1.
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. https://doi.org/10.1080/02602938.2012.691462
Brenner, P. S. (2017). Narratives of Response Error From Cognitive Interviews of Survey Questions About Normative Behavior. Sociological Methods & Research, 46(3), 540–564. https://doi.org/10.1177/0049124115605331
Christian, L. M., Parsons, N. L., & Dillman, D. A. (2009). Designing Scalar Questions for Web Surveys. Sociological Methods & Research, 37(3), 393–425. https://doi.org/10.1177/0049124108330004
Connelly, R., Gayle, V., & Lambert, P. S. (2016). Ethnicity and ethnic group measures in social survey research. Methodological Innovations, 9, 2059799116642885. https://doi.org/10.1177/2059799116642885
Connelly, R., Gayle, V., & Lambert, P. S. (2016). Statistical modelling of key variables in social survey data analysis. Methodological Innovations, 9(0). https://doi.org/10.1177/2059799116638002
Converse, P. D., Wolfe, E. W., Huang, X., & Oswald, F. L. (2008). Response Rates for Mixed-Mode Surveys Using Mail and E-mail/Web. American Journal of Evaluation, 29(1), 99–107. https://doi.org/10.1177/1098214007313228
Coryn, C. L., Gugiu, P. C., Davidson, E. J., & Schroter, D. C. (2007). Needs Assessment in Hidden Populations Using Respondent-driven Sampling. Evaluation Journal of Australasia, 7(2), 3.
Crawford, F. W. (2016). The Graphical Structure of Respondent-driven Sampling. Sociological Methodology. https://doi.org/10.1177/0081175016641713
Danner, D. (2014). Reliabilität – die Genauigkeit einer Messung. GESIS Survey Guidelines.
DeCastellarnau, A., & Revilla, M. (2017). Two approaches to evaluate measurement quality in online surveys: An application using the Norwegian Citizen Panel. Survey Research Methods, 11, 415–433.
Dufrene, R. L. (2000). An evaluation of a patient satisfaction survey: validity and reliability. Evaluation and Program Planning, 23(3), 293–300. https://doi.org/10.1016/S0149-7189(00)00015-X
Dülmer, H. (2016). The Factorial Survey: Design Selection and its Impact on Reliability and Internal Validity. Sociological Methods & Research, 45(2), 304–347. https://doi.org/10.1177/0049124115582269
Felton, J., Mitchell, J., & Stinson, M. (2004). Web-based student evaluations of professors: the relations between perceived quality, easiness and sexiness. Assessment & Evaluation in Higher Education, 29(1), 91–108. https://doi.org/10.1080/0260293032000158180
Frank, K., & Min, K.-S. (2007). Indices of Robustness for Sample Representation. Sociological Methodology, 37(1), 349–392. https://doi.org/10.1111/j.1467-9531.2007.00186.x
Freelon, D. (2013). ReCal OIR: Ordinal, interval, and ratio intercoder reliability as a web service. International Journal of Internet Science, 8(1), 10–16.
Freelon, D. G. (2010). ReCal: Intercoder reliability calculation as a web service. International Journal of Internet Science, 5(1), 20–33.
Fuchs, M. (2003). Kognitive Prozesse und Antwortverhalten in einer Internet-Befragung. Österreichische Zeitschrift für Soziologie, 28(4), 19–45. https://doi.org/10.1007/s11614-003-0020-0
Gabler, S., Kolb, J.-P., Sand, M., & Zins, S. (2015). Gewichtung. GESIS Survey Guidelines.
Garbarski, D., Schaeffer, N. C., & Dykema, J. (2016a). Interviewing Practices, Conversational Practices, and Rapport: Responsiveness and Engagement in the Standardized Survey Interview. Sociological Methodology. https://doi.org/10.1177/0081175016637890
Garbarski, D., Schaeffer, N. C., & Dykema, J. (2016b). Rejoinder: Response to Comments on „Interviewing Practices, Conversational Practices, and Rapport: Responsiveness and Engagement in the Standardized Survey Interview“. Sociological Methodology. https://doi.org/10.1177/0081175016651074
Geis, A. (2004). Texterfassung für sozialwissenschaftliche Auswertung. ZUMA How-to-Reihe, 13.
Gesell, S. B., Drain, M., & Sullivan, M. P. (2007). Test of a Web and paper employee satisfaction survey: Comparison of respondents and non-respondents. International Journal of Internet Science, 2(1), 45–58.
Gile, K. J., & Handcock, M. S. (2010). Respondent-Driven Sampling: An Assessment of Current Methodology. Sociological Methodology, 40(1), 285–327. https://doi.org/10.1111/j.1467-9531.2010.01223.x
Goodman, J., Anson, R., & Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970. https://doi.org/10.1080/02602938.2014.960364
Göritz, A. S. (2006). Incentives in web studies: Methodological issues and a review. International Journal of Internet Science, 1(1), 58–70.
Göritz, A. S., & Crutzen, R. (2012). Reminders in Web-Based Data Collection: Increasing Response at the Price of Retention? American Journal of Evaluation, 33(2), 240–250. https://doi.org/10.1177/1098214011421956
Groh-Samberg, O., & Tucci, I. (2010). Qualitative interviewing of respondents in large representative surveys. RatSWD Working Paper Series, 143.
Gummer, T., & Roßmann, J. (2013). Good questions, bad questions? A Post-Survey Evaluation Strategy Based on Item Nonresponse. Survey Methods: Insights from the Field, 10. https://doi.org/10.13094/SMIF-2013-00010
Häder, M. (2000). Die Expertenauswahl bei Delphi-Befragungen. ZUMA How-to-Reihe, 5.
Häder, M. (2009). Der Datenschutz in den Sozialwissenschaften. Anmerkungen zur Praxis sozialwissenschaftlicher Erhebungen und Datenverarbeitung in Deutschland. RatSWD Working Paper Series, 90.
Häder, S. (2000). Telefonstichproben. GESIS Survey Guidelines.
Heckathorn, D. D. (2007). Extensions of Respondent-Driven Sampling: Analyzing Continuous Variables and Controlling for Differential Recruitment. Sociological Methodology, 37(1), 151–208. https://doi.org/10.1111/j.1467-9531.2007.00188.x
Huang, J. Y., Hubbard, S. M., & Mulvey, K. P. (2003). Obtaining valid response rates: considerations beyond the tailored design method. Evaluation and Program Planning, 26(1), 91–97. https://doi.org/10.1016/S0149-7189(02)00091-5
Jaspers, E., Lubbers, M., & Graaf, N. D. D. (2009). Measuring Once Twice: An Evaluation of Recalling Attitudes in Survey Research. European Sociological Review, 25(3), 287–301. https://doi.org/10.1093/esr/jcn048
Johnson, G. M. (2015). Record of assessment moderation practice (RAMP): survey software as a mechanism of continuous quality improvement. Assessment & Evaluation in Higher Education, 40(2), 265–278. https://doi.org/10.1080/02602938.2014.911244
Keller, H., Heinemann, E., & Kruse, M. (2012). Praxisbericht: Die Ratingkonferenz. Eine Kombination von Kurzfragebogen und Gruppeninterview. Zeitschrift für Evaluation, 11. Jahrgang, 2/2012.
Kember, D., & Leung, D. Y. P. (2008). Establishing the validity and reliability of course evaluation questionnaires. Assessment & Evaluation in Higher Education, 33(4), 341–353. https://doi.org/10.1080/02602930701563070
Kero, P., & Lee, D. (2015). Slider Scales and Web-Based Surveys: A Cautionary Note. Journal of Research Practice, 1(1), 1.
Kiernan, N. E., Kiernan, M., Oyler, M. A., & Gilles, C. (2005). Is a Web Survey as Effective as a Mail Survey? A Field Experiment Among Computer Users. American Journal of Evaluation, 26(2), 245–252. https://doi.org/10.1177/1098214005275826
Koch, A., & Blohm, M. (2014). Nonresponse bias. GESIS Survey Guidelines.
Krämer, W. (2011). The cult of statistical significance – What economists should and should not do to make their data talk. RatSWD Working Paper Series, 176.
Kreuter, F., Müller, G., & Trappmann, M. (2014). A Note on Mechanisms Leading to Lower Data Quality of Late or Reluctant Respondents. Sociological Methods & Research, 43(3), 452–464. https://doi.org/10.1177/0049124113508094
Krug, G., Carstensen, J., & Kriwy, P. (2017). Die richtige Mischung? Ein randomisiertes Experiment zur Datenqualität bei der Kombination von Telefon- und Onlineerhebung in der empirischen Sozialforschung. Zeitschrift für Soziologie, 46(2), 89–106. https://doi.org/10.1515/zfsoz-2017-1006
Krysan, M., & Couper, M. P. (2006). Race of Interviewer Effects: What Happens on the Web? International Journal of Internet Science, 1(1), 17–28.
Lalla, M., & Ferrari, D. (2011). Web‐based versus paper‐based data collection for the evaluation of teaching activity: empirical evidence from a case study. Assessment & Evaluation in Higher Education, 36(3), 347–365. https://doi.org/10.1080/02602930903428692
Lenzner, T. (2014). Are Readability Formulas Valid Tools for Assessing Survey Question Difficulty? Sociological Methods & Research, 43(4), 677–698. https://doi.org/10.1177/0049124113513436
Liebig, S. (2009). Organizational Data. RatSWD Working Paper Series, 67.
Liebig, S., Gebel, T., Grenzer, M., Kreusch, J., Schuster, H., Tscherwinka, R., & Witzel, A. (2014). Datenschutzrechtliche Anforderungen bei der Generierung und Archivierung qualitativer Interviewdaten. RatSWD Working Paper Series, 238.
Mason, M. (2010). Sample Size and Saturation in PhD Studies Using Qualitative Interviews. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 11(3). Abgerufen von http://www.qualitative-research.net/index.php/fqs/article/view/1428
Maynard, D. W., Freese, J., & Schaeffer, N. C. (2010). Calling for Participation: Requests, Blocking Moves, and Rational (Inter)action in Survey Introductions. American Sociological Review, 75(5), 791–814. https://doi.org/10.1177/0003122410379582
Menold, N., & Bogner, K. (2014). Gestaltung von Ratingskalen in Fragebögen. GESIS Survey Guidelines.
Misra, S., Stokols, D., & Marino, A. H. (2012). Using Norm-Based Appeals to Increase Response Rates in Evaluation Research: A Field Experiment. American Journal of Evaluation, 33(1), 88–98. https://doi.org/10.1177/1098214011414862
Morris, M., & International Union for the Scientific Study of Population (Hrsg.). (2004). Network epidemiology: a handbook for survey design and data collection. Oxford; New York: Oxford University Press.
Morton, J. E., Mullin, P. A., & Biemer, P. B. (2008). Using reinterview and reconciliation methods to design and evaluate survey questions. Survey Research Methods, 2(2), 75–82. https://doi.org/10.18148/srm/2008.v2i2.93
Oravecz, Z., Faust, K., & Batchelder, W. H. (2014). An Extended Cultural Consensus Theory Model to Account for Cognitive Processes in Decision Making in Social Surveys. Sociological Methodology, 44(1), 185–228. https://doi.org/10.1177/0081175014529767
Pan, Y., & Fond, M. (2014). Evaluating Multilingual Questionnaires: A Sociolinguistic Perspective. Survey Research Methods, 8(3), 181–194. https://doi.org/10.18148/srm/2014.v8i3.5483
Paxton, M. (2000). A Linguistic Perspective on Multiple Choice Questioning. Assessment & Evaluation in Higher Education, 25(2), 109–119. https://doi.org/10.1080/713611429
Pforr, K. (2014). Incentives. GESIS Survey Guidelines.
Prüfer, P., & Rexroth, M. (1996). Verfahren zur Evaluation von Survey-Fragen: ein Überblick. ZUMA Nachrichten, 20(39), 95–116.
Ramanathan, S., & Faulkner, G. (2015). Calculating Outcome Rates in Web Surveys. Canadian Journal of Program Evaluation, 30(1). Abgerufen von http://www.cjpe.ca/secure/30-1-090_RTZCXMWAVC.pdf
Rammstedt, B. (2004). Zur Bestimmung der Güte von Multi-Item-Skalen: Eine Einführung. ZUMA How-to-Reihe, 12.
Ritter, L. A., & Sue, V. M. (2007a). Case studies. New Directions for Evaluation, 2007(115), 57–64. https://doi.org/10.1002/ev.237
Ritter, L. A., & Sue, V. M. (2007b). Conducting the survey. New Directions for Evaluation, 2007(115), 47–50. https://doi.org/10.1002/ev.235
Ritter, L. A., & Sue, V. M. (2007c). Managing online survey data. New Directions for Evaluation, 2007(115), 51–55. https://doi.org/10.1002/ev.236
Ritter, L. A., & Sue, V. M. (2007d). Questions for online surveys. New Directions for Evaluation, 2007(115), 29–36. https://doi.org/10.1002/ev.233
Ritter, L. A., & Sue, V. M. (2007e). Selecting a sample. New Directions for Evaluation, 2007(115), 23–28. https://doi.org/10.1002/ev.232
Ritter, L. A., & Sue, V. M. (2007f). Systematic planning for using an online survey. New Directions for Evaluation, 2007(115), 15–22. https://doi.org/10.1002/ev.231
Rudolph, C. (2011). Evaluierung von Usability durch standardisierte qualitative Leitfadeninterviews.
Ruusuvuori, J. (2016). Comment: The Constituents of Rapport in the Standardized Survey Interview. Sociological Methodology. https://doi.org/10.1177/0081175016644898
Ryan, K., Gannon-Slater, N., & Culbertson, M. J. (2012). Improving Survey Methods With Cognitive Interviews in Small- and Medium-Scale Evaluations. American Journal of Evaluation, 33(3), 414–430. https://doi.org/10.1177/1098214012441499
Sakshaug, J., & Crawford, S. D. (2010). The impact of textual messages of encouragement on web survey breakoffs: An experiment. International Journal of Internet Science, 4(1), 50–60.
Sanders, M., Gugiu, P. C., & Enciso, P. (2015). How Good are Our Measures? Investigating the Appropriate Use of Factor Analysis for Survey Instruments. Journal of MultiDisciplinary Evaluation, 11(25), 22–33.
Schaar, K. (2017). Die informierte Einwilligung als Voraussetzung für die (Nach-)Nutzung von Forschungsdaten: Beitrag zur Standardisierung von Einwilligungserklärungen im Forschungsbereich unter Einbeziehung der Vorgaben der DS-GVO und Ethikvorgaben. RatSWD Working Paper Series, 264.
Schiek, D., & Ullrich, C. G. (2015). Conference Report: Qualitative Online Inquiry. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 16(2). Abgerufen von http://www.qualitative-research.net/index.php/fqs/article/view/2365
Schimpl-Neimanns, B. (2013). Methodische Herausforderungen bei der Erfassung von Bildung und Ausbildung im Mikrozensus. RatSWD Working Paper Series, 221.
Schober, M. F. (2016). Comment: Rapport in Survey Interactions. Sociological Methodology. https://doi.org/10.1177/0081175016644897
Schonlau, M., Soest, A. van, Kapteyn, A., & Couper, M. (2009). Selection Bias in Web Surveys and the Use of Propensity Scores. Sociological Methods & Research, 37(3), 291–318. https://doi.org/10.1177/0049124108327128
Schoon, I. (2009). Measuring social competencies. RatSWD Working Paper Series, 58.
Schrijver, A. D. (2012). Sample Survey on Sensitive Topics: Investigating Respondents’ Understanding and Trust in Alternative Versions of the Randomized Response Technique. Journal of Research Practice, 8(1), 1.
Schützenmeister, F. (2002). Die Bereitschaft, sich wieder befragen zu lassen, in postalischen Erhebungen/The Willingness to Be Re-interviewed in Mail Surveys. Zeitschrift für Soziologie, 31(2), 138–154.
Schwarz, N. (1994). Cognition, communication, and survey measurement: some implications for contingent valuation surveys. Mannheim. Abgerufen von http://nbn-resolving.de/urn:nbn:de:0168-ssoar-70186
Shih, T.-H., & Fan, X. (2007). Response rates and mode preferences in web-mail mixed-mode surveys: a meta-analysis. International Journal of Internet Science, 2(1), 59–82.
Shropshire, K. O., Hawdon, J. E., & Witte, J. C. (2009). Web Survey Design: Balancing Measurement, Response, and Topical Interest. Sociological Methods & Research, 37(3), 344–370. https://doi.org/10.1177/0049124108327130
Siedler, T., & Sonnenberg, B. (2010). Experiments, surveys and the use of representative samples as reference data. RatSWD Working Paper Series, 146.
Skolits, G. J., & Boser, J. A. (2008). Using an Evaluation Hotline to Promote Stakeholder Involvement. American Journal of Evaluation, 29(1), 58–70. https://doi.org/10.1177/1098214007312777
Smyth, J. D., Dillman, D. A., Christian, L. M., & Stern, M. J. (2006). Effects of using visual design principles to group response options in web surveys. International Journal of Internet Science, 1(1), 6–16.
Stocké, V. (2004). Entstehungsbedingungen von Antwortverzerrungen durch soziale Erwünschtheit: Ein Vergleich der Prognosen der Rational-Choice Theorie und des Modells der Frame-Selektion/Determinants for Respondents’ Susceptibility to Social Desirability Bias: A Comparison of Predictions from Rational Choice Theory and the Model of Frame-Selection. Zeitschrift für Soziologie, 303–320.
Temple, E. C., & Brown, R. F. (2012). A Comparison of Internet-Based Participant Recruitment Methods: Engaging the Hidden Population of Cannabis Users in Research. Journal of Research Practice, 7(2), 2.
Toepoel, V., Vis, C., Das, M., & Soest, A. van. (2009). Design of Web Questionnaires: An Information-Processing Perspective for the Effect of Response Categories. Sociological Methods & Research, 37(3), 371–392. https://doi.org/10.1177/0049124108327123
Wagner, G. G. (2016). Methodenmix hilft beim Finden und Auswählen von sozialen Indikatoren: Anmerkungen zur Methodik des Projektes „Gut leben in Deutschland“. RatSWD Working Paper Series, 260.
Weiß, S., Schramm, S., Hillert, A., & Kiel, E. (2013a). Lehrerinnen und Lehrer kommentieren Fragebögen: wie quantitative Forschung von qualitativer Forschung lernen kann. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 14(3), 30.
Weiß, S., Schramm, S., Hillert, A., & Kiel, E. (2013b). Teachers’ Comments on Questionnaires—How Quantitative Research Can Learn from Qualitative Research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 14(3). Abgerufen von http://www.qualitative-research.net/index.php/fqs/article/view/1967
Williams, I. L., & O’Donnell, C. R. (2014). Web-based tracking methods in longitudinal studies. Evaluation and Program Planning, 45, 82–89. https://doi.org/10.1016/j.evalprogplan.2014.04.001
Wolter, F., & Preisendörfer, P. (2013). Asking Sensitive Questions: An Evaluation of the Randomized Response Technique Versus Direct Questioning Using Individual Validation Data. Sociological Methods & Research, 42(3), 321–353. https://doi.org/10.1177/0049124113500474
Züll, C. (2014). Berufscodierung. GESIS Survey Guidelines.
Citation style used: American Psychological Association 6th edition (APA)