The cliché I used for my title is one of those things floating around in my head without any specific reference or example. The prototypical scenario involves someone who has acquired, or is considering, a work of modern art or other non-traditional creative endeavor - dance, music, etc. - and who is generally both wealthy and possessed of poor taste. In an alternate or comparative scenario, the person has already acquired some stereotypical representation of "low brow" culture - like a velvet Elvis picture, or a print of dogs playing poker - and this phrase is offered in defense of their selection.

I started thinking about this when I was looking at checklists to assess qualitative research, which I was doing yesterday while looking at potential journals for submitting a qualitative research report for publication. This particular report leans toward health science - versus social science, health behavior, or methods, which are the other areas I typically focus on. Health science and related areas like healthcare research are, in my experience, the areas where it is more common to encounter checklist requirements. The Lancet, for example, requires authors to use a standard checklist for any submitted research report. I am not against research checklists in general - they definitely have utility in meta-studies, where the quality of integrated results depends on the quality of source articles - but I find some of the specific items concerning.

The checklist I'm going to talk about here is the COREQ, or "COnsolidated criteria for Reporting Qualitative research" Checklist (derived from Tong et al., 2007). This one has been around for a few years, and Google Scholar, as of today, shows more than 25,700 citations, so it is in high rotation, so to speak. I mentioned it as an example of a qualitative research checklist in one of my papers (Chatfield, 2018, in the journal Sex Roles), although I did not say a great deal about it at that time.
The first warning sign for me, looking at this now, is in the title - "reporting qualitative research." As I know from writing the paper for Sex Roles where I mentioned this - that paper was itself an attempt to provide author guidance for reporting results from qualitative research - it is very difficult to do this in a way that is complete, clear, and remotely concise. The actual article title that contains the COREQ mentions "interviews and focus groups," which to me presents another warning sign - these are two fairly routine, often used and perhaps misused methods for getting data, and not research designs. This does, however, alert readers to a limitation of this form and process - it is not much good for secondary analysis, mixed methods, or other types of data gathering like diary entries, direct observation, and other ethnographic-type approaches, and it is probably not remotely useful for creative or arts-based designs.

I am not going to go through this item by item but will instead pick on just a couple, because after all, this is my blog and I don't have to offer equal time or opportunities. In the design section, there is this item: "Was data saturation discussed?" Saturation, of course, is a complicated and maybe contentious topic. Mostly people seem to mean that, after hearing and reading interviewees' responses to the interview guide items, they did not find any novel information, so they think they have "saturated" the information, like filling a sponge with water. This is not the same as the theoretical or construct saturation described by Glaser and Strauss (1967/1999) in their grounded theory book. The more frequent, probably more contemporary usage, as I suggested, has to do with repetitive responses.
If you are asking questions for which the responses are items in a discrete category list - hair color, time of day you exercise, years of experience, favorite type of qualitative research approach - maxing out, or saturation, will happen for sure, and probably pretty quickly with homogenous participants. The same thing is likely if the experience of interest is pretty well defined and minimally variable. But when you look at your data as a whole - this person's story with regard to X - this gets trickier. Are you going to create prototypical cases or personas, so you can then say "we found just four distinct types among our participants" and use that to argue you achieved "saturation"? Absent application to a limited category list, or the aim of creating prototypical personas, I struggle with the idea of saturation. I tend to think you could interview 99 people and stop, but there is a chance the 100th or the 101st would offer something new, and you would never know. Also, and probably more relevant - how important is it to be able to say "we have exhaustive information about this category"? This brings to mind the normal curve - which I've posted about in the past - and the fact that most things cluster around the mean while a smaller proportion fall on the tails. So having a lot of data just tells you things occur within an average range, which you probably knew anyway. And the tails are where the interesting and useful cases live. Therefore, it is possible that readily achieving saturation may be a sign that there is not enough variation in your sample, rather than that you have found some version of the Truth.

A couple of other problematic items for me on the COREQ include the question about a "coding tree" - what even is this? I think you can make one with some software programs, but certainly that is not what is meant - and the question of whether "themes were identified in advance or derived from the data."
This may refer to whether the code list was created a priori - before analysis - as opposed to open coding, with codes created from the data in a simultaneous reading and coding process. If so, there is likely to be confusion from the use of the term "theme" instead of "code," which for first cycle analysis I would suggest is the more universal term. Some authors (e.g., Saldaña, various editions) provide definitions for both. The COREQ also asks about returning transcripts to participants for comment and correction - AKA member checking - and I can fairly comfortably say there are times when you may not want to do this. But, as with the saturation item, I'm not sure what a "no" even means.

The broader concern I have with checklists is the use of them to guide research work. This is OK if the checklist is just a form to ensure the general elements are there: Did you submit to the IRB? Did you create an interview guide? Pilot test the guide? These items are qualitatively different from things like "saturation," which basically prescribes both a sampling strategy and an analysis outcome. Interpretative Phenomenological Analysis (IPA), as one example, does not employ saturation, and trying to use it is a mistake. Member checking is another thing not always used in IPA.

Going back to "I know what I like" in the title and opening: as I thought about COREQ and checklists in general, I thought about their use both to guide research and to assess reports. I do not think I could confidently use the COREQ to determine the quality of a research report. A couple of alternatives - the CASP checklist (CASP, 2024) and the TQR checklist (Cooper, 2011) - offer more generic items, applicable to multiple approaches, that center on thorough and transparent reporting. Although I have not kept specific track, I'm confident that I have reviewed hundreds of qualitative research articles, perhaps even closer to a thousand, and I still see new ways of presenting information.
At the same time, I am increasingly confident in my ability to identify good research practice - even when it informs a poorly written report. I also review works that are similar to my interests and some that are very far from my interests - and in these instances my role usually is to look at methods, not subject matter. For me, whether the authors did good research is key, although the extent to which authors of a poorly crafted report are invited to revise depends a lot on the philosophy of the journal itself. I did not get to this level of reviewer competence or confidence through use of a checklist, but instead through hours of practice working toward expertise.

To reinterpret my title: "I do* know what's good, but what I like may not be relevant." *often but not always

References

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349-357. https://doi.org/10.1093/intqhc/mzm042

COREQ (Consolidated criteria for Reporting Qualitative research) Checklist. Available at: http://cdn.elsevier.com/promis_misc/ISSM_COREQ_Checklist.pdf

Chatfield, S. L. (2018). Considerations in qualitative research reporting: A guide for authors preparing articles for Sex Roles. Sex Roles, 79, 125-135. https://doi.org/10.1007/s11199-018-0930-8

Glaser, B., & Strauss, A. (1967/1999). The discovery of grounded theory: Strategies for qualitative research. Routledge.

Saldaña, J. (various). The coding manual for qualitative researchers (1st-4th editions). Sage.

Critical Appraisal Skills Programme (CASP). Qualitative checklist. https://casp-uk.net/checklists/casp-qualitative-studies-checklist-fillable.pdf

Cooper, R. (2011). Appraising qualitative research reports: A developmental approach. The Qualitative Report, 16(6), 1731-1740. https://doi.org/10.46743/2160-3715/2011.1325

Disclaimer: I have been writing this blog for about a decade, and it is possible newer content replicates, overlaps, or contradicts older content. I do not generally mine my old posts for information to use in new posts, so duplication or contradiction occurs organically and unintentionally.
Author: I am Sheryl L. Chatfield, Ph.D., C.T.R.S. I am a member of the faculty in the College of Public Health at Kent State University. I also co-coordinate the Graduate Certificate in Qualitative Research and am a member of the Design Innovation Team at Kent State.