Since I went to bed yesterday still thinking about constructionism/constructivism, and I read a summary of the methods section in a mixed methods research paper late yesterday, it is no surprise that these things are still on my mind this morning.
I am almost finished with Bryman's "Social Research Methods" (4th ed.) (Oxford University Press, 2012), and one of the first things he mentions in the chapter on conversation analysis is 'anti-realism.' He also aligns the approach with constructionism (not constructivism!). Among the things I commonly see in qualitative methods descriptions (including the one I read late yesterday) are mentions of 'triangulation.' Less often, I see the term 'validity' used. In the report I just read, 'member checking' (sending the transcripts back to the co-researchers) was used, as was consensus coding, plus another strategy that escapes me now.

I have thought a lot recently about consensus coding and have started to question the motive. Coming from a constructionist standpoint, what does consensus in coding 'prove'? (That two researchers agree on someone else's interpretation, even if they need to compromise to do so?) I am not certain this improves validity - is it not just as likely to produce some diluted version of the co-researcher's interpretation? Since I am not looking for the 'truth,' why would I be so worried about this type of agreement? It seems to me that using different methods to triangulate makes far more sense, because you can examine the consistencies between what the co-researcher says and does, or says and writes (depending on your methods).

As far as validity goes, I think we are always working somewhat in the near dark - what I write, even if it is merely a transcript, is inevitably going to reflect my interpretation of the co-researcher's interpretation (IPA keeps emerging for me) of his or her experiences. Does it matter more that I am representing the construct of interest, or that I am representing how the co-researcher views that construct? What if he or she is way off base when compared to the usual and expected definitions or manifestations of some theoretical construct?
I do think there is some merit in checking coding for completeness, and in having discussions where another researcher or co-researcher might question a code, but I do not see much merit in striving for a percentage of agreement (à la 'interrater reliability'). I read some research not long ago suggesting that the outcome of coding disagreements is very likely compromise - which to me would represent either dilution, as I mentioned above, or even alteration of the co-researcher's views. If I were that concerned about 'accuracy,' I might as well use a survey. I know these issues have been discussed in my many readings, but I think the questions only really begin to make sense as I encounter them (or anticipate them) in my actual research. So, like with a lot of things, the more you do, the less you know.
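As an aside, for readers unfamiliar with what a 'percentage of agreement' actually measures, here is a minimal illustrative sketch (my own example, not from any study discussed above, with hypothetical codes and coders) comparing raw percent agreement with Cohen's kappa, which corrects that percentage for the agreement two coders would reach by chance alone:

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of segments where two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)
    count_a = Counter(codes_a)
    count_b = Counter(codes_b)
    # Expected chance agreement from each coder's marginal code frequencies
    p_e = sum(count_a[c] * count_b.get(c, 0) for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten transcript segments
a = ["loss", "coping", "loss", "identity", "coping",
     "loss", "identity", "coping", "loss", "loss"]
b = ["loss", "coping", "coping", "identity", "coping",
     "loss", "identity", "loss", "loss", "loss"]

print(percent_agreement(a, b))  # 0.8
print(round(cohens_kappa(a, b), 3))  # 0.677
```

Note that both numbers only quantify whether the coders converged; neither says anything about whether the agreed-upon code reflects the co-researcher's meaning, which is exactly the concern raised above.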
Author: I am Sheryl L. Chatfield, Ph.D., C.T.R.S. I am a member of the faculty in the College of Public Health at Kent State University. I also co-coordinate the Graduate Certificate in Qualitative Research and am a member of the Design Innovation Team at Kent State.