I am working on a secondary analysis project and encountered an interesting circumstance. One of the interviews I purposively selected - because it met the criteria to address the purpose of the research - turned out to have been conducted with someone I know, admittedly years before I met this person. This interview was housed in an open access digital archive with the consent of the individual, so I am not concerned about my use of the data, but rather about how best to manage contact and offer an opportunity for some feedback on the research. I believe it is important that I reach out to this individual at some point during the process, but there are some things to think about when considering when, and to what end.
I began this blog back in 2013 as a class-assigned research journal, and I made a lot of posts about my experiences with technology. I was interested in technology as data, technology to gather data, technology to facilitate or elevate analysis, and technology to express findings. I decided recently that I am becoming a bit complacent and have not assertively explored new technology, except when changes in availability or updates to hardware or software I already use have required it. So one of my other aims in 2020 is to explore and consider some alternative apps - new, old, and updated versions.
I recently upgraded to Screenflow version 9. This is a program I have been using for many years, and also one that I use for a limited number and type of tasks. The 31-second video here was done just for fun - I recorded my desktop plus my image using the FaceTime camera on my Mac, and saved and recorded over (actually beside) ensuing versions until I had 9 images. What I might have done, but didn't, was to make each recording longer so I had a series of moving, rather than what look like still, images. I spoke for a couple of seconds in each, but not long enough to be able to keep playing back and recording over the active playback. Screenflow - a program that does screen capture and records computer sound, and can be supplemented with external audio and video - is maybe one of the most useful software programs I own.

Last semester I was surprised to find that many students in an upper division research methods class - one that has an introductory statistics class as a prerequisite - could not accurately describe or define a random sample. I found similar confusion among graduate students around the same time, and again more recently. When asked to define random, the most common responses from students emphasized large samples. I concluded that a random sample was inaccurately viewed as one that approaches the population in numbers, rather than one selected so that every member of the population has an equal chance of inclusion - the property that makes it likely to be representative of the population. It is easy, and probably not entirely accurate, to assume this confusion reflects something lacking in prior instruction. However, I asked a few exploratory questions about probability in a class, and that seemed to be another somewhat muddy concept for a few students. (I find this a little ironic in a state that has racetracks and casinos, and began a state lottery in the 70s.) My intro to stats books all had a chapter about probability/odds and random selection.
I have not taught an introductory/basic statistics course for about six years, so maybe these (seemingly foundational) concepts are no longer discussed in detail.
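To make the distinction concrete, here is a minimal sketch of my own (not from any course materials mentioned above): what makes a sample random is the selection mechanism - every member of the population has an equal chance of being chosen - not the size of the sample. A hypothetical ordered population shows how a large convenience sample can be badly biased while a much smaller simple random sample is not.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 10,000 people, 30% of whom hold some attribute,
# listed in a non-random order (attribute-holders first, e.g. by enrollment date).
population = [1] * 3000 + [0] * 7000

# A large convenience sample - the first 4,000 names on the list - is biased:
convenience = population[:4000]
print(sum(convenience) / len(convenience))   # 0.75, far from the true 0.30

# A much smaller simple random sample, where every member has an equal
# chance of selection, estimates the true proportion well:
srs = random.sample(population, 200)
print(sum(srs) / len(srs))                   # close to 0.30
```

The point of the sketch is that size alone cannot fix a biased selection mechanism, while randomness makes even a modest sample unbiased in expectation.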
Whether or not the content of texts has changed, I can identify two potential contributing factors to this confusion:

1) The tremendous interest in online surveys. In my institution, these are typically sent to the population (everyone in a group), so response rate and numbers are emphasized. Clearly it is possible to randomly select online participants, but I suspect this is seldom done because it presents yet another challenge in securing an ample sample. This focus on numbers, rather than on the variability of participants, might have led to a subtle shift in how sampling is taught. Students I have encountered all seem to be aware of the use of incentives to encourage participation; this supports my theory that increasingly common reliance on online surveys means that response rate is emphasized in course content over randomness.

2) Popular use of the word random. I think this peaked a few years ago, but I still hear it now and then. It typically means eclectic, irreverent, varied, or unexpected, and sometimes means unpredictable, which is probably the closest to an accurate use of the word. But it can also refer to silliness or nonsense. Unfortunately, when used in that sort-of-but-not-really self-critical sense - "My taste in music is just so random..." - it comes off as a little apologetic, whereas "My taste in music is eclectic" is more something to be proud of. Any of these idiomatic/casual uses, however, may obscure the real meaning.

Clearly these are just my initial theories. I am going to keep investigating possible sources that contribute to what looks like a growing trend in not knowing what random really means, although my aim is to use a purposive, rather than random, sample.

Photo - Ohio Lottery Classic Lotto tickets. Taken by me and minimally edited (filter, crop) with Photos for Mac.

I am currently at the TQR: The Qualitative Report 2020 conference.
I looked over my old presentations and determined this is my 8th consecutive year attending this conference, which I always find inspiring and re-energizing.
As I reflected on presentations from this year and past conferences, I started to find a pattern (isn't that what qualitative researchers do?) in how people present. More accurately, I identified two common ways of presenting research papers and reports. The first is what I am going to call classic - and this is what I typically do. This type of presentation usually follows a research manuscript sequence: introduction; methods; results or findings; discussion. Interestingly, the people who present in this relatively predictable way also seem to be those who are more likely to include an outline slide. The other type - which includes more variation - is what I am going to call conversational.

I began to focus on qualitative (and mixed) secondary analysis projects about a year and a half ago, and I increasingly see benefits that include having a sustainable approach to research. Although most of this work involves the first analysis of pre-existing data (so I might or might not be recycling the data), the resulting reports often represent the first attempt to make the data accessible and useful.
I point out to students in research methods courses this important thought: research is a limited resource. In my experience in small to medium-sized US-based universities, research is at times treated as something everyone needs to constantly produce more of. The increasing number of journals does increase the seeming need for product, but I fear that the growing mass of generated reports makes it more and more difficult to be aware of, and eliminate, duplication. One really positive thing about secondary analysis is that there is no need to burden more participants (or tie up institutional review board resources, or print consent letters, or engage in any other processes that use and do not typically re-use resources), making it much more resource effective than some primary efforts. So, I aim to move into and through 2020 with a renewed focus on sustainable and useful research.

One question I struggle with, and want to share, is: to what extent is it acceptable to compromise on quality and still justify taking on a given research project? This comes to mind as an IRB member when I hear about small-scale research projects that I fear will result in unpublishable, perhaps unpresentable, reports due to design flaws. The typical responses when I ask design or quality questions have to do with time, narrow aims, student needs (i.e., thesis or undergraduate projects), etc. But if participants are involved (directly or otherwise), isn't it unethical to engage in research that most likely can never really be disseminated (or that has such profound limitations as to be practically useless if it is)?

So those are some things I will consider and be inspired by going into 2020.
Author: I am Sheryl L. Chatfield, Ph.D., C.T.R.S. I am a member of the faculty in the College of Public Health at Kent State University. I also co-coordinate the Graduate Certificate in Qualitative Research and am a member of the Design Innovation Team at Kent State.