Is there still value in the review of literature as a type of (secondary) research paper? I think the answer is "it depends." I wrote a mildly ranting post for a research interest group last week about reviews of literature and have been thinking about this since then. I also collaborated on a scoping review earlier this year and have recently been having discussions with a doctoral candidate about the potential structure of a research review component of a three-manuscript dissertation. Previously, I worked with some students and faculty members on a couple of systematic reviews that were never completed to the point of being submitted for publication. My only actual authorship of a published review was a qualitative meta-study, which is a less commonly used variation within research reviews. This is probably an average level of experience, but I have a sense that, relative to my peers, I have above-average skepticism about reviews of literature.

Some of the challenges of conducting an exhaustive search became clearer to me when I was involved with graduate students and other faculty in trying to conduct and report on a systematic review for an agency. This was around 2017, and we found problems replicating searches, even when using protocols saved in the databases or aggregators. One of the students investigated this, and our reference librarian ended up reaching out to the database representative. This was just one of several complications, including profound challenges related to retaining collaborators, and we eventually abandoned the effort.
I have had a couple of excellent training sessions with reference librarians and have come to understand that the best way to identify the most useful information is not to rely entirely on a keyword search, even one that includes database-specific and discipline-preferred terms. It seems not just helpful but essential to search the reference lists of good papers (some journals that publish good papers are not indexed - I should know, as I've been editor of "The Ohio Journal of Public Health," which is not indexed and publishes good papers!), to find and read the articles that have cited the papers you like (sometimes relevant papers are published in other disciplines that are not indexed by your databases of choice, or use inconsistent terminology and so are not revealed in keyword searches), to search for chapters in edited books - which may be research reports or relevant methods-focused works - and basically to look anywhere you have access to in your quest to find the best primary sources. This probably just sounds like diligent searching. But one issue with this approach, as far as the systematic review style of review goes, is the difficulty of describing the search. Replicability is one criterion for a systematic review, and the more innovative (and potentially effective) your search is, the tougher it is to replicate. So the dilemma becomes whether to compose a really high-quality, relevant review or to follow the conventions of a systematic review process. One challenge with the former is that many people, including me, do not really trust consensus conclusions from a narrative, non-systematic review; I am skeptical about the quality of a search that is not described well enough to be replicable - even though here I am advocating for exactly that kind of search.

This brings me back to the point of a research review in the first place, which is primarily to contextualize, identify a specific direction, and provide a rationale for a new study. This can be done using a diligent and varied search and written up briefly as the introduction to a paper. Other applications of review processes include meta-studies - quantitative (meta-analyses) or qualitative (various labels) - whose aim is to provide consensus or summary findings, often to show efficacy of a treatment or program. I think these definitely have their role, and I hope authors describe a complex and inclusive, though not necessarily precisely replicable, search process; in this instance, credibility and completeness to me outweigh replicability. Scoping reviews, which are a newer thing to me, feel like a sort of hybrid of systematic review and meta-study - there is a specific enough aim and (ideally) a very focused topic. These seem to have their use; for instance, a scoping review may be the best process to identify consensus findings related to a subgroup or a somewhat unique, but not unlikely, context.

The two types I struggle with most are narrative reviews (as free-standing papers, not as the setup for a study) and systematic reviews. Narrative reviews, whether they have a search strategy or not, tend to have some waffly aim of describing the current state of knowledge with regard to X. These are often given as class assignments, and in that sense seem to be more about getting students to prove they can do library research than anything else.
Annotated bibliographies are similar but carry no expectation of integrating the information, which to me is ironic because integration is perhaps the highest-value activity for students writing a narrative review. I've assigned narrative reviews in the past, and when I use a topic list, I see most students cite the same sources. Databases use somewhat opaque algorithms to decide the order in which to present results, and whether you sort by date or relevance, there are typically multiple items that would rank as equal; students tend to work from the top down regardless.

Systematic reviews either yield so many hits that it becomes necessary to apply excessive filters, reducing the practical use, or yield so few hits that it is clearly not yet time to write a review of, for instance, the three studies about something - readers can just read the primary papers. Another challenge is the number of review papers being published; authors unfortunately tend to start from scratch, so while they may have the same focus and question, they may include slightly different sources due to minor variations in inclusion/exclusion criteria, like publication dates or location of research. I know there are reviews of reviews out there (and meta-analyses of meta-analyses), but at some point the overlap and/or recycling of information dilutes the value and makes me think - isn't it about time somebody did something, instead of continuing to tell me what has been done and what they think needs to be done?

The other really important component of any research review, in my view, is the use of quality assessment tools for all papers included. I've gone back and forth on this and now believe that anything that does not pass assessment should either not be included or, as with the GRADE or GRADE-CERQual processes, should be clearly identified as a low confidence/low plausibility finding. A review, after all, is only as good as the sources it includes.