Documenting serious issues in a bee paper on “No Mow May”

And what happened when I alerted the journal

Zach Portman
9 min read · Oct 5, 2022

I read a lot of papers in scientific journals and I often come across papers that likely have some bad identifications, and I occasionally point them out on twitter or email the authors. However, in this case I came across a paper that had such serious issues that I took the step of alerting the journal. My goal here is to document the issues fully and to explain why I took that step. I also want to shed some light on my experience reporting it to the journal and my general disappointment in the entire process.

The paper in question is “No Mow May lawns have higher pollinator richness and abundances: An engaged community provides floral resources for pollinators” by Dr. Israel Del Toro and Dr. Relena R. Ribbons, both at Lawrence University in Wisconsin, published in 2020 in the journal PeerJ. When I initially read the paper soon after it was published back in September 2020, it was clear that there were big problems with the paper, and particularly the bee identifications. I fired off a twitter thread documenting some of those issues:

Digging into the issues

Here, I take a deeper look at the issues I originally raised and more fully explore ones that I didn’t bring up in the twitter thread.

The issues boil down to two main parts:

  1. The paper reports multiple bee species that simply do not occur in Wisconsin in May.
  2. The paper methods state that the majority of specimens were identified by sight in the field. This is a problem because many of the listed species require a microscope to identify.

In addition to the two main issues, there are some miscellaneous issues such as unexpected plants and many misspelled species names.

Issue 1: The paper reports multiple bee species that simply do not occur in Wisconsin in May.

One of the most glaring issues in the paper is that it reports finding 27 bees from five different species in the genus Melissodes. This is a big issue because Melissodes aren’t active in May in Wisconsin. In that area, they are only active in the summer and fall. Finding even one Melissodes species flying in May would be implausible. Finding five Melissodes species flying in May is biologically impossible. The multiple Melissodes species reported raise some big red flags and call into question the reliability of the entire bee dataset.

The apparent level of precision is also an issue — how were species-level identifications made when no Melissodes could be present? In other words, since there aren’t Melissodes present in Wisconsin in May, how was there enough certainty to identify five separate species? This further suggests that the rest of the species-level identifications are not reliable, even though most of the other species listed could plausibly be found at that place and time.

This precision issue is reinforced by the fact that two of the reported species — Melissodes druriellus and Melissodes rustica — are the same species. Melissodes rustica is just a defunct name for Melissodes druriellus.

The whole Melissodes issue is especially puzzling because the most common Melissodes species reported in the study, Melissodes bimaculatus (with 19 records), is a distinctive bee that is relatively easy to identify, even by sight.

Issue 2: The paper methods state that the majority of specimens were identified by sight in the field. This is a problem because many of the listed species require a microscope to identify.

In the methods of the paper it states:

As we netted suspected bee specimens, the bees were moved into storage mason jars. Collected bees were identified in the field … Unknown specimens were stored in 70% ETOH, and taken to the laboratory for subsequent identification using various keys and regional lists

The paper lists 33 total bee species. Of those, I would be able to comfortably identify probably 14 in the field and the remaining 19 would need to be collected and examined under a microscope. I am a bee taxonomist whose primary job is identifying bees, so I have a lot of experience identifying bees both in the field and in the lab.

The following species are ones that I personally would not be able to confidently identify in the field. For many of these I could make a good educated guess, but there would still be a relatively high level of uncertainty in any field identifications of these species.

  1. Andrena crataegi
  2. Andrena cressonii (males could be identified if you get a good look at their face marks)
  3. Andrena miranda
  4. Ceratina calcarata (males could be possible with a good camera or magnifier)
  5. Melissodes denticulatus
  6. Melissodes druriellus
  7. Melissodes rustica (again, note this is just an old name for M. druriellus)
  8. Nomada cressonii
  9. Hoplitis pilosifrons
  10. Hylaeus modestus
  11. Hylaeus mesillae
  12. Lasioglossum coriaceum
  13. Lasioglossum cressonii
  14. Lasioglossum laevissimum
  15. Lasioglossum pilosum
  16. Lasioglossum zephyrum
  17. Sphecodes cressonii
  18. Sphecodes dichrous
  19. Osmia pumila

Some people could reasonably argue that a portion of these 19 species could be identified with some degree of certainty by a person with a decent degree of expertise using a magnifier or macro camera. However, the methods of this paper specifically state the bees were put in a “mason jar” and make no mention of any magnification aids. Even if magnifiers were used, most of the species would still be impossible to identify in the field. The 100+ records of Lasioglossum (Dialictus) species are especially hard to believe because the bees are so small and difficult to identify, and I am not aware of anyone who can identify them by sight. And again, how can one identify Melissodes by sight during a time period when they could not actually occur?

Though the paper does not state how many bees were identified in the laboratory, an earlier version of the paper available on the PeerJ website states “A dozen specimens that were difficult to identify in the field were collected”. A dozen specimens would not be sufficient to identify the many species that require a microscope for accurate identification.

Representative examples of three of the species of Lasioglossum (Dialictus) reported in the study: Lasioglossum cressonii, Lasioglossum pilosum, and Lasioglossum laevissimum. An American quarter is included for scale. In my professional opinion, these species require a microscope for accurate identification.

Another odd aspect of the bee identification methods is that the paper lists identification resources that are not sufficient to identify the bees. The paper states:

Unknown specimens were stored in 70% ETOH, and taken to the laboratory for subsequent identification using various keys and regional lists (Wolf & Ascher, 2008; Williams et al., 2014; Wilson & Carril, 2015; Gibbs et al., 2017).

The big issue here is that Gibbs et al. 2017 is a checklist of Michigan bees, Wolf and Ascher 2008 is a checklist of Wisconsin bees, Williams et al. 2014 only covers bumblebees, and Wilson and Carril 2015 is the guidebook “The Bees in Your Backyard”. With the exception of the Williams et al. 2014 book for bumblebees, none of these allow one to identify bees to species, making these identification methods rather nonsensical.

Other red flags

Another big red flag in this paper is that it reports implausible plant data. In Table 1 of the paper, it reports that 30% of homes and 7% of parks contained blooming Canada Thistle (Cirsium arvense). In the supplemental materials, it also reports one observation of Bull Thistle (Cirsium vulgare) from a lawn. I am not a plant expert, but finding thistles blooming in May in Wisconsin is biologically implausible. Reporting blooming Canada Thistle in May from 30% of lawns in this study is downright bizarre.

Another red flag is the many misspellings of bee names. Five bee species names are misspelled or misstated at various points in the paper: “Andrena cressoni”, “Nomada cressoni” and “Lasioglossum cressoni” should all be spelled “cressonii”. In addition, “Melissodes druinellus” should be “druriellus” and “Augochlorella pura” should be “Augochlora pura”. These basic errors further decrease my confidence in the results of the paper.

My experience reporting to the journal

After posting about the paper on twitter, I decided the issues were serious enough to report directly to the journal.

I sent an email on September 25th, 2020 to Dr. Brock Harpur, the handling editor of the article at PeerJ. He responded promptly, and also included Dr. Peter Binfield, who is listed as a “Publisher and Co-Founder” of PeerJ. After a little additional clarification about the issues and process, I gave them final permission to approach the authors and I agreed that they could tell the authors who was reporting it (i.e. I agreed not to be anonymous in my complaint). I sent that email on September 30th.

About six months later, I had not heard any updates, and I sent a quick followup email on May 7th, 2021. I received a prompt response from Dr. Harpur stating that they had been working on the issue.

Five days later, the journal posted a “Publisher’s Note” on the article page, stating “The Publisher has been made aware that some of the specimens in this study may be misidentified. We are working with the authors to clarify the situation and will update the article depending on the outcome of this inquiry.” A screenshot is below:

Then, on August 16th, 2021, the “Publisher’s Note” was updated, to now state: “The Publisher is conducting an investigation into issues raised by a reader. We are working with the authors to clarify the situation and will update the article depending on the outcome of this inquiry.” A screenshot is below:

That note is still currently up as of this writing.

After seeing a news article featuring the No Mow May paper, I sent a final email to the editor on March 29th, 2022, asking for an update. Again, I received a prompt reply and was told they were working on it. I have not had any contact with the journal since.

Where we are now

Initially I was hopeful because the editors responded promptly and seemed to take my concerns seriously. I hoped that these issues would be quickly resolved by the journal, especially since I consider these issues to be quite straightforward and they can be easily verified by a number of independent bee experts or taxonomists. However, now that it has been two years with no resolution, or even an update beyond “we’re working on it” and a vague “Publisher’s Note”, I am distinctly less hopeful.

I was initially hoping that some kind of explanation would be forthcoming: an answer to how the study reported bees that couldn’t possibly occur at that time and place. Ideally, if sufficient reference specimens were saved, the project might be salvageable. However, I have yet to see any explanation for how the study found biologically impossible results or how most of the bee species could have been identified given the stated methods. As a result, I personally think that the paper should be retracted. I will not attempt to speculate on how the study could have reported impossible results.

In the meantime, this study has been featured by various media outlets (such as the New York Times), it has accrued numerous citations by other scientific papers, and has been used in two meta-analyses that I am aware of (meta-analyses one and two). In light of the slow response of the journal and the proliferation of this research in the public and scientific spheres, I have decided to publicly and thoroughly document the issues here. I hope the journal will conclude its investigation soon and officially correct the scientific record.

Update (21 Nov 2022): The paper was retracted by the journal on 18 Nov 2022.

Second Update (15 Apr 2023): I have received numerous questions recently and would like to provide some brief answers and clarification. First, in my professional and scientific opinion, no part of the now-retracted paper should be considered reliable. In general, retractions of papers only happen for serious issues that cannot be addressed with a correction, and the findings of a retracted paper should be considered invalid. More information about the process of retractions is available on the website of the Committee on Publication Ethics, whose guidelines PeerJ follows.

Finally, my critique of the now-retracted study should not be taken as evidence either for or against the general concept of “No Mow May” as it deals only with this particular study. If you would like to learn more about “No Mow May” and lawns for pollinators, I recommend this presentation and panel by University of Minnesota researchers.



Zach Portman

I am a scientist who studies bees. My research covers the identification, biology, evolution, and conservation of native bees.