Humans Sometimes Get in the Way of Science

Scientific innovations, like COVID-19 tests and vaccines, can be sound, but when humans under pressure get involved, mistakes can happen

I never expected to read the word “glooped” in a BBC article. An undercover BBC investigation recently revealed troubling behaviour on the part of some technicians at one of the major UK COVID-19 testing laboratories. These laboratories receive the swabs that went up the nose and down the throat of potential coronavirus hosts. Technicians there must extract the genetic material present and try to amplify coronavirus-specific sequences using a test known as PCR to check for the presence of the virus. The BBC article, which summarizes what an undercover reporter observed while working 18 shifts at this lab, goes on to describe test samples that “glooped” over other samples, presenting a clear risk for contamination and thus incorrect results.

Instead of adhering to a strict protocol when contamination was suspected, some technicians were filmed by the reporter pushing swabs back into their tubes in a manner that could easily cause a specimen that does not contain the coronavirus to become contaminated with one that does. Experts who were shown the footage for the programme were clear that this was a no-no. Furthermore, regulators in the UK pointed the finger at a “lack of leadership and professional rigour,” and professional bodies mentioned the need for registered biomedical scientists, hinting that the recruitment and training of laboratory staff to meet demands might themselves have cut corners.

As someone who spent many years working in a clinical diagnostic laboratory, I was appalled watching the footage. The behaviour captured on video is unacceptable. It can lead to incorrect test results being sent to individuals and, now that it has been exposed, it can have ripple effects on people’s trust in the test itself. To be clear, what was documented by the BBC does not invalidate the PCR test as a whole, nor does it say anything about the fake controversy over the high Ct values of the test (about which I have written here). The problem lies with the humans carrying out the test, not with the test itself. This may sound pedantic. After all, PCR tests don’t conduct themselves. They require people, in much the same way that baked goods don’t magically appear in our pantries. A bad baker may serve you an undercooked pie even if the recipe itself is solid. But it’s important to repeat that the PCR test for the coronavirus does work and is remarkably accurate on its own. What we have here instead is another unpleasant example of the ideals of science meeting the realities of human behaviour.

As a graduate student, I was tasked with sending out blood samples from cancer patients to a company that would screen them for over 200 members of a particular molecular family. We wanted to see if some of these molecules were consistently present in higher or lower amounts than expected when compared to people who did not have this cancer. Imagine finding out that, let’s say, five of these molecules are present in the blood in much higher levels but only if you have a certain type of cancer. This would constitute an interesting disease signature. Eventually, your doctor could have your blood drawn and sent to be tested for these five molecules, and the result of this minimally invasive test could indicate if you have this cancer or not.

When we received the raw data back, it was not obvious what the results were. The data had to be analyzed, and analyzed in a detached and objective way. This was not the time to pick a particular molecule that struck our fancy and bet on it like we were at the races. We called upon professional biostatisticians to do this work, as we lacked the in-house expertise. I was shocked when those professionals insisted that I pick the molecules I was interested in, based on what was already known about them, and that they would then calculate the corresponding statistical significance. I had them repeat this twice, wondering if I had misunderstood. They did not budge. My supervisor agreed with them. This was no longer an objective scan but an exercise in cherry-picking.

Thankfully, these biased results were never published. The distorted results we got from this sample set could not be replicated in a new set of samples and we moved on. I eventually surveyed the scientific literature to look at teams that had published similar signatures for all sorts of cancer and showed that most of these signatures had little overlap between them. It was, I suspect, the noise of false-positive results. When a diagnostic test is robust and the disease is suspected in a test subject, the rate of false positives can be very low, but in exploratory research, false-positive signals—a seductive fool’s gold when looking for the real deal—can be more common.
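
To get a feel for how easily such false-positive “signatures” arise, here is a rough simulation sketch in Python. The figure of roughly 200 candidate molecules comes from the screen described above; the group sizes, the use of a t-test, and the conventional 0.05 threshold are purely illustrative assumptions.

```python
# A minimal sketch of why exploratory screens breed false positives.
# Assumption: ~200 molecules are each compared between patients and controls
# even though NO molecule truly differs between the groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_molecules = 200              # candidate molecules screened (from the text)
alpha = 0.05                   # conventional per-test significance threshold
n_patients = n_controls = 30   # hypothetical group sizes

false_hits = 0
for _ in range(n_molecules):
    patients = rng.normal(0.0, 1.0, n_patients)   # no real difference...
    controls = rng.normal(0.0, 1.0, n_controls)   # ...between the two groups
    _, p_value = stats.ttest_ind(patients, controls)
    if p_value < alpha:
        false_hits += 1

print(f"Expected false positives by chance alone: ~{n_molecules * alpha:.0f}")
print(f"False positives in this simulated screen: {false_hits}")
print(f"Bonferroni-corrected threshold: {alpha / n_molecules:.5f}")
```

Roughly ten of the 200 molecules will look “significant” by chance alone, which is one reason published signatures built this way often fail to overlap or replicate.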

The decisions we make in science can be sloppy even when our tools are perfectly fine... although sometimes the tools themselves display some of that sloppiness. The BBC report on the UK testing lab shows technicians interacting with a robot of the sort I have worked with before. Many years ago, our diagnostics lab purchased a liquid handling robot. Laboratory testing involves a lot of moving liquids around and mixing them together. Each sample you want to test has to be pipetted out of a tube and into an individual well on a plate. The samples then have to be mixed with a number of reagents or, if you’re lucky, a master mix that contains all of the necessary reagents. To fill a 96-well plate with all the liquid it needs for testing involves a lot of pipetting, and to avoid contamination you need to eject the disposable plastic tip of your pipette and insert a new one in between each sample. A robot that does this for you comes in handy.
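
To put a number on “a lot of pipetting,” here is a back-of-the-envelope tally in Python. The 96-well format and the fresh-tip-per-sample rule come from the description above; the per-well step breakdown is a simplifying assumption.

```python
# Rough tally of the manual pipetting needed to fill one 96-well plate.
# The 96-well format and fresh-tip-per-sample rule come from the text;
# the per-well step breakdown is an illustrative assumption.
WELLS = 96

steps_per_well = {
    "aspirate sample from its tube": 1,
    "dispense sample into its well": 1,
    "add master mix (pooled reagents)": 1,
}
tip_changes_per_well = 1   # eject the used tip and load a fresh one per sample

liquid_steps = WELLS * sum(steps_per_well.values())
tip_changes = WELLS * tip_changes_per_well

print(f"Liquid-handling steps for one plate: {liquid_steps}")   # 288
print(f"Disposable tip changes for one plate: {tip_changes}")   # 96
```

Close to three hundred repetitive steps for a single plate is exactly the kind of workload a liquid handling robot is meant to absorb.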

These liquid handling robots have arms that act as multichannel pipettes, with tips all aligned in a row like evenly spaced fingers. They can be programmed to move multiple samples from one plate to another in a single go with no human intervention. Sometimes, though, they fail. One plastic tip falls off as the arm pulls away from the tip rack, and the technician has to intervene. That’s where judgment comes in. If a particularly snotty specimen ends up “glooping” out of the plastic tip carrying it, and it “gloops” all over wells containing specimens from other people, you can’t simply wipe it up with a tissue and carry on.

The English lab the BBC investigated was conducting between 18,000 and 40,000 coronavirus tests each day, which is massive. Speed must be an incentive in a situation like this. We just learned about a mistake that workers at a manufacturing plant in Baltimore, Maryland, made. Their plant was tasked with producing both the Johnson & Johnson and the AstraZeneca COVID-19 vaccines, but some of the workers mixed up the ingredients. This does not affect the doses currently being delivered and used, as they were made elsewhere and the Baltimore plant has yet to receive regulatory authorization, but it means that up to 15 million doses of Johnson & Johnson’s vaccine are potentially contaminated and may have to be scrapped following an investigation. The science, again, is sound, but human error can always contaminate the process, especially when speed is of the essence.
