A milestone in vaccine development is the generation of high concentrations of potent, specific antibodies able to neutralize the virus of interest. Antibodies work mostly by inhibiting viral entry, and as such they can block infection very early on, preventing the onset of disease. Although for some challenging pathogens (such as dengue virus) a good neutralizing antibody titer is not always sufficient to predict vaccine effectiveness, it is generally a good indicator. But how do we measure it?
Measuring neutralization: the principles
Measuring antibody titers against specific viruses is paramount although – like anything in science! – not always straightforward. Various methods have been devised, but the principle remains the same: a serial dilution of heat-inactivated serum is mixed with a fixed concentration of virus, and the mixture is then added to permissive cells for 1 hour before being removed. A few days later, the dilution of serum that neutralized 50% of the infection is scored, allowing comparisons between different sera.

In these assays it is important that the concentration of virus is kept low enough to give a detectable infection without saturating the antibodies in the serum. This is critical to make the assay independent of the concentration of virus and solely dependent on the concentration and potency of the serum tested. For each new virus, it is therefore important to determine the limiting number of infectious units that gives a clear readout without overwhelming the antibodies in the sera. This can be achieved by testing different concentrations of virus against a fixed, neutralizing concentration of a standard serum: virus concentrations above which neutralization decreases should not be used in the microneutralization assay. At high concentrations of serum, other inhibitory factors may prevent infectivity independently of the antibodies present. It is therefore also important to include naïve serum, or serum from subjects immunized with vehicle only, to determine baseline neutralization levels.
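To make the 50% endpoint concrete, here is a minimal sketch of how a neutralization titer (often reported as NT50) might be interpolated from a serial dilution series. The `nt50` helper and all numbers are hypothetical, invented purely for illustration; real analyses often fit a full dose-response curve instead.

```python
# Hypothetical example: estimating the 50% neutralization titer (NT50)
# from a two-fold serum dilution series by linear interpolation on a
# log-dilution scale. All data below are invented for illustration.
import math

# Reciprocal serum dilutions and the % infection measured at each,
# relative to a no-serum control (100% = no neutralization).
dilutions = [20, 40, 80, 160, 320, 640]
percent_infection = [5, 12, 30, 55, 78, 92]

def nt50(dilutions, percent_infection):
    """Return the reciprocal dilution at which infection crosses 50%,
    interpolating linearly between the two flanking points on a log scale."""
    points = list(zip(dilutions, percent_infection))
    for (d1, p1), (d2, p2) in zip(points, points[1:]):
        if p1 <= 50 <= p2:
            frac = (50 - p1) / (p2 - p1)
            log_d = math.log10(d1) + frac * (math.log10(d2) - math.log10(d1))
            return 10 ** log_d
    raise ValueError("50% crossing not bracketed by the dilution series")

print(round(nt50(dilutions, percent_infection)))  # reciprocal NT50 titer
```

A serum crossing 50% infection between the 1:80 and 1:160 dilutions, as here, yields an NT50 somewhere between those two reciprocal values.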
PRNT vs TCID50 vs Microneutralization
The Plaque Reduction Neutralization Test (PRNT) is considered the gold standard for measuring antibody neutralization. The readout of this assay is the visualization of plaques, or foci of infection, which become visible after the virus has been allowed to spread cell-to-cell (rather than cell-free), generally after addition of a solid or semi-solid overlay on top of the cells. The time it takes for a plaque to become visible depends on the kinetics of viral replication and spread, and can range from 24 hours to weeks. As plaques need to be at least 1 mm wide to be scored accurately (especially by eye), this assay is generally run in 24- or 6-well plates, as anything smaller would compromise readability and resolution.
This is a robust and well-established assay, as it has been around for a long time and has gone through thorough standardization and optimization. However, the large format makes it laborious and hard to automate, and it requires a lot of starting material, which may be problematic particularly for pre-clinical studies where the volumes of sera available are limited. Plaques can be scored manually, or automatically once appropriate imaging and analysis algorithms are set up; however, only wells where all plaques are clearly separated will give sufficiently accurate values. Equally, too few plaques won't be representative.
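As an illustration of how plaque counts translate into a titer, here is a small sketch with invented numbers. The 50% cut-off follows the usual PRNT50 convention; the counts, dilutions, and variable names are hypothetical:

```python
# Hypothetical PRNT scoring: percent plaque reduction at each serum
# dilution relative to virus-only control wells. Numbers are invented.
virus_only_plaques = 48            # mean plaque count without serum
plaques_per_dilution = {           # reciprocal dilution -> mean plaque count
    10: 2, 20: 6, 40: 15, 80: 30, 160: 41,
}

# Percent reduction in plaques at each dilution.
reduction = {d: 100 * (1 - n / virus_only_plaques)
             for d, n in plaques_per_dilution.items()}

# The PRNT50 titer is the highest dilution still reducing plaques by >= 50%.
prnt50 = max(d for d, r in reduction.items() if r >= 50)
print(prnt50)  # reciprocal PRNT50 titer
```

With these invented counts, the 1:40 dilution is the last one cutting plaques by at least half, so the PRNT50 titer would be reported as 40.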
The TCID50 assay can be used as a more high-throughput alternative to PRNT. Its readout is the visualization of cell death, and while it can be more easily scaled down to a 96-well plate format, particularly when a colorimetric dye is used to quantify cell death more accurately (as we described in our previous blog), cytotoxicity remains harder to quantify reproducibly, particularly in a small format.
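For readers unfamiliar with how a TCID50 endpoint is derived, here is a sketch of the classic Spearman-Kärber calculation. The well counts and dilution series are invented for illustration:

```python
# Hypothetical TCID50 endpoint calculation using the Spearman-Kärber
# method: log10 endpoint = x0 + d/2 - d * sum(p_i), where x0 is the
# log10 of the highest dilution with 100% positive wells, d the
# (positive) log10 step, and p_i the fraction of positive wells at x0
# and each more-dilute level. All data below are invented.
log10_dilutions = [-1, -2, -3, -4, -5, -6]   # ten-fold series
wells_per_dilution = 8
positive_wells = [8, 8, 6, 3, 1, 0]          # wells showing cell death

d = 1.0                                       # log10 step between dilutions
x0_index = max(i for i, n in enumerate(positive_wells)
               if n == wells_per_dilution)
p = [n / wells_per_dilution for n in positive_wells[x0_index:]]
log_tcid50 = log10_dilutions[x0_index] + d / 2 - d * sum(p)
print(f"50% endpoint at the 10^{log_tcid50:.2f} dilution")
```

With these numbers the endpoint falls at the 10^-3.75 dilution, i.e. the inoculum contained roughly 10^3.75 TCID50 per volume tested.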
At VRS we use an immunofluorescence-based microneutralization assay, which combines the high-throughput, small-volume requirements of a 96-well plate format with higher precision and reproducibility than the TCID50. In this assay, infection is quantified by immunofluorescence staining of a viral protein of choice. Stained plates are imaged on the PerkinElmer Opera high-content microscope, and the images are processed to calculate percentages of infection relative to no-serum controls. As the time required for viral protein expression is shorter than the time required for cell death and/or plaque formation, assays can be run faster and in a more high-throughput manner. Cell integrity is also less problematic than in TCID50 studies, giving more control over the experimental conditions. Furthermore, infection can be visualized, providing additional information on patterns of viral spread.
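The normalization step behind those percentages is simple arithmetic; here is a minimal sketch (with hypothetical per-well values) of how a well's infected-cell fraction might be expressed relative to the mean of the no-serum control wells:

```python
# Sketch with invented numbers: converting per-well infected-cell
# fractions from a high-content imager into percent infection,
# normalized to the mean of no-serum control wells.
control_infected_fraction = [0.42, 0.45, 0.40]   # no-serum control wells
well_infected_fraction = 0.11                     # one serum-treated well

baseline = sum(control_infected_fraction) / len(control_infected_fraction)
percent_infection = 100 * well_infected_fraction / baseline
print(f"{percent_infection:.1f}% of control infection")
```

Running this calculation across a full dilution series yields exactly the kind of percent-infection curve used to interpolate a 50% neutralization titer.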
The importance of standards
Whichever assay is chosen, controls remain paramount. A standard serum, whenever available, should be used to benchmark each result and allow comparisons across laboratories and over time. If a standard is not available, we find it valuable to use a commercially available antibody specific for the virus of interest alongside the corresponding isotype control: while the results are going to be "cleaner" given the absence of additional serum components, these controls provide a good indication of assay performance.
The final choice will depend on needs and lab capability, but we find immunofluorescence-based methods highly reproducible and sensitive, providing quantifiable information for each well. The amount of virus used is small, which prevents saturation of the antibodies in the serum. High-content and high-throughput applications are making a difference in many fields of biological research, and we expect the same to happen in virology, once systematic standardization is in place for different viruses.