There has been a growing interest in digital PCR (dPCR) as technological advances are making it increasingly accessible and affordable. Consequently, dPCR has the potential to have a significant impact on life science research as well as clinical applications. In October 2019, we hosted a guest webinar, where guidelines for exploiting this promising technology were discussed.
We captured a lot of interesting questions from our webinar attendees. Read our compilation of the top 20 questions, along with insightful answers from our guest speaker.
To listen to the recording of this webinar, as well as post-webinar discussion, click on the link below.
“Tips and tricks for more accurate digital PCR” presented by Dr. Mikael Kubista, TATAA Biocenter AB and Department of Biotechnology, CAS
Recorded webinar session – watch here
In this expert webinar, Dr. Kubista shared his experience developing applications and providing services using digital PCR over nearly 12 years at the TATAA Biocenter. He and his team have overcome the problems common to dPCR analytical workflows and developed robust standard operating procedures to minimize the risk of error and maximize robustness and repeatability. They have developed various controls to test performance and validate methods. He also shared tips and tricks for dPCR assay design and validation, with a focus on strategies for copy number determination and rare mutation detection.
Post-webinar Q&A:
The calculation of 1.6 copies/partition and 80% saturation was based on 10k partitions. How does it change if the number of partitions increases?
It does not change. 1.6 copies per partition, corresponding to 80% saturation, is the optimum condition with respect to precision, independent of the number of partitions. The imprecision (error), however, generally decreases with an increasing number of partitions.
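This optimum can be checked numerically. The sketch below (plain Python, using an illustrative partition count of 10,000) applies the delta method to the Poisson occupancy estimator λ̂ = −ln(1 − p̂) and scans for the mean occupancy that minimizes the relative error:

```python
import math

def cv_of_lambda(lam, n):
    """Relative standard deviation of the Poisson estimate
    lambda-hat = -ln(1 - p-hat) from n partitions.
    Delta method: Var(lambda-hat) ~ p / (n * (1 - p))."""
    p = 1.0 - math.exp(-lam)          # fraction of positive partitions
    return math.sqrt(p / (n * (1.0 - p))) / lam

n = 10_000                            # number of partitions (illustrative)
lams = [i / 1000 for i in range(100, 5001)]   # scan 0.1 .. 5 copies/partition
best = min(lams, key=lambda lam: cv_of_lambda(lam, n))
saturation = 1.0 - math.exp(-best)    # fraction of positive partitions
print(f"optimal occupancy: {best:.2f} copies/partition "
      f"({saturation:.0%} positive)")
```

The scan lands at roughly 1.6 copies per partition and ~80% saturation; changing `n` rescales the error by 1/√n but does not move the optimum, which is the point made in the answer above.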
If you increase the number of partitions, can you detect a single molecule instead of LOD=3?
Assuming the assay is good, PCR amplifies a single molecule and hence detects it. LOD is not about detecting a molecule if it is present; it is about the probability of a molecule being loaded onto the chip, as we analyze only a small fraction of the total sample (e.g., a patient's blood). The sample must contain at least 3 target molecules per total volume analyzed for the analysis to be positive with 95% probability. Hence, if a sample contains 3 targets per volume analyzed and the experiment is repeated 100 times, we expect 95 of the experiments to be positive and five to be negative. LOD is independent of the measurement technique; it is a consequence of sampling ambiguity.
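The figure of 3 molecules for 95% probability follows directly from Poisson sampling statistics; a minimal sketch in plain Python, using only the numbers from the answer above:

```python
import math

def detection_probability(mean_copies):
    """Probability that at least one target molecule ends up in the
    analyzed volume, assuming Poisson-distributed sampling."""
    return 1.0 - math.exp(-mean_copies)

# Mean copies per analyzed volume needed for 95% detection probability:
lod = -math.log(0.05)
print(f"LOD: {lod:.2f} copies per analyzed volume")        # ~3 copies
print(f"P(detect | 3 copies) = {detection_probability(3):.3f}")  # ~0.950
```

Note that no instrument parameter appears anywhere in the calculation, which is why the LOD is independent of the measurement technique.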
When validating new assays against ValidPrime, using synthetic fragments containing both target sequences, how often do you see different concentrations measured with the two assays?
Concentrations are identical, as both targets are part of the same molecule, but I presume you are asking whether the measured amounts differ. It is important to run the assays as singleplex, optimizing each assay separately, as they may require different run conditions. Once optimized, most assays we have designed in-house pass validation (i.e., they give the same count as ValidPrime).
Does absolute quantification accuracy improve with droplet-based dPCRs?
When estimating concentrations of test samples based on a standard curve, the uncertainty depends on several factors, measurement uncertainty being one. Since reproducibility is higher in dPCR than in qPCR because of the linear response, I would expect this to be the case in theory. In practice, most workflows have upstream pre-analytical steps that are more confounding than the analytical step, in which case there would be no difference.
Based on the higher multiplexing options for dPCR explained, can one use half probe dilutions for all dPCR platforms?
Yes, but be aware you need clonal amplification, which is easier to achieve on platforms with a larger number of partitions.
Can primer efficiency influence the separation of target sequences when duplexing using a single-dye assay? I recently found that, using the same dye in a duplex assay, the targets could be distinguished even without varying the primer concentrations. Could this be an issue of primer efficiency?
Assuming you refer to assay efficiency, you are indeed right that targets can be distinguished using this strategy. If the primers have different Tm, or the amplicons have different lengths requiring different elongation times, cycling conditions can be set that amplify the targets with quite different efficiencies. The targets can then readily be distinguished by their Cq values. This, however, requires real-time detection. They may also be distinguished by conventional end-point dPCR, but that is much less robust.
How to quantify DNA concentration using dPCR?
For mouse, human, and rat, ValidPrime assays are available. For other organisms, you must design and validate your own assays. This is done by designing multiple assays targeting single-copy loci in the genome of interest and running dPCR. If multiple assays give the same count, it is reasonable to assume those assays indeed target single-copy loci and are quantitative. You can find more information here: http://www.tataa.com/services/.
Can you recommend procedures for the validation of the dPCR with respect to accuracy?
Accuracy requires comparison to either a standard or a reference method. For human genomic DNA, compare to SRM 2372a or a secondary standard calibrated against SRM 2372a.
How long is the ValidPrime target sequence?
The human ValidPrime target sequence is 143 bp, and the assay is extensively characterized.
How do you prove there are no incomplete side products of the chemical synthesis of the standard?
You don't. You measure the amount of (reasonably) intact standard using the ValidPrime assay.
Do we need to clean our reagents?
It depends on the study. For copy number variation (CNV) measurements, human gDNA contamination in reagents is expected to be negligible; for quantification of mitochondria in cells, it may be important, and for analysis of ancient DNA, it may be critical.
EvaGreen-based detection of a target is usually "messier" than probe-based assays. How can the EvaGreen assays be improved?
EvaGreen binds non-specifically to all dsDNA present, so the background depends on the fragment lengths in the droplets. You can usually reduce the background rain by homogenizing the fragment length, preferably using restriction cleavage (choosing an enzyme that avoids cutting your target sequence) or sonication. You may also try to reduce rain by reducing the total amount of DNA loaded.
Is the performance of all dPCR instruments comparable, or are some better than others?
The performance of dPCR instruments differs as they have different features, such as the number of partitions, channels, real-time detection, multiple read-outs, open/close systems, etc. The cost of the instrument reflects this. Which is best for you depends on your needs. Consider joining the TATAA dPCR course, which will allow you to test all the leading platforms to guide your decision.
I use ddPCR in a molecular diagnostics environment for pathogen detection. To what extent am I allowed to use ddPCR to calibrate my qPCR assays?
Calibration is reliable if the assay conditions are the same (same protocol and reagents). However, inhibition in field samples is common and should be tested for; this is done using a spike.
Regarding the comparison of sensitivity between qPCR and dPCR: the sample volume put into the instrument is what determines the limit of quantification, right? What is the typical sample volume input for dPCR instruments, and for qPCR?
Under conditions where sampling noise (Poisson distribution) is dominant, the total volume analyzed becomes limiting for quantification. For qPCR, there are immense variations in the volume analyzed. The BioMark IFC (Fluidigm) reaction volume is only about 10 nl, and there is no accuracy unless the sample is preamplified to increase its concentration, while the aAmp (AlphaHelix) uses super convection to analyze 100 µl. In dPCR, you can usually control the total volume by running more subarrays or multiple chips. For example, the QX200 droplet size is 0.85 nl, which gives a total analyzed volume of 17 µl, assuming 20,000 droplets.
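As a sanity check of the QX200 numbers above, and of how the analyzed volume translates into a quantification floor (assuming the ~3 copies per analyzed volume LOD discussed earlier), a short sketch:

```python
import math

droplet_volume_nl = 0.85          # QX200 droplet size (from the answer above)
n_droplets = 20_000               # assumed droplet count
total_volume_ul = droplet_volume_nl * n_droplets / 1000.0   # nl -> ul

# With an LOD of ~3 copies per analyzed volume (95% detection),
# the minimum detectable concentration is roughly:
lod_copies = -math.log(0.05)      # ~3 copies
min_conc = lod_copies / total_volume_ul   # copies per microliter
print(f"{total_volume_ul:.1f} ul analyzed; "
      f"detection floor ~{min_conc:.2f} copies/ul")
```

This reproduces the 17 µl figure and makes explicit why a larger analyzed volume directly lowers the concentration you can reliably detect.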
Would dPCR be suitable for quality control applications using SNPs (ex. being able to estimate precisely the concentration of a rare allele at the 0.1% scale)?
Yes, indeed. This is one of the most popular applications. The assays must still be specific to avoid false positives in partitions that happen to contain multiple wild-type molecules.
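To get a feel for the numbers, here is a sketch with hypothetical loading values (100,000 genome copies across 20,000 partitions are my illustrative assumptions, not values from the webinar):

```python
import math

total_copies = 100_000        # hypothetical total gDNA copies loaded
mutant_fraction = 0.001       # 0.1% rare allele
n_partitions = 20_000         # hypothetical partition count

expected_mutants = total_copies * mutant_fraction       # 100 mutant copies
p_detect = 1.0 - math.exp(-expected_mutants)            # essentially 1.0
wt_per_partition = (total_copies - expected_mutants) / n_partitions
print(f"expect ~{expected_mutants:.0f} mutant copies; each positive "
      f"partition also holds ~{wt_per_partition:.1f} wild-type copies")
```

With ~5 wild-type copies alongside every mutant molecule, even a small per-molecule cross-reaction rate would generate false positives, which is why assay specificity, not partition count, is the limiting factor at the 0.1% scale.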
What software to use to design probes for dPCR?
Assay design for dPCR is no different from qPCR. Use your standard assay design pipeline.
What is the maximum DNA size for Eva/SYBR ddPCR?
Longer amplicons are preferred as they give rise to more fluorescence. There is no upper limit, provided the elongation time is sufficient to copy the template and there are no problems with secondary structures. 100–250 bp is a reasonable range to aim for.
Could you explain a bit more about multiplexing using different concentrations of primers in a single reaction? Is this easily distinguishable using the instrument software alone, or does one need additional tools to analyze the results?
For well-designed and validated assays, clonal amplification, and non-complicated samples, it should be distinguishable in the standard instrument software. However, it is a complex system to optimize, and we prefer using probes for multiplexing.
Can you use ValidPrime assays or reference assays for bacterial processes? Can you validate bacterial targets?
You can use the standard ValidPrime assay to validate newly designed assays for any target, as the validation is done on the synthetic template. However, to be used as an endogenous reference, the ValidPrime assay must target the strain of interest. TATAA offers only a limited number of validated species-specific ValidPrime assays, but you may find more in the literature, developed by academic groups. These may not be as extensively validated but could be good enough for your purpose.