Year: 2015 | Volume: 6 | Issue: 1 | Page: 32-34
Basics of diagnostic evaluation
Areej Al-Fattani, MPH
Department of Pediatrics, King Faisal Specialist Hospital and Research Centre, Riyadh 11211, Saudi Arabia
Date of Web Publication: 15-Apr-2015
Correspondence: Department of Pediatrics, King Faisal Specialist Hospital and Research Centre, MBC 58, P. O. Box 3354, Riyadh 11211
Source of Support: None, Conflict of Interest: None
Over the past decade, the introduction and development of new diagnostic techniques have greatly accelerated. However, the methodology of diagnostic research is poorly defined compared with that of therapeutic and effectiveness research. Diagnostic studies usually address the value of new medical tests, which may be used for diagnosis, for screening for risk factors, or for assessing prognosis. Questions such as "How does one know how good a test is at giving the answers one seeks?" or "By what rule can one judge which test is better for diagnosis?" are the kind you might formulate for this type of research.
Keywords: Diagnostic, sensitivity, specificity
How to cite this article: Al-Fattani A. Basics of diagnostic evaluation. J Appl Hematol 2015;6:32-4.
Introduction
Making a diagnosis is about moving from possibilities to high or low probabilities. Health care professionals must be familiar with the basic principles of interpreting and clinically using diagnostic tests.
Without any screening test for a given disease, the prevalence of the disease, together with the history and physical examination, constitutes the pretest probability (preprobability likelihood): the best-guess percentage chance that the disease is present. Usually that is not enough for clinicians, who want to be more confident and more accurate about the diagnosis; yet they regularly confront dilemmas when ordering and interpreting diagnostic tests. The probability of the disease once the test result is known is called the posttest probability (postprobability likelihood).
One of the essential principles is that every new test needs to be validated by comparison with "the truth," a gold standard. The most common design for diagnostic studies is the prospective cross-sectional study, which is strengthened by the several features shown in [Table 1].
Definitions
Sensitivity is the probability of a positive test in persons with the disease. Specificity is the probability of a negative test in persons without the disease. Both describe the relationship between the diagnostic test and the actual presence of the disease, and they are used to rule the disease in question in or out. A mnemonic for sensitivity is SnNout: "when a test has high sensitivity, a negative result rules out the disease." Similarly, for specificity, SpPin: "when a test has high specificity, a positive result rules in the disease." In the end, sensitivity and specificity tell us how accurate our test is compared with the gold standard. The calculations are shown in the example.
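The two calculations can be sketched from the cells of a standard 2×2 table. The counts below are illustrative only, not data from the article's tables:

```python
# Generic 2x2 table comparing a new test against the gold standard.
# Rows: test result; columns: true disease status (illustrative counts).
tp, fp = 90, 5    # test positive: with disease (TP), without disease (FP)
fn, tn = 10, 95   # test negative: with disease (FN), without disease (TN)

sensitivity = tp / (tp + fn)  # P(test positive | disease present)
specificity = tn / (tn + fp)  # P(test negative | disease absent)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
print(f"Specificity: {specificity:.2f}")  # 0.95
```

Note that both denominators run down the disease-status columns: sensitivity and specificity are properties of the test itself and do not depend on how common the disease is in the sample.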
Positive predictive value (PV+) and negative predictive value (PV−) are more informative when it comes to the population being tested. PV+ is the probability of having the disease among patients with positive tests, while PV− is the probability of not having the disease among patients with negative tests. These measures are affected by the prevalence and the pretest probability of the disease.
As illustrated in this example [Table 2] and [Table 3], although the sensitivity and specificity are identical in the two populations, the PV+ rose with the higher prevalence of acute leukemia among the clinically suspected cases (38%). It becomes easier to confirm the presence of acute leukemia in patients with a higher pretest probability, or baseline likelihood, of disease, so the PV+ rises. On the other hand, the PV− decreased with the higher prevalence among suspected cases: it becomes easier to exclude the disease when the pretest probability is lower, so the PV− rises as prevalence falls.
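The prevalence effect can be sketched with Bayes' rule. The sensitivity and specificity below (90% and 95%) and the low-prevalence figure of 5% are assumed values for illustration, not the actual data of [Table 2] and [Table 3]; only the 38% prevalence among suspected cases is taken from the text:

```python
def predictive_values(sens, spec, prevalence):
    """Return (PV+, PV-) for a test applied at a given disease prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Same test (fixed sensitivity and specificity), two populations:
# an assumed 5% general prevalence vs. 38% among clinically suspected cases.
for prev in (0.05, 0.38):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"prevalence {prev:.0%}: PV+ = {ppv:.2f}, PV- = {npv:.2f}")
```

With these assumed values, PV+ roughly doubles as prevalence rises from 5% to 38%, while PV− slips slightly, matching the pattern described above.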
Now let us discuss the clinical importance of differences in predictive values. To evaluate the diagnostic value of a test, consider how much the posttest probability improves over the pretest probability. PV+ helps the clinician decide how to treat the patient after the diagnostic test comes back positive. Sensitivity, on the other hand, is a property of the diagnostic test itself and helps the clinician decide which test to use. Moreover, if neither a positive nor a negative result would change the diagnostic decision, the use of that test in that particular population should be questioned.
Before the Sickling test was performed, the average likelihood of not having sickle cell anemia in this sample was 99 unaffected persons out of 114, or 87%. After a negative Sickling test, the probability of not having sickle cell anemia increased to 99%. That means a patient with a negative result probably will not undergo further procedures such as electrophoresis, whereas a positive Sickling test raises the probability of having sickle cell anemia from 13% to 64%. A patient with a positive result will still have to undergo a confirmatory procedure such as electrophoresis for a more accurate diagnosis. An important point to consider: if the prevalence in the sample is not representative of the prevalence in the population, the predictive values are meaningless.
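A minimal sketch of this arithmetic, using hypothetical 2×2 counts chosen only to be consistent with the figures quoted above (114 persons, 13% with disease, PV+ of about 64%, PV− of about 99%); these are not the article's actual table entries:

```python
# Hypothetical Sickling-test counts consistent with the quoted figures;
# illustrative only, not the original study data.
tp, fp = 14, 8    # Sickling test positive: diseased / non-diseased
fn, tn = 1, 91    # Sickling test negative: diseased / non-diseased

total = tp + fp + fn + tn                 # 114 persons
pretest = (tp + fn) / total               # pretest probability of disease
ppv = tp / (tp + fp)                      # PV+: disease given a positive test
npv = tn / (tn + fn)                      # PV-: no disease given a negative test

print(f"pretest probability of disease: {pretest:.0%}")
print(f"PV+ = {ppv:.0%}, PV- = {npv:.0%}")
```

The jump from a 13% pretest probability to a 64% posttest probability is why a positive screen still triggers confirmatory electrophoresis, while the 99% PV− makes a negative screen reassuring on its own.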
Another point, illustrated in [Figure 1]: consider a disease with a prevalence of 10% and a test with a sensitivity of 95% and a specificity of 99%. Here the PV+ is 0.9; that is, 90% of patients with a positive test actually have the disease. For another test with a sensitivity of 95% and a specificity of 95% at the same disease prevalence, the PV+ falls to 70%. Moreover, for a test with a sensitivity of 95% and a specificity of 85%, the PV+ is only 40%, meaning that only 40% of the patients who test positive actually have the disease. We conclude from this example that it is specificity, not sensitivity, that plays the major role in determining the predictive value of a positive test. The same reasoning applies to the relation between sensitivity and the negative predictive value.
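These three figures can be checked directly with Bayes' rule, using exactly the prevalence, sensitivity, and specificity values given in the text:

```python
def ppv(sens, spec, prev):
    """Bayes' rule: probability of disease given a positive test."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# Fixed prevalence (10%) and sensitivity (95%); only specificity varies.
for spec in (0.99, 0.95, 0.85):
    print(f"specificity {spec:.0%}: PV+ = {ppv(0.95, spec, 0.10):.2f}")
```

The computed values round to roughly 0.9, 0.7, and 0.4, reproducing the drop described above even though sensitivity never changes: at 10% prevalence the healthy group is nine times larger than the diseased group, so even a small false-positive rate contributes many positives.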
|Figure 1: Relationship between sensitivity, specificity, PV+ and prevalence of the disease.|
Conclusion
Sensitivity and specificity are used to assess the accuracy of a given test compared with a gold standard, while predictive values describe its usefulness in a specific population. Specificity plays the major role in determining the positive predictive value, and sensitivity plays the same role for the negative predictive value. As a test's positivity threshold is varied, whenever sensitivity rises, specificity goes down.
To calculate these diagnostic measures, you can use the following link by filling in the four cells of the 2×2 table: http://www.medcalc.org/calc/diagnostic_test.php