Every profession I know that deals with numbers also provides some estimate of the reliability of the result. An engineer would not say “the most probable life of this bridge is 87 years.” The engineer might provide a minimum, or a range, or some indication of how reliable the estimated life is.
My new journal article, "Regression, Critical Thinking, and the Valuation Problem Today," just arrived in the summer edition of The Appraisal Journal. After the introduction, the first section is titled "A desirable goal of valuation: quantifiable reliability." As you might expect, the first question is "why don't we?" What keeps us from providing at least a judgment of the reliability of our own work?
Three reasons to never provide a reliability score.
Those of you who follow these ramblings know the Dell Impossibility Theorem: “You can’t get objective output from subjective input.” In the world of computers, this has been known as the GIGO rule – “garbage in, garbage out.”
So this becomes the first reason we cannot quantify the reliability of our own work. We pick comps based on the “trust me” rule: “Trust me. I know a good comp when I see it.” We can only be judged on our judgment.
A second reason is perhaps psychological in nature. Why would I want to describe my appraisal as being of “low reliability – not to be depended on”? It feels like I am admitting my own guilt, or at least that my work is poor. What I want to say is that my work is really good! Pay me. I don’t want to be judged poorly. The problem is the false link between the assignment and my competence.
The measure we seek is a measure of the reliability of the assignment, not my competence. Yet as long as we have to claim that we pick good comps, we’re stuck.
The third reason is tradition, even law, or the requirement of our clientele (like Fannie Mae) that we produce a point value. A point value.
So long as we have to produce a point value, no numerical estimate of reliability (like standard deviation) is possible. Ironically, the point value we produce is required to be the “most probable selling price.” Most probable. By definition, this requires a measure of certainty. Most probable. Most sure. Most reliable. Yet we have no way to get from “trust me” comps, picked subjectively, to a numerical score.
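To make the contrast concrete, here is a minimal sketch (my own illustration, not from the article, with made-up numbers): the same set of adjusted comparable sale prices yields both a point value and a standard deviation. A point-value-only report simply discards the second number, which is exactly the reliability information the client never sees.

```python
import statistics

# Hypothetical adjusted sale prices from five comps (illustrative numbers only)
adjusted_prices = [412_000, 398_000, 405_000, 441_000, 389_000]

point_value = statistics.mean(adjusted_prices)  # the single number we report
spread = statistics.stdev(adjusted_prices)      # the reliability measure we discard

print(f"Point value:    ${point_value:,.0f}")
print(f"Std. deviation: ${spread:,.0f}")
```

The point is not the arithmetic, which is trivial; it is that the dispersion comes for free once the inputs are data rather than "trust me" judgment.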
Recently, I was deposed in a case. The attorney asked about my use of the words “moderately reliable.” Other words could be used, such as low reliability, very low, high, excellent, or “wonderfully good.” The last might sound unprofessional, and it is. But the point is that subjective words could be used to describe the reliability of the appraised value.
With subjective input data, we can subjectively judge the quality of the data, the quantity of the data, and whether the needed elements of comparison are present. The judgment of reliability is then also subjective. But it is possible. We can provide a score. It can even be a subjective score, where a 3 is average, 4 is good, and 5 is excellent. While there are a number of other factors, we appraisers are quite familiar with "hard appraisals" where the data is sparse, of poor quality, dated, or on the wrong side of the highway. It can be done. We just have to learn not to take it personally.
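One way to make such a subjective score explicit is to rate each factor on the 1-to-5 scale just described and combine the ratings. A minimal sketch, assuming an unweighted average; the factor names, weights, and labels here are my own illustration, not a prescribed method:

```python
# Hypothetical subjective sub-scores on the 1-5 scale described above
# (3 = average, 4 = good, 5 = excellent); factor names are illustrative.
factors = {
    "data quality": 4,
    "data quantity": 3,
    "elements of comparison present": 5,
}

LABELS = {1: "very low", 2: "low", 3: "average", 4: "good", 5: "excellent"}

# Simple unweighted average, rounded to the nearest whole score
overall = round(sum(factors.values()) / len(factors))
print(f"Reliability score: {overall} ({LABELS[overall]})")
```

Even this crude roll-up does what the paragraph asks: it describes the reliability of the assignment, not the competence of the appraiser.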
Perhaps it will become a sign of a good, responsible, competent, conscientious appraiser. An appraiser who cares to provide the client with what they really need: a measure of risk.
So, can we get from here to there? Is “there” of value to the client? If it has value, can it be monetized? Maybe.