Every profession I know that deals with numbers also provides some estimate of the reliability of the result. An engineer would not say “the most probable life of this bridge is 87 years.” The engineer might provide a minimum, or a range, or some indication of how reliable the estimated life is.
My new journal article, "Regression, Critical Thinking, and the Valuation Problem Today," just arrived in the summer edition of The Appraisal Journal. After the introductory part, the first section is entitled, "A desirable goal of valuation: quantifiable reliability." As you might expect, the first question is "why don't we?" What keeps us from providing at least a judgment of the reliability of our own work?
Three reasons to never provide a reliability score.
Those of you who follow these ramblings know the Dell Impossibility Theorem: “You can’t get objective output from subjective input.” In the world of computers, this has been known as the GIGO rule – “garbage in, garbage out.”
So this becomes the first reason we cannot quantify the reliability of our own work. We pick comps based on the “trust me” rule: “Trust me. I know a good comp when I see it.” We can only be judged on our judgment.
A second reason is perhaps psychological in nature. Why would I want to describe my appraisal as being of “low reliability – not to be depended on”? It feels like I am admitting my own guilt, or at least that my work is poor. What I want to say is that my work is really good! Pay me. I don’t want to be judged poorly. The problem is the false link between the assignment and my competence.
The measure we seek is a measure of the reliability of the assignment, not my competence. Yet as long as we have to claim that we pick good comps, we’re stuck.
The third reason is tradition, and even law, or the requirement of our clientele (like Fannie Mae) that we produce a point value. A point value.
So long as we have to produce a point value, no numerical estimate of reliability (like standard deviation) is possible. Ironically, the point value we produce is required to be the “most probable selling price.” Most probable. By definition, this requires a measure of certainty. Most probable. Most sure. Most reliable. Yet we have no way to get from “trust me” comps, picked subjectively, to a numerical score.
Moderately Reliable
Recently, I was deposed in a case. The attorney asked about my use of the words “moderately reliable.” Other words could be used, such as low reliability, very low, high, excellent, or “wonderfully good.” The last might sound unprofessional, and it is. But the point is that subjective words could be used to describe the reliability of the appraised value.
With subjective input data, we can subjectively judge the quality of the data, the quantity of data, and whether the elements of comparison are there. The judgment of reliability is then also subjective. But it is possible. We can provide a score. It can even be a subjective score where a 3 is average, 4 is good, and 5 is excellent. While there are a number of other factors, we appraisers are quite familiar with 'hard appraisals' where the data is sparse, of poor quality, dated, or on the wrong side of the highway. It can be done. We just have to learn how to not take it personally.
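The scoring idea above can be sketched in a few lines. This is a hypothetical illustration only: the factor names (quantity, quality, recency) come from the paragraph's examples, but the equal weighting and the averaging rule are my assumptions, not a standard.

```python
# Hypothetical sketch: combine subjective 1-5 ratings of the data
# (quantity, quality, recency) into one overall 1-5 reliability score.
# Equal weights are an illustrative assumption, not an appraisal standard.

def reliability_score(quantity, quality, recency):
    """Average three subjective 1-5 ratings into one rounded 1-5 score."""
    ratings = [quantity, quality, recency]
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("each rating must be between 1 and 5")
    return round(sum(ratings) / len(ratings))

# Labels matching the post's scale: 3 is average, 4 is good, 5 is excellent.
LABELS = {1: "very low", 2: "low", 3: "average", 4: "good", 5: "excellent"}

score = reliability_score(quantity=3, quality=4, recency=5)
print(score, LABELS[score])  # 4 good
```

Any weighting scheme would remain subjective, which is the post's point: the score is a disciplined subjective judgment, not an objective measurement.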
Perhaps it will become a sign of a good, responsible, competent, conscientious appraiser. An appraiser who cares to provide the client with what they really need: a measure of risk.
So, can we get from here to there? Is “there” of value to the client? If it has value, can it be monetized? Maybe.
jimplante
October 11, 2017 @ 7:36 am
…” It can even be a subjective score where a 3 is average, 4 is good, and 5 is excellent.”
Why use a subjective measure? What does “average” mean? What’s the difference between good and excellent?
In a forum long ago, someone asked how one quantified an adjustment for view. The general consensus was that you really couldn't; it was subjective. I maintained that you *could* do it, provided you defined what "poor, fair, average, good, and excellent" meant, and worked out how much a view contributed to value. If, for example, one determined that view accounted for 10% of the value of a property, one could then define the level of contribution for each of the levels of view. E.g., on a $500,000 lakefront house, the view would account for 10% of the value, or $50,000, if the view were graded "excellent."
Well, how do you know the view contributes the 10%? I dunno. Paired sales 🙂 Or maybe one could use principal components/principal factors analysis. Or some other method of quantification.
Now the problem arises of how much each level of view (call them 1 thru 5 to save space) contributes to the whole 10%. In other words, is the scale linear? Answer: a resounding “NO”. Because a level one view, called “poor”, contributes no additional value, and may in fact detract from the value of otherwise similar properties. So we now embark on a journey to find out what the market thinks of this.
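The commenter's arithmetic can be sketched as follows. The 10% cap and the $500,000 example come from the comment above; the share assigned to each view level is an illustrative assumption (the comment only establishes that the scale is nonlinear and that level 1 adds nothing), not a market-derived table.

```python
# Hypothetical sketch of the view-adjustment example: view contributes
# up to 10% of property value, split non-linearly across levels 1-5.
# The per-level shares below are assumed for illustration only.

VIEW_SHARE = {1: 0.00, 2: 0.25, 3: 0.50, 4: 0.80, 5: 1.00}  # fraction of the cap

def view_contribution(value, level, cap=0.10):
    """Dollar contribution of a view rated 1 ("poor") to 5 ("excellent")."""
    return value * cap * VIEW_SHARE[level]

# On a $500,000 lakefront house, an "excellent" (level 5) view:
print(view_contribution(500_000, 5))  # 50000.0
# A "poor" (level 1) view adds nothing:
print(view_contribution(500_000, 1))  # 0.0
```

In practice the share table is exactly what the paired sales (or other quantification) would have to estimate from the market.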
So in this little adventure, we have determined that it is *possible* to quantify the intangible, to square the circle, and to dream the impossible dream. The real question becomes, “can our client afford to pay us to do that?” It’s kinda like figuring out stigma on a residence. You can quantify the effect, if your client can afford it.
Michael V. Sanders, MAI, SRA
October 11, 2017 @ 7:50 am
Great points. The merits of providing an expected range, rather than a point value, have been debated for years, along with the associated benefit of having a quantifiable measure of reliability. But there is an additional issue, and that has to do with the value standard itself, as articulated in the operative value definition.
The post above references “most probable price,” which is the standard commonly used in the definition of market value for mortgage lending purposes. Acknowledging the subjectivity that often accompanies the estimate of value, most probable price is still a fairly objective standard, whether the appraiser uses a mean, median or some other measure of expected value. But most probable price is NOT the standard used for many litigation assignments, which instead use the “highest price” standard articulated in the CCP definition (for eminent domain in California), the CACI Jury Instructions (for damage to real property) or the 1993 Cream case (marriage dissolution).
The implications of inconsistencies in the various definitions of market value and fair market value are the subject of a panel I’m moderating at the upcoming SCCAI Litigation Seminar in November, but there is little question that “reliability” suffers under value definitions that effectively promote subjectivity.
Steven DavisMRICS
October 11, 2017 @ 9:08 am
From what you have said, the extent to which a report is credible is the extent to which a report is reliable.
Steve Owen
October 11, 2017 @ 3:51 pm
I have always contended that "credible" is a USPAP requirement and therefore does not really have a gradient. It either is credible or is not. However, after the determination of credibility, there is the question of reliability. Within a credible report, there can be relatively low or relatively high reliability. But, I agree with the author that this is subjectively described. If the assumption is that the appraiser followed USPAP, including the requirement for being competent, then the primary influence on reliability is the quantity and quality of data. I see no good reason not to give the client an analysis of that. Subjectively, of course.