“Data-driven” is a buzz phrase. It can be useful. It can be counterproductive.
“And in this corner we have . . .” It’s the appraisers versus the quants in the boxing ring.
It turns out that people who are experts in a subject-matter field (appraisers, for example) tend to discount the words “data-driven.” The experts may even consider these words the enemy of good judgment and common sense. “No computer can inspect a property, drive a neighborhood, and gain insight.”
Yet others seem to take pride in the words. “The methods we are developing are ‘data-driven,’ so it must be obvious to anyone that the results will be superior.” These tend to be experts in related fields, like computer programming and statistics – the ‘quants.’
Humans tend to be comfortable with what they know and to avoid what is unfamiliar. If you are good with a hammer, everything starts to look like a nail. Appraisers have the hammer of good judgment. Quants have the hammer of numerical sureness.
A problem is that few (if any) of the quants have any real field experience like that of an appraiser. Quants with such specialized knowledge tend to believe that a better, smarter data-driven algorithm can surely solve the problem. “Just get me better data. Yes, better data, and I can fix things.” The motive then is to get better data collectors. The problem here is that if it is just a matter of collecting data, anyone could do it! We don’t need appraisers!
The other side of the problem is that few appraisers have any real background in statistical modeling or econometric approaches to valuation and reliability measurement. What does this mean? It means that appraisers tend to fall back on what they know — good judgment and practical experience.
The reaction is the same, but from the opposite pole. “Why do I need an algorithm when I can just look at the comps?! I have credibility! USPAP says so!”
So one side claims “data-driven” methods, and the other side claims “good judgment.” Why don’t we have a winner? It’s been 30+ years! No referee holding up a hand in the ring!
Why there is no winner is simple and yet convoluted:
- Data does not collect itself. Someone or something (an algorithm) must decide what data to include and what to leave out.
- The data itself may need tweaking. For example, year-built may sometimes be a good predictor, but at other times it can be terrible. What really matters is effective age, not chronological age (see the sketch after this list). In other cases, a needed predictor variable, an “element of comparison,” may be missing or badly entered.
- Judgment can be biased. The bias can be conscious or unconscious. It can be endemic. “Anchoring” is the unconscious human tendency to fixate on the first number that arrives in the brain. If this number happens to be the sale price, it brings with it the ‘logic’ of market wisdom and the ‘Maslow’ human drives – money and keeping a client happy.
- The standards of practice reinforce this concept: do what your peers do, and do what clients expect. (These are the two sole ‘tests’ of an adequate scope of work!)
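To make the effective-age point concrete, here is a minimal sketch in Python. Everything in it is illustrative: the data is synthetic, and the names (`year_built`, `effective_age`, `sale_price`) are hypothetical, not from any real dataset. It simply shows how the same simple model can look very different depending on which age variable someone decided to feed it.

```python
# Minimal, illustrative sketch: chronological age vs. effective age as a
# price predictor. All data below is synthetic; no real market data is used.
import numpy as np

rng = np.random.default_rng(42)
n = 200

year_built = rng.integers(1950, 2015, size=n)
chronological_age = 2020 - year_built

# Renovations shave years off the "effective" age; assume the market
# prices effective age, not the year on the building permit.
renovation_credit = rng.integers(0, 30, size=n)
effective_age = np.clip(chronological_age - renovation_credit, 0, None)

# Hypothetical pricing rule: value declines with effective age, plus noise.
sale_price = 400_000 - 2_500 * effective_age + rng.normal(0, 15_000, size=n)

def r_squared(x, y):
    """R^2 of a one-variable least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print(f"R^2 using chronological age: {r_squared(chronological_age, sale_price):.2f}")
print(f"R^2 using effective age:     {r_squared(effective_age, sale_price):.2f}")
```

On this made-up data, the effective-age fit explains most of the price variation and the chronological-age fit does not. The numbers themselves mean nothing; the point is that choosing which variable to model is exactly the kind of judgment call no algorithm makes for itself.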
In a coming Analogue Blog we will consider yet another person in the boxing ring: The referee.
Steven R. Smith, MSREA, MAI, SRA
June 16, 2020 @ 5:12 pm
One can never get a letter in writing from peers on what they do, so this concept has no force or effect. Doing what the client expects is in the language of the SOW agreement. Sometimes what clients want or expect falls into USPAP violations for the licensee. Without clarity, the appraiser can find themselves in trouble should a complaint be filed.