We compared what you know with who you know in part 1 – in the context of diversity (or lack thereof) in the appraisal profession.
In the past, many appraisers entered the profession because they were born into it or married into it. A third path was to become an admin assistant (or secretary), learn the trade, and eventually get to “go out” into the field.
That, in very brief, is how the appraisal profession hit bottom on diversity (and on its age demographics).
A number of ways of generating more diversity and fairness have been proposed or implemented. Most (if not all) of these solutions address the “supply” side of the equation: How do we bring more minorities and young people into the profession?
Unfortunately, these solutions tend to be square pegs cut to fit the (century-old) square holes:
- Licensing: “Pick comps, make adjustments” exam tests.
- Objective: “Form an opinion, support that opinion.”
- Report: “Show three to seven example sales.”
- Result: “It has to be a single point value.”
So, what solutions will fill the “new round hole” of today’s reality? The real questions users need answered are about the reliability (risk) of today’s (not yesterday’s) value, and the forecast of tomorrow’s safety/security/return.
License testing on the old “judgment-based” ways requires master-trainee coaching. Arguing for an opinion requires “experience” training. Picking the subjectively “best” comps requires coaching. Picking a statistical “most probable value” without statistics requires trained chutzpah. Best with a harrumph!
All were required by the old paradigm of personal training by an experienced trainer: an apprentice, taught by experience under a worldly mentor. “Opinion-based” training must be replaced by market-based data analysis. We call this Evidence Based Valuation (EBV)©.
Claim #1: How does EBV open opportunity and deliver diversity?
Simple. It is what you know, not who you know! It is about detailing data, not about picturing people and neighborhoods. Market data measurement, not class opinion comparison.
Appraisal is “an opinion of value.” Science is a “systematic study” by an expert in the scientific method.
Good science precludes — or directly confronts — personal bias. Personal bias can only hide behind analytic bias!
But good, unbiased analysis is based on facts, logic, and clear-cut algorithms – not opinion. Good analysis is reproducible and easy to follow and understand.
And what about claim #2? EBV can provide “superior results of accuracy and reliability.” EBV emphasizes use of the optimal data set: not too much and not too little, but “just right!” as Baby Bear says.
“Just right” is known in data science (and economics and statistics) as the “bias-variance tradeoff.” In USPAP it is known as using all information necessary, and “such comparable sales data as are available.” I keep looking and looking. Nowhere does it say the best three or the best seven.
In Evidence Based Valuation (EBV)©, we call this optimal data set the Competitive Market Segment (CMS)©.
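
To make the “just right” idea concrete, here is a minimal simulation sketch (in Python) of the bias-variance tradeoff in choosing how many comparable sales to use. Every number, name, and assumption in it (the drift per comp, the noise level, the `simulate_error` helper) is purely illustrative and is not part of EBV or the CMS definition; the point is only that too few comps leaves random noise uncontrolled (variance), while too many pulls in less-similar sales (bias).

```python
# Hypothetical sketch: bias-variance tradeoff in picking a comparable-sales set size.
# All constants are illustrative assumptions, not appraisal guidance.
import numpy as np

rng = np.random.default_rng(42)

TRUE_VALUE = 400_000      # assumed "true" market value of the subject
NOISE_SD = 20_000         # random sale-price noise (the variance source)
DRIFT_PER_COMP = 2_500    # each less-similar comp sits further from the subject (the bias source)
N_TRIALS = 5_000          # simulations per candidate set size

def simulate_error(n_comps: int) -> float:
    """Average squared error of the mean of the n most-similar comps, over many trials."""
    errors = []
    for _ in range(N_TRIALS):
        # Comps are ranked by similarity: the k-th comp drifts systematically
        # away from the subject, and every sale price carries random noise.
        ranks = np.arange(1, n_comps + 1)
        prices = TRUE_VALUE - DRIFT_PER_COMP * ranks + rng.normal(0, NOISE_SD, n_comps)
        errors.append((prices.mean() - TRUE_VALUE) ** 2)
    return float(np.mean(errors))

if __name__ == "__main__":
    results = {n: simulate_error(n) for n in range(1, 31)}
    best_n = min(results, key=results.get)
    for n in (3, 7, best_n):
        print(f"{n:2d} comps -> RMSE ~ ${results[n] ** 0.5:,.0f}")
    print(f"'Just right' set size in this toy market: {best_n} comps")
```

In this toy market the minimum-error set size is neither “the best three” nor “the best seven”; it is wherever the added noise reduction stops outweighing the added dissimilarity, which is exactly the tradeoff the CMS concept is meant to capture.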