Appraisal Myths, Fallacies, and Foibles #1 is the first in a series.

Hype vs. Reality: Adjustments and Comps –

is the title of my talk on August 4th, 2017, for the Northern California Chapter of the Appraisal Institute.  In preparing for the talk, it struck me that much of our discussion revolves around how technology is affecting our industry.  Technology exposes some relics of our appraisal habits, anachronisms that are no longer needed.  Some of these continue to hold back our profession, while our competitors are not restrained by tradition and inertia.

What surprises me is that data science thinking uncovers other weak points and fallacies in our valuation theory.  Data science is the answer to a big-data world.  It is the science of data, which includes statistics, analytics, and mind/machine optimization.

I have often railed against the “inferential fallacy”.  Some appraisers pretend they draw random samples from imaginary populations of comparable sales, just so they can misuse p-values and other statistical tests to ‘prove’ how good their opinion really is.  Unfortunately, some of our ‘advanced’ education promotes this complex, inappropriate, outdated approach.  Inferential statistics are convoluted, require heavy assumptions, and are of little use to the appraiser.
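A quick simulation makes the point.  This is a minimal sketch with hypothetical numbers; the hand-picking rule below (taking the six sales closest to an optimistic anchor price) is just one illustration of non-random selection.  The confidence-interval machinery still runs, but its nominal 95% guarantee evaporates once the “sample” is not random.

```python
# Minimal sketch, hypothetical numbers: a nominal 95% confidence interval for the
# market mean behaves as advertised on a random sample, but not on hand-picked
# "comps" clustered around an anchor value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_mean, sd, trials = 500_000, 50_000, 2_000
covered_random = covered_picked = 0

for _ in range(trials):
    market = rng.normal(true_mean, sd, 200)        # one neighborhood's sale prices

    # (a) a genuine random sample of 6 sales
    sample = rng.choice(market, 6, replace=False)
    lo, hi = stats.t.interval(0.95, len(sample) - 1,
                              loc=sample.mean(), scale=stats.sem(sample))
    covered_random += lo <= true_mean <= hi

    # (b) the 6 sales closest to an optimistic $530,000 anchor (non-random selection)
    picked = market[np.argsort(np.abs(market - 530_000))[:6]]
    lo, hi = stats.t.interval(0.95, len(picked) - 1,
                              loc=picked.mean(), scale=stats.sem(picked))
    covered_picked += lo <= true_mean <= hi

print(f"Random-sample coverage: {covered_random / trials:.0%}")   # close to 95%
print(f"Hand-picked coverage:   {covered_picked / trials:.0%}")   # far below 95%
```

The interval is computed identically in both cases; only the sampling assumption differs, and that assumption is the whole basis of the inference.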

Another fallacy, commonly found in residential software, is the use of regression coefficients as ‘proof’ of adjustment amounts.  The two are mathematically quite different.  A regression coefficient contributes to the prediction of the dependent (predicted) variable, usually property value.  An appraisal adjustment is an estimate of the marginal change in value when all other features are held constant (“ceteris paribus,” as economists call it).  This conflation is prevalent, even though The Appraisal of Real Estate warns against it (14th ed., p. 400).
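A small simulation, with made-up numbers, shows the difference.  When bathrooms are left out of the model, the GLA coefficient still predicts price reasonably well, but it silently absorbs the bathroom effect and overstates the marginal (ceteris paribus) contribution of living area, which is exactly the number an adjustment is supposed to be.

```python
# Minimal sketch, simulated data: a coefficient that predicts well is not
# automatically the ceteris-paribus marginal value an adjustment requires.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Larger homes tend to have more bathrooms (correlated features).
gla = rng.uniform(1200, 3000, n)                         # gross living area, sq ft
baths = np.clip(np.round(1 + (gla - 1200) / 600 + rng.normal(0, 0.5, n)), 1, 5)

# Assumed "true" market: $150/sq ft for GLA plus $12,000 per bathroom, plus noise.
price = 150 * gla + 12_000 * baths + rng.normal(0, 20_000, n)

# Model A: price on GLA alone.  The slope absorbs the omitted bathroom effect.
slope_a = np.polyfit(gla, price, 1)[0]

# Model B: price on GLA and baths together, with an intercept.
X = np.column_stack([gla, baths, np.ones(n)])
coef_b, *_ = np.linalg.lstsq(X, price, rcond=None)

print(f"GLA coefficient, GLA-only model: ${slope_a:,.0f}/sq ft (biased upward)")
print(f"GLA coefficient, fuller model:   ${coef_b[0]:,.0f}/sq ft (near the assumed $150)")
```

Only when the model specification actually supports the ceteris paribus reading does a coefficient become a defensible adjustment; the prediction can look fine either way.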

R² is easy to calculate and looks sophisticated.  But its validity is tempered by several other factors that must be considered at the same time.
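One of those factors is easy to demonstrate.  In this minimal sketch with simulated data, in-sample R² can only rise as variables are added, even columns of pure noise, so a high R² by itself proves little about whether the model, or any coefficient in it, is meaningful.

```python
# Minimal sketch, simulated data: in-sample R-squared never decreases when
# predictors are added, even if they are pure noise.
import numpy as np

rng = np.random.default_rng(1)
n = 30
gla = rng.uniform(1200, 3000, n)
price = 150 * gla + rng.normal(0, 40_000, n)

def r_squared(X, y):
    """Ordinary least squares R-squared, with an intercept column appended."""
    X = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

X = gla.reshape(-1, 1)
print(f"GLA only:               R^2 = {r_squared(X, price):.3f}")

# Append ten columns of random noise posing as extra "predictors".
X_noise = np.column_stack([X, rng.normal(size=(n, 10))])
print(f"GLA + 10 noise columns: R^2 = {r_squared(X_noise, price):.3f}  (higher, yet meaningless)")
```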

Information loss occurs when you discard valuable information without reason.  I’ve used a learning example of this for almost 15 years in the Stats, Graphs, and Data Science 1 class.  Unfortunately, the current version of the Fannie Mae 1004MC form commits this information-loss error.  The resulting trend moves in the wrong direction at the worst possible times:  market tops and market bottoms.  The 1004MC alone may help create and exaggerate the next market meltdown, and the next taxpayer bailout of the GSEs and lenders who simply follow the error.
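Here is a minimal sketch with simulated sales.  The three broad time bins below are only an assumption in the spirit of the form’s period medians, not its exact layout.  Right after a market turn, the coarse bin medians can still read “increasing,” while a simple fit to the individual sale dates already shows the decline.

```python
# Minimal sketch, simulated sales: coarse time-bin medians versus a trend fit
# to individual sale dates, right after a market turn.
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Twelve months of sales: prices rise $1,000/month, then drop $5,000/month after month 11.
month = rng.uniform(0, 12, n)
trend = np.where(month < 11, 1_000 * month, 11_000 - 5_000 * (month - 11))
price = 400_000 + trend + rng.normal(0, 2_000, n)

# Coarse view: medians of three broad periods.
periods = {"months 0-6": month < 7,
           "months 7-9": (month >= 7) & (month < 10),
           "months 10-12": month >= 10}
for label, mask in periods.items():
    print(f"{label:>13}: median ${np.median(price[mask]):,.0f}")
# The medians typically still step upward, period over period, after the turn.

# Finer view: least-squares slope of price on sale date over the last two months.
recent = month >= 10
slope = np.polyfit(month[recent], price[recent], 1)[0]
print(f"Slope over last two months: ${slope:,.0f}/month")   # negative: the turn is visible
```

Binning the dates throws away exactly the within-period ordering that reveals a turn; that is the information loss.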

A false assumption is that an appraisal can demonstrate its accuracy by its result.  It cannot.  Since there is no true value against which to compare, the accuracy of an analysis can only be judged by the process involved.

Although appraisal standards require that our result be objective, our education gives a circular explanation of “what is a comp”:  A comparable is competitive.  Competitive means similar.  Similar means it can be compared.  Subjective, undefined data selection can never provide an objective result.

Assuming “provable” adjustments.  Some are; most are not.  There are three types:  1) deterministic adjustments, where the result is exact given the data input; 2) adjustments which can only be estimated, and which require some regard for uncertainty, given the data input; and 3) adjustments which are asymptotic (they “sidle up to” the answer) and helpful, but have a known direction of bias.
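A minimal sketch, with hypothetical numbers, of what each type can look like in practice.  The specific examples here (a seller concession, a paired-sales view premium, a depreciated-cost ceiling on a pool) are illustrative assumptions, not prescriptions.

```python
# Minimal sketch, hypothetical numbers: one illustration of each adjustment type.
import numpy as np

# 1) Deterministic: an exact arithmetic consequence of the data, e.g. an
#    $8,000 seller-paid concession comes straight off the comparable's price.
concession_adj = -8_000
print(f"Concession adjustment: ${concession_adj:,.0f} (exact)")

# 2) Estimated: e.g. paired sales for a view premium.  Report the estimate with
#    its uncertainty, not as a single "proved" number.
pairs = np.array([21_000, 15_500, 26_000, 18_000, 24_500])   # paired price differences
view_adj = pairs.mean()
view_se = pairs.std(ddof=1) / np.sqrt(len(pairs))
print(f"View adjustment: ${view_adj:,.0f} +/- ${2 * view_se:,.0f} (rough 95% band)")

# 3) Asymptotic, with a known direction of bias: e.g. a depreciated-cost figure
#    treated as a ceiling on a pool's contributory value; useful, but it leans
#    one way by construction.
pool_ceiling = 30_000
print(f"Pool adjustment: no more than ${pool_ceiling:,.0f} (bounded from above)")
```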

There are more.  This is the first of a series on myths, fallacies and foibles.