Statistical methods have long been touted in established appraisal education.  Yet they seem to remain mystical and only remotely related to what we actually see in appraisal reports.  What happened?

I was an early proponent of numerical methods as opposed to judgment-opinion methods.  Adjustments taken off adjustment sheets seemed arbitrary, yet practical.  Picking comps seemed subject to bias, yet it was the best approach at the time.

I was also fortunate to be on the very first development team for the original Statistics, Finance, and Modeling class, a newly required qualifying education class.  I had waited for the opportunity to submit a proposal for authoring this first class, but alas, no such opportunity was ever offered.  At least as a development team member, I could contribute.  I had learned a lot from 14 years of graduate-level classes in statistics, econometrics, math, and insurance (risk).  In each class, the old “trust me” – “trust my credible opinion” – became less and less believable.

Yet the old ways persisted.  It started with that original Statistics, Finance, and Modeling class.  The finance part was excellent.  The statistics part focused on inferential statistics of little practical use.  The modeling part, sorely needed, focused on the AVM industry.

The singular example was a data set of land sites.  Real data?  Homogeneity of the parcels was assumed.  The problem involved p-values, confidence intervals, and standard errors – all derived from a “sample” of seven non-randomly picked comps.  And all of this was presented as a solution pattern for regression.

The course assumed that the regression algorithm requires random sample inferential tests.  When asked about the random sampling protocol, there was some literal arm-waving and the statement “there is variation in the data.”

Variation in the data is called “variance.”  Approximation error in estimating from a random sample is “standard error.”  The two are entirely different in concept, purpose, formula, and algorithm.  Different.
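The distinction can be made concrete with a short sketch.  The figures below are hypothetical sale prices, used only for illustration; the point is that variance describes the spread of the data you actually have, while the standard error claims to bound an estimate's sampling error – a claim that is only meaningful if the data were randomly drawn from a defined population.

```python
import statistics

# Hypothetical sale prices in $000s -- illustrative only, not real comps.
prices = [410, 425, 432, 440, 455, 460, 478]

# Variance: the spread within this data set. It describes the data
# in hand and makes no claim about any larger population.
var = statistics.variance(prices)

# Standard error of the mean: how far a RANDOM sample's mean is
# likely to fall from the population mean. It shrinks as n grows,
# and it is meaningless if the "sample" was hand-picked.
se = statistics.stdev(prices) / len(prices) ** 0.5

print(f"variance:       {var:.1f}")
print(f"standard error: {se:.1f}")
```

Note that the standard error is small no matter how the seven comps were chosen – the formula cannot detect selection bias, which is exactly why quoting it for hand-picked comps is misleading.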

I wrote two peer-reviewed journal articles, published in The Appraisal Journal, to clear this up.  Unfortunately, the above conceptual errors (and others) continue to permeate appraiser education and to lend false appropriateness to the “mystical statistical” models.

The appraisal model is quite different in intent and substance from the inferential-statistics solution.  Seldom (if ever) do we have a population of sales of “similar” properties large enough to even think about drawing a random sample.  It is not “scientifical!”  It’s make-believe.

If we have the population, we just use it!  This is the essence of the data science approach to valuation.

We are appraisers.  We do not get to make up data to fit a solution.  We have to find a solution for the actual problem at hand.

We do not get to force this inferential statistical mystical solution onto limited data.  We do not get to ‘pretend’ (for example) that all the unsold houses in a neighborhood actually sold, in order to pretend that our hand-picked comp selection is magically random.

We do not get to select a non-random data set – then apply random-sample statistics.  This is as bad as, or worse than, making up a comp sale, or misrepresenting one, in order to look good – or to please the client.

Unfortunately, the mystical statistical teaching continues, with some refinement.  It affects the ability of our profession to thrive, to survive, and to feel good about ourselves.

You can’t get objective results from subjectively picked data.  You can’t.