In our second fallacy, we begin with a question: have you noticed how a new thinking context reveals weaknesses in the traditional appraisal process?

The new thinking context is that of data science. Data science reflects dramatic, ongoing change. The cause is simple, but twofold. First is the availability of market data. Second is the ability to manipulate and visualize that data. In my econometrics classes, we heard: “let the data speak to you.”

How does this concept affect appraisers?  Does it change how the valuation problem may be approached?  Does it enable algorithms or models which may have been impossible in the past?  Can we use computation to leverage our knowledge of markets and methods?

I have called the misuse of inferential statistics the “Inferential Fallacy” (IF). For most of us, inferential statistics was the subject we hated the most. In putting together thoughts for this topic, it occurred to me that the IF deserved more than one post. This fallacy is like a convergence of bumper cars crashing into one point on the steel floor. Each driver slams the others, assured of the invincibility of their p-value.

This fallacy helps us understand human behavior. We are resistant to new ideas and insistent on old, reliable ones. It controls our response, often with no real thought or weighing of what might be better or worse. Simply ignoring the challenge becomes the easier way to go. The ego is protected, and we avoid having to learn new things. We can keep spending hard-earned intellectual capital instead of investing in new, possibly disturbing ideas that challenge our settled sense of how the world is.

How is the World?

Therein lies the problem.  The data world changed, is changing, and will continue changing at the speed of Moore’s Law.

The Inferential Fallacy did not evolve from the appraisal discipline. It came from the statistical world, and that origin is the nature of the problem. Data was difficult. It was on paper and analyzed with slide rules. For most problems, we wanted to know what the whole looked like. We could not measure the whole. We could only measure the parts.

If we look carefully at one part, we reasoned, perhaps that part will be like the whole. Makes sense. Then the mathematicians and scientists jumped in. A guy named Fisher, and others, created “inferential statistics.” Inferential statistics put a reliability number (a statistic) on how well the small part (the sample) represents the whole (the population).

Being mathematical, the assumptions and functions were rigorous and provable. The assumptions were the rules to be followed to make the number (the statistic) valid. The conditions are simple. First, you must make a list of the members of the population, or describe it precisely. Then, you must draw the sample randomly from the population. These two rules (plus one more) are absolute.

Finally, the problem you want to solve must be to describe the population, because you cannot reasonably gather up all the members of the group you want to study.
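To make that concrete, here is a minimal sketch in Python of what the classical inferential setup assumes: a population you can enumerate, a sample drawn at random from it, and a reliability number (a confidence interval) describing how well that sample stands in for the whole. The population, sample size, and prices below are invented purely for illustration.

```python
import random
import statistics

# Hypothetical, invented "population" of 10,000 sale prices (lognormal, for illustration only).
random.seed(42)
population = [random.lognormvariate(12.5, 0.4) for _ in range(10_000)]

# Rule 1: the population is fully listed above.
# Rule 2: the sample is drawn at random from that list.
sample = random.sample(population, 100)

sample_mean = statistics.mean(sample)
std_err = statistics.stdev(sample) / len(sample) ** 0.5

# The "reliability number": a 95% confidence interval for the POPULATION mean.
ci_low, ci_high = sample_mean - 1.96 * std_err, sample_mean + 1.96 * std_err
print(f"sample mean: {sample_mean:,.0f}")
print(f"95% CI for the population mean: {ci_low:,.0f} to {ci_high:,.0f}")
```

Every step exists to serve one goal: describing the population. Break the enumeration, the random draw, or the goal itself, and the reliability number loses its meaning.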

Why does the inferential fallacy affect the appraiser?  Because this is not the appraisal problem!

Per The Appraisal of Real Estate, 14th ed., p.736:  “Predictive models are predominant in most valuation settings.”

We predict the value of a property.

Nothing more, nothing less.  “Appraisers don’t do no random samples.”  We pick them carefully.
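By contrast, here is a hedged sketch of the appraisal problem: predict one subject property's value from a handful of carefully chosen comparables. The properties, prices, and the simple price-per-square-foot reconciliation below are illustrative assumptions, not a prescribed method.

```python
# Hypothetical comparables, carefully selected (not randomly drawn); numbers are invented.
comps = [
    # (sale_price, living_area_sqft)
    (410_000, 1_850),
    (395_000, 1_760),
    (432_000, 1_980),
]
subject_sqft = 1_900

# One simple reconciliation among many possible: average price per square foot.
price_per_sqft = [price / sqft for price, sqft in comps]
indicated_value = subject_sqft * sum(price_per_sqft) / len(price_per_sqft)
print(f"indicated value for the subject: ${indicated_value:,.0f}")
```

The point is not this particular model. The point is that nothing here is a random sample from an enumerated population, so the classical reliability statistics do not apply as advertised.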