My friend David Braun points out in his book, The Valuation Analyst, that scope creep has turned into scope running.  I’m sure many appraisers, particularly residential appraisers, will agree.  He points out that as reviewers ask for more, appraisers are finding ways to not be specific.  This has turned into a self-perpetuating loop:  “As the providers, users, and enforcement bodies have differing opinions on the proper level of the scope required . . .”

I have experienced this myself.  One reviewer or underwriter (or automaton) asks for [something].  Perhaps the same client expects that [something] every time.  Now it becomes part of my template (or more lovingly – my boilerplate).  Now other client reviewers see it, and think “I should be asking for that – I want to look good too.”  And so it goes.

So the reality is things do change.  The question is then:  In what direction are things going?  Are creepy scope things improving the product?  Are appraisers providing a better service?  And most importantly – are clients getting more useful, actionable knowledge?


The sequence is:
The question -> the data -> the analysis -> the information -> the decision
The question itself must be changed.

What do clients really need?

A point value of mysterious uncertainty?  Or do they need to understand real collateral risk and investment potential?

It appears creepy scope isn’t resolving the problem.  It may, in fact, be aggravating it as appraisers turn more and more to meaningless busy work: work intended to look good, pass the automaton checklists, and protect against errors-and-omissions claims.

Even as the goal should be accuracy and precision (trueness and sureness), we have increasing noise.  Why?

With today’s technology, it’s possible to do better.  Much better.  Today, we have the ability to follow econometrics and data science methods to produce a service — a product that is reproducible, as in the scientific method.  We have the data.  We have the computer power.  We have the software.  We have solid valuation theory.  And we have today’s data science methodologies.

So, what’s wrong with this picture? Several things.

There’s the reviewer/appraiser creepy scope thing.  And we have appraisal theory, which gives no guidance on what is ‘truly’ a comparable.  The best I can find is “a comparable is competitive.”  Followed by, “a competitive property is ‘similar’ to the subject.”  You can always tell if it’s similar, because it “competes” with the subject.  Comparable = competitive = similar = comparable = . . .

This brings up George Dell Rule #1 (The Impossibility Theorem):  “You can’t get objective output . . . from subjective input.”

We cannot identify the ideal data set because we don’t know how to technically define “a comparable.”

Data science principles emphasize that getting the data right is about 80% of the job.  Most appraisers would agree: getting the data right is about 80% of getting the answer right.  It’s very difficult to make a big mistake because of a bad adjustment.  It’s very easy to make a big mistake because of a bad comp, or by missing that the subject is an outlier itself.  Or perhaps the house isn’t really a house – it’s actually gas station land.
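As an illustration of what a technical, reproducible definition of “a comparable” could look like, here is a minimal sketch in Python.  The features, weights, and threshold are entirely hypothetical; the point is only that the selection rule is explicit and repeatable, so anyone re-running it on the same data gets the same comps.

```python
# Hypothetical sketch: an explicit comp-selection rule.
# Field names, weights, and the threshold are illustrative assumptions,
# not a recommended specification.

def similarity(subject, candidate, weights):
    """Weighted absolute difference across chosen features (lower = more similar)."""
    return sum(w * abs(subject[f] - candidate[f]) for f, w in weights.items())

def select_comps(subject, sales, weights, threshold):
    """Return every sale whose similarity score clears an explicit cutoff,
    ordered from most to least similar.  The rule is stated, not 'trust me.'"""
    scored = [(similarity(subject, s, weights), s) for s in sales]
    return [s for score, s in sorted(scored, key=lambda p: p[0]) if score <= threshold]

subject = {"sqft": 1800, "beds": 3, "age": 25}
sales = [
    {"id": "A", "sqft": 1750, "beds": 3, "age": 20},
    {"id": "B", "sqft": 2600, "beds": 5, "age": 2},   # bigger, newer: a bad comp
    {"id": "C", "sqft": 1900, "beds": 3, "age": 30},
]
weights = {"sqft": 0.01, "beds": 5.0, "age": 0.5}

comps = select_comps(subject, sales, weights, threshold=10.0)
print([c["id"] for c in comps])  # sale B is excluded by the stated rule
```

Whether these particular weights are right is beside the point; because the criteria are written down, the selection can be reviewed, criticized, and reproduced rather than taken on faith.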

This is the problem.  It’s impossible to get adjustments objectively from a “trust me” data selection.

Trust me.  (See Rule #1).