The “Appraisal Process” requires good judgment in picking comps.

USPAP (the Uniform Standards of Professional Appraisal Practice) uses the word comparable 95 times.  Comps MUST be important.  Yet the word is never defined.

The Appraisal of Real Estate states a comparable must be similar, competitive, and “able to be compared.”  In the 14th Edition, the word is used 865 times!  The text does say comparables should be similar in zoning and other characteristics, should share the subject’s highest and best use, and that “the most probable buyer” is critical in sales comparison.  It also says that market analysis sets the stage for picking comps; however, the diagram of the valuation process shows comps being picked first, then the market analyzed.  Hmmm.

In USPAP, adjust appears some 81 times, but not at all in the performance standards or integrity standards (scope of work, ethics, etc.).  USPAP too seems to assume everyone knows what an adjustment is.  It must be obvious!  (It is not in the list of important definitions.)

Why three?  Well, three fit on 8½ × 14 paper.  Why then five or six?  Well, it was discovered that more data could sometimes give better answers and better understandability.

Why not eight or ten?  Or twenty-two?  Or fifty-three?  Four good reasons:

  • The appraisal process was developed when data collection was a onesey-twosey job.
  • Three comps fit the page nicely, and printable spreadsheets did not exist. (Yes, really!)
  • The human brain grasps only three or four things at a time.
  • Everything had to be typist-typed and re-edited at least twice.

Adjustments were based solely on “personal experience.”  Experienced trainers passed on their knowledge of the market with pre-typed adjustment guides:  $65/SF of living area, $6,000/bathroom, etc.  Capitalization rates were king, as appraisers did not yet have access to financial calculators like the HP-38C.  Simple division was possible on a calculator, or with pencil and paper.  When spreadsheet DCF (discounted cash flow) models came along, they were first resisted, then abused, then banned, then reincarnated.

Adjustment ‘support’ then evolved:

  • Pairing – which never worked. (Or worked perfectly, depending on your motivation.)
  • Grouped pairs – logically better, but subject to selection bias and dissimilarity.
  • Simple regression – debunked by common sense, due to confounding (“other factors”).
  • Multiple regression – clever, but it worsened data problems and gave wrong adjustments.
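The confounding problem above can be made concrete with a toy sketch.  All numbers below are hypothetical, chosen only to illustrate the mechanism: when a priced feature (here, a view premium) travels together with living area, a simple regression of price on square footage silently folds the premium into the $/SF “adjustment.”  A matched pair recovers the premium, but only because the pair was constructed to differ in nothing else.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x (pure stdlib)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical market: price is driven by TWO factors, living area
# ($100/SF) and a $50,000 view premium, and bigger houses tend to
# have the view -- the classic "other factors" confound.
sqft  = [1200, 1500, 1800, 2100, 2400]
view  = [0, 0, 1, 1, 1]
price = [100 * s + 50_000 * v for s, v in zip(sqft, view)]

# Simple regression of price on sqft alone absorbs the view premium
# into the per-SF slope, overstating the true $100/SF adjustment.
b = slope(sqft, price)
print(f"confounded $/SF estimate: {b:.0f}")  # prints 150, not 100

# One matched pair, identical except for the view, isolates the
# premium -- but only by construction.
pair_with_view    = 100 * 1800 + 50_000
pair_without_view = 100 * 1800
print(f"paired-sales view adjustment: ${pair_with_view - pair_without_view:,}")
```

The same mechanism runs in reverse for multiple regression: adding the confounded variable fixes this toy case, but in real data every omitted or collinear factor reintroduces the bias somewhere else.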

Good judgment was king!  Good judgment wins!  The 1930s appraisal process continues.  Recently, powerful professional committees doubled down and reiterated the “established” validity of legacy methods.  Why, even USPAP emphasizes what makes work acceptable:

  • It must meet the expectations of clients;
  • It must meet or exceed peers’ actions.

Do what your users expect and what your peers do, or you are incredible!  (Not worthy of belief.)

And if you are not worthy, you are in violation – subject to administrative punishment.  Do not dare to use data classifiers, predictive algorithms, or visualization.  If you do, you are bad!  Do what everyone has done since the 1930s.

Enforced compliance and biased believability must go.  Bias of all types can be challenged and solved.  We have the science: evidence, transparency, and reliability scoring.