We’ve heard that Fannie and Freddie are developing new forms.  So, what might the next 1004 form look like?  Will it require three comps, two listings, and a pending?  Will it require a full data download, or a description of search parameters?  Will it still even be designed to fit on 8½ × 11 inch paper?

What’s the difference between a form and a data entry page?  Will “forms software” even be necessary?  Will the result require less appraiser expertise – or more?  Will it encourage the “form-filler” people, or will it require some real understanding of problem identification, data selection, predictive methods, and communication?  Will the transmittal require both an electronic data stream and human actionable views?

Will it require appraisers at all?  Or will the “data analysts” simply create the ultimate model?

These are big questions.  From my point of view, some of the answers are obvious.  But first, let’s outline how we can even ask the right questions . . .

In the Data Science viewpoint, there are five primary components:

  1. Depict and quantify the problem. (Similar to scope of work)
  2. Delimit the overall data frame. (“Such comparable sales data as are available”)*
  3. Delineate/improve the information set: the Directly Competitive Market Segment (DCMS),© any indirectly competitive market data, and any needed analogous market data.
  4. Ascertain the predictor variables. (“identify the characteristics of the property that are relevant”)*
  5. Communicate the results.

* USPAP, 2018-2019, SR1-2(e) and SR1-4(a)

Each of these five points easily deserves its own article/blog.  No doubt we will consider each of these points in the next year, as Freddie Mac and/or Fannie Mae develop the new appraiser “system.”  But briefly, let’s consider whether or how an appraiser/analyst might be needed.

  1. In most residential assignments, the problem (including optimal use [HABU]) is nearly automatic. However, the worst loan losses and frauds center around problem identification.
  2. Data is substantially electronic. Only very-sparse data problems will require analyst augmentation.
  3. Similarly, data classification and selection methods are quite sophisticated and easy. Yet the biggest errors will occur where physical inspection and/or specific market knowledge is critical.
  4. Again, automated and deep-learning methods can solve many problems. It’s the unknown or unexpected variable which can/will cause the greatest risk of damage.
  5. Communication will consist of the appraiser modifying the data stream. The results will/should be able to plug directly into underwriting/investment decision systems and software.  Standardized underwriting dashboards will make such decisions consistent and easier.  On the other hand, the outliers, the exceptions – which cause the greatest risk exposure – will need to be explored, corrected, or subjected to a sensitivity analysis.

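As a minimal sketch, the five components above could be strung together in a few lines of code.  All of the numbers, field names, and selection thresholds below are invented for illustration – a real DCMS delineation and predictor model would be far richer – but the skeleton shows where the analyst’s judgment enters at each step:

```python
from statistics import median

# 1. Depict and quantify the problem (scope of work): value a
#    hypothetical 1,500 sq ft home in neighborhood "A".
subject = {"sqft": 1500, "neighborhood": "A"}

# 2. Delimit the overall data frame: such comparable sales data as
#    are available (invented records for the sketch).
sales = [
    {"price": 300_000, "sqft": 1450, "neighborhood": "A"},
    {"price": 315_000, "sqft": 1520, "neighborhood": "A"},
    {"price": 290_000, "sqft": 1400, "neighborhood": "A"},
    {"price": 450_000, "sqft": 2600, "neighborhood": "B"},  # not competitive
]

# 3. Delineate the information set: the Directly Competitive Market
#    Segment (DCMS) -- here, same neighborhood and within 20% of the
#    subject's size (an assumed, illustrative rule).
dcms = [s for s in sales
        if s["neighborhood"] == subject["neighborhood"]
        and abs(s["sqft"] - subject["sqft"]) / subject["sqft"] <= 0.20]

# 4. Ascertain the predictor variable: a single price-per-sq-ft
#    predictor, summarized by the median over the DCMS.
ppsf = median(s["price"] / s["sqft"] for s in dcms)

# 5. Communicate the result: a machine-readable indication that could
#    plug into an underwriting/decision system.
indicated_value = round(ppsf * subject["sqft"])
print({"dcms_count": len(dcms), "indicated_value": indicated_value})
```

The point of the sketch is not the arithmetic – a machine does that trivially – but that steps 1, 3, and 4 each encode a human decision (the problem definition, the competitive-segment rule, the choice of predictor) that the analyst must defend.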
The coming future.  Coming soon.  In your office.  Will require something different.  Form-fillers will not be needed where there are no forms.  Machines can fill forms better, faster, cheaper.  What will be needed are analysts.

Who will be needed are people with valuation expertise who know how to see data, how to model, and how to communicate using modern tools.  Other problems will be handled by automated models, supplemented by minimum-wage “inspectors.”



© You are free to share, copy and redistribute, transform, and build upon for any purpose, except for commercial use.  Attribution – you must give appropriate credit and may not restrict others. While I wish to help spread the benefits of the copyrighted terms and materials for the benefit of the valuation community, I wish to restrict those who would monetize, misinterpret, or misapply the concepts as written.  My experience has been that commercial users are the greatest violators of the open-source, creative commons principles of knowledge sharing.

Thus, we request attribution, share-alike, and non-commercial use only.