Would today’s appraisal process pass the Daubert “checklist”?
Daubert v. Merrell Dow Pharmaceuticals (1993), a U.S. Supreme Court case, says in essence that judges act as gatekeepers when it comes to allowing or disallowing expert testimony.
Residential appraisers today face some of the same issues, but with other types of “gatekeepers.” Our gatekeepers: underwriters, reviewers, opposing valuation experts, and AMC employees. (Who needs competency when you have a checklist!)
In addition to the Daubert case, the 2000 amendment of Rule 702 of the Federal Rules of Evidence also provides guidance on admissibility. Note the parallel requirements:
- The testimony is based upon sufficient facts or data,
- The testimony is the product of reliable principles and methods, and
- The witness has applied the principles and methods reliably to the facts of the case.
Using appraiserspeak:
- “get enough comps,”
- use dependable theory and procedures, and
- apply that theory and those procedures to the comps relevant to the assignment.
The Court was adamant that its decision was not to be used as a “checklist,” even as it provided some “points.” The overall point is that the testimony must be grounded in the methods and procedures of science, “the scientific method.” The factors listed as pertinent include:
- Have the theories and techniques been tested?
- Have they been subject to peer review and publication?
- Is there a known (non-statistical) error rate?
- Are there governing standards?
- Do they enjoy widespread acceptance?
The focus is on “principles and methodology,” not the conclusions (opinions).
What tickles my throat is simply this:
- Today’s electronic data and computation algorithms enable an objective, market-defined data set. This data set (the complete, directly competitive market segment) is easy to obtain and define, and can be “scientifically” replicated (see the sketch after this list).
- Our clients are asking us to show evidence of the comps we used (and why), and the comps we did not use (and why). The USPAP Scope of Work Rule requires us to identify “the type and extent of data researched.” Standard 1 requires us to use “such comparable sales data as are available.” Standards Rule 1-6 says: “reconcile the quality and quantity of data available and analyzed…” (emphases added).
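To make the “replicable” claim concrete, here is a minimal sketch in Python/pandas. The file name, column names, and thresholds (sales.csv, gla, dist_mi, the 20% GLA band) are hypothetical stand-ins, not from the article; the point is only that the selection criteria are explicit and can be rerun by anyone.

```python
import pandas as pd

# Hypothetical extract: date, gla, dist_mi, price columns assumed.
sales = pd.read_csv("sales.csv")

def competitive_segment(df: pd.DataFrame, subj_gla: float,
                        max_dist_mi: float = 1.0,
                        gla_band: float = 0.20) -> pd.DataFrame:
    """Return every sale meeting the stated criteria, so the selection
    can be documented and independently replicated."""
    lo, hi = subj_gla * (1 - gla_band), subj_gla * (1 + gla_band)
    mask = df["dist_mi"].le(max_dist_mi) & df["gla"].between(lo, hi)
    return df.loc[mask].sort_values("date")

segment = competitive_segment(sales, subj_gla=1850)
excluded = sales.drop(segment.index)  # the comps not used; the "why" is implicit in the criteria
print(f"{len(segment)} sales in segment, {len(excluded)} excluded by the stated criteria")
```

Because the criteria are stated in code rather than held in the appraiser’s head, the same run answers both client questions at once: which sales were used, and which were excluded, and why.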
Ah well. Too much… “Trust me, I know a good comp when I see it.”
Steven R. Smith
May 24, 2017 @ 9:34 am
Providing a list of all the relevant data is a good thing, even for lender work. It helps prevent come-backs when we tell the reader what our selection criteria were, how many comps were found, what additional tiers we searched in order to find adequate data, and so on.
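One way to document that tiered expansion is sketched below, under assumed data and tier criteria; the subdivision name, distances, and minimum count are illustrative, not from Smith’s comment.

```python
import pandas as pd

def tiered_search(df: pd.DataFrame, tiers, min_count: int = 6):
    """Widen the search tier by tier until adequate data is found,
    keeping a record of every tier searched for the report."""
    searched, hits = [], df.iloc[0:0]
    for name, predicate in tiers:
        searched.append(name)
        hits = df[df.apply(predicate, axis=1)]
        if len(hits) >= min_count:
            break
    return hits, searched

# Hypothetical tier definitions; actual criteria come from the assignment.
tiers = [
    ("Tier 1: same subdivision", lambda r: r["subdivision"] == "Oakwood"),
    ("Tier 2: within 1 mile",    lambda r: r["dist_mi"] <= 1.0),
    ("Tier 3: within 3 miles",   lambda r: r["dist_mi"] <= 3.0),
]
```

The returned `searched` list is exactly the narrative the reviewer wants: which tiers were tried, in what order, before adequate data was found.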
Gary Kristensen
May 24, 2017 @ 9:38 am
Great post, George. I like the question, “Is there a known (non-statistical) error rate?” If I were asked this question in testimony on an appraisal, whether I used statistical math or not, I don’t know how I would answer it. You could provide an error rate for an individual adjustment supported using statistics, but how would you answer this question for an appraisal that is not an AVM?
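For the individual-adjustment case Gary mentions, here is one hedged illustration: fit a single adjustment by least squares and report its standard error as a defensible “error rate” for that adjustment. The paired-sales numbers below are invented for the example.

```python
import numpy as np

# Toy paired-sales data: difference in GLA (sq ft) and in price ($)
# between otherwise-similar sales. Illustrative values only.
gla_diff   = np.array([120, 250, 80, 300, 150, 200])
price_diff = np.array([9000, 20000, 5500, 24500, 11000, 16500])

# Least squares through the origin: price_diff ≈ beta * gla_diff
beta = (gla_diff @ price_diff) / (gla_diff @ gla_diff)
resid = price_diff - beta * gla_diff
dof = len(gla_diff) - 1  # one estimated parameter
se_beta = np.sqrt((resid @ resid) / dof / (gla_diff @ gla_diff))

print(f"GLA adjustment: ${beta:.0f}/sq ft, standard error ±${se_beta:.0f}")
```

This answers the question for one adjustment; Gary’s harder question, a reliability measure for the whole non-AVM appraisal, is what Dell’s reply below addresses.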
Neil Cahill
May 31, 2017 @ 8:49 am
I have a question: what are we talking about here? A valuation range of what? Say a bathroom is $1,000 or $5,000; if you have comps with the same bath count, that won’t result in a $10–20K misvaluation up or down. Same with other ancillary “adjustments.” Location, GLA, site size, and age/condition are the biggies (OK, OK, if you have “comps”; yes, we have comps), and the comp set is where you derive adjustments, how the specific comp set differs per value. So what’s the beef? We are going microscopic on a macro problem, splitting hairs on brushstroke issues, and then rounding, weighting, squinting (reconciling), throwing the micro considerations out the window, and concluding. A value estimate is an “all things considered” opinion, so this micro approach is like carrying a tax return out to $.0001 only to round it at the end to the nearest dollar.
George Dell
June 14, 2017 @ 6:32 pm
Yes, valuation is a look at a multidimensional problem. The basis of the scientific method is to first reduce a problem into component parts. This is called “analysis.” This allows the brush to be divided into different colors and different palettes. Each of the component parts is then recombined into the larger model. This is called “synthesis,” or reconciling. It is not an issue of black and white. It is an issue of getting better. More sure. More true. (Precision and accuracy.)
Back to Gary’s comment: the essence of the data science approach to valuation is that the overall model is designed to enable the measurement of reliability. In cases where a model or sub-model is indeterminate, or its applicability is unclear, it is easy to provide sensitivity analysis both ways, precisely because the alternatives can appear to have similar final reliability scores. This allows the client to judge the applicability to their own question of risk or value.
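As one possible illustration of “sensitivity analysis both ways,” the sketch below values a subject under two plausible time-adjustment sub-models and reports both results side by side. The rates and values are invented, and the two sub-models are stand-ins for whatever alternatives the assignment actually presents, not Dell’s own model.

```python
# Two alternative time-adjustment sub-models (hypothetical forms and rates).
def value_linear_time(base: float, months: int, rate: float = 0.004) -> float:
    """Simple (non-compounding) market-conditions adjustment."""
    return base * (1 + rate * months)

def value_compound_time(base: float, months: int, rate: float = 0.004) -> float:
    """Compounding market-conditions adjustment."""
    return base * (1 + rate) ** months

base_value, months_back = 450_000, 9
for label, fn in [("linear time adj.",   value_linear_time),
                  ("compound time adj.", value_compound_time)]:
    print(f"{label}: ${fn(base_value, months_back):,.0f}")
```

Reporting both outcomes, rather than silently picking one, is what lets the client weigh the spread against their own risk question.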