How many comps should I use? What do “authorities” say?
Historical appraisal practice was passed on from trainer to trainee in the outdated master/apprentice model, which imposed strong, inviolable dictates. For residential forms, the magic number was three, sometimes up to five or six. Three comp columns and a subject column fit perfectly on an 8½ x 14 sheet of paper! Wonderful.
For narrative reports, five or six are considered ‘adequate’ for believable opinion support. Once general appraisers learned to use the accountant’s spreadsheet, it fit nicely in landscape orientation in an 8½ x 11 bound report! Convenient and impressive.
USPAP requirements have an internal self-contradiction about how many comps are required for the “believability” measure. On one hand, the requirement for “an acceptable scope of work” endorses the stated historical practices. (The standards require us to do what our peers historically do, right or wrong.) On the other hand, Standards Rule 1-4 states the appraiser must use “all information necessary” and “such comparable sales data as are available.” Which is it? All competitive sales, or just four or five?
The ‘statistical’ approach is mostly brought to us by “common-sense” thinkers. Unfortunately, statistics is often neither obvious nor intuitive. (See two of my journal articles in The Appraisal Journal.) Errors and Mistakes are quite commonly found. The greatest of these is assuming that somehow, some way, we can apply sophisticated and clever inferential statistics, and use such impressive things as p-values, t-scores, chi-squareds, and confidence intervals, and then impose this on a multiple regression to rely on an R-squared. Look how smart!
Some vaguely recall from a high school stats class that you needed 25 or 30 for your sample size to use “statistics.” So that must be the right number. It is good . . . for a random sample, for one variable. But only if you know your population and have a reliable random sampling protocol.
But to use inferential statistics for valuation, we have to pretend our study population is more than the actual competitive sales. AND we can then pick comps, but only if we pretend they were randomly selected. Picking comps first, then claiming randomness in the comps chosen, does not work. Claiming “randomness in the whole data” does not work. Even claiming “I’m a random sorta person” does not cut it! Creating an imaginary super-population does not work. Magic and incantations do not work.
Unfortunately, some of the above (including the bad models, the Errors and Mistakes) are still taught in appraiser qualifying and “advanced” education, and then required by state boards for licensing.
The Econometric Approach applies some simple rules: 1) Use the complete relevant data set; 2) Apply objective similarity algorithms; 3) Integrate the competence of a field-related expert (the asset analyst).
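To make rule 2 concrete, here is a minimal sketch in Python of what an objective similarity algorithm might look like. The attribute names, weights, and the similarity_score function are hypothetical illustrations, not a prescribed method; a real algorithm would be calibrated to the subject’s competitive market.

```python
# Minimal sketch: score each candidate sale's similarity to the subject on a
# 0-to-1 scale, then rank the complete relevant data set by that score.
# Attributes and weights below are hypothetical, for illustration only.

def similarity_score(subject, sale, weights):
    """Return a 0-1 score; 1.0 means identical on the scored attributes."""
    total, max_total = 0.0, 0.0
    for attr, weight in weights.items():
        # Normalize the difference by the subject's value so attributes on
        # different scales (square feet, age, site size) are comparable.
        diff = abs(subject[attr] - sale[attr]) / max(abs(subject[attr]), 1e-9)
        total += weight * max(0.0, 1.0 - diff)
        max_total += weight
    return total / max_total

weights = {"living_area": 0.5, "site_area": 0.2, "age": 0.3}      # assumed weights
subject = {"living_area": 1850, "site_area": 7500, "age": 22}
candidates = [
    {"living_area": 1900, "site_area": 7200, "age": 25, "price": 415_000},
    {"living_area": 2400, "site_area": 9800, "age": 5,  "price": 510_000},
]
ranked = sorted(candidates, key=lambda s: similarity_score(subject, s, weights),
                reverse=True)  # most similar first
```

The point is simply that the ranking is computed the same way every time, by the same explicit criteria, rather than by habit or gut feel.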
There is an econometric ‘rule’ for optimal data set size: the data set must recognize the “bias-variance tradeoff.” This means that if too little data is used, the result will reflect selection bias (whether human or algorithmic). If too much data is used, it will simply make the result less sure (variance).
In the Valuemetrics curriculum (the SGDS1 class), we call this the “goldilocks” point: like baby bear’s porridge, not too many and not too few, but “just right,” which is also exactly what USPAP asks of us.
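For illustration only, here is a minimal sketch of that search for the goldilocks point, in the same spirit as the similarity sketch above. The bias and variance proxies (the small subset’s drift from the complete competitive data set, and the spread of the prices actually relied upon) are assumed stand-ins, not the SGDS1 procedure, and the function name goldilocks_n is hypothetical.

```python
# Minimal sketch: sweep candidate data set sizes and keep the one that best
# balances the two error sources named above. The proxies are illustrative
# stand-ins, not the SGDS1/Valuemetrics procedure.
import statistics

def goldilocks_n(prices_most_similar_first, min_n=3):
    """prices_most_similar_first: sale prices ordered most-similar first.
    Returns the size n that minimizes a simple bias-plus-variance proxy."""
    full_mean = statistics.mean(prices_most_similar_first)
    best_n, best_score = min_n, float("inf")
    for n in range(min_n, len(prices_most_similar_first) + 1):
        subset = prices_most_similar_first[:n]
        # Selection-bias proxy: how far the small subset drifts from the
        # indication given by the complete competitive data set.
        bias_proxy = abs(statistics.mean(subset) - full_mean)
        # Variance proxy: the spread of the prices relied upon, which tends
        # to widen as less-similar sales are added.
        variance_proxy = statistics.pstdev(subset)
        score = bias_proxy + variance_proxy            # assumed equal weighting
        if score < best_score:
            best_n, best_score = n, score
    return best_n

# The optimum may land at 3, 13, 33, or anywhere else the data dictate.
prices = [412_000, 418_000, 405_000, 399_000, 430_000, 372_000, 455_000, 468_000]
print(goldilocks_n(prices))
```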
Sometimes the ideal data set size will be 3, or 13, or 33. Only by chance will it be 30, or any other particular number. If appraisers are required to use any arbitrary number, the result will be less than optimal. Modern valuation progress requires three things: 1) a clear definition of “what is a comp?”; 2) appraiser training that reflects this simple data analysis rule; and 3) standards and user requirements (such as the GSEs’) that require this basic data science issue to be evaluated in every appraisal.
Patrick Egger
December 15, 2022 @ 12:32 pm
I don’t think USPAP has an “internal self-contradiction” when it comes to the number of comps required for believability. I agree that USPAP endorses the actions of your peers. However, I believe the underlying assumptions in USPAP expect the peers in this case to act appropriately. Essentially, USPAP suggests you follow the actions of your peers. In doing so, it assumes your peers are employing good appraisal practice by adhering to all of the other principles of USPAP.
Comprehending what is and isn’t comparable to the subject (from the market’s perspective) is the key. I remember George citing a study that buyers will consider properties 14% larger, but only 7% smaller. Is the subject more similar to the grouped data or the outliers, and how does the comparability impact the exposure, marketing time, and value? In a declining market, are closed sales or current listings more reflective of value? Which ones are the most comparable, and why? How do you convince the reader you are right?
What’s necessary is the appraiser’s ability to understand what the data is saying and how to translate that into findings that are logical and understood by the client. Believability isn’t rooted in the number of comps, but in the depth of the analysis and the presentation of the data supporting the conclusions.
The best thing about GDU (George Dell University) … George makes you think. Thanks for the article.