Recently, Keith Wolf (representing Freddie Mac) coined a new term for claims about adjustments, calling it the “adjustment fallacy.”
Adjustments to comparables first arrived on the scene as a way of explaining differences between the subject property and each comparable. (A way to ‘support’ an opinion of comparison.) In theory, an adjustment is supposed to reflect how buyers react to a particular “element” of comparison, taken separately. E.g., “a third bedroom is worth $8,500 more to a ‘typical’ buyer than two bedrooms.”
This presumes we can define a “typical buyer.” It also presumes that all other elements are “held equal” – same size house, same lot, same identical everything else.
Now look at this another way: the typical buyer will pay $8,500 more for an extra wall, one more door, and a closet.
Same identical. Different look. Different think. Fairy-tale think — exact matched-pair “support.”
This theory is what economists call “ceteris paribus” – Latin for “all other things held exactly equal.” The adjustment is also called a “marginal change.” A mathematician would call it the “first partial derivative” in a multivariate problem. A statistician would use that term too, but add another term to quantify the uncertainty: the “error” term.
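In symbols (a generic hedonic-price form; the notation is mine, not anything out of the appraisal literature):

$$P = f(x_1, x_2, \ldots, x_k) + \varepsilon$$

$$\text{adjustment for element } j = \frac{\partial f}{\partial x_j}, \quad \text{all other } x_i \text{ held equal}$$

The $\varepsilon$ is the statistician’s error term: even the “right” adjustment carries uncertainty with it.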
An appraiser calls this an “adjustment.” So what’s the problem?
The problem comes from the assumption that somehow there is a magical way to calculate this marginal change (the ceteris paribus partial derivative, uncertainty and all) from three carefully selected comps. This is the core of the “adjustment fallacy.”
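The arithmetic alone shows why. Say each comp differs from the subject on five elements of comparison (my example count). Each comp supplies one equation, with the subject value $V$ and the five adjustment rates $a_1, \ldots, a_5$ as the unknowns:

$$V = P_i + a_1\,\Delta x_{i1} + a_2\,\Delta x_{i2} + \cdots + a_5\,\Delta x_{i5}, \qquad i = 1, 2, 3$$

Three equations, six unknowns. The system is underdetermined: infinitely many sets of “adjustments” reconcile the three comps perfectly, and the comps alone cannot say which set is the market’s.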
Ahh, you say. But our “established” body of knowledge says you can then go out to a “wider market.”
Problem: Our established, traditional, legacy, accepted, expected “process” says you first pick comps, then find a way to “support” your unknowable, incalculable adjustments.
Problem: Most of the claimed “adjustment support” methods “based on the market” do not calculate the needed marginal change. They calculate the average change in whatever data set is selected to represent “the market.” [In fact, the meaning(s) of the word “market” are not defined, and the word is greatly misused.]
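The difference matters. A quick simulation (my own sketch; the data and numbers are invented) shows how an average change can masquerade as a marginal one. Bigger houses tend to have more bedrooms, so the raw difference in average price between 3-bedroom and 2-bedroom sales bundles the size effect in with the bedroom effect:

```r
# Sketch only: simulated sales where bedrooms and living area move together.
set.seed(42)
n     <- 200
sqft  <- rnorm(n, 1800, 300)
beds  <- ifelse(sqft > 1800, 3, 2)                       # bigger homes get a 3rd bedroom
price <- 150 * sqft + 8500 * beds + rnorm(n, 0, 10000)   # "true" bedroom effect: $8,500

# "Average change": the difference in group means bundles in the size effect
diff(tapply(price, beds, mean))

# "Marginal change": regression holds sqft constant, recovering roughly $8,500
coef(lm(price ~ sqft + beds))["beds"]
```

On this invented data, the group-mean “adjustment” comes out far above $8,500, because it silently carries the size difference along with it.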
Worse yet, this “carefully selected” data set is supposed to be “similar, competitive,” and “able to be compared.” Circular. Round and round.
A good comparable is one which can be compared! Wonderful!
The profession will never be able to advance until we know what it is we are doing. With logic. With evidence. With uncertainty defined and measured. With reproducible analysis, not “trust me.”
The good news: Time adjustments are not really adjustments. They reflect a market trend, a trend that is easy to identify and apply. (Time is quite different from the other variables. Not like location, not like bedrooms or living area. It’s different.)
Trend analysis and time indexing take just 9 seconds, using R. (A little longer in a spreadsheet.)
The steps: 1) download/identify the relevant sales; 2) look for any trend change; 3) apply the “adjustment.”
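Here is the idea in R (a minimal sketch: the simulated data, the column names, and the simple linear trend are my assumptions, not a prescribed method):

```r
# Step 1: the relevant sales (simulated here; in practice, downloaded from the MLS)
set.seed(1)
sales <- data.frame(
  sale_date = as.Date("2024-01-01") + sort(sample(0:330, 60)),
  price     = 400000 + (1:60) * 900 + rnorm(60, 0, 15000)  # upward trend plus noise
)

# Step 2: see the trend -- a visual check plus a fitted rate in dollars per day
fit <- lm(price ~ sale_date, data = sales)
plot(price ~ sale_date, data = sales)
abline(fit)

# Step 3: apply the "adjustment" -- index a comp's price to the effective date
elapsed  <- as.numeric(as.Date("2024-12-01") - as.Date("2024-06-15"))
time_adj <- unname(coef(fit)["sale_date"]) * elapsed  # trend rate x days elapsed
time_adj
```

The plot is the visual part; the fitted slope is the calculated trend; rerunning the script reproduces the result exactly.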
The practice is repeatable, visual, calculated, self-explanatory, and bulletproof.
Oh well. In the meantime . . . Guess I just need to adjust.
These are issues we consider head-on in Valuemetrics.info education. Join us.
[This is part one of an extended series on the market-analysis basis of valuation theory and practice.]
Keith A Wolf
October 30, 2024 @ 4:04 am
Thanks George. Just to be clear, the adjustment fallacy concept is rooted in how appraisers derive adjustments. To derive any adjustment for a variable, you need to know how the variables interact with each other. That is not taught in any appraisal curriculum, but it is taught in the Data Science curriculum – as is how to control for confounding variables. What is a confounding variable? That is what happens when we can’t separate one variable’s influence from another’s, hence the true adjustment is hidden.
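To picture the confounding Wolf describes, here is one more simulated sketch (my illustration, with invented numbers): living area and lot size move together, so a model that omits lot size misattributes part of the lot’s value to square footage, hiding the true adjustment.

```r
# Sketch of a confounder: lot size influences both living area and price.
set.seed(7)
n     <- 300
lot   <- rnorm(n, 7000, 1500)                        # lot size, sqft
sqft  <- 0.2 * lot + rnorm(n, 500, 150)              # living area rises with lot size
price <- 150 * sqft + 10 * lot + rnorm(n, 0, 12000)  # "true" rates: $150/sqft, $10/lot-sqft

coef(lm(price ~ sqft))["sqft"]        # lot omitted: biased upward (near $190, not $150)
coef(lm(price ~ sqft + lot))["sqft"]  # lot controlled: recovers roughly $150
```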