In our second fallacy, we begin with a question: have you noticed how a new thinking context reveals weaknesses in the traditional appraisal process?
The new thinking context is that of data science. Data science reflects dramatic, ongoing change. The cause is simple but twofold. First is the availability of market data. Second is the ability to manipulate and visualize that data. In my econometrics classes, we heard: “let the data speak to you.”
How does this concept affect appraisers? Does it change how the valuation problem may be approached? Does it enable algorithms or models which may have been impossible in the past? Can we use computation to leverage our knowledge of markets and methods?
I have called the misuse of inferential statistics the “Inferential Fallacy” (IF). For most of us, inferential statistics was the subject we hated the most. In putting together thoughts for this topic, it occurred to me that the IF deserved more than one post. This fallacy is like a convergence of bumper cars crashing into one point on the steel floor. Each driver slams the others, assured of the invincibility of their p-value.
This fallacy helps us to understand human behavior. We’re resistant to new ideas, and insistent on old reliable ones. It controls our response, often with no real thought or weighing of what might be better or worse. Simply ignoring the challenge becomes the easier way to go. The ego is protected, and we avoid having to learn new stuff. It’s a way to keep living off hard-earned intellectual capital without having to invest in new, possibly disturbing ideas that challenge our solid concepts of how the world is.
How is the World?
Therein lies the problem. The data world changed, is changing, and will continue changing at the speed of Moore’s Law.
The Inferential Fallacy did not evolve from the appraisal discipline. It came from the statistical world, and that is the nature of the problem. Data was difficult. It was on paper and computed with slide rules. For most problems, we wanted to see what the whole looked like. We couldn’t measure the whole. We could only measure the parts.
If we look carefully at one part, we reasoned, perhaps that part will be like the whole. Makes sense. Then the mathematicians and scientists jumped in. A guy named Fisher, and others, created “inferential statistics.” Inferential statistics put a reliability number (a statistic) on how well the small part (the sample) represents the whole (the population).
Being mathematical, the assumptions and functions were rigorous and mathematically provable. The assumptions were the rules to be followed to make the number (the statistic) valid. The conditions are simple. First, you must make a list of the members of the population, or describe it precisely. Then, you must draw the sample randomly from the population. These two rules (plus one more) are absolute.
Finally, the problem you want to solve must be — must be — to describe the population, because you cannot reasonably gather up all the members of the group you want to study.
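To make that concrete, here is a minimal sketch in Python. The population numbers are invented for illustration; the point is only the shape of the procedure: list the population, draw the sample at random, and attach a reliability number to the result.

```python
# A minimal sketch of the inferential setup, with made-up numbers.
import math
import random
import statistics

random.seed(42)

# Hypothetical: a price for every member of a fully listed population.
population = [random.gauss(300_000, 40_000) for _ in range(10_000)]

# Rule 1 satisfied: the population is listed. Rule 2: draw at random.
sample = random.sample(population, 50)

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))

# A rough 95% confidence interval for the population mean (z ~ 1.96):
# the "reliability number" inferential statistics attaches to the estimate.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"Sample mean: {mean:,.0f}  95% CI: ({low:,.0f}, {high:,.0f})")
```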
Why does the inferential fallacy affect the appraiser? Because this is not the appraisal problem!
Per The Appraisal of Real Estate, 14th ed., p.736: “Predictive models are predominant in most valuation settings.”
We predict the value of a property.
Nothing more, nothing less. “Appraisers don’t do no random samples.” We pick them carefully.
Steven Davis MRICS
July 26, 2017 @ 7:12 am
Though the ‘market samples’ are not chosen at random but are selected on the basis of their similarity to the Subject property being appraised, the context of the larger sample (the general market for the Subject property) must be kept in mind, as this is the population of which the Subject property is a member. The selling-price range of the ‘neighborhood’ sample could be above or below the mean of the general market, but it will always fall within the range of selling prices defined by the general market. By using this method to set both the context and the basis for the Subject’s market value estimate, we employ inferential statistics in the regression of the general market and descriptive statistics for the Subject’s immediate market, so as to derive a value estimate that replicates a buyer’s response to market conditions and property characteristics during the buyer’s search for a property to purchase.
Charles Abromaitis
July 27, 2017 @ 11:15 am
You describe your “larger sample (the general market for the subject property)” as the population that includes the subject. You also say you selected the members of the general market based on similarity to the subject; no random sampling there, as you note, and therefore, as George asserts, probability, hypothesis tests, p-values, and confidence intervals have no role in your analysis. The parameters of the population are known, and your “regression of the general market,” as you call it, is a descriptive analysis, not inferential as traditionally defined. From a descriptive standpoint, regression is an estimate of the conditional distribution of the outcome, say sale price in your context, given the input property characteristics, so the appraisal process, as a predictive exercise, can proceed without the inferential baggage.
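To illustrate what I mean, here is a minimal sketch in Python (all numbers are hypothetical): the fitted line is just a summary of price conditional on living area for the cases at hand, and prediction is reading that summary at the subject’s characteristics.

```python
# A descriptive-regression sketch (hypothetical data): summarize sale
# price conditional on living area for the data we have. No sampling
# story, no p-values; just a summary of the cases at hand.
import numpy as np

# Hypothetical comparables: gross living area (sq. ft.) and sale price.
gla = np.array([1800, 2000, 2150, 2300, 2400, 2600], dtype=float)
price = np.array([295_000, 318_000, 334_000,
                  352_000, 360_000, 389_000], dtype=float)

# Ordinary least squares via the design matrix [1, gla].
X = np.column_stack([np.ones_like(gla), gla])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

# Predict at the subject's characteristics.
subject_gla = 2200.0
predicted = beta[0] + beta[1] * subject_gla
print(f"Conditional mean price at {subject_gla:.0f} sq. ft.: {predicted:,.0f}")
```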
jimplante
August 3, 2017 @ 9:28 am
@Steven Davis: As you said, “market samples are not chosen at random.” Inferring general characteristics of a population, whether worldwide or merely in a defined neighborhood, requires random sampling of the population. Sampling what? Every property in the neighborhood? Nope, says George: sample only the comparables…er…similar properties…that are comparable.
What’s similar? Within 200 sq. ft. of the subject? How ’bout within 400 sq. ft? Is that similar? Okay, time for some masking tape: We’ll use percentages! Yeah, that’ll do it! Anything within 10% of the subject’s GLA is similar to it. So, anything within 200 sq. ft. is similar to a 2,000 sq. ft. subject, and anything within 2,000 sq. ft. is similar to a 20,000 sq. ft. mansion? At some point, GLA is not a value-influencing factor any more, no? You run into the same failure-to-generalize problem in formulating rules for age and condition, too.
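To see how arbitrary those bands are, sketch them out (the thresholds and numbers are hypothetical):

```python
# A sketch of rule-of-thumb similarity bands; thresholds are hypothetical.
def similar_fixed(comp_gla: float, subj_gla: float, band: float = 200.0) -> bool:
    """Fixed band: within 200 sq. ft. of the subject."""
    return abs(comp_gla - subj_gla) <= band

def similar_pct(comp_gla: float, subj_gla: float, pct: float = 0.10) -> bool:
    """Percentage band: within 10% of the subject's GLA."""
    return abs(comp_gla - subj_gla) <= pct * subj_gla

# The same 10% rule admits a 200 sq. ft. spread for a 2,000 sq. ft.
# subject, but a 2,000 sq. ft. spread for a 20,000 sq. ft. mansion.
print(similar_pct(2_150, 2_000))    # True: within 200 sq. ft.
print(similar_pct(21_900, 20_000))  # True: within 2,000 sq. ft.
```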
One overriding condition to remember: We’re paid for our opinions, and all opinions are subjective. Don’t believe me? Give one person a 100′ tape; another a 30′ steel rule; another a Disto; and then bring in a licensed surveyor. Have all of them measure the same building’s exterior. Keep it simple: Rectangular footprint only. Did all four come up with the same measurement? Is anyone wrong? Answer: None are correct, and none are incorrect. All measurements are opinions.
Old SRPAs have shown me a few early appraisals. The simplest was a Polaroid photo (in color) of a house, with the address and a value estimate written in the white space at the bottom. No signature, no certification, no assumptions or limiting conditions. Just an opinion. That’s what they were paid for.
Charles Abromaitis
August 6, 2017 @ 10:24 am
Yes, they’re all opinions, no dispute there, but are they credible opinions? Seems to me that’s why George is arguing for an evidence-based appraisal paradigm to replace the “trust me, my opinion is solid” holdover-hangover from the past. By the way, the question of what is comparable has been advanced by statistical researchers and computational modelers far beyond the appraisal fraternities’ circular arguments: a comp is similar, similar to a comp, you know, they’re comparable because they’re similar. There are matching algorithms, classification algorithms, and even a predictive modeling approach aptly named nearest neighbors regression.
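For a taste, here is a minimal nearest neighbors regression sketch in Python; the data and features are hypothetical, and scikit-learn is just one convenient implementation:

```python
# Nearest neighbors regression (hypothetical data): the predicted price
# is an average over the k most similar cases, where similarity is
# distance in feature space rather than a rule-of-thumb band.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical features: [gross living area, age in years].
# (In practice the features would be standardized first, since GLA
# and age sit on very different scales.)
X = np.array([[1800, 30], [2000, 25], [2150, 20],
              [2300, 15], [2400, 12], [2600, 8]], dtype=float)
y = np.array([295_000, 318_000, 334_000,
              352_000, 360_000, 389_000], dtype=float)

model = KNeighborsRegressor(n_neighbors=3).fit(X, y)
subject = np.array([[2200, 18]], dtype=float)
print(f"Predicted price: {model.predict(subject)[0]:,.0f}")
```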
I recently came across a web site that challenged visitors to input their job and discover how that job is forecast to fare amid the dramatic changes in tech, robotics, machine learning, AI, etc., sweeping the employment landscape. As a lark, I input my profession, real property appraiser, and clicked the submit button. The three-word reply was: “You are doomed.” As Willie would say, “don’t let your kids grow up to be appraisers.”
jimplante
August 6, 2017 @ 11:54 am
Charles, you wrote: “There are matching algorithms, classification algorithms, and even a predictive modeling approach aptly named nearest neighbors regression.”
And there you get into something George was emphasizing. What you’re referring to is machine learning algorithms, and machine learning is a significant part of the emerging field of data science. How does one set boundary conditions for matching or classification? How many nearest neighbors are evaluated to interpolate a data point?
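As I understand it, one standard answer is to let cross-validation turn those knobs. A rough sketch with made-up data:

```python
# Choosing k by cross-validation (hypothetical data): score k = 1..5
# nearest neighbors by cross-validated error, keep the best predictor.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1500, 3000, size=(60, 1))           # hypothetical GLA
y = 150 * X[:, 0] + rng.normal(0, 15_000, size=60)  # hypothetical prices

scores = {}
for k in range(1, 6):
    model = KNeighborsRegressor(n_neighbors=k)
    # Negative MSE is scikit-learn's convention; larger is better.
    scores[k] = cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()

best_k = max(scores, key=scores.get)
print(f"Cross-validation picks k = {best_k}")
```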
There’s a cool, well-worth-it site at https://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/ wherein Trevor Hastie and Rob Tibshirani provide a 15-hour-long synopsis of machine learning and its parts and pieces. I found it fascinating. The textbook on which it is based is available as a free download, too. Its title is “An Introduction to Statistical Learning,” and it’s a sequel to their earlier work, “The Elements of Statistical Learning.”
One word of caution: Those videos will illuminate just how far behind in statistics we appraisers are. It’s like thinking you can dance, then meeting John Travolta.
Charles Abromaitis
August 12, 2017 @ 8:16 am
Jim, I agree with your assessment of that machine learning resource. Excellent stuff.
As I’m sure you know, there is really only one way to evaluate a forecasting model: how accurately it predicts on not-yet-seen cases. So we measure prediction error.
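A minimal sketch of that principle, with data invented for illustration: fit on one part of the data, then measure error on the part the model never saw.

```python
# Holdout evaluation (hypothetical data): train on one split, measure
# prediction error on cases the model has never seen.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(1500, 3000, size=(100, 1))           # hypothetical GLA
y = 150 * X[:, 0] + rng.normal(0, 15_000, size=100)  # hypothetical prices

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)
model = LinearRegression().fit(X_train, y_train)

mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"Mean absolute prediction error on unseen cases: {mae:,.0f}")
```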
IMO, North American appraisal institutions have always lagged in their delivery of data-analytics education. I recall taking an income analysis course with the Canadian appraisal institute in the early ’80s. Financial calculators were widely available, but the course insisted on the use of financial tables. Fast forward 35 years, and I see AI PD courses dealing with statistical analytics that require you to bring your HP calculator, while laptops are optional. Sigh!
jimplante
August 12, 2017 @ 9:02 am
Charles, I’ve been bitching about that 12C ever since I entered the appraisal profession (about the last 17 years). I even wrote to the national education folks at AI. They said they couldn’t allow laptops because, among other things, their batteries might go out during a test. Apparently, 12Cs don’t do that. I responded that if loss of electricity were a concern, the courses should be taught using a soroban (Japanese abacus). Just as accurate, very nearly as fast, just as cryptic to the new user as a 12C, and it doesn’t require batteries. One should not do DCFs on a calculator. One should not do market extractions on a calculator either. Regression on a 12C? God forbid.
Hewlett-Packard hit a winner with their financial calculators, but it’s past time for everyone to be using spreadsheets and stats programs for such work.
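For what it’s worth, the whole exercise is a few lines in any scripting language. A sketch with hypothetical cash flows and a hypothetical discount rate:

```python
# A minimal DCF sketch: the kind of arithmetic that belongs in a
# spreadsheet or script, not on a 12C. All figures are hypothetical.
def present_value(cash_flows, rate):
    """Discount a list of end-of-period cash flows at a periodic rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical: five years of NOI, with a year-5 reversion, at 8%.
flows = [100_000, 103_000, 106_000, 109_000, 112_000 + 1_400_000]
print(f"Present value: {present_value(flows, 0.08):,.0f}")
```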
George DELL
December 15, 2018 @ 10:02 am
I too, when I first started teaching for the AI, asked why we were using an obsolete tool, the 12C. The answer I heard was “everybody uses and expects it.”