Statistics is infernal for a few basic reasons:
- The misuse of words, using different meanings at different times (like ‘infer’);
- The assumption that a clever tool will give you a clever answer (like regression);
- The fallacy that inferential statistics works without random sampling;
- The worst fallacy: that regression is always based on an inferential sampling process.
Let’s look at each of these briefly.
First, “inference.” The general meaning of inference is: “if A, then probably B.” In statistics, the meaning is much narrower and more specific. Inferential statistics measures how well a random sample represents a population. The problem to be solved is to study or characterize (mean, median, standard deviation, etc.) a population in which you are interested.
“Statistics” itself also has two meanings, often used in a confusing and confounding manner:
- the study of data;
- the measures of how well random samples represent a population.
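To make that distinction concrete, here is a minimal sketch in Python (the numbers are made up for illustration, not real sale data). The descriptive part simply describes the sample you have; the inferential part says how well that random sample represents its population, and the arithmetic holds only because the sample really was drawn at random.

```python
import random
import statistics

# A made-up "population" of 10,000 sale prices (illustrative only).
random.seed(42)
population = [random.gauss(400_000, 60_000) for _ in range(10_000)]

# First meaning -- descriptive statistics: describe the data you actually have.
sample = random.sample(population, 50)        # a true RANDOM sample
sample_mean = statistics.mean(sample)
sample_sd = statistics.stdev(sample)
print(f"Sample mean: {sample_mean:,.0f}")
print(f"Sample s.d.: {sample_sd:,.0f}")

# Second meaning -- inferential statistics: measure how well that random sample
# represents the population, e.g. an approximate 95% confidence interval for
# the population mean. This arithmetic is justified ONLY because the sample
# was selected at random.
std_error = sample_sd / (len(sample) ** 0.5)
low = sample_mean - 1.96 * std_error
high = sample_mean + 1.96 * std_error
print(f"Approx. 95% CI for population mean: {low:,.0f} to {high:,.0f}")
print(f"True population mean:               {statistics.mean(population):,.0f}")
```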
Second, “regression.” A simple math formula to minimize a bunch of squared numbers. Impressive. If you assume the data was [here we go again] randomly selected, you can use even more clever statistics … of the second kind. And you can claim that you are clever at using statistics … of the first kind.
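For the curious, that “simple math formula” really is just picking the line that makes the bunch of squared numbers as small as possible. Here is a minimal sketch in Python, with hypothetical square footages and prices, and with no inferential claims attached:

```python
# Ordinary least squares "by hand": choose the slope and intercept that
# minimize the sum of squared residuals. Made-up numbers, purely illustrative.
sqft  = [1200, 1500, 1800, 2100, 2400]
price = [310_000, 355_000, 410_000, 455_000, 520_000]

n = len(sqft)
mean_x = sum(sqft) / n
mean_y = sum(price) / n

# Closed-form least-squares solution for a single predictor.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sqft, price))
         / sum((x - mean_x) ** 2 for x in sqft))
intercept = mean_y - slope * mean_x

residuals = [y - (intercept + slope * x) for x, y in zip(sqft, price)]
sse = sum(r ** 2 for r in residuals)   # the "bunch of squared numbers"

print(f"price ~ {intercept:,.0f} + {slope:,.2f} * sqft   (SSE = {sse:,.0f})")
```

Notice there are no standard errors, t-statistics, or p-values in that sketch. Those are statistics of the second kind, and they are only justified when the data really were randomly selected.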
Third, “the inferential fallacy.” This is the claim that random selection or random assignment is not necessary in order to use the clever inferential statistics. But it is necessary. Inferential statistics is a mathematical formula, not a modeling judgment. Appraisers do exactly the opposite of random selection: they carefully select a sample which is non-random, and is [carefully] similar to the subject property.
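To see the contrast, here is a hypothetical sketch (invented field names and numbers, nothing more) of the two kinds of selection side by side:

```python
import random

random.seed(1)

# Hypothetical sale records (illustrative fields, not real data).
sales = [{"id": i,
          "price": random.gauss(400_000, 60_000),
          "miles_from_subject": random.uniform(0.1, 5.0)}
         for i in range(200)]

# What inferential statistics assumes: selection by chance.
random_sample = random.sample(sales, 6)

# What appraisers actually do: deliberately pick the most similar
# (here, simply the nearest) sales -- the opposite of random selection.
comp_set = sorted(sales, key=lambda s: s["miles_from_subject"])[:6]

# Both are "samples," but only the first satisfies the random-selection
# requirement that standard errors and confidence intervals are built on.
print("Randomly selected ids:", sorted(s["id"] for s in random_sample))
print("Hand-picked comp ids: ", sorted(s["id"] for s in comp_set))
```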
“Appraisers don’t do no random samples.”
And finally, infernality carries the above insensibilities to a deeper, mystical, and mystifying demonstration of overwhelming cleverness. This is the assumption that any regression is built on an inferential model, randomly sampled from an exactly defined population, AND that the purpose of the analysis is to characterize the population from which the regressed data came.
Yet we base much of our “advanced” quantitative analysis curricula on unsupported assumptions, as described above. We forgot the “beginning” and “intermediate” stuff.
Whew! Again: the appraiser “must be careful not to develop a result that is mathematically precise yet logically meaningless or inappropriate,” and “should be able to distinguish between descriptive and inferential statistics.” The Appraisal of Real Estate, 14th ed., p. 400.
It’s not that hard!
Gary Kristensen
March 23, 2017 @ 12:30 pm
Wow, there is a lot of deep stuff in this post for appraisers to absorb. Thanks George, keep them coming.
Dave Towne
March 31, 2017 @ 9:12 pm
This is a very enlightening and helpful blog post by George. I attended a two-day CE class taught by George some time ago, and couldn’t understand the discussion about ‘inferential statistics.’ Mental block is my excuse. This blog post clarifies the issue. It’s kind of funny though … since I began using spreadsheet charts and graphs in 2008 to visually demonstrate market data, I never have used outlying dissimilar properties (the inferential fallacy properties) to help the presentation. I have always confined the data to relevant properties that compete with the subject. George’s classroom discussion and this blog post parallel what I’ve been doing all along! And now it fully makes sense! Thanks!
George Dell
April 21, 2017 @ 4:28 pm
What is interesting to me is that the appraisal profession has developed sophisticated methods over the years, methods meant to deal with the sparse and troubled data we historically had to work with.
What we can do today is put some scientific logic, reasoning, and understanding behind what we have done so well intuitively. However, today the world needs and wants EVIDENCE. In the past, it was difficult if not impossible to provide deductive or inductive logic. Often the best we could do was to try to help the reader follow our thinking. The mantra in my demo writing class was: “support, justify, explain.”
The mantra we need today is “show, estimate, predict.” The objective is to create an analysis which can be replicated (not subjectively judged/reviewed). With complete data, computation, a visual interface (human/machine), and modern analytic software (NOT the accountant’s two-dimensional spreadsheet), a dramatically improved valuation product is created. And additional appraiser products and services are enabled. This is my personal goal: to give appraisers the ability and tools to provide new, much-needed products, along with better speed, reliability, and income for those competent in the new “paradigm.”
Can Expert Analysis Save Appraisers? - George Dell, SRA, MAI, ASA, CRE
January 25, 2023 @ 1:17 am
[…] teacher. He knows that we appraisers do not have the quantity of data to engage in a traditional inferential statistical analysis. And of course, appraisers don’t take random samples, so inferential […]