I just attended the American Statistical Association joint conference in Portland.  I learned some things:  1) I have forgotten a lot from my extensive graduate stats classes.  2) My narrow focus on valuation and risk has narrowed my world.  3) “Statistics” has widened, progressed, advanced, and gentrified.

Statistics has progressed.

Valuation has not progressed.

Both started with a similar problem:  DATA (or the lack of it).

Statistics historically solved the data problem (the cost and difficulty of gathering all the data) by taking a sample:  a subset that could represent, or “stand in” for, the population of interest.  Sampling, it turned out, was not so easy.  How and why you picked a sample made a difference.  Eventually it was established that random selection was better than judgment selection.

A whole body of knowledge grew up around making a random sample parallel the features of the population of interest as closely as possible.  Randomness became the goal.  It became the basis of what is called inferential statistics – which is what most of us had to learn (after we got past the mean, median, mode, and standard deviation descriptive stuff).
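To make the sampling point concrete, here is a minimal sketch (not from the original article) using a synthetic, hypothetical population of property values.  A random sample tends to track the population mean, while a hand-picked “judgment” sample that favors certain properties does not.

```python
import random
import statistics

# Hypothetical illustration: a synthetic "population" of 10,000 property values.
random.seed(42)
population = [random.lognormvariate(12.5, 0.4) for _ in range(10_000)]
true_mean = statistics.mean(population)

# Random sample: every property has an equal chance of being selected.
random_sample = random.sample(population, 30)

# "Judgment" sample: an analyst who gravitates toward the higher-priced properties.
judgment_sample = sorted(population, reverse=True)[:30]

print(f"Population mean:      {true_mean:,.0f}")
print(f"Random sample mean:   {statistics.mean(random_sample):,.0f}")
print(f"Judgment sample mean: {statistics.mean(judgment_sample):,.0f}")
```

The random sample’s mean lands near the population mean; the judgment sample’s mean does not, which is exactly why randomness became the basis of inference.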

Valuation came later.  It settled on an accepted paradigm with the goal of analyzing “the market” – the actions of buyers and sellers interacting to produce sale prices.  It was clear that the “population” was sale prices of competitive properties, not all the properties in a neighborhood.  Without prices, there is no market result of buyer/seller interaction.

A whole body of knowledge settled in.  It used sales comparisons, cost comparisons, and income comparisons (because income is the most important adjustment feature).

Today, “statistics” has taken a turn or three.  First, most statistical work no longer needs a random sample, because the sample is the whole population, available electronically and analyzed algorithmically.  This is called “data science.”  The challenge is in making sense of that much data.  Second, the advent of artificial intelligence has simplified and accelerated the analysis of that data.  Human expert judgment is still needed, but the result comes much more quickly.  Finally, LLMs (Large Language Models), a concept only a few years old, are coming to dominate the AI sphere.

Applications of statistical models are dramatically, exponentially changing the nature and delicacy of research, authorship, review, and policy.

And yet today, “appraisal” continues to teach new licensees the old, obsolete ways:  “Pick comps, make adjustments, support your opinion.”  Just show your work to be “worthy of belief.”

WE MEASURE MARKETS, NOT COMPARE COMPS.