Evidence is briefly defined as “something that shows proof.”

It’s precisely defined in the law, and also addressed in philosophy and epistemology.

Epistemology, the theory of knowledge, deals with its methods, validity, and scope.  In the philosophy of science, evidence is what confirms (or disconfirms) your hypothesis.

Evidence justifies belief, but only if it is respected.  It is a guide to truth and to objectivity. Read What is Evidence? Part 1 here.

Today’s world of data as evidence requires additional structure.  The analysis involves data selection, data cleaning, and the application of underlying theory that is not itself data: mostly economic or human-behavioral theory.  Proper analysis then evidences that theory, and in turn produces today’s superior predictive models.
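As a toy sketch (in Python, with invented column names, records, and filtering rules), the selection-and-cleaning step that precedes any theory might look something like this:

```python
import pandas as pd

# Toy records; the column names and rules are illustrative assumptions.
raw = pd.DataFrame({
    "price":     [310_000, None, 455_000, 298_000],
    "sale_type": ["arms-length", "arms-length", "foreclosure", "arms-length"],
})

# Drop records with no recorded price; keep only arms-length (market) sales.
clean = raw[raw["price"].notna() & (raw["sale_type"] == "arms-length")]
print(clean)
```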

I had a course in law school entirely devoted to, and succinctly titled, “Evidence.”  When I became an appraiser, my training involved nothing that felt like a legal explanation, nor a scientific process.  The appraisal process was a process: pick some comps, improve those comps, adjust the comps, and show your adjusted comps.  I learned it was important for me to be well-adjusted.

In practice, nobody taught me what a comp is.  It was presented as a natural, intuitive thing.  The Appraisal Institute book says it should be:  competitive, similar, and “able to be compared.”

In USPAP, “evidence” is not defined.  However, it is used to explain “credible” results.  It’s also used twice in Standards Rule 1-4, relative to the income approach for real property.

In the law, evidence should be relevant, material to the case, and not specifically excluded (such as hearsay).  Evidence can be “real” (physical things you can see and touch).  It can be “demonstrative” (a model, chart, image, or map).  It can be documentary, where documents rather than witness accounts are used.  Finally, it can be testimony: a witness’s personal observation, or an expert’s observation or compilation.

In data science, the emphasis is on the data itself.  Today, computer power makes calculations and complete analyses easy.  It’s estimated that data scientists (and asset analysts) spend 80% of their effort on getting the data right.  Similarity matching dominates the research.
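As one illustration of similarity matching, a common approach is to scale each feature by its spread, then rank candidate sales by distance to the subject.  The features and numbers below are invented:

```python
import numpy as np

# Hypothetical subject and candidate sales, described by a few features:
# living area (sq ft), lot size (sq ft), age (years).
subject = np.array([1850.0, 7200.0, 12.0])
sales = np.array([
    [1790.0, 6900.0, 10.0],   # candidate sale 1
    [2400.0, 9800.0, 35.0],   # candidate sale 2
    [1900.0, 7500.0, 14.0],   # candidate sale 3
])

# Scale each feature by its spread so no single unit dominates the distance.
spread = sales.std(axis=0)
distance = np.linalg.norm((sales - subject) / spread, axis=1)

# Smaller distance = more similar; rank candidates accordingly.
for i in distance.argsort():
    print(f"sale {i + 1}: distance {distance[i]:.2f}")
```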

However, recall that data is just data: a bunch of measured or categorized facts or probabilities.  The first task is to select the relevant data; once selected for relevance, data becomes useful information.  The right selection must consider both a datum’s relevance and its impact on the result.  For valuation, this means selecting the sales in the competitive market segment (CMS)©.  It also means selecting the relevant and influential predictors (“elements of comparison” in the traditional jargon).  But, wait, there’s more . . .
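(Before the “more”: here is a rough sketch of that selection step.  The column names, thresholds, and choice of predictors are illustrative assumptions, not a prescribed method.)

```python
import pandas as pd

# Toy sales records.
sales = pd.DataFrame({
    "price":       [310_000, 455_000, 298_000, 502_000],
    "living_area": [1650, 2400, 1580, 2650],
    "zone":        ["R1", "R1", "R2", "R1"],
    "sale_date":   pd.to_datetime(["2023-04-01", "2023-06-15",
                                   "2023-02-20", "2023-07-30"]),
})

# Competitive market segment: same zoning, size within a band, recent sale.
cms = sales[
    (sales["zone"] == "R1")
    & sales["living_area"].between(1400, 2600)
    & (sales["sale_date"] >= "2023-01-01")
]

# Keep only the predictors judged relevant and influential.
predictors = cms[["living_area", "sale_date"]]
print(cms)
```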

For valuation, there’s usually a limited number of sales that directly competed with the subject on the effective date of value.  The right count is not fixed, nor will any fixed number be ideal in every situation.  There’s a trade-off between too many (garbage data) and too few: too few sales invites analytic bias.
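One way to respect that trade-off is to make the comp count flexible: keep every sale within a similarity cutoff, but enforce a minimum floor so the evidence never gets too thin.  A sketch, with arbitrary cutoff and floor values:

```python
def select_comps(distances, cutoff=1.0, floor=4):
    """Return indices of comps, ordered most-similar first."""
    ranked = sorted(range(len(distances)), key=lambda i: distances[i])
    within = [i for i in ranked if distances[i] <= cutoff]
    # Too few sales invites bias; pad with the next-most-similar sales.
    return within if len(within) >= floor else ranked[:floor]

distances = [0.4, 2.1, 0.7, 1.6, 0.9, 0.3]
print(select_comps(distances))   # -> [5, 0, 2, 4]
```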

Remember that all measurements, selections, and sequences have some variation.  There is always error or uncertainty.  The best evidence reflects its authenticity, reliability, and inherent variability, as well as its relevance.  A measured likelihood of truth is more important than any point statement claiming or implying a truth.  Or worse: “It’s just my opinion.”
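To make that concrete, here is a minimal bootstrap sketch that reports a range around the point estimate instead of a bare number.  The adjusted prices are invented:

```python
import random
import statistics

random.seed(1)
adjusted_prices = [402_000, 415_000, 398_000, 421_000, 408_000, 411_000]

# Resample the adjusted prices many times to see how the mean varies.
means = []
for _ in range(2_000):
    resample = random.choices(adjusted_prices, k=len(adjusted_prices))
    means.append(statistics.mean(resample))

means.sort()
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
print(f"point estimate: {statistics.mean(adjusted_prices):,.0f}")
print(f"~95% interval:  {lo:,.0f} to {hi:,.0f}")
```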

And finally, the best result integrates the relevant computer algorithms with expert modeling judgment.  The nexus is visual:  graphs, tables, and simple descriptive numbers.
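For example, even a few descriptive numbers over the selected comps give expert judgment something concrete to meet the algorithm on (toy data):

```python
import pandas as pd

# The kind of simple descriptive table an analyst can inspect at a glance.
comps = pd.DataFrame({
    "adjusted_price": [402_000, 415_000, 398_000, 421_000],
    "living_area":    [1790, 1900, 1720, 1960],
})
print(comps.describe().loc[["mean", "std", "min", "max"]])
```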

Data science explicitly recognizes the need for the right expert.  This is the appraiser.  The Asset Analyst.