Ai (artificial intelligence) needs guidance: the right questions, the right prompts.
How does judgment-based appraisal work with Ai? Or does Ai work better with algorithms?
Appraisal is defined as an opinion, or the act of developing an opinion, one that must be worthy of belief ("credible").
Data Science (DS) is defined as systematic study by an expert in both scientific methods and the field of study.
These differing goals define the distinction between the two: one is believability, the other reliability.
These differing goals each set up a structure and process — in a play of concepts, culture, and context.
An important part of this distinction is the concept of "objectivity." In appraisal standards, objectivity is required, but it focuses on personal and psychological concepts, such as bias, misrepresentation, advocacy, or acting in a "grossly negligent" manner. (So, a little negligent is ok?)
Objectivity in science certainly includes the personal elements above – but focuses on systematic methods, replication, diversity, and skepticism.
We can typify valuation, or any research venture, into three levels of intention:
- Claim of personal practice objectivity, without further support.
- Statement of professional opinion, and argue (with “support”) for that opinion.
- Development of research and analysis from data to logic, to probability, with “evidence.”
Ai seems to do best with clear prompts. "Prompt engineering" means setting up prompts with context, clear goals, any input data, and the expected output. What works best is:
- Clear and specific;
- Structure of inputs and data;
- Testability of logic and subject theory;
- Testability of analytic bias and fairness.
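The four elements above can be made concrete in a few lines of code. This is a minimal sketch of a structured prompt builder; the function name, section labels, and sample values are illustrative assumptions, not part of any standard.

```python
# Minimal sketch: assemble a prompt from the four elements named above
# (context, clear goal, input data, expected output). All names and
# sample text here are hypothetical, for illustration only.

def build_prompt(context: str, goal: str, input_data: str, output_format: str) -> str:
    """Assemble a prompt with explicit, labeled sections."""
    return (
        f"CONTEXT:\n{context}\n\n"
        f"GOAL:\n{goal}\n\n"
        f"INPUT DATA:\n{input_data}\n\n"
        f"EXPECTED OUTPUT:\n{output_format}\n"
    )

prompt = build_prompt(
    context="You are assisting with a residential market analysis.",
    goal="Identify the most competitive comparable sales.",
    input_data="sale_price, living_area, lot_size, sale_date for 50 sales",
    output_format="A ranked list with a one-line rationale per comp.",
)
print(prompt)
```

Labeling each section explicitly gives the model structure to test against: the goal, the inputs, and the expected output are all visible and checkable.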
Artificial intelligence, unless told otherwise, will tend to follow instructions. It has trouble knowing whose opinion to follow; it does better following facts and logic. It has even been known to make things up and hallucinate, hoping to please you rather than disappoint you! (A codependent personality!)
Appraisal standards (USPAP, the Uniform Standards of Professional Appraisal Practice) require appraiser work "necessary to produce a credible appraisal." If given USPAP as a guide, Ai will strive to be believable instead of providing reliable results. (Reliability can be defined as accurate and precise: accuracy is closeness to the true result, while precision is the sureness, or consistency, of that result.)
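The accuracy/precision distinction in the parenthetical above can be shown numerically. This sketch compares two invented sets of value estimates against a made-up "true" value; the numbers are illustrative assumptions only.

```python
# Accuracy vs. precision, illustrated with invented numbers.
# Set A is centered on the true value; set B clusters just as tightly
# but around the wrong value (precise, yet not accurate).
import statistics

true_value = 300_000
estimates_a = [299_000, 301_000, 300_500, 299_500]  # accurate and precise
estimates_b = [330_000, 331_000, 329_500, 330_500]  # precise but biased

def accuracy_error(estimates, true_value):
    """Accuracy: how close the average estimate is to the true value."""
    return abs(statistics.mean(estimates) - true_value)

def precision_spread(estimates):
    """Precision: how tightly the estimates cluster (standard deviation)."""
    return statistics.stdev(estimates)

print(accuracy_error(estimates_a, true_value))  # small: on target
print(accuracy_error(estimates_b, true_value))  # large: biased
print(precision_spread(estimates_a))            # small: consistent
print(precision_spread(estimates_b))            # also small: consistent, but wrong
```

Set B is the dangerous case for valuation: every estimate agrees, so the result looks "sure," yet all of them miss the true value.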
If Ai is told to pick some comps, it will probably try to mimic appraiser practice, whatever that is . . . .
If it is told to identify the ideal competitive data set (the Goldilocks point), it will search for something different from what the traditional body of knowledge prescribes, pulling instead from sources like mass appraisal, statistical literature, or, more recently, data science. (Data science is 80% data selection!)
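What might that data-centric "Goldilocks" search look like? This is a minimal sketch: rank candidate sales by a weighted feature distance to the subject property and keep the closest few. The features, weights, and sample data are illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch of data-centric comp selection: rank candidates
# by weighted distance to the subject and keep the closest k.
# All features, weights, and property data here are invented.
import math

subject = {"living_area": 1800, "lot_size": 7500, "age": 15}
weights = {"living_area": 1.0, "lot_size": 0.3, "age": 0.5}

candidates = [
    {"id": "A", "living_area": 1750, "lot_size": 7200, "age": 14},
    {"id": "B", "living_area": 2600, "lot_size": 9000, "age": 40},
    {"id": "C", "living_area": 1850, "lot_size": 7600, "age": 18},
]

def distance(subject, comp, weights):
    """Weighted Euclidean distance over the chosen features."""
    total = 0.0
    for key, w in weights.items():
        # scale each difference by the subject's value so features are comparable
        diff = (comp[key] - subject[key]) / subject[key]
        total += w * diff ** 2
    return math.sqrt(total)

ranked = sorted(candidates, key=lambda c: distance(subject, c, weights))
top_comps = ranked[:2]
print([c["id"] for c in top_comps])  # the two closest candidates
```

The point is not this particular distance formula; it is that the selection criterion is explicit, repeatable, and testable, rather than "whatever appraiser practice is."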
More science, less "art." Objective results, not objective opinions. We believe that the use of Ai in valuation requires the analytics, data-centric tools, and reliability-focused benefits of the data science approach. This is the discipline we call Evidence Based Valuation (EBV)©.
July 25, 2025 @ 5:25 am
Interesting article, Mr. Dell. The aforementioned AI hallucination is actually an inherent by-product of all types of quantum computing. The nature of quantum computing requires multiple outcomes to be simultaneously probable for any given computation request. That's why it's referred to as quantum, not analog. It's not hallucinating, but rather operating exactly as expected: multiple outcomes to any given request are always probable, despite all odds against such an outcome's actual probability. You'll never get around this fact, inherent to the nature of all manner of quantum processing. Completely different from objective analysis by a well-informed human who can distinguish obvious anomalies and unreliable data, and choose what is relevant to incorporate and what is not.
Let's please move past this idea that appraisers need less human judgment incorporated into any process in order to be more 'scientific'. Without the human factor, all you end up with is infinite relativity. Under the same line of reasoning, in the AI world, everything is subject to dismissal (art), because there is no absolute truth (science). As the old saying goes: lies, damned lies, and statistics. More data does not always result in a more reliable data analysis.
Thank you, but I do believe we're going to pass on the AI sensation currently sweeping the nation. Most people using quantum technology cannot explain the very nature of what quantum computing is, and what it is not. Roll the dice, see which direction the isotope points with the next computation in a vacuum. Repeat with minor directive adjustments until the desired results are achieved. There is a lot of science behind the theoretical physics and physical hardware related to quantum technology. However, in the human applications seeking to utilize AI technology as an incorporated process in our everyday lives, it's hard to imagine anything less scientific or anything more vulnerable to manipulation and exploitation.