Artificial intelligence (“AI”) refers to systems that mimic human intelligence. This includes learning, problem-solving, reasoning, and making decisions. These AI systems use huge (or restricted) data sets to find patterns and make predictions. These are also the constructs of data science.
How does this relate to appraisal?
This Modernizing Appraisal series is about the analysis process, not Fannie Mae and Freddie Mac reporting requirements. The process is identical: 1) The problem; 2) The data; 3) Predict or adjust; 4) Communicate. We continue here on #1 – defining the problem to be solved. Read the earlier parts of this series here.
The Appraisal of Real Estate says: “Predictive models are predominant in most valuation settings.”
The scientific method works best:
Traditional appraisal: Scope of Work, Pick comps, Adjust, Reconcile.
Modern valuation: Structure the problem, ID relevant data, Predict/adjust, Assess reliability.
Artificial intelligence can deliver foolish results. Or it can sharpen human judgment.
The benefits to the appraiser are multiple: 1) it can help formulate the problem; 2) it can uncover, improve, and help select the right data set; 3) it can often “calculate/estimate” adjustments, and formulate a predictive model; 4) it can (with guidance) provide a reliability measure.
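As a concrete illustration of point 3 – “calculating/estimating” adjustments with a predictive model – a paired-sales adjustment can be framed as a small regression problem. This is a minimal sketch on synthetic, hypothetical sales data (the figures and features are invented for illustration, not drawn from any real market):

```python
import numpy as np

# Hypothetical sales: each row is [square_feet, garage_spaces].
# The synthetic prices below were generated from a known linear rule
# (base 20,000 + $180/sq ft + $15,000 per garage space), so the fitted
# coefficients can be checked against it.
X = np.array([
    [1500, 1],
    [1800, 2],
    [2100, 2],
    [1650, 1],
    [2400, 3],
], dtype=float)
prices = np.array([305_000, 374_000, 428_000, 332_000, 497_000], dtype=float)

# Add an intercept column and fit by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coefs, residuals, rank, _ = np.linalg.lstsq(A, prices, rcond=None)
intercept, per_sqft, per_garage = coefs

# The fitted coefficients are candidate adjustment rates; comparing
# predictions to actual sales gives a rough reliability check.
predicted = A @ coefs
print(f"$/sq ft adjustment: {per_sqft:.0f}")        # 180 for this synthetic data
print(f"$/garage-space adjustment: {per_garage:.0f}")  # 15000 for this synthetic data
```

In practice the data set would be larger and noisier, and the residuals (how far predictions miss actual sale prices) would feed the reliability measure mentioned in point 4.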
Benefits to users can include a specific risk score, forecast values, periodic (or real-time) asset measures, and directional velocity scores. Other types of risk could also be incorporated for lenders, investors, equity enforcers, and public economic measures (including non-financial benefits and costs).
Conclusion: AI for appraisal works best with data-centric methods, versus judgment-based practice.
This means that the future of valuation (and risk metrics) lies in proper prompting of your “AI machine.”
My current conclusions about the future take a broad view, extending beyond asset analysis to AI’s uses in the industrial, financial, cultural, and personal/social worlds.
In general, I believe ethics will become more important. This is because lying becomes easier – easier to cover up, more convenient, and even more profitable.
Also, it is the tendency of mass systems to make mass errors.
A recent example of this is the emphasis on current risk (making deals) by the GSEs, while underestimating outlier risks (“black-swan” occurrences). More specifically, AI tends to make poor judgments and reach poor conclusions if not properly prompted with scenarios, examples, and guardrails.
Science, as well as art (creativity), will also require improved connection between the human “operator” and the machine. This means the connection system will require emphasis on human strengths. Visuals and easily-seen data summaries are needed for human comprehension, as well as for problem (scope of work) definition.
Ethics requires this mind-machine connection, as well as the ability to direct the AI machine.
For those new to AI – start by playing. Start with a simple problem. Learn the basics of proper prompting: set a scenario, boundaries, any specific outputs desired, and examples, as well as defined data sets when necessary.
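That prompting checklist – scenario, boundaries, outputs, examples, data – can be sketched as a simple template. This is only an illustration of the structure; the function and field names are invented here and are not part of any particular AI product:

```python
# A minimal prompt-building sketch mirroring the checklist above.
# All names are illustrative assumptions, not a standard API.
def build_prompt(scenario, boundaries, outputs, examples, data=None):
    """Assemble a structured prompt string from the checklist items."""
    sections = [
        f"Scenario: {scenario}",
        f"Boundaries: {boundaries}",
        f"Desired outputs: {outputs}",
        f"Examples: {examples}",
    ]
    if data is not None:  # data sets are included only "when necessary"
        sections.append(f"Data set: {data}")
    return "\n".join(sections)

prompt = build_prompt(
    scenario="You are assisting a residential appraiser.",
    boundaries="Use only the supplied sales; flag any assumption you make.",
    outputs="A per-square-foot adjustment with a one-line reliability note.",
    examples="e.g., '$180/sq ft, based on 5 paired sales; moderate confidence.'",
    data="5 recent sales in the subject neighborhood (attached).",
)
print(prompt)
```

The point is not the code itself but the habit it encodes: every prompt states the scenario, the guardrails, and the expected output before the AI machine is asked for anything.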