“Technology in Appraisal” is the new exposure draft, proposed Advisory Opinion 41 (AO-41), written by the Appraisal Standards Board of The Appraisal Foundation.
It does not define what technology is.
More importantly, it seems to treat process technologies and product technologies without distinction. Appraisers mostly buy products (like measuring devices), and mostly sell data process technologies (like analytics, visuals, and ‘supported’ opinions).
The first exposure draft specifically mentions two algorithmic methods (regression software and machine learning), the AVM (Automated Valuation Model) industry, and a specific type of artificial intelligence called “Generative AI.” These deserve definitions, given that they are the specifically identified “technologies.”
Regression mathematically locates a line, or ‘smoother,’ that best fits the data. It has several appraisal uses where the cause/effect relationship is understood.
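As a minimal illustration of what “locating a line that fits the data” means (the sale data below are invented for this sketch, not taken from the draft), ordinary least squares fits a price-versus-time line by hand:

```python
# Minimal ordinary least squares sketch: fit a price-vs-time line.
# The sale data below are invented for illustration only.

months = [0, 1, 2, 3, 4, 5]              # months since start of study period
prices = [300, 303, 305, 310, 312, 316]  # sale prices, in $1,000s

n = len(months)
mean_x = sum(months) / n
mean_y = sum(prices) / n

# Slope = covariance(x, y) / variance(x); the intercept anchors the
# fitted line so it passes through the point of means.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, prices)) \
        / sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

print(f"price ≈ {intercept:.1f} + {slope:.2f} * months")
```

For this invented data, the fitted trend comes out to roughly $3,200 per month of appreciation, which is the kind of time adjustment a spreadsheet regression (price vs. time) produces behind the scenes.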
Machine learning uses a variety of algorithms to find patterns in large datasets. It is only marginally useful for individual appraisal work, due to the generally limited amount of directly competitive (comparable) data.
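One simple machine-learning idea, k-nearest neighbors, can be sketched to show both the pattern-finding concept and the limited-data problem noted above (the sales and the subject are invented for this example):

```python
# k-nearest-neighbors sketch: estimate a subject's price from the k most
# similar sales. All data here are invented for illustration only.

# Each sale: (square_feet, price in $1,000s)
sales = [(1400, 280), (1550, 300), (1600, 310), (2400, 420), (2500, 445)]

def knn_estimate(subject_sqft, sales, k=3):
    """Average the prices of the k sales closest in square footage."""
    ranked = sorted(sales, key=lambda s: abs(s[0] - subject_sqft))
    nearest = ranked[:k]
    return sum(price for _, price in nearest) / k

# With only five sales, the "nearest" comps may still be quite dissimilar
# to the subject -- the limited, directly competitive data problem.
estimate = knn_estimate(1500, sales)
print(estimate)
```

Real machine-learning tools use many features and far more data, but the core limitation is visible even here: with a thin comparable pool, the algorithm averages whatever is closest, whether or not it is truly competitive with the subject.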
AVMs (Automated Valuation Models), despite the name, are neither models nor algorithms; the term describes an industry that provides value estimates. AVM companies keep their algorithms proprietary and secret. Property features and the scope of work/analysis are decided by the client/lender, based on their policies. Most importantly, AVMs are not subject to the “core principles” of USPAP noted in the first paragraph of this Advisory Opinion draft (generally as set out in the Preamble and Ethics Rule of USPAP).
Generative AI uses statistical algorithms to find patterns in large or ‘vast’ datasets, then generates new and original content of several types, including text, images, code, music, and video.
The motivation for this new/revised opinion, AO-41, is the advent and rapid growth of artificial intelligence.
What stands out is not that this is a new way of forming an opinion, but that it is a way to merge new tools and new analytics into old expectations and established patterns: old ways imposed by underwriter/reviewer opinion, regulations, performance standards, and legacy education.
The USPAP Scope of Work Rule says little or nothing about the importance of the market analysis. It focuses on subject features and characteristics. It only mentions “economic supply and demand,” and “market area trends … if necessary.” The entire modern market analysis process in two lines!
The emphasis in this systematizing (Scope of Work) part of USPAP is compliance with established ways – what users expect, and what peers do. This focus is in direct competition with the admonition of Standards Rule 1-1(a), which emphasizes the need for proficiency in new methods and techniques.
(New methods are never “established” methods!)
On the positive side, USPAP seems visionary when Standards Rule 1-4 requires an appraiser to analyze all relevant information and such comparable sales data as are available. (Not just 4 or 5 hand-picked comps.)
This last thought is accentuated by the fact that the most influential appraiser educational organizations clearly state that the appraiser must first pick comps, then do market analysis. Backwards. Backwards.
Judgment first, facts later.
It is my hope that, as others provide public comments on the Appraisal Standards Board exposure draft AO-41, this outline and these definitions will help clarify the discussion.
January 14, 2026 @ 7:26 am
Yep, my thought: so far it looks like I need to understand what the tool is doing and how it got there. I don’t know what information Excel uses in its analysis of a simple regression. Yes, price vs. time, that I got, but can I explain the nuances of the data tool? Heck NO. So I guess that means I am in violation of USPAP per AO-41. Does the market data fit the picture of the market? Is it a reasonable comparison with the subject property? Would a buyer consider this sale vs. others in their shopping cart as they drive thru their RE home store? Plus I still would like to meet 3 stakeholders and bend their ear. Tired of them making things harder than they need to be. Course I also would like to say thanks to them: I don’t care about lending work, but they still manage to find a way to screw up my private party work indirectly.
January 14, 2026 @ 12:36 pm
George, this is a thoughtful critique, and much of it resonates with me. In my experience, many of the concerns you raise—especially around definitions, conflation of technologies, and the risk of forcing new tools into old procedural frameworks—have been with the profession for a long time.
In my opinion, USPAP has always struggled with striking the right balance between encouraging proficiency in new methods and reinforcing what are viewed as “established” practices. That tension didn’t start with AI or AVMs, and it won’t end with AO-41. What’s changed is the scale and opacity of the tools now being introduced into appraisal work.
I’ve seen this cycle before. When multiple regression analysis (MRA) was first pushed into mainstream appraisal education, it was often presented as a way to produce “market-supported” adjustments with mathematical precision. In practice, MRA worked very well in certain markets and very poorly in others. The problem wasn’t regression itself—it was that many appraisers were encouraged to use it without a strong enough conceptual understanding of when the relationships made sense and when they didn’t. The result was often false confidence rather than better judgment.
I see a similar risk today with AVMs and AI-driven tools. Appraisers are not being asked to become statisticians or software engineers, but they are being asked to rely on outputs that are increasingly opaque and client-defined. I don’t read AO-41 as an attempt to pull these tools under USPAP so much as an attempt to force a more honest conversation about what competent reliance looks like when the mechanics are hidden.
On AVMs specifically, I share the concern that they are not subject to USPAP, are not transparent, and are governed largely by lender scope and policy. That reality is uncomfortable, but it’s also unavoidable. In my view, AO-41 is less about merging new analytics into old expectations and more about separating judgment from mechanics. Appraisers are not being asked to replicate models or understand algorithms; they are being asked to understand enough to decide when reliance is appropriate, limited, or unwarranted.
From my perspective, that space between blind trust and outright rejection is where the profession has historically struggled. That’s also where independent context becomes important—not as regulation, not as certification, and not as a substitute for judgment, but as external evidence of how tools tend to behave across markets, price tiers, and use cases.
I also agree with the concern about market analysis being undervalued or treated as secondary. In my opinion, the way market analysis has often been taught and applied has been backwards—selecting comps first and analyzing later—and AO-41 doesn’t directly fix that. But it does reinforce, at least implicitly, that judgment informed by broader data, not just a handful of hand-selected comparables, matters.
Ultimately, I see AO-41 less as an endorsement of any specific technology and more as a signal that the profession needs clearer education, shared reference points, and more realistic expectations around what appraisers are—and are not—being asked to understand. If this exposure draft leads to better clarity around those boundaries, then it will have served a useful purpose, even if the draft itself evolves.
I appreciate you putting this blog together. This is exactly the kind of discussion the exposure process is supposed to encourage.