Time “adjustment” is not really about time (or so they say). It’s about changed market conditions since the time of the comparable sale. This wording works fine for legacy appraisal practices.
The modern, data-centric, data-science approach is clear that market conditions are measured at points in time (or along a continuum of time). Thus, the “predictor variable” is time, identified by the closing (or contract) dates of the subject, the comparables, and competitive market sales.
This series of writings is concurrent with GSE (government-sponsored enterprise) requirements coming into effect during 2025, and with the trauma of compliance on the part of lenders, management companies, and those pressed to provide understandable and accurate price indexing, such as appraisers.
The AVM (automated valuation model) industry currently has no known requirements for transparency of market analysis or conclusions, as appraisers do. This regulatory discrimination of one product over another is unheard of in other industries. Yet it exists. In fact, the cost of what little regulation and standards exist for AVMs is substantially paid by appraisers, through state licensing fees passed through to the joint Appraisal Subcommittee. (Thus, AVM trend-analysis algorithms are not considered here.)
In this blog post and the next, we will consider how each of the most important governmental, quasi-governmental, and quasi-non-governmental bodies, federal agencies, and state regulators affects this one simple, straightforward issue: how does market-specific data calculate to a price index? (This was called “adjustment” in the historical/legacy/traditional practice developed in the previous century, the 1900s.)
We will consider the historical, judgment-based models deemed “acceptable” in the past, their potential, and their inadequacies. Then we will look at how an ideal model, using ideal algorithms, should appear.
FHFA
The Federal Housing Finance Agency (FHFA) has ostensibly been the overseer of the GSEs. Controversy around accusations of bias in loan making and valuation has motivated research into the possible causes of continued “protected group” bias in valuations. Issues include personal prejudice, the effects of historical mores (such as redlining), and model/algorithmic bias.
Relative to the “bias issue,” this author believes analytic clarification would stop a great deal of angst, clarify differences, and lead to needed mutual solutions, if we first differentiate between personal bias and analytic bias. (I believe that until this distinction is clear, arguments will continue to talk past each other, creating more noise and confusion, not explanations and solutions.)
FHFA Working Paper 24-07, published November 7, 2024, by William M. Doerner and Scott Susin, both PhD economists, investigated the connection between appraisers’ use of “time adjustments” and racial bias issues related to property values and appraised value opinions.
We will not get into political positions — whether belief-based, or evidence-based.
We note that the authors identified four “common” methods used by appraisers to “handle” changes in the value of the dollar (the measuring stick) and changes in demand/supply in a market segment:
- Grouped data. Contrast a current group of similar sales with a prior group (say, the prior month) of sales. The mean or median difference then represents the change from the prior month to the current month. The greatest issue with this method is the “Simpson’s Paradox” logic error (see the first sketch after this list).
- Paired sales. This method compares a current sale price with a prior sale price of the same property. The theory is that the property and everything else stay identical, except time.
- Indexed prices. This method fits a regression trend line to a scatterplot of similar sales over time, using the time coefficient as an adjustment factor, similar to other adjustments (see the second sketch after this list).
- No adjustment. This is the easiest. This “method” is attributed to the mental difficulty involved, uncertainty or lack of data, and the lack of “official,” anointed education. The authors found this to be the most common method (typically, in over 80% of cases).
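To make the grouped-data hazard concrete, here is a minimal sketch in Python, with all sale prices and segment mixes invented purely for illustration. It shows how a month-over-month pooled median comparison can point the wrong way when the sales mix shifts between segments, even though every segment of the market is appreciating:

```python
# Minimal sketch of the grouped-data method and its Simpson's Paradox hazard.
# All prices and mixes are hypothetical, invented for illustration only.

from statistics import median

# Two market segments, both appreciating about 1% month over month.
prior_month = {
    "condo":         [200_000, 210_000, 205_000],
    "single_family": [400_000, 410_000, 405_000],
}
current_month = {
    "condo":         [202_000, 212_000, 207_000, 204_000, 209_000],  # more condos sold
    "single_family": [404_000, 414_000],                             # fewer houses sold
}

# Grouped-data method: pool all sales and compare the medians.
prior_all = [p for seg in prior_month.values() for p in seg]
current_all = [p for seg in current_month.values() for p in seg]
pooled_change = median(current_all) / median(prior_all) - 1
print(f"Pooled median change: {pooled_change:+.1%}")  # negative: the mix shift dominates

# Segment by segment, the market is actually rising.
for seg in prior_month:
    seg_change = median(current_month[seg]) / median(prior_month[seg]) - 1
    print(f"{seg} median change: {seg_change:+.1%}")  # both positive
```

Here the pooled medians report a steep decline while each segment appreciates about one percent: the classic Simpson’s Paradox error of grouped comparisons.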
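And here is a minimal sketch of the indexed-prices method, assuming (hypothetically) a simple linear trend fit by ordinary least squares, with the slope per day used as the market-conditions adjustment factor. The dates, prices, and the linear form of the trend are all assumptions for illustration, not a prescribed model:

```python
# Minimal sketch of the indexed-prices method: regress sale price on sale
# date, then use the time coefficient as the market-conditions adjustment.
# Dates and prices are hypothetical, invented for illustration only.

from datetime import date

# (sale_date, sale_price) for similar, competitive market sales.
sales = [
    (date(2024, 7, 15), 395_000),
    (date(2024, 8, 20), 401_000),
    (date(2024, 9, 10), 399_000),
    (date(2024, 10, 5), 408_000),
    (date(2024, 11, 18), 412_000),
    (date(2024, 12, 22), 417_000),
]

origin = min(d for d, _ in sales)
x = [(d - origin).days for d, _ in sales]  # days since the earliest sale
y = [p for _, p in sales]

# Ordinary least squares slope, computed directly from the definitions.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
print(f"Trend: {slope:+.2f} dollars per day")

# Time-adjust a comparable from its contract date to the effective date,
# exactly as any other line-item adjustment would be applied.
comp_price = 399_000
comp_date = date(2024, 9, 10)
effective_date = date(2025, 1, 15)
adjustment = slope * (effective_date - comp_date).days
print(f"Market-conditions adjustment: {adjustment:+,.0f}")
print(f"Adjusted comparable price: {comp_price + adjustment:,.0f}")
```

In practice the trend is rarely this cleanly linear; the sketch only shows the mechanics of using the time coefficient as the adjustment factor.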
In coming blogs in this series, we will examine each of these “common” methods. We will identify the likely optimal method in both plentiful-data and sparse-data situations.
Finally, we will work towards a defined strategy for reviewers, underwriters, and risk analysts. Our goal is to create a clear review/audit policy, along with “checklist” review documentation (both for administrative reviewers, and professional appraisal reviewers).
Carolyn Mueller
January 14, 2025 @ 3:19 am
Thank you. I appreciate your thinking.