Drowning in Dirty Data

As AI continues to lead industry conversations, several RETCON sessions, such as "Technology & Business Intelligence: Using Technology & Data to Make Better Decisions," addressed the underlying challenge still plaguing multifamily companies: dirty data. To implement predictive and decision-making AI solutions, data must be as clean as possible, yet operators are still struggling on multiple fronts.

To bring more clarity to the discussion, it's important to understand what "bad data" truly means, as the term covers a variety of widely differing issues. So rather than continuing to lean on such a broad umbrella term, here are the distinct types of dirty data described by the following industry leaders:

Chris Blackman
Managing Director, Information Technology
Rose Associates

Doug Pearce
Executive Vice President of IT
Waterton

Jonas Bordo
Co-Founder & CEO
Dwellsy

Yetta Tropper
Managing Director, Head of Multifamily Asset Management
PGIM Real Estate

  1. Incorrect or incomplete data
    • Missing data
    • Incorrectly entered data
    • Improperly formatted data
  2. Inconsistent data definitions
    • Formula discrepancies. Notably, three different speakers across multiple panels cited the challenge of multiple NOI calculations being in use across data sources (see the worked example after this list).
    • Inclusion differences. For example, when benchmarking relative to the market, some data sources may define a region differently than others.
  3. Limited, cumbersome, or sporadic access to data
    • Data access rights from supplier to client or between suppliers of a common client
    • Technical integration issues
    • Data complications when working with multiple ownership groups, each with its own preferred systems
  4. Visibility issues
    • Data access among different silos of the business
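
To make the formula-discrepancy point concrete, here is a minimal sketch in Python of how two common NOI conventions can diverge for the same property. All figures and line items are hypothetical, and neither convention is attributed to any panelist's company or any specific data source:

```python
# Hypothetical annual figures for one property (illustrative only).
gross_rental_income = 1_200_000
other_income = 50_000           # parking, laundry, fees
vacancy_loss = 60_000
operating_expenses = 450_000    # taxes, insurance, utilities, payroll, R&M
capital_reserves = 30_000       # replacement reserves

effective_gross_income = gross_rental_income + other_income - vacancy_loss

# Source A: NOI excludes replacement reserves (one common convention).
noi_a = effective_gross_income - operating_expenses

# Source B: NOI treats replacement reserves as an operating expense.
noi_b = effective_gross_income - operating_expenses - capital_reserves

print(f"Source A NOI: ${noi_a:,}")  # $740,000
print(f"Source B NOI: ${noi_b:,}")  # $710,000
```

The same underlying figures produce a $30,000 gap, so benchmarking NOI from one source against NOI from another silently skews every downstream comparison.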


This is surely not an exhaustive list of the challenges the industry faces in pursuing clean data, but it is clear that bad data arises on multiple fronts. Panelists addressed some of these roadblocks. For example, to combat formula discrepancies, Doug Pearce indicated that every piece of data included in a report or dashboard has a documented definition attached to it, so all stakeholders can speak the same language on that front.
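
As one way to picture that practice (this is a sketch of the general idea, not Waterton's actual system), a data dictionary can be as simple as a documented definition attached to every metric a dashboard reports. All field names and definitions below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    formula: str          # the human-readable documented definition
    source_system: str
    notes: str = ""

# Hypothetical dictionary: every dashboard metric carries its definition.
DATA_DICTIONARY = {
    "noi": MetricDefinition(
        name="Net Operating Income",
        formula="effective_gross_income - operating_expenses "
                "(replacement reserves excluded)",
        source_system="property_accounting",
        notes="Excludes capital reserves; see finance policy doc.",
    ),
    "occupancy": MetricDefinition(
        name="Physical Occupancy",
        formula="occupied_units / total_units",
        source_system="pms",
    ),
}

def describe(metric_key: str) -> str:
    """Return the documented definition for a dashboard metric."""
    m = DATA_DICTIONARY[metric_key]
    return f"{m.name}: {m.formula} (source: {m.source_system})"

print(describe("noi"))
```

Keeping the definition next to the metric, rather than in a separate wiki, makes it much harder for a report to ship a number whose formula nobody can state.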

Arunabh Dastidar discussed the potential of using AI to root out bad data by assessing common formatting issues and flagging potentially incorrect entries, such as when outliers are present.
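
Dastidar didn't describe a specific method, but the simplest version of an outlier alert doesn't require AI at all. This hypothetical sketch flags rent-roll entries that fall outside the standard interquartile-range fences:

```python
import statistics

def flag_outliers(values: list[float], k: float = 1.5) -> list[float]:
    """Flag values outside the interquartile-range fences (Tukey's rule)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical monthly rents from a rent roll; 18500 is likely a typo
# (an annual figure or a misplaced digit) rather than a real rent.
rents = [1450, 1500, 1525, 1480, 1510, 18500, 1495, 1470]
suspect = flag_outliers(rents)
print(f"Review these entries before they reach a dashboard: {suspect}")
```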

Chris Blackman encouraged companies to expand the scope of who sees data across an organization. For example, if a leasing consultant is trying to push a rent increase while the resident has experienced multiple maintenance failures, that knowledge could inform, or completely alter, the renewal strategy. In that case, the data wasn't bad so much as invisible to the right stakeholders.

At the end of the session, the panel stressed the importance of creating a data-driven culture that is supported by top leadership, as well as making data-related functions as easy as possible to maintain. As Jonas Bordo so eloquently shared, "One of the most powerful forces in the human mind is laziness."

What other data-related challenges do you see in your organization?