Determining Data Value: A quick overview

For any given data point in a data set, several factors determine its worth. The primary factor is whether the data point is relevant to your market.

If you are selling wine, for example, the first question is whether the person behind the data point is a wine enthusiast. The strength of that signal depends on its source. A member of a mail-order wine club is a strong signal, as is a long-term paid subscriber to a magazine such as Wine Enthusiast. This is considered 'transactional' data and carries more weight than any other source. A self-reported acknowledgement, such as filling out a questionnaire for a free case of wine, is indicative but not as strong as an actual purchase. Someone who checked the wine box on a general sweepstakes questionnaire is weaker still. Lowest on the range is a guess: a profile of likely wine enthusiasts built by modeling known wine enthusiasts against general demographics.
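As a rough illustration, that hierarchy can be expressed as a simple weighting scheme. The tier names and weights below are hypothetical, not Dunn Data's actual scoring model; they only show how source strength might be encoded.

```python
# Hypothetical source-strength weights, from strongest (transactional)
# to weakest (modeled). The exact values are illustrative only.
SOURCE_WEIGHTS = {
    "transactional": 1.0,   # wine-club member, paid magazine subscriber
    "self_reported": 0.6,   # questionnaire filled out for a free case of wine
    "checkbox": 0.3,        # wine box checked on a general sweepstakes form
    "modeled": 0.1,         # demographic look-alike model
}

def relevance_score(source_type: str) -> float:
    """Return a relevance weight for a data point based on how it was sourced."""
    return SOURCE_WEIGHTS.get(source_type, 0.0)
```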

The next factor is recency, which acts as a sliding scale of validity. Recency distinguishes an active interest in wine from a possibly dormant one: the more recently the purchase was made, the more likely the interest is still there. As a rule of thumb, data from recent months is best, the last 12 months is good, within the last two years is still valid, and data up to four years old can be used as a broad marketing indicator; after four years the data should not be used.
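That rule of thumb can be sketched as a simple lookup. The tier labels and cutoffs follow the paragraph above; the three-month boundary for "recent months" and the function name are assumptions made for illustration.

```python
def recency_tier(months_since_purchase: int) -> str:
    """Map the age of a purchase to a validity tier, per the rule of thumb above."""
    if months_since_purchase <= 3:
        return "best"             # recent months (assumed cutoff)
    if months_since_purchase <= 12:
        return "good"             # within the last year
    if months_since_purchase <= 24:
        return "still valid"      # within the last two years
    if months_since_purchase <= 48:
        return "broad indicator"  # usable only as a broad marketing signal
    return "do not use"           # older than four years
```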

Next in line is demographics. Demographics won't find your enthusiasts by themselves, but they can be applied very effectively after the fact: age, income, ethnicity, gender, and geographic area can clean up the source interest and intensify it, narrowing down your best prospects or customers even further.
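For example, once relevance and recency have been scored, demographic attributes act as filters over the resulting list rather than as the source of interest itself. The record fields and thresholds here are hypothetical.

```python
# Hypothetical prospect records produced by the relevance and recency steps.
prospects = [
    {"name": "A", "relevance": 1.0, "recency": "best", "age": 42, "income": 95_000},
    {"name": "B", "relevance": 0.3, "recency": "do not use", "age": 29, "income": 40_000},
]

# Demographic filters refine, rather than replace, the interest signal.
target = [
    p for p in prospects
    if p["recency"] != "do not use"
    and p["relevance"] >= 0.6
    and p["income"] >= 60_000
]
```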

Then come accuracy and connectivity. You have identified a recent wine enthusiast, but is she still at that address? Does she have the same email address, smartphone number, and device IDs? Accuracy and connectivity are obtained by combining multiple data sources that are programmed to confirm and update one another, creating a 'most recent' set of core records that is then cleaned by various software and produced at the conclusion of each update cycle.
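A minimal sketch of that confirm-and-update idea, assuming each source record carries a timestamp and the most recently observed non-empty value wins. The field names and merge rule are illustrative, not a description of any particular vendor's matching logic.

```python
from datetime import date

def merge_records(records: list[dict]) -> dict:
    """Combine records for the same person from multiple sources,
    keeping the most recently observed non-empty value for each field."""
    merged: dict = {}
    seen: dict = {}  # field -> observation date of the value currently kept
    for rec in records:
        observed = rec["observed"]  # when this source last confirmed the record
        for field, value in rec.items():
            if field == "observed" or not value:
                continue
            if field not in merged or observed > seen[field]:
                merged[field] = value
                seen[field] = observed
    return merged

# Two sources for the same person; the newer email and address win.
sources = [
    {"observed": date(2022, 3, 1), "email": "old@example.com", "address": "12 Vine St"},
    {"observed": date(2024, 6, 1), "email": "new@example.com", "address": "34 Cellar Ave"},
]
print(merge_records(sources))
```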

Of course, all of these factors are interconnected and affect one another. At Dunn Data we have spent decades programming and refining the application of all these factors into a powerful superset of data, using measurable performance for our clients as the 'proof' of our formulas. We are constantly updating and refining our algorithms and matching logic based on both current and new types of data sources. Because ours is a user-friendly superset with a basic menu indexed by level of interest, our carefully built data products, backed by 40 years of programming and experience, can easily be put to work on your data application needs.