Product Insights: Understanding Validity

Joshua Spears

Chief Product Officer, Co-Founder | Passionate about product design and user experience

At Traitify, we believe in the importance of providing our customers with science-based, validated assessments. While that sounds great, what do we really mean when we use the term “validated”?

 

What is validity?

“Convergent” or “construct” validity scientifically checks whether we’re really measuring what we’ve set out to measure. In the case of pre-hire assessments, are we really measuring personality, or instead identifying something else about our test-takers -- their reading ability, English fluency, memory, or ability to stick with a task? Are we left with actual personality data, or just a dataset of people who could finish a lengthy questionnaire?

These are important questions upon which we base everything we do. If we get this wrong, we’re making decisions thinking we’ve accessed something we may not have at all. So how can we ensure that we’re getting this right? If we say we’re measuring the Big Five dimensions of personality, where can we find assurance that we’re really doing what we set out to do?

 

A closer look: convergent validity

Here’s the basic question we ask when we measure convergent validity. If our assessment concludes that a job candidate is high in Extraversion, for example, do other measures of Extraversion agree? If Traitify is routinely saying one thing, but other measures say something else, one would have to wonder if Traitify’s conclusions were helpful at all.

So in the process of developing our Big Five assessment, we gave the same individuals not only our assessment but also two other established measures of the Big Five. We then looked for correlations between our conclusions and those of both other measures. Had we fallen short, we would have kept refining our measure until we achieved industry-standard correlation levels. Our specific findings are reported in our Big Five Manual, an important resource for all our customers. Since our Big Five assessment has well-established convergent validity, you can rest assured that our results are consistent with what other measures would obtain -- and we get there via a shorter, more engaging, more accessible path.


Taking a closer look at the validity of Traitify's Big Five assessment
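
To make that concrete, here is a minimal sketch, in Python, of what a convergent-validity check can look like. The scores and column names below are hypothetical stand-ins; our actual analysis and figures are the ones reported in the Big Five Manual.

    # Convergent-validity sketch: correlate our Extraversion scores with
    # scores from two established Big Five measures completed by the same
    # people. All data and column names here are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr

    scores = pd.DataFrame({
        "ours":      [62, 48, 75, 55, 81, 39, 70, 44],
        "measure_a": [58, 51, 72, 60, 78, 42, 65, 47],
        "measure_b": [65, 45, 70, 52, 84, 36, 73, 50],
    })

    for other in ("measure_a", "measure_b"):
        r, p = pearsonr(scores["ours"], scores[other])
        print(f"Extraversion, ours vs. {other}: r = {r:.2f} (p = {p:.3f})")

    # Strong positive correlations with both established measures are the
    # kind of evidence of convergent validity described above.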

 

But do users believe us?

Another important measure is whether our users believe the results. Have you ever taken a quiz or assessment that produced a result you simply felt was wrong? Perhaps you have friends on social media who post results of short web-based quizzes for fun -- one that guesses their age as 21 when they’re in fact 46, or one that says they’re a strict, authoritarian parent when they’re known as the most permissive Mom or Dad around. Those are good for a few laughs, and not much else. Such measures lack what we call “face validity” -- the sense that the way something is measured, and the results it generates, are in line with reality.

Our assessment output includes a simple question: “Do you feel like your personality results match you?” Respondents answer with the same “Me” or “Not me” choice they’re given for each image in the assessment. We’re pleased that 98% of our respondents select “Me” for this question. So not only do our results agree with other measures of the Big Five, they also align well with people’s own understanding of themselves -- hence, we clear that “face validity” hurdle.
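
The face-validity figure above is simply the share of respondents who answer “Me” to that question. A toy sketch, with made-up responses, looks like this:

    # Face-validity sketch: what share of respondents feel their results
    # match them? The responses below are made up for illustration.
    responses = ["Me"] * 49 + ["Not me"]   # hypothetical: 49 of 50 answer "Me"

    match_rate = responses.count("Me") / len(responses)
    print(f"Respondents answering 'Me': {match_rate:.0%}")   # -> 98%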

 

Final big question: Is this information useful?

There’s one more key metric we need to use to gauge the value of our assessment. Simply put, can we put it to work? Do we gain any insights that can be used to understand how people will respond to a workplace? If we know one piece of information about an individual (in this case, their personality profile), can we use that to predict a second piece of information (for example, their likelihood of staying on the job for a full year)?

Once again, consider an assessment that lacks this power. An assessment that measures a person’s acuity in differentiating between “sweet,” “salty,” and “bitter” foods might be a well-designed gauge of one’s sense of taste. But if we tried to use this “taste score” to predict a job candidate’s success as a merchandising clerk, a store manager, or a nursing assistant, the results would be no more predictive than random guesses. So we must be certain that what we’re measuring is relevant to the outcomes we care about. When a measure can be used to predict relevant behaviors, we say it has “predictive validity.”

Our Big Five assessment has achieved this. We have given a diverse slate of clients the ability to identify best-fit candidates who go on to succeed in their roles. The metrics we use to gauge “success” vary from customer to customer. For example, a Traitify client in the healthcare industry recently found that lower scores in most of the Big Five personality dimensions were predictors of longer tenure: candidates with lower scores in Conscientiousness, Extraversion, Agreeableness, and Emotional Stability were more likely to be on the job 12 months later. That’s just one example, portrayed in the chart below.

Chart: A Traitify client in the healthcare industry found that lower scores in most of the Big Five personality dimensions were predictors of longer tenure.
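
To illustrate how a predictive-validity check can be run, here is a small Python sketch that fits a simple model relating Big Five scores to 12-month retention. The dataset, column names, and numbers are hypothetical stand-ins, not the client results above.

    # Predictive-validity sketch: do personality scores carry information
    # about a later outcome (staying on the job for 12 months)?
    # All data and column names are hypothetical, for illustration only.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    hires = pd.DataFrame({
        "openness":            [55, 62, 48, 70, 41, 66, 52, 59, 73, 45],
        "conscientiousness":   [60, 45, 52, 38, 68, 44, 57, 49, 35, 63],
        "extraversion":        [50, 42, 61, 39, 58, 47, 64, 41, 36, 55],
        "agreeableness":       [58, 49, 63, 42, 66, 51, 60, 45, 40, 62],
        "emotional_stability": [54, 47, 59, 40, 65, 48, 61, 43, 38, 57],
        "retained_12_months":  [0, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    })

    X = hires.drop(columns="retained_12_months")
    y = hires["retained_12_months"]

    model = LogisticRegression(max_iter=1000).fit(X, y)
    for trait, coef in zip(X.columns, model.coef_[0]):
        # A negative coefficient means higher scores on that dimension were
        # associated with leaving before 12 months in this toy dataset.
        print(f"{trait}: {coef:+.3f}")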

 

A validated assessment brings lasting value

In short, we can’t cut corners on this. If our assessment isn’t validated, it’s just an opportunity to have a little fun or reflect on ourselves. When it comes to gleaning actionable insights for hiring, we’ve got to do much better -- and we do. When you administer the Traitify Big Five assessment to your job candidates or employees, you can be sure you’re giving them a measure that does what it’s meant to do. Yes, it’s fun, engaging, and easy -- but it’s also a robust, validated measure with solid science behind it.

 

Interested in our Big Five Manual? Contact us.

 
