AI’s role in compensation planning: A double-edged sword

Compensation planning software and AI

Compensation planning software, or “compensation tech,” has been an exploding category over the past few years. When my co-founder and I first created Pequity in 2019, we had no idea of the explosion of technology that would follow. Even then, one of the first questions we got was how technology would make compensation easier – and AI became a topic.

Now, with the AI storm raging all around us, I hear questions similar to those early days: can AI negotiate an offer? Could AI predict salary or equity trends and bake them into an employer’s model for promotions? Is there a way to identify employees at risk of leaving and auto-send them bonuses? Since this keeps coming up, I want to share my stance and how I’m viewing the possibilities of AI in compensation.

Is it possible for AI to assist in compensation planning?

Compensation at first glance looks like a mathematical problem. We have numbers, multipliers, divisors, and sums. How could it not be a math equation?

However, when you really look at pay decisions, you find they reflect an economic relationship, where outcomes are driven by supply and demand, scarcity, and the subjective theory of value. In other words, most compensation decisions are driven by the quantity of available talent, the number of companies hiring for that talent, and the context at the time a company needs that talent.

For example, right when I was leaving Google, cloud services were an emerging technology the leadership wanted to invest in. So, to attract more cloud engineers, the compensation leadership team added a 30% premium to the salary and equity market data for regular software engineers. This is a great example of the subjective theory of value – an object’s value is not intrinsic but changes according to its context. In this case, the context was that Google needed more cloud engineers, so the perceived value of those engineers changed – increasing their value and compensation.
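As a back-of-the-envelope sketch, applying a role premium on top of market data is simple arithmetic. The benchmark numbers below are invented for illustration – they are not Google’s actual figures:

```python
# Hypothetical market benchmarks for a regular software engineer
# (illustrative numbers only, not real market data)
market = {"base_salary": 180_000, "annual_equity": 120_000}

CLOUD_PREMIUM = 0.30  # 30% premium for an in-demand specialty

# Apply the premium uniformly to each component of the package
cloud_offer = {
    component: round(amount * (1 + CLOUD_PREMIUM))
    for component, amount in market.items()
}

print(cloud_offer)  # {'base_salary': 234000, 'annual_equity': 156000}
```

The mechanics are trivial; the hard part – and the part AI would have to get right – is deciding that the premium should exist at all, and how large it should be.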

Now if we were to add AI to this mix, do we believe that it could have made the same recommendation to increase compensation by 30% for cloud engineers?

Perhaps it could have looked at the data and made a better decision. Perhaps AI could create a formula to weigh this type of subjectivity and design a compensation plan that reflects the value these cloud engineers would bring to the company.

In my opinion, AI could potentially create a model to aid in this decision, but in the above example it would have to know how to weigh the vision Google had for its talent and business. If AI had looked only at historical models or compensation market data, it would have seen Google weighting search, security, and possibly even machine learning. It’s possible it would have known to focus on cloud engineers given the right directives, but in order to do so it would have to include the biased weighting of the leaders.

AI compensation biases

This brings me to the main concerns I have with AI biases.

See, the above example is a mass decision – every cloud engineer’s role would be changed as part of a company strategic initiative. What about individual pay decisions, though?

When a company extends an offer to a candidate, there are many factors that they will consider. The list could include:

– the candidate’s years of experience
– how well they performed in the interview
– how much others on the team are paid
– what market data says to pay for the role
– what the candidate is asking for
– the budget the team set aside for the role
– expectations of the role’s responsibilities

All of this would go into the decision-making process.
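To make the decision-making concrete, here is one way those factors could be blended into a single recommendation. This is a deliberately oversimplified sketch – the function name, the weights, and every input value are invented for illustration, and real compensation teams weigh these factors with far more nuance:

```python
def recommend_offer(market_rate, team_median, candidate_ask, budget_cap,
                    interview_score, years_experience):
    """Blend common offer inputs into one salary recommendation.

    interview_score is 0.0-1.0; all weights are hypothetical.
    """
    # Anchor on market data, adjusted toward internal parity with the team
    anchor = 0.6 * market_rate + 0.4 * team_median

    # Nudge upward for a strong interview and experience, capped at +15%
    performance_bump = min(0.15, 0.10 * interview_score + 0.01 * years_experience)
    offer = anchor * (1 + performance_bump)

    # Stay near the candidate's ask where possible, but never exceed budget
    return round(min(max(offer, candidate_ask * 0.95), budget_cap))

# Example: strong candidate, ask slightly above market, tight budget
print(recommend_offer(180_000, 175_000, 190_000, 200_000, 0.8, 6))
```

Notice that even this toy version forces explicit choices – how much does internal parity count versus market data? How much can a great interview move the number? An AI model makes those same choices, just implicitly, from whatever patterns sit in its training data.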

However, when we feed data into an AI model, it looks for patterns and builds recommendations based on historical outcomes.

Historical outcomes of compensation

Do you know what the historical outcomes of compensation have been? As a teaser: the wage gap still exists for women and people of color.

This is where my main concern with compensation AI comes in – if historical data is fed to an AI model, all the baked-in biases will be learned and amplified across the model. This means that AI in compensation, if not carefully designed to offset previous decisions, will have biased outcomes.
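The mechanism is easy to demonstrate. In the sketch below – with synthetic, exaggerated data invented purely for illustration – a trivial “model” trained on a biased pay history faithfully reproduces the gap in its recommendations:

```python
# Synthetic historical offers: (years_experience, group, salary).
# Group "B" was historically paid ~10% less at the same experience level.
history = [
    (2, "A", 110_000), (4, "A", 130_000), (6, "A", 150_000),
    (2, "B",  99_000), (4, "B", 117_000), (6, "B", 135_000),
]

def fit_group_means(rows):
    """'Train' the simplest possible model: average salary per (experience, group)."""
    buckets = {}
    for years, group, salary in rows:
        buckets.setdefault((years, group), []).append(salary)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

model = fit_group_means(history)

# Two equally qualified candidates get different recommendations,
# purely because the training data encoded a pay gap.
print(model[(4, "A")])  # 130000.0
print(model[(4, "B")])  # 117000.0
```

A production model would be far more sophisticated than a lookup of group means, but the failure mode is the same: unless the historical bias is explicitly measured and corrected for, the model treats it as signal.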

If you don’t believe this could happen, consider Amazon – one of the largest tech companies in the world, with a plethora of brilliant engineers and data scientists – which built an AI recruiting tool that learned to penalize women candidates. It did so because the tool was trained on Amazon’s resume history and learned that the company had historically favored men in previous decisions. This isn’t a one-off either – many of the FAANG/MAMAA companies have experimented with building similar compensation tools internally that never see the light of day because of the biases they found.

You may say “Well that could have easily been corrected for,” but the question that keeps me up is – could it?

When it comes to compensation planning, we may have the inputs I mentioned above, but there are actually innumerable inputs behind any candidate and job offer, such as:

– what talent the job description attracted
– who was referred into the role by peers
– what resume was selected from the applicants
– what school the applicants went to
– the school and education that the applicant could afford
– the internships and previous jobs that the applicant had access to
– the team’s perception of that applicant’s “cultural fit”
– the current culture of the company
– the job title you’re given
– the types of demographics and backgrounds that represent this role in the salary survey
– the way that compensation market data is aggregated

… honestly, I could keep going, but you get the gist.

In all the cases above, study after study has shown that the school you go to, the opportunities you have access to, the culture of your company, and the people at your company all carry incredible biases. This makes me worry that if we were to fully lean on AI to make compensation decisions, or to codify things like market data, we would in effect be baking biases into our pay decisions without even knowing it.

Now, the above is an extreme case against AI – I do want to be clear that I believe AI is a useful tool that companies should experiment with, and I do believe it will make its way into compensation planning decisions eventually.

However, I want to caution on the how and the when. Companies need to carefully consider not just what AI will reveal about their historical decisions, but also what it will perpetuate in their future ones.


As a compensation data nerd and a tech founder whose company happens to have a mission of “fair pay and opportunity for all,” here is my honest opinion on AI in compensation: we need to test and verify each step into this uncharted land. Most tools have two sides – as my grandparents once told me, “A shovel can plant a garden or dig a grave.” AI is no different.

In the market for cutting-edge compensation planning software? Schedule a demo with our solutions team.

Kaitlyn Knopp
CEO & Co-Founder, Pequity


Kaitlyn is a renowned compensation expert, with experience as an analyst and leader of compensation teams in the tech industry with companies including Google, Cruise, and Instacart. Her passion for equitable compensation and efficient systems led her to create and launch Pequity, built on the principles of fair pay and opportunity for all.


Related posts

How to create geographic pay differentials
Doug Hall joins Pequity’s leadership team