Using AI to Get Hired: Cheating or Important New Work Skill?
Embracing an "AI-enabled work ethic" is essential for hiring in the age of AI.
Sophia
A 23-year-old from a non-English-speaking household, Sophia stares blankly at her laptop. She is an excellent problem solver, but writing has always been a struggle. She has been making an extra effort to build her writing skills in preparation for landing her first job. She approaches her job search with mixed emotions, confident that she has what it takes to begin a successful career as a data analyst but anxious about her ability to create an effective cover letter and resume.
Sophia uses generative AI every day, so using it to help land a job seemed like a no-brainer. With the help of her favorite generative AI co-pilot, Sophia refined her cover letter and resume, ensuring they clearly communicated the capabilities that would allow her to excel on the job. She was careful not to let her co-pilot do all the work, using it only to compensate for her struggle to express herself in writing.
Sophia was elated to find what seemed like the perfect job for her, but her heart sank when she saw that this employer explicitly forbade the use of AI in the application process. Sophia was sure that the company’s employees would be using generative AI on the job. Hoping the company would appreciate her transparency and AI skills, she added a note to her cover letter explaining that she had used generative AI to strengthen her application, framing it as an example of her resourcefulness and her ethical use of technology to solve problems.
Unbeknownst to her, the company’s recruiter really liked Sophia’s resume and her drive, but wished Sophia had not tipped her hand about using AI. Because Sophia had violated the company’s policy on AI use, the recruiter could not move her forward, a lose-lose outcome for everyone involved.
Mark
Mark works retail and studies coding at night, hoping to land a job as an entry-level developer. His coding skills are shaky but improving with the help of an AI coding co-pilot. Determined to land a better-paying job with more opportunities, Mark applied to as many entry-level coding jobs as he could find. After hundreds of applications, he finally landed an interview!
Mark noticed that the job posting clearly stated a policy against the use of generative AI in the application process. A bit of online research revealed that developers at this company use AI co-pilots on the job. Determined to get hired at all costs, Mark used generative AI to help him answer technical questions during the recorded video interview.
Mark was thrilled when he received a call from a recruiter who scheduled him for a 1:1 live interview with a hiring manager. Encouraged by his success in the first interview, but conflicted about using AI again, Mark decided to leave well enough alone. During the live interview, Mark stumbled through the technical questions as he quickly looked up answers on a second computer. The hiring manager, under pressure to fill the role, overlooked some of Mark’s shortcomings and decided to move forward with the hire.
At first the job was tough for Mark. His coding skills were not where they needed to be, leading to slow work peppered with errors. But with the help of the company’s AI co-pilot and extra hours practicing at home, Mark began to improve. Looking back, Mark was glad he had decided to violate the company’s policy. His decision ended in a win-win, because he was able to use his adaptability and AI skills to help the company AND further his own skills.
Who’s right?
Stories like Sophia’s and Mark’s are becoming increasingly common. A recent Canva study of 5,000 job applicants found that almost 50% of them used AI to help build and improve their resumes.
And with TikTok videos teaching applicants how to use generative AI to pass job interviews, and sketchy tools such as Final Round AI, an interview co-pilot that provides “real-time guidance to ace every interview”, it seems clear that gaining an AI-based advantage is becoming a common strategy among job seekers.
But determining who is right and who is wrong in these stories is not as straightforward as it might seem. Rigid policies can push candidates into gray areas, and it’s important to consider intent, context, and the practical realities of AI use on the job.
Sure, both applicants violated stated policies, Mark in a way that seems fairly serious and Sophia in one that seems fairly innocent. But Sophia and her employer ended up on the short end of the stick, while Mark and his employer both came out ahead.
Hiring is hard
Hiring has always been a high-stakes game of frustration for applicants and employers alike. Despite their best efforts to get noticed, applicants like Mark and Sophia struggle to stand out from the crowd, continually uploading their hearts and souls into a black hole and getting nothing but silence in return.
According to a recent survey by CareerPlug, the applicant-to-interview ratio in 2023 was 2%: for every 100 applicants a job posting received, only 2 were invited to interview for the role. While unqualified candidates, mass-apply tools, and interview ghosting likely share part of the blame, this stat is still very telling.
The good news for employers is that within these thousands of applications there are good candidates waiting to be found. But the sheer volume means that signals from ideal applicants often fail to get through, leaving employers with less-than-ideal outcomes and a process that leaves much to chance.
But getting it right is full of rewards for both sides, putting everyone on the hunt for any advantage they can find. In today’s world these advantages are all technology-based. But as Mark and Sophia’s stories reflect, tech presents a real dilemma, offering strategies to ease the pain while simultaneously creating it.
Asymmetrical warfare
Generative AI is quickly becoming the front line of an escalating war. It is the first AI tool that is truly available to the masses, often at little or no cost. In the world of hiring, generative AI is allowing job seekers to engage in asymmetrical warfare. While candidates can make discretionary decisions about when and how to use AI, employers must carefully weigh their decisions on AI adoption, often opting to build walls in order to manage risk.
This has resulted in employers turning to all-or-nothing approaches that use restrictive policies and electronic countermeasures to stop all use of generative AI, good and bad, often at the expense of both parties.
In a real war, the two sides have opposing goals. But in this case both sides share the same goal of creating value through mutual support.
Further exacerbating the confusion for job seekers is the disconnect created when employers ban the use of generative AI in the job application process while supporting its use at work.
In its 2024 Work Trend Index report, Microsoft found that 75% of respondents already use generative AI at work. This usage is driving productivity: a 2023 Harvard study showed that management consultants using AI completed tasks 25% faster and with over 40% higher quality than a control group. The potential of AI is widely recognized at the right levels, too; in a recent Deloitte survey, 94% of business executives said they view AI as key to future success. Candidates see the benefits as well: according to research by assessment provider Arctic Shores, 72% of recent graduates use generative AI in their job search, and 70% expect to use it on the job.
Anti-AI policies for job applications, imposed by employers who promote AI use on the job, smack of a hypocrisy that has a real and lasting impact on applicants’ psyches, often leaving them feeling less conflicted about using it themselves.
Employers often fixate on the negative aspects of AI use, directing significant energy and resources toward preventing its misuse. But the reality is that resistance is futile when it comes to generative AI in the hiring process. Focusing on stopping these bad use cases is a zero-sum game and a waste of energy. Agile applicants with no boundaries will always find ways to use AI to their advantage, staying ahead of employers whose AI detection tools are often imprecise and quickly outdated.
But when everyone wants the same thing there is potential for lasting peace.
AI-Enabled Work Ethic is a peacemaker
The common ground that will yield progress is the recognition that AI is not just a tool, and that the ability to use it constructively and ethically is a critical job skill for the modern workplace, one that can have a lasting impact.
By embracing the constructive use of AI as vital to the future, employers and applicants alike can focus on cultivating what I call "AI-enabled work ethic", which I define as:
“The degree to which an individual embraces AI and uses it effectively, responsibly, and ethically in their work processes while continually seeking to learn about and adapt to changes in AI technologies. Those who possess this skill use AI to enhance productivity, maintain high-quality standards, and work collaboratively with AI as a co-pilot. They make informed decisions that demonstrate an understanding of AI's capabilities and limitations, taking accountability for the quality, accuracy, and ethical use of AI-generated outcomes."
Reframing hiring around AI-Enabled Work Ethic
As Sophia and Mark’s stories tell us, different frames of reference create different interpretations of what is right and what is wrong and often deliver mixed results.
For instance, while Sophia transparently used AI to improve her application without compromising her integrity, she was still penalized due to rigid policies that fail to recognize the value of responsible AI use.
While it may seem like Mark initially skirted ethical boundaries, his eventual embrace of AI as a tool for continuous learning and improvement on the job reflects the adaptive and responsible use that AI-enabled work ethic embodies.
When it comes to hiring, we can separate the good from the bad by reframing the hiring process around this important new work skill: supporting the use of AI as a positive and mutually beneficial force, identifying those who use it constructively, and filtering out those who represent the extremes.
Employers must rethink their hiring practices to support and assess this critical skill effectively. This requires a holistic approach rooted in a balanced perspective on the mutual benefits of AI, one that includes taking the following actions:
Clearly communicate AI policies:
Be transparent about your company’s stance on AI use, both during the hiring process and in the workplace, setting clear expectations on acceptable and unacceptable uses of AI.
Provide the opportunity to learn:
Offer candidates the opportunity to learn how to use generative AI constructively. Tutorials on how to ethically incorporate AI into the job search and application process, along with the chance to practice applying those techniques, can help level the playing field for applicants like Sophia and encourage responsibility from candidates like Mark.
Introduce face-to-face elements:
Where possible, incorporate live and in-person interactions into the hiring process. Human-to-human touchpoints offer an opportunity to identify inconsistencies in applicants’ online responses and to build important relationships with them.
Use task-based assessments:
AI can easily pass traditional hiring assessments. Task-based assessments, such as simulations and problem-solving exercises, are harder to fake and provide engaging, job-relevant opportunities for candidates to showcase how they will apply their skills on the job.
The future is here: using LLMs to assess AI-Enabled Work Ethic
While technology created the problem, it can also help solve it.
Having designed and implemented hiring assessments for over two decades, I have repeatedly seen the value of high-fidelity, task-based simulations. But these simulations are expensive to develop and contextualize to a specific company’s unique culture and environment. LLMs offer a game-changing opportunity to create a new generation of highly contextualized interactive assessments that represent the future of hiring.
And the good news is that we already have the technology needed to build an LLM-based simulation for evaluating AI-enabled work ethic.
With the LLM serving as a co-pilot, role-player, and scoring engine, the simulation would require applicants to:
Research information to generate work output and solve problems
Role-play with co-workers and stakeholders
Analyze information
Create written communication and documents
Through these tasks the assessment would evaluate key aspects of AI-enabled work ethic, including familiarity with AI tools, and the willingness and ability to learn and apply new information. The simulation would also assess ethical decision-making by presenting scenarios where candidates must responsibly manage AI-generated information, fact-check outputs, and integrate their own ideas into AI-generated content.
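To make this concrete, here is a minimal sketch of how a single turn of such a simulation might be wired up, with one LLM prompt playing a co-worker and another scoring the exchange against a rubric. The scenario, rubric dimensions, model name, and helper functions are illustrative assumptions on my part, not a reference design.

```python
# Hypothetical sketch of one turn in an LLM-driven hiring simulation.
# The model first plays a co-worker, then scores the candidate against a
# rubric for AI-enabled work ethic. Scenario, rubric, and model are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCENARIO = (
    "You are role-playing a product manager. The candidate has been handed an "
    "AI-generated summary of customer feedback that contains one fabricated "
    "statistic. Stay in character and probe how they would verify and use it."
)

RUBRIC = [
    "fact-checks AI output before relying on it",
    "integrates their own analysis with AI-generated content",
    "is transparent about where AI was used",
    "takes accountability for the final result",
]


def role_play(candidate_message: str) -> str:
    """Have the LLM respond in character to the candidate's latest message."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system", "content": SCENARIO},
            {"role": "user", "content": candidate_message},
        ],
    )
    return resp.choices[0].message.content


def score_transcript(transcript: str) -> dict:
    """Ask the LLM to rate the candidate (1-5) on each rubric dimension as JSON."""
    prompt = (
        "Score the candidate transcript on each dimension from 1 to 5 and return "
        'JSON shaped like {"scores": {dimension: score}, "rationale": "..."}.\n'
        f"Dimensions: {RUBRIC}\n\nTranscript:\n{transcript}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # constrain output to valid JSON
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    reply = role_play("Before sharing the summary, I'd trace each statistic back to the raw feedback.")
    print(reply)
    print(score_transcript("Candidate: I'd verify every figure against the source data first ..."))
```

In practice the personas and rubric would be tailored to each role and company, and the scores would inform human reviewers rather than serve as an automatic cutoff.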
Using Retrieval-Augmented Generation (RAG) and fine-tuning, the LLM could be grounded in and trained on company-specific information, minimizing hallucinations and making it easy to adapt the simulation to different job types and work environments.
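As a rough illustration of the retrieval half of that idea, the sketch below grounds the simulation’s answers in a handful of company documents: each document is embedded once, the most relevant ones are retrieved at question time, and the model is asked to answer only from that context. The embedding model, chat model, and documents are assumptions; fine-tuning would be a separate offline step not shown here.

```python
# Hypothetical RAG sketch: ground the simulation in company-specific documents
# so the LLM answers from retrieved context rather than from memory alone.
# Model names and documents are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Our data team reviews all AI-generated analysis before it reaches customers.",
    "Entry-level developers pair an AI co-pilot with mandatory human code review.",
    "Candidates may use approved AI tools during task-based assessments.",
]


def embed(texts: list[str]) -> np.ndarray:
    """Embed a list of texts into vectors (assumed embedding model)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


DOC_VECTORS = embed(DOCS)  # embed the small knowledge base once, up front


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question by cosine similarity."""
    q = embed([question])[0]
    sims = DOC_VECTORS @ q / (np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]


def grounded_answer(question: str) -> str:
    """Answer using only the retrieved company context to limit hallucinations."""
    context = "\n".join(retrieve(question))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this company context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content


print(grounded_answer("How does this company expect new hires to use AI co-pilots?"))
```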
But wait, there’s more.
Assessing applicants with LLM-based simulations has many additional benefits, like the ability to measure other skills that AI cannot (yet) duplicate, such as social interaction, empathy, creativity, critical thinking, and curiosity.
AI-based simulations can also level the playing field for marginalized candidates like Sophia and legitimize strategies such as the one Mark used to get through his technical interview.
The opportunity for applicants to use AI to demonstrate job-related skills also sends a strong signal that the company recognizes and supports the value AI provides in the workplace, cultivating feelings of psychological safety among applicants and strengthening its employment brand.
Co-creating the future
The rise of AI presents significant challenges, but it also offers immense opportunities. As we navigate these dualities, success requires careful thought, creativity, and a willingness to rethink traditional approaches. The stories of Sophia and Mark illustrate the need for change. Sophia’s honesty, and Mark’s road to success through tools prohibited in the hiring process but used on the job, underscore the need for a new framework. Recognizing AI-enabled work ethic as a skill on which the future will be built gives us the freedom to put hypocrisy in the rear-view mirror and use advanced technologies to evolve the hiring process for everyone’s benefit.