The interviewer noticed it midway through the coding exercise. As the candidate typed, gray suggestions appeared—GitHub Copilot completing functions before the candidate had finished thinking through them. The candidate accepted some, modified others, dismissed a few with a keystroke.

No one had told her AI tools weren't allowed. No one had told her they were.

"Is that okay?" she asked, noticing the interviewer's expression.

The interviewer honestly didn't know. The company had never discussed it. Their interview process was designed five years ago, when AI coding assistants were curiosities rather than daily tools. Now every engineer on their team used Copilot. But somehow the interview process assumed a pre-AI world.

This scene plays out constantly. AI coding assistants have fundamentally changed how engineers write code daily—over 80% of professional developers now use them regularly[^2]. Yet most technical interview processes were designed before this shift and haven't adapted[^1].

After researching over 100 company policies and advising dozens of hiring teams on this question at SmithSpektrum, I've seen the full spectrum of responses—from strict prohibition to enthusiastic encouragement—and everything in between.

The Current Policy Landscape

Companies have landed in surprisingly different places on this question, and the landscape is shifting rapidly.

About 40% of companies currently prohibit AI in technical interviews entirely. Their reasoning: they want to see raw problem-solving ability, unaugmented. They argue that fundamental skills still matter, that access to AI tools varies and creates an uneven playing field, and that they need to see the candidate's own thinking rather than AI-assisted output.

Another 25% allow AI with disclosure, arguing that this mirrors how their teams actually work. If engineers use these tools daily, why pretend they don't exist during evaluation? This camp holds that the ability to leverage AI effectively is itself a skill worth assessing.

About 15% explicitly encourage AI use. These tend to be AI-native companies or teams building AI products who view AI collaboration as a core competency. They want to see not just whether candidates can solve problems, but how they collaborate with AI to do so.

The remaining 20% have no explicit policy. They haven't addressed the question, which means every interview becomes an awkward improvisation. This gap is the most common source of scenes like the one that opened this article.

What's notable is how quickly these numbers are shifting. Policies that felt reasonable in 2024 are being reconsidered as AI tools become ubiquitous. The prohibition approach in particular is under pressure—it's increasingly hard to argue that you're testing "real skills" by banning tools that represent how real work gets done.

The Arguments, Honestly Evaluated

Both sides of this debate have legitimate points, though the weight of the arguments is shifting.

The case for prohibition has some merit. Assessing fundamental skills still matters—you want to know whether someone understands algorithms and data structures, not just whether they can prompt Copilot effectively. Creating a level playing field is valid too; access to AI tools still varies, and some candidates have more experience using them than others. And there's value in seeing a candidate's unassisted thinking, their raw problem-solving approach before AI helps them.

| Interview Type | AI Policy Recommendation | Reasoning | Adjustment Needed |
| --- | --- | --- | --- |
| Live coding | Prohibit | Tests real-time thinking | Harder problems acceptable |
| Take-home | Allow with disclosure | Matches real work | Evaluate process, not just output |
| System design | Allow for lookup | Focus on judgment | Ask "why" more than "what" |
| Debugging | Prohibit | Tests diagnostic ability | Use unfamiliar codebases |
| Code review | Allow | Augments, doesn't replace | Assess critical thinking |

But these arguments have weaknesses. "Testing raw ability" sounds good until you realize that nobody works without tools. We don't ask candidates to hand-write code on paper anymore. We don't prohibit IDE autocomplete or documentation. The line between "legitimate tool" and "unfair assistance" is more arbitrary than it appears.

The case for allowing AI has grown stronger as these tools have become standard. If everyone on your team uses AI daily, testing candidates without it is like evaluating drivers without letting them use the steering wheel. You're assessing a scenario that doesn't exist in practice.

More importantly, using AI effectively is genuinely a skill. Knowing when to use it, how to prompt it, when to reject its suggestions, how to modify its output—these require judgment that varies dramatically between engineers. I've watched two candidates use the same AI tool on the same problem with completely different results, because one knew how to collaborate with AI and the other simply accepted whatever it generated.

The reality check: within a few years, prohibition will likely become impractical. The question isn't whether to allow AI, but how to design interviews that assess the right things when AI is present.

Designing AI-Aware Interviews

If you're allowing AI—or even if you're simply preparing for a future where you'll need to—your interview design must evolve. The goal shifts from "can they write this code?" to "can they solve this problem effectively?"

Traditional coding interviews tested implementation. Can the candidate remember how to implement a binary search? Can they write a breadth-first search without syntax errors? These questions have clear answers that AI can provide instantly. They're becoming obsolete as differentiators.

AI-aware interviews must test judgment instead. Can the candidate recognize when the AI's suggestion is subtly wrong? Can they modify AI output to handle edge cases the AI missed? Can they decompose an ambiguous problem well enough to prompt AI effectively?

The questions that work in an AI-allowed world share common characteristics: ambiguous requirements that force the candidate to clarify before prompting; novel problem structures that don't appear in AI training data; multi-step reasoning where AI assistance is partial at best; trade-off discussions that require judgment rather than code; debugging complex state where AI struggles with context.

The questions that don't work are exactly the ones that AI handles well: standard algorithms, common interview problems, syntax recall, boilerplate generation, well-known patterns. If you can find the answer by asking ChatGPT, it's not a good interview question anymore.

System design interviews, interestingly, are less affected. They've always been about judgment, communication, and thinking through trade-offs. AI can't do a system design interview for you because the value is in the discussion, not the artifact.

Evaluating AI-Augmented Work

When AI is allowed, you need to evaluate differently—and the signal you're looking for changes.

What you're actually assessing now is problem decomposition. Before touching any tool, how does the candidate break down the problem? Do they clarify requirements? Do they think through edge cases? Do they plan their approach? Strong candidates do this whether or not AI is involved. Weak candidates jump straight to prompting without understanding what they're solving.

You're assessing prompt effectiveness. When the candidate does use AI, are their prompts specific and well-structured? Do they provide enough context? Do they iterate when initial results miss the mark? This is a genuine skill that predicts productivity.

You're assessing review judgment. Do they actually read what the AI generated? Do they catch errors? I've watched candidates accept obviously wrong AI suggestions—infinite loops, incorrect edge case handling, misunderstood requirements—without noticing. This is a red flag regardless of their other skills.
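To make "obviously wrong suggestion" concrete, here is a hypothetical example of the kind of Copilot-style completion a reviewing candidate should catch: a binary search whose lower bound can stop advancing, so the loop never terminates. The function names and the fix are mine, for illustration only.

```python
def find_first_buggy(nums, target):
    """A plausible AI suggestion: return the first index i with nums[i] >= target.

    Looks reasonable, but contains an infinite-loop bug a reviewer should catch.
    """
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid  # BUG: when hi == lo + 1, mid == lo, so lo never advances
        else:
            hi = mid
    return lo


def find_first_fixed(nums, target):
    """Corrected version: stepping past mid guarantees the range shrinks."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1  # mid is ruled out, so move strictly past it
        else:
            hi = mid
    return lo
```

Try `find_first_buggy([1, 3], 3)`: `mid` is 0, `nums[0] < 3`, so `lo` stays 0 and the loop spins forever. A candidate who accepts that suggestion without noticing is showing exactly the review-judgment gap described above.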

You're assessing modification ability. Can they improve on what AI gives them? AI output is often correct but suboptimal. Strong candidates treat AI suggestions as starting points to refine rather than finished code to accept.

The positive signals: effective prompting, quick rejection of bad suggestions, thoughtful modification, knowing when not to use AI, faster overall solution through intelligent tool use.

The red flags: accepting obviously wrong suggestions, inability to explain code they submitted, slower than expected even with AI (suggesting they don't know how to use it), complete dependence with no ability to work on simple tasks unaided.

The Disclosure Question

If you allow AI, should you require candidates to tell you when they're using it?

I recommend explicit transparency rather than required disclosure. Tell candidates upfront: "You may use AI coding assistants in this interview—we use them at work too. Please share your full screen including any AI tools. We're interested in how you collaborate with AI to solve problems effectively."

This framing accomplishes several things. It normalizes usage so candidates don't feel like they're cheating. It lets you observe their AI collaboration skills directly. It removes the awkwardness of candidates wondering whether to hide their tools.

The alternative—assuming AI is used without requiring disclosure—is simpler to administer but loses valuable signal. You can't evaluate AI collaboration skills if you can't see how they're collaborating.

Some companies try to require disclosure for each AI use, which adds friction without much benefit. The honor system is weak, and constant interruptions disrupt flow. Better to have full screen visibility than self-reporting.

Take-Home Assignments in the AI Era

Take-home projects require special consideration because you simply cannot prevent AI use. Candidates complete them at home, on their own computers, with whatever tools they choose. Assuming otherwise is naive.

This doesn't make take-homes useless—it means you have to design and evaluate them differently.

Design for AI by focusing on things AI does poorly. Require architectural decisions that need context AI doesn't have. Ask for written explanations of why, not just what. Create problems with unique business context that doesn't appear in AI training data. Include modification exercises that start with existing code rather than blank-slate implementation.

Evaluate differently, weighting factors based on what AI can and can't provide. Code correctness matters less—AI achieves this reliably. Code quality matters moderately—AI produces decent quality. Architecture decisions matter more—these require judgment. Written communication matters more—a convincing explanation of project-specific reasoning is hard to produce with AI alone. The follow-up discussion matters critically—this is where you verify the candidate actually owns the work.
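One way to make that re-weighting concrete is to encode it as an explicit scoring rubric. The dimensions follow the paragraph above, but the specific weights here are a hypothetical sketch, not a SmithSpektrum standard:

```python
# Hypothetical take-home rubric reflecting the shifted emphasis above.
# Each dimension is rated 0-5; weights are illustrative, not prescriptive.
WEIGHTS = {
    "code_correctness": 0.10,       # AI achieves this reliably -> weight less
    "code_quality": 0.15,           # AI produces decent quality -> moderate
    "architecture": 0.25,           # requires judgment -> weight more
    "written_communication": 0.20,  # explanations reveal ownership
    "followup_discussion": 0.30,    # where ownership is verified -> critical
}


def score(ratings: dict) -> float:
    """Weighted average of 0-5 ratings across the rubric dimensions."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)
```

Under these weights, a candidate who nails architecture and communication but submits slightly rough code outscores one with pristine AI-generated code and no demonstrated ownership, which is the point of the exercise.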

That follow-up conversation is essential. Ask the candidate to walk through their solution. Ask why they made specific decisions. Ask them to modify something on the spot. Ask them to explain a complex section. This verifies ownership and reveals whether they understand what they submitted.

Implementing Your Policy

Whatever you decide, have an explicit policy. The worst situation is ambiguity—candidates guessing, interviewers improvising, inconsistent experiences.

If you're prohibiting AI, communicate it clearly before the interview begins: "During live coding interviews, please do not use AI coding assistants like GitHub Copilot, ChatGPT, or Claude. We want to see your unassisted problem-solving approach. Standard IDE features like autocomplete are fine."

If you're allowing AI, be equally clear: "You may use AI coding assistants during this interview—we use them at work too. Please share your full screen including any AI tools. We're interested in how you collaborate with AI to solve problems effectively."

If you're making it conditional—different policies for different interview stages—explain the logic: "For the system design discussion, AI tools aren't relevant. For the coding portion, you may use AI assistants if you wish—just let us know so we can evaluate accordingly."

Then train your interviewers. If AI is allowed, they need to know what to assess differently. If AI is prohibited, they need to enforce the policy consistently. Either way, they should understand the rationale so they can explain it to candidates who ask.

Candidate Advice

If you're a candidate navigating this, my advice is straightforward: ask explicitly. Before any technical interview, ask about the AI policy. "Do you allow AI coding assistants during the interview?" This shows awareness and gets you clarity.

If the answer is unclear or there's no policy, suggest one: "I use Copilot in my daily work—would you like me to disable it for this interview, or shall I use my normal setup?" This puts the decision on them while signaling that you're thoughtful about it.

If AI is allowed, narrate your process. "I'm going to use Copilot here to generate the boilerplate, then review what it gives me." "I'm ignoring that suggestion because it doesn't handle the edge case we discussed." "I'm modifying this part because the AI doesn't have context about our constraint." This demonstrates judgment rather than dependence.

If AI is prohibited, don't be resentful. Prepare to work without AI as you would have five years ago. The skills still matter, even if they're not the whole picture. Practice without AI so you're not rusty.

Where This Is Heading

Looking forward, I expect AI prohibition in interviews to decline rapidly. Within two years, most companies will allow AI but design questions that test judgment, architecture, and collaboration skills rather than raw coding ability.

The more interesting question is what happens when AI can do system design interviews too. We're not there yet, but the gap is narrowing. At some point, we'll need to assess not just "can you use AI effectively" but "what do you uniquely bring that AI cannot replace?" That's a harder question, and the interview processes that help answer it haven't been invented yet.

For now, the pragmatic path is to allow AI while redesigning your questions to test what matters in an AI-augmented world. The companies that figure this out first will have an advantage in identifying the engineers who will be most productive in the AI-native workplace we're heading toward.


The candidate with Copilot visible on her screen? The company hastily created a policy that week—they decided to allow AI going forward. But more importantly, they redesigned their interview questions. Out went the standard algorithm problems that Copilot solves instantly. In came ambiguous system problems, debugging exercises, and discussions about trade-offs.

Their interviews now look very different. They're also producing much better signal about who will actually succeed on their team—where everyone uses AI daily.

The era of "write FizzBuzz on a whiteboard" was already ending. AI just accelerated its death. The era of "solve this ambiguous problem effectively with whatever tools you'd use at work" is beginning.


References

[^1]: SmithSpektrum interview policy research, 100+ companies surveyed, 2025–2026.
[^2]: Stack Overflow Developer Survey, "AI Tools in Development," 2025.
[^3]: GitHub, "Copilot Impact Report," 2025.
[^4]: Harvard Business Review, "Hiring in the Age of AI," 2025.


Designing AI-aware interview processes? Contact SmithSpektrum for interview design and policy guidance.


Author: Irvan Smith, Founder & Managing Director at SmithSpektrum