@indykish
Created January 30, 2026 13:33

The Complete Guide: How to Hire Top AI Talent in 2026

By: Arman Hezarkhani (@ArmanHezarkhani)
Published: January 29, 2026
Views: 14.8K
Source: https://x.com/ArmanHezarkhani/status/2016889009004294625


We've hired dozens of AI engineers at Tenex. Builders who ship production systems for Fortune 500 clients.

Most companies can't hire a single one.

They post on LinkedIn. They filter for Stanford degrees. They run LeetCode interviews. They require "5+ years of LLM experience"—for technology that's been mainstream for three.

Then they wonder why their pipeline is empty.

The best AI engineers aren't applying to jobs. They're building things.

This is the playbook we use at @tenex_labs. Where to find builders who aren't looking. How to evaluate AI skills—not algorithms. Five take-home projects you can steal tomorrow.

The Market Has Changed. Your Process Hasn't.

AI hiring grew 88% last year. AI engineer salaries jumped to $206,000 on average—a $50,000 increase from the year before. AI engineer positions are growing 300% faster than traditional software roles.

And yet companies still can't fill positions.

Three shifts have broken traditional hiring:

Credentials don't predict performance. A CS degree from MIT tells you someone can pass exams. It tells you nothing about whether they can ship an AI agent that works in production. The best engineers we've hired include a former music producer, a dropout who built a viral app, and someone whose entire resume was a GitHub profile.

Experience requirements are meaningless. Someone building with Claude Code for 6 months often outperforms someone doing traditional ML for a decade. The tools change so fast that adaptability beats tenure. Every time.

The builders aren't job hunting. They're founding companies, shipping side projects, or heads-down at startups. They won't respond to recruiter spam. You have to find them where they're already showing their work.

You're competing for people who don't need you. Act like it.

The Hidden Differentiator

Here's what most companies miss:

The single greatest predictor of AI engineering success in 2026 isn't framework knowledge. It's not prompt engineering. It's not even raw coding ability.

It's the ability to see the whole system—technical and business—simultaneously.

AI engineering sits at a unique intersection. The technology is probabilistic, not deterministic. Outputs vary. Costs scale unpredictably. And unlike traditional software, AI systems fail in ways that are hard to anticipate and harder to explain.

This means the best AI engineers aren't just technically excellent. They understand how technical decisions ripple into business outcomes. They think about the CFO's concerns (inference costs at scale), the legal team's concerns (liability when the model hallucinates), and the customer's concerns (why did it give me that answer?)—not just the engineering elegance.

Why does this matter for hiring?

Because it's the skill that separates demo-builders from production engineers.

Anyone can make a chatbot work in a Jupyter notebook. Making it work reliably at scale—understanding when to use AI and when not to, designing for graceful failure, building systems that humans can trust and override—requires a fundamentally different mindset.

LangChain's 2026 State of Agent Engineering report found that 32% of organizations cite quality as the top barrier to putting AI agents in production. Not cost. Not complexity. Quality. And quality comes from engineers who understand the entire system they're building—not just the model, but the data pipelines feeding it, the user workflows surrounding it, and the business processes depending on it.

When you're interviewing, you're looking for engineers who:

  • Ask "What problem are we actually solving?" before discussing architecture
  • Think about failure modes, edge cases, and human oversight from the start
  • Understand that AI is one tool in a system, not the system itself
  • Can explain tradeoffs to non-technical stakeholders without dumbing it down

This is systems thinking applied to AI. It's teachable, but slowly. Engineers either have the instinct or they don't. Your job is to identify who already thinks this way.

What Actually Predicts Success

Six characteristics. In order of importance.

1. They Build Things

This is the only signal that matters.

Not "I contributed to a project." Not "I was on a team that shipped." They personally built something you can look at, use, and evaluate.

GitHub repos. Side projects. Hackathon submissions. Apps in the App Store. Chrome extensions. Discord bots. Anything.

The best AI engineers can't not build. They learn about a new API and ship something that weekend. They see a problem and prototype a solution before anyone asks.

Look for engineers who've shipped end-to-end—not just models, but complete systems with data pipelines, deployment, and monitoring. In 2026, every serious AI project depends on clean data flowing in the right direction. The engineer who can build the whole pipeline is worth three who can only tune prompts.

This matters more than anything on their resume.

2. They're High Horsepower

High horsepower means they take a vague problem and make rapid progress without constant direction.

You say "improve our RAG pipeline." They come back with three approaches they've already prototyped, benchmarked, and have opinions about. They've thought about chunking strategies, embedding models, and reranking. They've already run evals.
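
The RAG workflow described above (chunking, embedding, reranking) can be sketched in a few lines. This is a toy sketch: token overlap stands in for an embedding model and the "rerank" is a plain sort, so only the structure carries over to a real pipeline with a vector store.

```python
# Toy sketch of the retrieval step in a RAG pipeline: chunk, score, rerank.
# Token overlap stands in for embeddings so the structure is visible end-to-end.

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def score(query: str, chunk_text: str) -> float:
    """Stand-in relevance score: fraction of query tokens present in the chunk."""
    q = set(query.lower().split())
    c = set(chunk_text.lower().split())
    return len(q & c) / len(q) if q else 0.0

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k highest-scoring chunks (the rerank is just a sort here)."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
```

A candidate with opinions about chunk size and overlap will recognize both knobs immediately; that is the conversation you want.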

Low-horsepower engineers need constant input. Every decision requires your involvement. That doesn't scale.

One high-horsepower engineer accomplishes what three average engineers would. Not more hours—less friction.

3. They're Genuinely Curious

AI moves faster than any technology in history. The tools from six months ago are already outdated.

Curious engineers thrive here. They read papers for fun. They try every new model the week it releases. They have opinions about techniques most people haven't heard of.

Today, that means they're already experimenting with agentic architectures. They know the difference between LangChain and LangGraph. They've built something with MCP. They have thoughts on when to use CrewAI versus building custom orchestration.
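
Under the abstractions of frameworks like LangGraph or CrewAI sits a loop roughly like the one below. This is a hedged sketch, not any framework's API: the model is stubbed as any callable that either requests a tool or returns a final answer, and the message format is illustrative.

```python
# Bare shape of the agent loop that orchestration frameworks wrap in richer
# abstractions. Control flow: call model, run requested tool, feed result back,
# repeat until the model answers directly or the step budget runs out.

from typing import Callable

def run_agent(model: Callable[[list[dict]], dict],
              tools: dict[str, Callable[[str], str]],
              user_msg: str,
              max_steps: int = 5) -> str:
    """Loop until the model returns a final answer or the budget is exhausted."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = model(messages)
        if reply.get("tool") is None:        # no tool requested: final answer
            return reply["content"]
        result = tools[reply["tool"]](reply["arg"])   # execute requested tool
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"
```

A candidate who can write this from scratch, and then explain why they would still reach for a framework in production, understands both layers.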

Incurious engineers learn what they need and stop. In a stable field, fine. In AI, fatal.

4. They Learn Fast and Teach Well

Learning fast is obvious. Teaching well is equally important.

When someone explains complex concepts clearly, they understand them deeply. Teaching forces you to organize knowledge, find gaps, make assumptions explicit.

Engineers who teach well make the whole team better. Look for people who've written technical blogs, given talks, or mentored others.

This matters more now than ever. AI systems are probabilistic and often opaque. The engineer who can explain why a system behaves a certain way—not just that it does—is invaluable for debugging, for stakeholder communication, and for building organizational AI capability.

5. They Have Taste

Taste means knowing what "good" looks like.

Engineers with taste build things that feel right. Their APIs are intuitive. Their code is readable. They optimize for the long term, not just the immediate task.

In AI specifically, taste means knowing when an LLM is the wrong solution entirely. It means building guardrails without being asked. It means understanding that "good enough" isn't good enough when AI systems can fail unpredictably.

Engineers without taste produce work that's janky. It works, but you spend more time fixing their decisions than building on them.

Taste is hard to teach.

6. They're Self-Directed

Self-directed means they own outcomes, not tasks.

They don't just write code. They think about the problem, propose solutions, execute, iterate. They treat projects like they're the CEO of that project.

At @tenex_labs, we pay engineers based on what they ship. That only works with self-directed people.

Where to Find Builders

The best AI engineers aren't on job boards. Go where they're already working.

Former founders

Someone who started a company—even if it failed—can build from nothing. Failed founders are gold: builder mentality without the distraction of running their own thing.

Founding engineers

First or second engineer at a startup. They've built from scratch and shipped without specs. The "Founding AI Engineer" has quietly become one of the most in-demand startup roles of 2025-2026. Find them at startups that succeeded (wealthy, potentially bored) or failed (available).

People with public projects

GitHub doesn't lie. Repos with stars mean they built something people wanted. Active commits mean they're shipping consistently.

Hackathon winners

They've proven they can ship something impressive in 48 hours. That skill transfers directly.

Open-source contributors

PRs merged into LangChain, LlamaIndex, or similar projects demonstrate real AI engineering skills. It's unpaid work they did because they care.

Technical content creators

People writing blogs or tutorials demonstrate understanding, communication, and initiative simultaneously. The quality of their content shows how they think.

Outreach That Gets Responses

Cold outreach to builders is different. They're not looking. They're getting spammed constantly.

Reference their actual work

Not "I saw your project." Actually look at it. Mention a specific technical decision. This takes 10 minutes and filters out 95% of recruiter noise.

Lead with what they'll build

"We're an AI company" is boring. "You'd build AI agents for complex workflows in finance and healthcare" is interesting.

Be transparent about compensation

Unusual model? Say so upfront. You'll get fewer responses but better matches.

Keep it short

Three sentences. Four max.

Example:

Saw your [project]—liked how you handled [specific detail]. We're building AI agents for enterprise clients at Tenex. Engineers get paid based on what they ship. Interested?

The Interview Process

Traditional AI interviews test the wrong things. LeetCode doesn't predict AI skill. Behavioral questions don't identify builders.

Here's what works. Three weeks end-to-end.

Stage 1: 15-Minute Screen

One purpose: is this worth both parties' time?

Three questions:

  • What have you built recently that you're proud of?
  • What was the hardest part—technically or otherwise?
  • What are you looking for next?

Red flags: Can't articulate what they've built. Only talks about technology, never about users or outcomes. Motivated only by money.

Green flags: Gets excited about projects. Asks what they'd build. Curious about your approach. Talks about tradeoffs and constraints unprompted.

Pass? Send the take-home immediately.

Stage 2: Take-Home Assignment

This is where most companies fail. Too short tests nothing. Too long disrespects time. Generic coding problems miss what matters.

Our take-home tests real AI engineering: can they build a working system, make good decisions without detailed specs, and ship something polished?

Five project options (use these):

  1. Calendar Assistant — Web/mobile app that authenticates GSuite, displays calendar info, includes a chat agent for scheduling and time analysis.

  2. Talk-to-a-Folder — Authenticates Google Drive, accepts a folder link, runs an agent that answers questions about files with citations.

  3. Better Perplexity — Chat agent with search capabilities, plus a compelling feature of their choice.

  4. Inbox Concierge — Authenticates Gmail, classifies last 200 threads into buckets with an LLM pipeline, lets users create custom buckets.

  5. Smart Recipe Planner — Mobile app where users photograph ingredients and get recipe suggestions.

All require React or Expo. All involve real AI integration. All require product decisions—no detailed spec.
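
As a sketch of the pipeline logic behind option 4 (Inbox Concierge), the classification step might look like the following. Here `classify` is an injected stand-in for a real LLM call, and the thread fields are assumptions; a real submission would wrap this in a React or Expo UI with Gmail auth.

```python
# Sketch of the Inbox Concierge classification step: run each thread through
# a classifier and sort into buckets. `classify` is injected so a real LLM
# call or a rule-based stub can plug in interchangeably.

from collections import defaultdict
from typing import Callable

def bucket_threads(threads: list[dict],
                   buckets: list[str],
                   classify: Callable[[str, list[str]], str]) -> dict[str, list[dict]]:
    """Assign each thread to one bucket; unknown labels fall back to 'other'."""
    out: dict[str, list[dict]] = defaultdict(list)
    for t in threads:
        label = classify(t["subject"] + " " + t["snippet"], buckets)
        out[label if label in buckets else "other"].append(t)
    return dict(out)
```

Note the fallback for labels the model invents: that single line is exactly the kind of defensive decision the evaluation below looks for.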

Candidates submit code plus a 5-minute video walkthrough.

What we evaluate:

  • Does it work?
  • Is the code quality good?
  • Did they make smart product decisions?
  • Do they understand LLMs deeply?
  • Did they think about the system holistically? (Error handling, edge cases, user experience, what happens when the AI fails)
  • Did they go beyond the minimum?
  • Can they explain their choices?

The systems thinking question is critical. Most candidates build the happy path. The ones who anticipate failure modes, consider the user's context, and design for graceful degradation—those are the ones you want.
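
One concrete shape of "design for graceful degradation" is a guarded model call that validates output, retries transient failures, and falls back to a safe default instead of surfacing a raw error. All names here are illustrative, not from any candidate's submission.

```python
# A guarded LLM call: validate the answer, retry on failure, and degrade to a
# safe fallback rather than crash. The happy path is one line; everything else
# is the failure-mode thinking the take-home is probing for.

from typing import Callable

def guarded_answer(call_model: Callable[[str], str],
                   is_valid: Callable[[str], bool],
                   prompt: str,
                   fallback: str = "Sorry, I couldn't answer that reliably.",
                   retries: int = 1) -> str:
    """Return a validated model answer, or the fallback after all attempts fail."""
    for _ in range(retries + 1):
        try:
            answer = call_model(prompt)
        except Exception:
            continue                      # transient failure: try again
        if is_valid(answer):
            return answer                 # happy path
    return fallback                       # degrade gracefully, never crash
```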

Stage 3: 45-Minute Take-Home Review

We've reviewed code and video before the call.

First 10 minutes: They walk through their project. We listen.

Next 20 minutes: Deep questions.

  • Why did you structure the agent this way?
  • How does your context management work?
  • How would you know if this was working well in production?
  • What would you change with another week?
  • What breaks at 10x traffic?

We're checking: did they copy patterns from tutorials, or do they understand why things work?

Last 15 minutes: Extensions and alternatives. How would they add feature X? What would change at enterprise scale?

Stage 4: 45-Minute Systems Design

Take-home shows execution. Systems design shows architecture.

We present a real problem: "A financial services company wants AI that analyzes earnings calls and answers analyst questions with citations."

Then we work through it together:

  • Data pipeline structure?
  • Retrieval approach?
  • Context length limits?
  • Hallucination handling?
  • How would you evaluate whether it's working?
  • Scaling considerations?

We're not looking for perfect answers. We're looking for structured thinking, good questions, awareness of tradeoffs.

We specifically probe:

  • LLM limitations. Do they know where models fail? Can they articulate when to avoid AI entirely?
  • Retrieval and context. Embedding strategies, chunking, reranking?
  • Business context. Do they ask about the end user? The cost constraints? The compliance requirements?
  • Production concerns. Latency, cost, reliability, security, observability?
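
A minimal offline eval harness, the kind of answer the "how would you evaluate whether it's working" probe is fishing for, might look like this. The `system` callable, the bracket-based citation check, and the case format are all assumptions for illustration.

```python
# Minimal offline eval harness for the earnings-call assistant: run the system
# over a labeled set and report answer accuracy and citation rate. A crude
# substring check stands in for real grading.

from typing import Callable

def run_evals(system: Callable[[str], str],
              cases: list[dict]) -> dict[str, float]:
    """cases: [{'q': question, 'must_contain': expected substring}, ...]."""
    passed = cited = 0
    for case in cases:
        answer = system(case["q"])
        if case["must_contain"].lower() in answer.lower():
            passed += 1
        if "[" in answer and "]" in answer:   # crude check for a citation
            cited += 1
    n = len(cases)
    return {"accuracy": passed / n, "citation_rate": cited / n}
```

Strong candidates go further: they talk about labeling cost, regression suites run on every prompt change, and LLM-as-judge grading with its own failure modes.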

Stage 5: Team Fit

Technical bar is cleared. Now: will they thrive here?

They meet multiple team members. We evaluate fit for high autonomy, high accountability, direct communication.

Equally important: they're evaluating us. Both sides should leave confident.

The Timeline

Speed wins. If your process takes six weeks, you'll lose every good candidate.

  • Day 1: Outreach
  • Day 2-3: Screen within 48 hours
  • Day 3-4: Take-home sent immediately after screen
  • Day 7-10: Submission due
  • Day 10-12: Review within 48 hours of submission
  • Day 12-14: Systems design
  • Day 14-16: Team meetings
  • Day 16-18: Offer

Three weeks. Every day you delay, you risk losing them.

Evaluation Rubric

Here's how we weight each characteristic when making decisions:

  • They build things (25%) — Evaluated primarily through the take-home. This is the most important signal.
  • High horsepower (20%) — Shows up in both the take-home and review. Did they make progress independently? Did they go beyond the spec?
  • Curiosity (15%) — Emerges during the screen and review. Do they ask good questions? Do they have opinions about the cutting edge?
  • Learn/teach ability (15%) — Watch for this in their video walkthrough and systems design. Can they explain complex concepts clearly?
  • Taste (15%) — Visible in the take-home. Is their work polished? Do the details feel right?
  • Self-direction (10%) — Observed across all stages. Do they own the outcome or wait for direction?

Score 1-4 on each:

  • 1: Concerns
  • 2: Below bar
  • 3: Meets bar
  • 4: Exceptional

Any 1 = no hire. Need at least one 4 to proceed.

Interviewers score independently before discussing. Prevents anchoring.
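
The rubric above reduces to a small decision function: weighted 1-to-4 scores with two hard gates, where any 1 is an automatic no and at least one 4 is required to proceed. A sketch:

```python
# The evaluation rubric as code: weighted scores plus the two hard rules,
# applied before the weighted average matters.

WEIGHTS = {
    "builds": 0.25, "horsepower": 0.20, "curiosity": 0.15,
    "learn_teach": 0.15, "taste": 0.15, "self_direction": 0.10,
}

def decide(scores: dict[str, int]) -> tuple[str, float]:
    """Return (decision, weighted score) for one interviewer's 1-4 ratings."""
    weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if min(scores.values()) == 1:
        return ("no hire", weighted)      # any 1 = automatic no
    if max(scores.values()) < 4:
        return ("no hire", weighted)      # need at least one exceptional trait
    return ("proceed", weighted)
```

Each interviewer runs this independently; comparing the outputs, not negotiating them live, is what prevents anchoring.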

Mistakes That Kill Your Pipeline

Filtering for credentials

GitHub tells you more than diplomas. Over a quarter of AI engineer job postings have no specific degree requirement—companies are learning that portfolios beat credentials.

Generic coding interviews

LeetCode tests algorithms. AI engineering is about systems, models, ambiguity, product decisions. The job market has shifted to precision hiring where demonstrated skills matter more than pedigree.

Moving slowly

Every week of delay increases the chance you lose them. Top candidates have multiple offers within days.

Hiring experience over trajectory

10 years of traditional ML might mean 10 years of habits to unlearn. Someone who's shipped three AI agents in the last six months may be more valuable than someone with a decade of experience in a different paradigm.

Ignoring culture fit

Brilliant but difficult hurts the team more than adequate but elevating.

Not selling

Great candidates are evaluating you. Act like it.

Testing knowledge, not judgment

You can look up how to use LangChain. You can't look up when not to use an LLM at all. Test for judgment.

The Hard Truth

This process is hard to execute.

Finding builders requires sourcing capabilities most companies don't have. Evaluating AI skills requires people who understand AI systems deeply—who can spot the difference between tutorial copiers and original thinkers. Moving fast requires organizational discipline. Competing on compensation requires paying market rate—or offering something genuinely different.

Most companies will read this, nod, and go back to posting on LinkedIn.

That's the opportunity.

The companies that figure out AI hiring get a compounding edge. They ship while competitors are still writing job descriptions. They attract more builders because builders want to work with other builders.

The question isn't whether you need AI talent. It's whether you'll do what it takes to get it.

TL;DR

The market changed. AI hiring up 88%. Salaries averaging $206K. Positions growing 300% faster than traditional roles. And still, companies can't fill them—because credentials don't predict performance, experience requirements are meaningless, and builders aren't job hunting.

The hidden differentiator is systems thinking. The engineers who see both the technical architecture and the business reality—who design for failure, understand tradeoffs, and know when AI isn't the answer—are rare and invaluable.

Six traits matter: They build things, high horsepower, curious, learn/teach fast, taste, self-directed.

Find them where they build. Former founders, founding engineers, public projects, hackathons, open source, content creators.

Outreach with specificity. Reference their work. Lead with what they'll build. Keep it short.

The process:

  1. 15-minute screen
  2. Take-home (five project options above—steal them)
  3. 45-minute review
  4. 45-minute systems design
  5. Team fit
  6. Offer

Move fast. Three weeks max. Every delay risks losing them.

This is hard. Most companies won't commit. That's why it works for those who do.

Start building your team the way you'd build a product: intentionally, iteratively, with taste.


