AI & Automation


AI Model

All AI evaluation and interview generation uses claude-sonnet-4-6 from Anthropic.


Automated Screening Evaluation (Phase 1A)

Service: src/services/screeningEvaluationService.ts

Triggered inline after POST /v1/screening/:token/submit. Moved to BullMQ queue in Phase 6.
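
Assuming "inline" means the evaluation is kicked off fire-and-forget from the submit handler (so the candidate is not blocked on the model call), the trigger might look like the sketch below. `evaluateScreening` and `handleScreeningSubmit` are illustrative names, not the real service API.

```typescript
// Hypothetical sketch of the Phase 1A inline trigger: respond to the candidate
// immediately and run the AI evaluation in the background without awaiting it.

type SubmitResult = { accepted: boolean };

async function evaluateScreening(_interviewId: string): Promise<void> {
  // Placeholder for screeningEvaluationService work (prompt build + model call).
}

function handleScreeningSubmit(interviewId: string): SubmitResult {
  // Fire-and-forget: failures are logged, never surfaced to the candidate.
  evaluateScreening(interviewId).catch((err) =>
    console.error('screening evaluation failed', err),
  );
  return { accepted: true }; // candidate gets an immediate response
}
```

This is also why Phase 6 moves the work to a BullMQ queue: a fire-and-forget promise does not survive a server restart.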

Prompt Construction

System: You are an expert recruiter evaluating a candidate's screening responses for [job title].

Job description:
[JobOpening.description]

Questions and candidate answers:
Q1: [questionText]
A1: [response]
Q2: [questionText]
A2: [response]
...

For each answer, provide:
1. A score from 0-10
2. A brief analysis (2-3 sentences)

Then provide:
- An overall score from 0-100
- A 2-3 sentence summary
- Key strengths (up to 3)
- Key concerns (up to 3)
- Recommendation: proceed | review | reject
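
The template above can be assembled from the job and the candidate's responses roughly as follows. `buildScreeningPrompt` and the `ScreeningAnswer` shape are illustrative, not the real service code; only the field names (`JobOpening.description`, `questionText`, `response`) come from this doc.

```typescript
// Hypothetical assembly of the screening evaluation prompt shown above.

interface ScreeningAnswer {
  questionText: string;
  response: string;
}

function buildScreeningPrompt(
  jobTitle: string,
  jobDescription: string,
  answers: ScreeningAnswer[],
): string {
  // Number each Q/A pair: Q1/A1, Q2/A2, ...
  const qa = answers
    .map((a, i) => `Q${i + 1}: ${a.questionText}\nA${i + 1}: ${a.response}`)
    .join('\n');
  return [
    `You are an expert recruiter evaluating a candidate's screening responses for ${jobTitle}.`,
    '',
    'Job description:',
    jobDescription,
    '',
    'Questions and candidate answers:',
    qa,
    '',
    'For each answer, provide:',
    '1. A score from 0-10',
    '2. A brief analysis (2-3 sentences)',
    '',
    'Then provide:',
    '- An overall score from 0-100',
    '- A 2-3 sentence summary',
    '- Key strengths (up to 3)',
    '- Key concerns (up to 3)',
    '- Recommendation: proceed | review | reject',
  ].join('\n');
}
```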

Output Storage

```typescript
// Stored on Interview (recruiter-only):
stageData.screeningAiReport = {
  score: number,        // 0-100
  summary: string,
  strengths: string[],
  concerns: string[],
  recommendation: 'proceed' | 'reject' | 'review',
  generatedAt: Date,
}
// Individual question scores also stored on screeningResponses[].aiScore + .aiAnalysis

// Stored on Interview (candidate-visible):
candidateAggregateScore = number  // 0-100, derived from screeningAiReport.score
```

AI Technical Interviewer (Phase 4)

Service: src/services/aiInterviewerService.ts

For stageTypeKey: 'technical_ai_assisted'.

Session Flow

  1. Load ScenarioQuestion for this stage
  2. Present scenario.context to candidate
  3. Candidate submits initialResponse
  4. AI generates follow-up based on scenario.followUpGuide + candidate's answer
  5. Repeat for configured number of follow-up turns
  6. On session end: AI generates aiReport using scenario.evaluationRubric
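
The steps above can be sketched as a loop with the model calls injected. The function and callback names (`runTechnicalSession`, `generateFollowUp`, `generateReport`) are assumptions for illustration, not the real `aiInterviewerService` API; only the `ScenarioQuestion` field names come from this doc.

```typescript
// Minimal sketch of the technical AI interviewer session flow.

interface Scenario {
  context: string;
  followUpGuide: string;
  evaluationRubric: string;
}

interface Turn {
  question: string;
  answer: string;
}

async function runTechnicalSession(
  scenario: Scenario,
  initialResponse: string,
  followUpTurns: number, // configured number of follow-up turns
  generateFollowUp: (s: Scenario, turns: Turn[], latest: string) => Promise<string>,
  getCandidateAnswer: (question: string) => Promise<string>,
  generateReport: (s: Scenario, turns: Turn[]) => Promise<unknown>,
): Promise<unknown> {
  // Turn 0: scenario context is presented, candidate submits initialResponse.
  const turns: Turn[] = [{ question: scenario.context, answer: initialResponse }];
  let latest = initialResponse;

  for (let i = 0; i < followUpTurns; i++) {
    const question = await generateFollowUp(scenario, turns, latest);
    latest = await getCandidateAnswer(question);
    turns.push({ question, answer: latest });
  }

  // On session end: evaluate the full transcript against the rubric.
  return generateReport(scenario, turns); // becomes the stage's aiReport
}
```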

Follow-up Generation Prompt

System: You are a technical interviewer conducting a [scenario.scenarioType] interview.

Evaluation rubric (for your reference only):
[evaluationRubric]

Follow-up guide:
[followUpGuide]

Conversation so far:
[Q&A turns]

Candidate's latest response:
[candidateResponse]

Generate ONE focused follow-up question. Be concise. Do not repeat previous questions.
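
As with the screening prompt, this template is a straightforward string assembly. `buildFollowUpPrompt` and the `FollowUpTurn` shape are illustrative names; the field names (`scenarioType`, `evaluationRubric`, `followUpGuide`) come from this doc.

```typescript
// Hypothetical assembly of the follow-up generation prompt shown above.

interface FollowUpTurn {
  question: string;
  answer: string;
}

function buildFollowUpPrompt(
  scenarioType: string,
  evaluationRubric: string,
  followUpGuide: string,
  turns: FollowUpTurn[],
  candidateResponse: string,
): string {
  const history = turns.map((t) => `Q: ${t.question}\nA: ${t.answer}`).join('\n');
  return [
    `You are a technical interviewer conducting a ${scenarioType} interview.`,
    '',
    'Evaluation rubric (for your reference only):',
    evaluationRubric,
    '',
    'Follow-up guide:',
    followUpGuide,
    '',
    'Conversation so far:',
    history,
    '',
    "Candidate's latest response:",
    candidateResponse,
    '',
    'Generate ONE focused follow-up question. Be concise. Do not repeat previous questions.',
  ].join('\n');
}
```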

AI Conversational Interviewer (Phase 4)

For stageTypeKey: 'ai_conversational'.

Same architecture as technical, but:

  • Real-time voice interface (TTS + STT)
  • TTS: Text-to-speech converts AI questions to audio
  • STT: Speech-to-text converts candidate speech to text
  • Audio URL stored per turn (recruiter-only playback)
  • Implementation: Deepgram or similar service

AI Chatbot in Live Room

Existing feature. Available in live_1on1 stages when toolsConfig.aiChatbot: true.

This is a support tool for the human interviewer — not an autonomous session conductor. It can be asked for question suggestions, company-specific context, or real-time evaluation hints.


Per-Candidate Override & Config Resolution

For any AI-driven session, the effective config is resolved at runtime:

```typescript
// Step 1: Check for a per-candidate override
const overrideConfig = interview.stageOverrides?.screeningConfig;

// Step 2: Fall back to the job-level config for this stage
const jobConfig = jobOpening.stages.find(s => s._id.equals(interview.stageId))?.screeningConfig;

// Step 3: Use the override if present, else the job config
const effectiveConfig = overrideConfig ?? jobConfig;
```

This applies to all stage types — not just screening.
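
Since the same three steps apply to every stage type, they generalize to a single helper. This is a sketch under simplified assumptions: `resolveStageConfig` is a hypothetical name, and the `Interview`/`JobOpening` shapes are reduced to plain objects with string IDs (the real models use Mongoose `ObjectId.equals`).

```typescript
// Generic form of the override-then-fallback resolution, for any config key.

interface StageLike {
  _id: string;
  [configKey: string]: unknown;
}

interface InterviewLike {
  stageId: string;
  stageOverrides?: Record<string, unknown>;
}

interface JobOpeningLike {
  stages: StageLike[];
}

function resolveStageConfig<T>(
  interview: InterviewLike,
  job: JobOpeningLike,
  configKey: string, // e.g. 'screeningConfig'
): T | undefined {
  // Per-candidate override wins over the job-level stage config.
  const override = interview.stageOverrides?.[configKey] as T | undefined;
  const stage = job.stages.find((s) => s._id === interview.stageId);
  const jobConfig = stage?.[configKey] as T | undefined;
  return override ?? jobConfig;
}
```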


Phase Delivery

| Phase | AI Feature |
| --- | --- |
| 1A | Screening evaluation (screeningAiReport, candidateAggregateScore) — inline async |
| 1B | AI chatbot in live room (existing) + private question push |
| 4 | AI technical + conversational interviewers |
| 6 | Evaluation moved to BullMQ queue — survives server restart |