# Question Banks

TalentSync uses four separate question collections, one per question domain. This is by design: each collection has a completely different schema, access pattern, and management UI.
## Why Not a Shared Collection?

- **Schemas diverge completely.** A `ScreeningQuestion` is just a sentence; a `DSAProblem` has test cases, starter code, memory limits, and hidden inputs.
- **Access patterns differ.** Screening questions are fetched as UI chips grouped by category; DSA problems are searched by difficulty and tags; live questions are managed in private per-recruiter banks.
- **Management UIs are separate.** The recruiter UI for building a DSA test is nothing like selecting screening question chips.
- **Cleaner Mongoose models.** One model per domain, fully typed, with no conditionally required fields.
## ScreeningQuestion

Used by `automated_screening` stages. Equivalent to (and replaces) the current `Question` model.
```typescript
interface IScreeningQuestion {
  _id: ObjectId;
  text: string;
  normalizedText: string; // lowercase, trimmed — for dedup
  contentHash: string;    // SHA-256(normalizedText) — unique constraint
  source: 'system' | 'custom';
  organizationId?: ObjectId; // null for system questions; set for org-custom questions
  createdBy?: ObjectId;      // ref → User
  category: string;          // "Work Authorization", "Availability", "Salary Expectations"
  chipId?: string;           // UI chip id used in the ScreeningQuestionsConfig panel
  tags: string[];
  relevanceScore?: number;   // system questions: general relevance hint for sorting
  defaultWeight?: number;    // scoring weight when used in AI evaluation
  usageCount: number;
  isDeleted: boolean;
  createdAt: Date;
  updatedAt: Date;
}
```

Indexes:

- `{ contentHash: 1 }` (unique): dedup on the `findOrCreate` upsert
- `{ source: 1, category: 1 }`: grouped chip list for the config panel
- `{ organizationId: 1, isDeleted: 1 }`: an org's custom question library
- `{ usageCount: -1 }`: popularity sort
### findOrCreate Pattern

The `findOrCreateQuestion()` service function uses a content hash to prevent duplicates:
```typescript
async function findOrCreateQuestion(text: string): Promise<IScreeningQuestion> {
  const normalizedText = text.trim().toLowerCase();
  const contentHash = sha256(normalizedText);
  return await ScreeningQuestion.findOneAndUpdate(
    { contentHash },
    { $setOnInsert: { text, normalizedText, contentHash, source: 'custom' /* ... */ } },
    { upsert: true, new: true }
  );
}
```

**Seed script:** `src/scripts/seedScreeningQuestions.ts` seeds 22 system questions across categories; run via `npm run seed:screening-questions`.
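The `sha256` helper used by `findOrCreateQuestion()` isn't defined in this doc. A minimal sketch using Node's built-in `crypto` module (the helper names here are illustrative, not the actual service code):

```typescript
import { createHash } from 'crypto';

// Hex-encoded SHA-256 digest of a string.
function sha256(input: string): string {
  return createHash('sha256').update(input).digest('hex');
}

// Normalize first, then hash: the same question typed with different
// casing or surrounding whitespace produces the same dedup key.
function contentHashFor(text: string): string {
  return sha256(text.trim().toLowerCase());
}

contentHashFor('Are you authorized to work in the US?') ===
  contentHashFor('  are you AUTHORIZED to work in the US?  '); // true
```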
## DSAProblem

Used by `technical_dsa` stages. LeetCode-style problems with hidden test cases.
```typescript
interface IDSAProblem {
  _id: ObjectId;
  title: string;
  description: string; // full problem statement (markdown supported)
  source: 'system' | 'custom';
  organizationId?: ObjectId;
  createdBy?: ObjectId;
  difficulty: 'easy' | 'medium' | 'hard';
  tags: string[]; // ['array', 'hash-map', 'binary-search', 'dynamic-programming', ...]
  estimatedDurationMinutes: number;
  constraints: string; // e.g. "1 ≤ n ≤ 10^5"
  examples: {
    input: string;
    output: string;
    explanation?: string;
  }[];
  testCases: {
    _id: ObjectId;
    input: string;
    expectedOutput: string;
    isHidden: boolean; // hidden test cases are NOT shown to the candidate
  }[];
  starterCode: {
    javascript?: string;
    python?: string;
    java?: string;
    cpp?: string;
  };
  timeLimitMs: number; // execution time limit per test case
  memoryLimitMB: number;
  usageCount: number;
  isDeleted: boolean;
  createdAt: Date;
  updatedAt: Date;
}
```

Indexes:

- `{ source: 1, difficulty: 1, isDeleted: 1 }`: problem picker, filter by difficulty
- `{ tags: 1 }`: problem picker, filter by topic
- `{ organizationId: 1, isDeleted: 1 }`: an org's custom problem library
- `{ usageCount: -1 }`: popularity sort
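To illustrate what the picker indexes serve, here is the difficulty + tags filter expressed over a plain in-memory array. The `pickProblems` helper and the trimmed `ProblemSummary` shape are hypothetical; in production this would be a Mongoose query backed by the indexes above:

```typescript
type Difficulty = 'easy' | 'medium' | 'hard';

interface ProblemSummary {
  title: string;
  difficulty: Difficulty;
  tags: string[];
  isDeleted: boolean;
}

// In-memory equivalent of a picker query like:
//   { difficulty, isDeleted: false, tags: { $in: tags } }
function pickProblems(
  all: ProblemSummary[],
  difficulty: Difficulty,
  tags: string[]
): ProblemSummary[] {
  return all.filter(
    (p) =>
      !p.isDeleted &&
      p.difficulty === difficulty &&
      p.tags.some((t) => tags.includes(t))
  );
}
```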
### Hidden test cases

The `GET /v1/dsa-problems/:id` endpoint strips test cases where `isHidden: true` before returning the problem to the candidate. Only the visible examples are shown during the session; all test cases (visible and hidden) are run on final submission.
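A sketch of the stripping step, assuming a plain object shape matching `IDSAProblem.testCases` (the `toCandidateView` name is hypothetical, not the actual endpoint code):

```typescript
interface TestCase {
  input: string;
  expectedOutput: string;
  isHidden: boolean;
}

// Remove hidden test cases before the problem leaves the API.
// The full set (visible + hidden) is only used server-side on final submission.
function toCandidateView<T extends { testCases: TestCase[] }>(problem: T): T {
  return { ...problem, testCases: problem.testCases.filter((tc) => !tc.isHidden) };
}
```

Note that this returns a shallow copy, so the stored document keeps its full test-case list.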
**Seed script:** `src/scripts/seedDsaProblems.ts` seeds 20+ system problems (easy/medium/hard); run via `npm run seed:dsa-problems`.
## ScenarioQuestion

Used by `technical_ai_assisted` and `ai_conversational` stages. Complex, open-ended questions where an AI interviewer drives the session.
```typescript
interface IScenarioQuestion {
  _id: ObjectId;
  title: string;
  source: 'system' | 'custom';
  organizationId?: ObjectId;
  createdBy?: ObjectId;
  applicableStageTypes: ('technical_ai_assisted' | 'ai_conversational')[];
  scenarioType:
    | 'system_design' // "Design a URL shortener"
    | 'api_design'    // "Design a REST API for a ride-sharing app"
    | 'sales'         // "You are pitching to a skeptical CTO..."
    | 'marketing'     // "Launch strategy for a new product..."
    | 'product'       // "What metrics would you track for..."
    | 'behavioral';   // "Tell me about a time you..."
  difficulty: 'easy' | 'medium' | 'hard';
  tags: string[];
  estimatedDurationMinutes: number;
  context: string; // What the candidate sees — the problem setup
  // For the AI interviewer only — never shown to the candidate
  evaluationRubric: string; // Defines what a good answer covers
  followUpGuide: string;    // Follow-up questions to ask based on candidate responses
  usageCount: number;
  isDeleted: boolean;
  createdAt: Date;
  updatedAt: Date;
}
```

Indexes:

- `{ applicableStageTypes: 1, scenarioType: 1, isDeleted: 1 }`
- `{ source: 1, difficulty: 1 }`
- `{ organizationId: 1, isDeleted: 1 }`
- `{ tags: 1 }`
**Seed script:** `src/scripts/seedScenarioQuestions.ts` seeds system scenarios across system design, API design, behavioral, and sales/marketing; run via `npm run seed:scenario-questions` (Phase 4).
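Since `evaluationRubric` and `followUpGuide` must never reach the candidate, one way to enforce the split is to project the document into a candidate-facing view. The function name and trimmed field set below are assumptions, not the actual endpoint code:

```typescript
interface ScenarioDoc {
  title: string;
  context: string;          // candidate-visible problem setup
  evaluationRubric: string; // AI interviewer only
  followUpGuide: string;    // AI interviewer only
}

// Candidate-facing projection: the rubric and follow-up guide stay server-side,
// where only the AI interviewer's prompt can read them.
function candidateScenarioView(q: ScenarioDoc): { title: string; context: string } {
  return { title: q.title, context: q.context };
}
```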
## InterviewQuestion

Used by `live_1on1` and `culture_fit_hr` stages as the private question bank visible only to the interviewer during the live session. The recruiter pushes questions to the candidate's screen one at a time.
```typescript
interface IInterviewQuestion {
  _id: ObjectId;
  text: string; // the question the interviewer asks
  source: 'system' | 'custom';
  organizationId?: ObjectId;
  createdBy?: ObjectId;
  applicableStageTypes: ('live_1on1' | 'culture_fit_hr')[];
  questionType:
    | 'technical'    // coding/system design to discuss live
    | 'behavioral'   // "Tell me about a time..."
    | 'culture_fit'  // values alignment
    | 'situational'; // "What would you do if..."
  category?: string;
  difficulty?: 'easy' | 'medium' | 'hard';
  tags: string[];
  // Shown to interviewer only — NEVER to the candidate
  interviewerHints?: string;
  expectedAnswerGuide?: string;
  usageCount: number;
  isDeleted: boolean;
  createdAt: Date;
  updatedAt: Date;
}
```

Indexes:

- `{ applicableStageTypes: 1, questionType: 1, isDeleted: 1 }`
- `{ source: 1, category: 1 }`
- `{ organizationId: 1, questionType: 1, isDeleted: 1 }`
- `{ tags: 1 }`
**Seed script:** `src/scripts/seedInterviewQuestions.ts` seeds ~30 system questions (behavioral and technical); run via `npm run seed:interview-questions` (Phase 1B).
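The push flow implies two views of the same document: the interviewer sees everything, while the payload sent to the candidate's screen should carry only the question text, never `interviewerHints` or `expectedAnswerGuide`. A sketch under those assumptions (the payload shape and `toPushPayload` name are hypothetical):

```typescript
interface InterviewQuestionDoc {
  _id: string;
  text: string;
  interviewerHints?: string;    // interviewer-only
  expectedAnswerGuide?: string; // interviewer-only
}

// Payload pushed to the candidate's screen: id + text, nothing else.
function toPushPayload(q: InterviewQuestionDoc): { questionId: string; text: string } {
  return { questionId: q._id, text: q.text };
}
```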
## API Endpoints Summary

| Endpoint | Collection |
|---|---|
| `GET /v1/screening-questions` | ScreeningQuestion |
| `POST /v1/screening-questions` | ScreeningQuestion |
| `GET /v1/dsa-problems` | DSAProblem |
| `POST /v1/dsa-problems` | DSAProblem |
| `GET /v1/dsa-problems/:id` | DSAProblem (strips hidden test cases) |
| `GET /v1/scenario-questions` | ScenarioQuestion |
| `POST /v1/scenario-questions` | ScenarioQuestion |
| `GET /v1/interview-questions` | InterviewQuestion |
| `POST /v1/interview-questions` | InterviewQuestion |