
📄 How CV Parsing & LLM Evaluation Are Triggered: Summary

Below is a clean overview of the three architectural patterns for triggering CV parsing and LLM-based CV evaluation inside an agentic HR pipeline.


🧩 End-to-End Flow

  1. Candidate uploads CV
  2. System stores candidate entry in DB
  3. CV parser runs automatically
  4. Parsed CV JSON is stored in DB
  5. Orchestrator detects that parsing is done
  6. Orchestrator triggers the CV Screening Agent
  7. LLM evaluates CV and stores results
  8. Pipeline continues (voice → scheduling → final decision)

[User (Streamlit UI)]
   ↓
   Upload CV + metadata (HTTP POST)
   ↓
[Orchestrator API]
   ↓
   Save CV file (local or cloud)
   ↓
   Write candidate entry to DB
   ↓
   Trigger parsing pipeline
   ↓
   Update parsed_cv_json + status='parsed'
   ↓
   Orchestrator runs CV Screening Agent
   ↓
   Write results to DB + status='cv_screened'
   ↓
[Streamlit polls /api/status/<candidate_id>]
   ↓
   Display updated status + scores
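
For concreteness, here is a minimal sketch of the upload and status endpoints from the diagram. It assumes a FastAPI orchestrator and uses an in-memory dict in place of the real candidate DB; `parse_cv`, the `candidates` store, the `/api/upload` path, and the returned fields are illustrative, not the actual implementation (only `/api/status/<candidate_id>` comes from the diagram).

```python
# Sketch only: a FastAPI orchestrator with an in-memory dict standing in for
# the real candidate DB. parse_cv and the stored fields are illustrative.
import uuid

from fastapi import BackgroundTasks, FastAPI, File, UploadFile

app = FastAPI()
candidates: dict[str, dict] = {}  # candidate_id -> record (stand-in for the DB)


def parse_cv(candidate_id: str, raw_bytes: bytes) -> None:
    """Deterministic parsing step: store parsed JSON and advance the status."""
    candidates[candidate_id]["parsed_cv_json"] = {"num_bytes": len(raw_bytes)}  # placeholder
    candidates[candidate_id]["status"] = "parsed"


@app.post("/api/upload")
async def upload_cv(background_tasks: BackgroundTasks, file: UploadFile = File(...)):
    candidate_id = str(uuid.uuid4())
    raw_bytes = await file.read()
    candidates[candidate_id] = {"status": "uploaded", "filename": file.filename}
    # Parsing runs automatically after upload; LLM screening is NOT triggered here.
    background_tasks.add_task(parse_cv, candidate_id, raw_bytes)
    return {"candidate_id": candidate_id, "status": "uploaded"}


@app.get("/api/status/{candidate_id}")
def get_status(candidate_id: str):
    # Streamlit polls this endpoint to display the current status and any scores.
    record = candidates.get(candidate_id, {"status": "unknown"})
    return {"candidate_id": candidate_id, "status": record["status"]}
```

The point mirrored from the diagram is that the upload handler only kicks off deterministic parsing; deciding when to run the LLM evaluation is left to one of the orchestration patterns below.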

🧠 Pattern A: Orchestrator-Driven State Machine (Recommended)

The orchestrator continuously monitors the candidate's status in the database and decides the next action based on that state.

Flow (sketched in code at the end of this pattern):

  • After parsing finishes, the system sets status = "parsed"
  • The orchestrator checks the state and sees that the next step is CV screening
  • It triggers the CV Screening Agent
  • Once evaluation completes, the system updates status to "cv_screened"
  • The orchestrator then moves to the next stage (voice screening, etc.)

Why this is the best choice:

  • Most "agentic" (planning + reasoning)
  • Clean separation between deterministic parsing and cognitive reasoning
  • Perfect fit for LangGraph orchestration
  • Easy to visualize reasoning and workflow progress
  • Ideal for hackathon judges (transparency + intentionality)
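
For illustration, here is a minimal sketch of the status-driven dispatch at the core of Pattern A. The status values come from the flow above; the agent functions and the `NEXT_STEP` table are stand-ins for the real LangGraph nodes and edges, not the project's actual code.

```python
# Sketch only: status-driven dispatch. The real pipeline would use LangGraph
# nodes; the agent functions below are illustrative stand-ins.
def run_cv_screening_agent(candidate: dict) -> None:
    # LLM evaluates the parsed CV JSON and writes its scores back to the record.
    candidate["cv_scores"] = {"overall": 0.0}  # placeholder result
    candidate["status"] = "cv_screened"


def run_voice_screening_agent(candidate: dict) -> None:
    candidate["status"] = "voice_screened"


# State machine: current status -> the action that produces the next status.
NEXT_STEP = {
    "parsed": run_cv_screening_agent,
    "cv_screened": run_voice_screening_agent,
}


def advance(candidate: dict) -> None:
    """Run exactly one transition; unknown or terminal states are left alone."""
    action = NEXT_STEP.get(candidate["status"])
    if action is not None:
        action(candidate)
```

In a LangGraph setup, each action becomes a node and the `NEXT_STEP` mapping becomes conditional edges keyed on the candidate's status, which is what makes the reasoning easy to visualize.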

🧠 Pattern B: Event-Based Trigger (Webhook, Queue, Pub/Sub)

The parsing component emits an event such as "cv_parsed" when it finishes.
A listener or the orchestrator receives that event and immediately triggers the CV Screening Agent (a minimal sketch follows the pros and cons).

Pros:

  • Scales well
  • Good for microservice architectures

Cons:

  • Less agentic
  • Harder to show planning logic and state transitions
  • More infrastructure complexity
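
Here is a minimal sketch of this variant, using Python's `queue.Queue` as a stand-in for a real broker (Redis, RabbitMQ, cloud Pub/Sub). The "cv_parsed" event name comes from the description above; the handler and agent functions are hypothetical.

```python
# Sketch only: an in-process queue stands in for a real message broker.
import queue

events: queue.Queue = queue.Queue()


def trigger_cv_screening(candidate_id: str) -> None:
    # Stand-in for invoking the CV Screening Agent.
    print(f"running CV Screening Agent for {candidate_id}")


def on_parsing_finished(candidate_id: str) -> None:
    # The parsing component emits an event instead of only updating the DB row.
    events.put({"type": "cv_parsed", "candidate_id": candidate_id})


def event_listener() -> None:
    # A listener consumes events and reacts immediately, with no polling delay.
    while True:
        event = events.get()
        if event["type"] == "cv_parsed":
            trigger_cv_screening(event["candidate_id"])
```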

🧠 Pattern C: Orchestrator Polling the Database

A loop runs every few seconds, looking for candidates whose status is "parsed" and triggering CV evaluation when it finds one (sketched after the pros and cons).

Pros:

  • Very simple to implement
  • Works well for demos and prototypes

Cons:

  • Not reactive: evaluation waits for the next polling cycle
  • Less elegant: the loop queries the DB even when nothing has changed
  • Not as agentic or clean as Pattern A
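
And a minimal sketch of the polling loop, with `find_candidates_by_status` standing in for a real query against the candidate table and `run_cv_screening_agent` for the actual agent call.

```python
# Sketch only: poll the DB every few seconds for candidates ready to screen.
import time


def find_candidates_by_status(status: str) -> list[dict]:
    # Stand-in for: SELECT * FROM candidates WHERE status = ?
    return []


def run_cv_screening_agent(candidate: dict) -> None:
    candidate["status"] = "cv_screened"


def polling_loop(interval_seconds: float = 5.0) -> None:
    while True:
        for candidate in find_candidates_by_status("parsed"):
            run_cv_screening_agent(candidate)
        time.sleep(interval_seconds)
```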

๐Ÿ† Recommendation

Use Pattern A (Orchestrator-Driven State Machine) for the hackathon submission.

Benefits:

  • Natural agentic behavior
  • Works directly with LangGraph's planning style
  • Provides clear reasoning transparency
  • Fits well with your multi-agent architecture
  • Easy to show on the Gradio dashboard
  • Minimal complexity while still highly principled

📝 TL;DR

  • CV parsing should run automatically after upload
  • Parsed data should be saved to the DB
  • LLM CV evaluation should NOT be triggered by upload
  • Instead, the orchestrator detects the new state and triggers evaluation
  • Pattern A (state machine) is the cleanest and most agentic solution