# Configuration
HireEx reads three classes of configuration:

- **Environment variables** — secrets and runtime toggles (`.env`).
- **Candidate profile** — your positioning, consumed by the scoring prompt (`config/candidate_profile.toml`).
- **Prompts** — tuned LLM instructions (`prompts/*.md`).
All three have `.example` templates tracked in the repo. The live files (`.env`, `candidate_profile.toml`, `score_job.md`, `filter_job.md`) are gitignored and stay on your disk only.
## Environment variables
Defined via pydantic settings in `core/config.py`. The `HIREEX_` prefix and double-underscore separator map nested fields (e.g. `HIREEX_DATABASE__URL` → `Settings.database.url`).
| Variable | Purpose | Example |
|---|---|---|
| `HIREEX_LLM__API_KEY` | OpenRouter (or adapter) key. | `sk-or-v1-...` |
| `HIREEX_LLM__MODEL` | Default scoring model slug. | `anthropic/claude-opus-4-7` |
| `HIREEX_LLM__FILTER_MODEL` | Cheap pre-filter model. | `anthropic/claude-haiku-4-5` |
| `HIREEX_LLM__HF_TOKEN` | Hugging Face token for embeddings. | `hf_...` |
| `HIREEX_DATABASE__URL` | Postgres URL. | `postgresql+asyncpg://...` |
| `HIREEX_INGESTION__LINKEDIN_USERNAME` | LinkedIn login. | — |
| `HIREEX_INGESTION__LINKEDIN_PASSWORD` | LinkedIn password. | — |
| `HIREEX_QDRANT__URL` | Vector store URL. | `http://localhost:6333` |
| `HIREEX_QDRANT__API_KEY` | Qdrant Cloud key (optional). | — |
| `REDIS_URL` | Redis connection. | `redis://redis:6379/0` |
Full reference: check `core/config.py` for the pydantic model. Any field there is an env var with the `HIREEX_` prefix (root-level fields need no double-underscore).
## Candidate profile
`config/candidate_profile.toml` drives the scoring prompt's "who is this candidate" block. The pydantic model (`CandidateProfile` in `core/config.py`) uses `extra="allow"`, so new sections / keys can be added without code changes.
Key sections (see config/candidate_profile.example.toml for the full
template):
- `[candidateProfile]` — `identity`, `summary`, `total_years`, `target_roles`, and up to 3 `proof_stack` entries (portfolio projects the scorer may cite verbatim in its `reasoning` field).
- `[core]` / `[secondary]` / `[exploratory]` — skill tiers. Scorer treats `core` as a production signal, `secondary` as a ramp, `exploratory` as out-of-scope.
- `[transferable_skills]` — adjacent tools the scorer should not penalize you for missing.
- `[hard_no]` — dealbreakers. Pairs with the scorer's `DEALBREAKERS` section.
- `[preferences]` — preferred / avoid / favourite domains, work modes, reject phrases.
- `[preferences.positive_modifiers]` — free-form dict the scoring prompt reads for whitelist / title-match boosts.
- `[skill_levels]` — per-skill calibration the model can reference when ranking stack fit.
- `[experience_history]` — compact role history with quantified highlights.
- `[constraints.geography]` — preferred region, visa paths, on-site dealbreakers.
- `[constraints.salary]` — `floor`, `target_min`, `target_max`, `stretch` in your target currencies.
- `[domain_focus]` — `primary` domains you boost, `avoid` domains you reject.
The scoring prompt interpolates the serialized profile via `{{ candidate_profile }}`, so any field you add to the TOML is visible to the LLM on the next run.
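For orientation, a trimmed profile fragment could look like this (all values, and the key names inside the tier sections, are placeholders; `config/candidate_profile.example.toml` is the real template):

```toml
[candidateProfile]
identity = "Backend engineer"
summary = "..."
total_years = 8
target_roles = ["Senior Backend Engineer"]

[core]
skills = ["Python", "PostgreSQL"]

[constraints.salary]
floor = 90000
target_min = 110000
target_max = 140000
stretch = 160000
```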
## Prompts
Two files, both reloaded per call (edit and re-run, no restart).
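Since both files are re-read on every call, the loader can be as simple as the sketch below (the directory constant and function name are assumptions, not the repo's actual loader):

```python
from pathlib import Path

PROMPTS_DIR = Path("prompts")  # assumed to be relative to the repo root


def load_prompt(name: str) -> str:
    """Re-read the prompt file on every call, so edits apply without a restart."""
    return (PROMPTS_DIR / f"{name}.md").read_text(encoding="utf-8")
```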
### `prompts/score_job.md`
Full scoring rubric. Typical sections:
- `[ROLE]` — one-liner on who the candidate is, followed by proof-stack.
- `[OUTPUT]` — the exact JSON schema the model must emit.
- `[TIERS]` — keyword buckets that trigger match-score adjustments.
- `[PRIORITY LADDER]` — your actual preferences, applied first.
- `[DEALBREAKERS]` — hard caps.
- `[MODIFIERS]` — title / employer / geo boosts and penalties.
- `[INTERVIEW-BAR PENALTIES]` — if you hate leetcode, put it here.
- `[GEO]` — geography bonuses / penalties.
- `[SALARY_POTENTIAL]` — comp bands calibrated to your profile.
- `[FEW-SHOT]` — 6–10 labeled rows the model anchors to.
- `[INPUT]` / `[RULES]` — template slots + final instructions.
Size budget: ≤ 6 KB. If it grows, delete the verbose parts the LLM doesn't actually need.
### `prompts/filter_job.md`
Cheap pre-filter. Emits a minified JSON `{match_score, confidence, reasoning}`. Everything below the pre-filter threshold skips the expensive scorer.
Buckets are 5-tier (0.85+, 0.65–0.85, 0.40–0.65, 0.15–0.40, 0.00–0.15). Hard caps mirror the scorer dealbreakers so the two stages agree.
Size budget: ≤ 2 KB.
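The gating step described above could be sketched like this (the threshold value and function name are assumptions; the real cutoff lives in the app's settings):

```python
import json

PREFILTER_THRESHOLD = 0.40  # assumed cutoff between the middle tiers


def passes_prefilter(raw_response: str, threshold: float = PREFILTER_THRESHOLD) -> bool:
    """Parse the filter model's minified JSON and decide whether to run the full scorer."""
    result = json.loads(raw_response)
    return result["match_score"] >= threshold
```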
## Live editing
`api/routes/prompts.py` exposes:

- `GET /prompt/score` / `PUT /prompt/score`
- `GET /prompt/filter` / `PUT /prompt/filter`
The dashboard Settings page binds to these. Changes take effect on the next scoring batch.
## Where `.example` values become real
The apps always read the private filename (`candidate_profile.toml`, `score_job.md`, `filter_job.md`). The `.example` files are templates only — they're not loaded at runtime. If a private file is missing, the relevant route or script errors out with a clear message; there's no fallback to the example.
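That fail-fast behaviour might look roughly like this (the function name and message wording are assumptions; the point is the error instead of an `.example` fallback):

```python
from pathlib import Path


def require_private_file(path: Path) -> Path:
    """Error out clearly rather than silently falling back to the .example template."""
    if not path.is_file():
        # .example files follow the name.example.ext convention, e.g.
        # candidate_profile.toml -> candidate_profile.example.toml
        example = path.with_name(f"{path.stem}.example{path.suffix}")
        raise FileNotFoundError(
            f"{path} is missing. Copy {example} to {path} and fill it in."
        )
    return path
```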