Getting started
You need about five minutes and Docker installed. That's the whole setup. You don't need to know Python, you don't need Node, you don't need to fiddle with environment variables — by the end of this page, HireEx will be running on your laptop and ready to sort through jobs for you.
The one-liner
If you just want it running, this is the fastest path. Paste it into your terminal:
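The verbatim one-liner didn't survive into this copy of the page; a plausible equivalent, assuming a docker-compose.yml at the repo root that builds all services (repo URL and file paths taken from the from-source steps later on this page):

```shell
# Sketch, not the verbatim one-liner: clone, create the gitignored
# config files from their examples, and start everything detached.
git clone https://github.com/chernistry/hireex.git && cd hireex \
  && cp .env.example .env \
  && cp config/candidate_profile.example.toml config/candidate_profile.toml \
  && cp prompts/score_job.example.md prompts/score_job.md \
  && cp prompts/filter_job.example.md prompts/filter_job.md \
  && docker compose up -d --build
```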
That's the whole install. Open http://localhost:3000/settings, paste an API key (OpenRouter has a free tier — grab one at openrouter.ai), and hit save. The next batch will start sorting jobs for you. You're done.
What you need before you start
- Docker and Docker Compose. If you're not sure you have them, run `docker --version` in a terminal. No Docker? Install Docker Desktop — it's the normal install from docker.com.
- An API key for an AI provider. OpenRouter is the default; the free tier covers most personal use. If you'd rather not use an outside provider, you can point HireEx at a local AI model (Ollama works). That's developer territory, though.
- Optional: a LinkedIn account, only if you want HireEx to read LinkedIn. The other twelve job boards work without it.
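To check both prerequisites at once (standard Docker CLI; Compose v2 ships with Docker Desktop):

```shell
docker --version          # prints e.g. "Docker version 27.x"
docker compose version    # an error here means Compose v2 is missing
```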
For the curious — setting it up from source
If you'd rather clone the repo and poke around, here's the longer path.
1. Clone and copy the example configs
git clone https://github.com/chernistry/hireex.git
cd hireex
cp .env.example .env
cp config/candidate_profile.example.toml config/candidate_profile.toml
cp prompts/score_job.example.md prompts/score_job.md
cp prompts/filter_job.example.md prompts/filter_job.md
All four private files are gitignored. Edit them in place — nothing gets pushed back.
2. Fill in the minimum
| File | What to fill |
|---|---|
| `.env` | `OPENROUTER_API_KEY`, plus `HIREEX_INGESTION__LINKEDIN_USERNAME` and `HIREEX_INGESTION__LINKEDIN_PASSWORD` (only if you want LinkedIn). |
| `config/candidate_profile.toml` | Your positioning, target roles, skill tiers, salary floor, geography rules. See Configuration for field-by-field notes. |
| `prompts/score_job.md` | The scoring rubric. Tune it to your priorities. The `.example` file shows the shape; rewrite the priority ladder and modifiers to match what you actually care about. |
| `prompts/filter_job.md` | The pre-filter rules. Tighter = cheaper (fewer jobs reach the expensive scorer). Looser = more jobs pass through to it. |
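For reference, a minimal `.env` could look like this (placeholder values; variable names from the table above):

```shell
# Minimal .env — OpenRouter only, no LinkedIn
OPENROUTER_API_KEY=your-key-here
# Uncomment only if you want LinkedIn ingestion:
# HIREEX_INGESTION__LINKEDIN_USERNAME=you@example.com
# HIREEX_INGESTION__LINKEDIN_PASSWORD=your-password
```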
3. Boot it up
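Assuming the compose file lives at the repo root, bringing everything up is one command:

```shell
docker compose up -d --build   # build images and start all services detached
```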
Services that come up:
- `db` — Postgres (5432).
- `redis` — Redis (6379).
- `api` — FastAPI (8000). Swagger UI at http://localhost:8000/docs.
- `web` — Next.js dashboard (3000). Open http://localhost:3000.
`.env` is mounted via `env_file`. The `db` and `redis` hostnames are overridden in `docker-compose.yml` to their container-internal DNS names, so you don't have to edit `.env` for the containers to reach them.
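As a sketch of what that override looks like (the `env_file` and `environment` keys are standard Compose; the variable names here are placeholders, not necessarily the repo's actual ones):

```yaml
# Hypothetical docker-compose.yml fragment
services:
  api:
    env_file: .env
    environment:
      DATABASE_HOST: db      # container-internal DNS name of the Postgres service
      REDIS_HOST: redis      # container-internal DNS name of the Redis service
```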
4. First fetch → score → triage loop
# 1. Pull a batch of postings (LinkedIn + JobSpy).
./scripts/run_ingest.sh
# 2. Pre-filter and score anything unscored.
./scripts/run_score.sh
# 3. Open the dashboard and triage.
open http://localhost:3000
The dashboard has the jobs table with filters, bulk archive, score distributions, and per-job reasoning from the AI.
Running without Docker (developer mode)
If you want to develop the API directly:
python -m venv .venv && source .venv/bin/activate   # isolated Python environment
pip install -e '.[dev]'                             # editable install with dev extras
alembic upgrade head                                # apply database migrations
uvicorn api.app:app --reload                        # hot-reloading API server on 8000
And the web app:
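The web-app commands were lost from this copy of the page; a typical Next.js dev setup looks like this (the `web/` directory name is an assumption):

```shell
cd web        # assumption: the Next.js dashboard lives in web/
npm install   # install JS dependencies
npm run dev   # dev server, defaults to http://localhost:3000
```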
You'll still need Postgres and Redis running somewhere — either via `docker compose up db redis` or a local install.