job-ops
Self-hosted job application tracker. TypeScript monorepo (Express + Vite), Drizzle ORM + SQLite, LLM scoring and resume tailoring via Ollama, PDF generation via RxResume.
Goals
Centralize job application tracking with AI-assisted scoring and resume tailoring — importing existing opportunity data from Rolepad, enriching it with scraped job descriptions, and using a base resume to score and tailor applications per listing.
Effectiveness
Partial. The import pipeline worked end-to-end: 88 Rolepad opportunities imported, job descriptions fetched, all records in the database. The LLM scoring and tailoring pipeline did not run — because job-ops defaults to v5 RxResume mode (API key auth at rxresu.me) and we configured a v4 account (email/password auth at v4.rxresu.me). The base resume is loaded but never actually exercised for scoring or PDF generation.
What made it effective
- /api/manual-jobs/fetch — job-ops's own scraping endpoint fetches and cleans job descriptions from arbitrary URLs. Delegating the scraping to job-ops rather than implementing it in the import script meant we got its extraction logic for free and kept the import script simple.
- /api/manual-jobs/import — a clean import endpoint that handles normalization, deduplication (409 on duplicate URL), and async processing (202 + jobId). The protocol is unambiguous.
- 502 + jobId = success — when RxResume isn't configured, job creation still succeeds and processing fails later. The 502 response body includes the jobId, which is enough to confirm the record exists. Treating this as partial success (rather than a hard error) let the import complete without blocking on RxResume setup.
- Drizzle + SQLite — zero-dependency local database, no server to run, schema migrations via npm run db:migrate. Clean.
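The 202/409/502 protocol above can be sketched as a small response classifier. Only the status codes and the jobId field come from job-ops's observed behavior; the function and type names here are ours:

```typescript
// Outcomes the import script distinguishes when posting to
// /api/manual-jobs/import (names are illustrative, not job-ops code).
type ImportOutcome =
  | { kind: "created"; jobId: string }               // 202: accepted for async processing
  | { kind: "duplicate" }                            // 409: URL already imported
  | { kind: "created-processing-failed"; jobId: string } // 502: record exists, RxResume step failed
  | { kind: "error"; status: number };

function classifyImport(status: number, body: { jobId?: string }): ImportOutcome {
  if (status === 202 && body.jobId) return { kind: "created", jobId: body.jobId };
  if (status === 409) return { kind: "duplicate" };
  // Partial success: job creation succeeded even though downstream
  // processing (RxResume) failed — the jobId confirms the record exists.
  if (status === 502 && body.jobId)
    return { kind: "created-processing-failed", jobId: body.jobId };
  return { kind: "error", status };
}
```

Treating the 502 branch as success rather than retrying is what let the import finish without RxResume configured.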
Friction / pain points / surprises
v4 vs v5 mode mismatch is silent. Job-ops defaults to rxresumeMode: "v5" but stores rxresumeBaseResumeIdV5 separately from rxresumeBaseResumeIdV4. We created a v4 account, imported a resume, and set the base resume ID — and job-ops accepted it without complaint. There's no validation at import time that the credential mode matches the resume ID's origin. The failure only surfaces when the pipeline tries to fetch the resume for scoring.
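A fail-fast guard at configuration time would have surfaced the mismatch immediately. The field names (rxresumeMode, rxresumeBaseResumeIdV4/V5) match job-ops's settings; the check itself is a hypothetical sketch, not something job-ops does:

```typescript
// Settings shape as stored by job-ops: the base resume ID is kept
// per mode, so a v4 ID under a v5 mode is silently invisible.
interface RxResumeSettings {
  rxresumeMode: "v4" | "v5";
  rxresumeBaseResumeIdV4?: string;
  rxresumeBaseResumeIdV5?: string;
}

// Hypothetical validation job-ops could run when the base resume ID is
// set, instead of failing later when the pipeline fetches the resume.
function baseResumeIdForMode(s: RxResumeSettings): string {
  const id = s.rxresumeMode === "v5" ? s.rxresumeBaseResumeIdV5 : s.rxresumeBaseResumeIdV4;
  if (!id) throw new Error(`no base resume ID configured for mode "${s.rxresumeMode}"`);
  return id;
}
```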
Rolepad data sparseness required multi-hop enrichment. Only 3/88 Rolepad entries had a description or URL. The notes API (GET /api/opportunities/{id}/notes) contained forwarded save@rolepad.com emails with job URLs embedded in the text — extractable with a URL regex. Those URLs then had to be re-fetched through job-ops's own /api/manual-jobs/fetch endpoint. Without the notes API, 85 of 88 entries would have been empty shells.
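The URL extraction from forwarded note text is simple in principle. A sketch of the approach, assuming a generic URL regex and a filter for Rolepad's own links — the exact regex and filtering in the scraper may differ:

```typescript
// Pull candidate job URLs out of a forwarded save@rolepad.com email body.
// Regex and rolepad.com filter are illustrative assumptions.
function extractJobUrls(noteText: string): string[] {
  const urlRe = /https?:\/\/[^\s"'<>)]+/g;
  const urls = noteText.match(urlRe) ?? [];
  // Drop Rolepad's own links (tracking, unsubscribe) and deduplicate.
  return [...new Set(urls.filter((u) => !u.includes("rolepad.com")))];
}
```

Each extracted URL is then re-fetched through /api/manual-jobs/fetch to get the cleaned job description.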
Firebase auth in the scraper. Rolepad authenticates via Firebase (identitytoolkit.googleapis.com/v1/accounts:signInWithPassword). The API key is public and baked into the Rolepad frontend JS. The scraper calls it directly. This works but is brittle — Firebase API keys are meant to be restricted by domain; an undocumented rate limit or key rotation would break the scraper silently.
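The sign-in call the scraper makes follows Firebase's standard password-auth REST shape. A sketch of the request construction — the endpoint and returnSecureToken field are Firebase's documented API, while the helper name is ours and the key shown is a placeholder, not Rolepad's actual key:

```typescript
// Build the Firebase Identity Toolkit password sign-in request.
// apiKey is the public key baked into Rolepad's frontend bundle.
function firebaseSignInRequest(apiKey: string, email: string, password: string) {
  return {
    url: `https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=${apiKey}`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    // returnSecureToken: true asks Firebase for an ID token to use
    // as the bearer token on subsequent Rolepad API calls.
    body: JSON.stringify({ email, password, returnSecureToken: true }),
  };
}
```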
HTTP_PROXY breaks localhost fetch in Node. The container has HTTP_PROXY set. Node's undici respects it and routes all fetch calls — including http://localhost:3001 — through the proxy, which fails. Fix: NO_PROXY=localhost,127.0.0.1 set at the module level in the import script. Not obvious from the error message (fetch failed).
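The fix is a single module-level statement that must execute before the first fetch call. A sketch of the line used in the import script, preserving any NO_PROXY entries already set:

```typescript
// Must run at module load, before any fetch: exempt local addresses
// from the container's HTTP_PROXY so http://localhost:3001 is reached
// directly instead of being routed through the (failing) proxy.
process.env.NO_PROXY = [process.env.NO_PROXY, "localhost,127.0.0.1"]
  .filter(Boolean)
  .join(",");
```

Putting it at module level (rather than in a setup function) guarantees it runs before any import-time fetch.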
RxResume schema validation is opaque. /api/resume/import returns 500 {"issues": [...]} with a Zod error path if the JSON structure is wrong. Discovered two required fields this way: icon on each profile item, and custom: {} in sections. Neither is documented. Trial and error against the live API is the only reliable way to find them.
Slug uniqueness is the caller's problem. Re-running the create script with the same slug returns 400 ResumeSlugAlreadyExists. RxResume doesn't generate a fallback or append a suffix — you get a hard error and must retry with a new slug. Appending Date.now() to the slug is an ugly but effective workaround.
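The workaround amounts to normalizing the slug and appending a timestamp. A sketch — the function name and normalization rules are ours, only the Date.now() suffix trick comes from the actual script:

```typescript
// Generate a slug RxResume will accept without a 400
// ResumeSlugAlreadyExists collision on re-runs.
function uniqueSlug(base: string): string {
  const clean = base
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to dashes
    .replace(/^-|-$/g, "");      // trim boundary dashes
  // Timestamp suffix: ugly, but guarantees uniqueness across re-runs.
  return `${clean}-${Date.now()}`;
}
```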
The public resume link requires explicit visibility update. Newly created resumes default to private. The /r/<slug> public URL returns a login gate until PATCH /api/resume/:id { visibility: "public" } is called. This is not mentioned in job-ops's own documentation — it only matters if you want to share the resume outside of job-ops, but it's a footgun if you assume imported resumes are publicly accessible.
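The visibility flip is one PATCH. A sketch of the request — the endpoint path and { visibility: "public" } body match the observed v4 API, while the helper name and cookie-based auth header are assumptions (v4 uses email/password sessions):

```typescript
// Build the PATCH that makes a newly imported resume publicly visible
// at /r/<slug>. Without this, the public URL shows a login gate.
function publishResumeRequest(resumeId: string, sessionCookie: string) {
  return {
    url: `https://v4.rxresu.me/api/resume/${resumeId}`,
    method: "PATCH" as const,
    headers: { Cookie: sessionCookie, "Content-Type": "application/json" },
    body: JSON.stringify({ visibility: "public" }),
  };
}
```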
Services used alongside job-ops
Rolepad.com — Firebase-backed job tracker used as the data source. REST API (/api/opportunities, /api/opportunities/:id/notes, /api/email/reviews) is undocumented but straightforward. The scraper extracts 88 opportunities and 50 unreviewed email notes.
RxResume v4 (v4.rxresu.me) — Resume hosting and PDF generation. Created an account and imported a full resume from LinkedIn and CV data. Ultimately not wired to job-ops due to the v4/v5 mode mismatch described above.
LinkedIn (/in/gavmor) — Profile data source for resume content. Scraped via agent-browser after login: experience (10 roles), education, skills. The experience page required navigating to /details/experience/ — the main profile page renders a truncated list.