Case study
Grant proposal workflow automation
Your team was losing weeks to manual research, inconsistent drafts, and late file assembly. We replaced that with a guided system that streams each phase of the work live, keeps sources visible, and ends in a structured assessment you can download and review.

The problem
You needed one place to move from company context to a ranked opportunity set, then into a single narrative your leadership could trust. Spreadsheets, email threads, and one-off document versions were not enough once deadlines and compliance details stacked up.
The work splits into three strands that rarely stay aligned: understanding the organization and its science, turning portfolio documents into structured project records, and running discovery and scoring across public and private funding sources. Any gap in that sequence creates rework at the end.
What we built
We shipped a Next.js application connected to your grant intelligence API. The UI walks users through seven stages from context to delivery, with server-sent events so progress, logs, and partial results show up while jobs run.
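As a sketch of how the client side can consume that event stream, the snippet below parses each server-sent payload into a typed progress event before rendering it. The endpoint path and the event shape (`phase`, `message`, `pct`) are illustrative assumptions, not the actual API contract.

```typescript
// Hypothetical event shape for streamed progress updates.
type ProgressEvent = {
  phase: string;      // e.g. "discovery", "scoring"
  message: string;    // human-readable log line
  pct?: number;       // optional completion percentage
};

// Parse one SSE `data:` payload into a typed event, or null if malformed.
function parseProgress(data: string): ProgressEvent | null {
  try {
    const obj = JSON.parse(data);
    if (typeof obj.phase === "string" && typeof obj.message === "string") {
      return obj as ProgressEvent;
    }
    return null;
  } catch {
    return null;
  }
}

// Browser usage (assumed endpoint): stream events into a live log panel.
// const es = new EventSource("/api/research/stream");
// es.onmessage = (e) => {
//   const ev = parseProgress(e.data);
//   if (ev) console.log(`[${ev.phase}] ${ev.message}`);
// };
```

Validating each payload before rendering is what lets the UI skip malformed frames without tearing down the stream.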
- Company context research creates an organization record, runs grounded web research with traceable sources and confidence, and locks the next step until the profile is complete.
- Portfolio upload accepts a DOCX table, parses headers and rows, and streams extraction so you can watch each project land with its indication, stage, priority, and model confidence.
- The research pipeline runs historical funding analysis, multi-source opportunity discovery, and per-project alignment scoring with batch progress visible in the console.
- When generation finishes, the app pulls the assessment, renders markdown to HTML, applies your cover treatment, and returns a PDF through a dedicated export route built with Puppeteer.
- A separate editor path supports streamed drafting and revision of the long-form assessment body when you want human-in-the-loop editing on top of the automated pass.
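To make the scoring step above concrete, here is a minimal sketch of a per-project alignment score. The field names, the 1–5 priority scale, and the weights are assumptions for illustration; the production pipeline tunes its own weights and signals.

```typescript
// Illustrative record shapes; not the production schema.
type Project = { indication: string; stage: string; priority: number };
type Opportunity = { focusAreas: string[]; eligibleStages: string[] };
type Weights = { indication: number; stage: number; priority: number };

// Score in [0, 1]: a weighted blend of indication match, stage
// eligibility, and normalized project priority (assumed 1..5 scale).
function alignmentScore(p: Project, o: Opportunity, w: Weights): number {
  const indication = o.focusAreas.includes(p.indication) ? 1 : 0;
  const stage = o.eligibleStages.includes(p.stage) ? 1 : 0;
  const priority = (p.priority - 1) / 4; // map 1..5 onto 0..1
  const total = w.indication + w.stage + w.priority;
  return (w.indication * indication + w.stage * stage + w.priority * priority) / total;
}
```

A linear blend like this keeps the ranking explainable: each component is visible in the event log, so a reviewer can see why one opportunity outranked another.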
What you gain
You get a repeatable path from raw inputs to a versioned strategic assessment. Reviewers see the same section order every time because generation follows your template file instead of ad hoc prompts. Live feeds cut uncertainty because you can watch each phase update while jobs run, which matters when auditors or program officers ask what changed between versions.
Your operations team spends less time chasing attachments and more time validating facts. When something fails, the event log points to the failing phase, so you can retry or escalate without rebuilding the whole run.
What you should do next
If you want the same architecture for your portfolio, bring your assessment template, your source list, and one real portfolio file. We wire the API, tune scoring weights with you, and harden export for your brand and legal footer requirements.