Website Indexing, Live Fetch, and More Libraries
You can now index documentation websites directly — no GitHub repo required. fetch_doc falls back to live scraping when a URL isn't indexed yet. And free workspaces can now create up to 3 custom libraries instead of 1.
Website Indexing
Point Docfork at any documentation site and it will crawl and index the pages for search. Go to Libraries in your dashboard, create a new library, and pick Website as the source type. Paste the root URL and Docfork queues a crawl job — the same Jobs page that tracks GitHub indexing shows progress, lets you filter by status, and lets you retry failed runs.
Once indexed, your website library is searchable in Cabinets like any GitHub-sourced library.
Website libraries count toward the same 3-library quota as private GitHub repos.
Live Fetch for fetch_doc
When fetch_doc is called with a URL that isn't yet indexed, it now automatically fetches the page live and returns the content in the same shape as an indexed result. The response includes a source field — "indexed" or "live" — so your agent can tell whether content came from Docfork's index or a fresh scrape. Live fetches are rate-limited to 30 requests per minute per workspace.
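The source field makes it easy for an agent to budget its own live fetches client-side. Here is a minimal sketch of that pattern — the LiveFetchLimiter class and handle_fetch_doc helper are hypothetical names, and the response shape beyond the source field is assumed; only the "indexed"/"live" values and the 30-per-minute limit come from this release:

```python
import time
from collections import deque

class LiveFetchLimiter:
    """Client-side sliding-window limiter mirroring the documented
    30-live-fetches-per-minute-per-workspace limit. The server enforces
    the real quota; this just helps an agent back off before hitting it."""

    def __init__(self, max_requests=30, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()  # monotonic times of recent live fetches

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

def handle_fetch_doc(result, limiter, now=None):
    """Route a fetch_doc result. Only live results consume the budget;
    indexed results are free. The 'content' key is an assumption."""
    if result.get("source") == "live" and not limiter.allow(now):
        raise RuntimeError("live fetch budget exhausted; back off")
    return result["content"]
```

In practice you would call handle_fetch_doc on each response and catch the budget error to queue the URL for retry once the window clears, rather than hammering the live-fetch path.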
More Custom Libraries on the Free Tier
Free workspaces now get 3 custom libraries (up from 1). The quota is unified across private GitHub repos, org-scoped public repos, and website libraries — use them however fits your workflow.