# Site Sucker

Site Sucker is a simple Node.js tool to download an entire website for offline use.
It recursively crawls all internal pages, downloads assets (HTML, CSS, JS, images, fonts), and preserves the folder structure.
## Features

- Download entire websites for offline browsing
- Recursively crawl internal pages
- Download assets (images, CSS, JS, fonts, etc.)
- Preserve original folder structure
- Simple and lightweight — built with Node.js
- Tested with both custom-coded and WordPress sites
## Requirements

- Node.js (v18+ recommended)
- npm (comes with Node)
## Installation

- Clone or download this repository:

  ```bash
  git clone https://github.com/itsmepawansaini/site-sucker.git
  cd site-sucker
  ```

- Install dependencies:

  ```bash
  npm install
  ```

## Usage

- Open `index.js` and set the website URL you want to download:

  ```js
  const START_URL = "https://example.com/"; // Change this to your target site
  ```

- Run the script:

  ```bash
  node index.js
  ```

All downloaded files will be saved inside the `downloaded-site` folder.
The script keeps the original structure of the website.
## Output Example

If `START_URL` is set to:

```
https://example.com/
```

you'll get:

```
downloaded-site/
├── index.html
├── about/
│   └── index.html
├── contact/
│   └── index.html
└── assets/
    ├── css/
    ├── js/
    ├── images/
    └── fonts/
```
## Limitations

- The script only downloads pages from the same domain.
- JavaScript-generated content (AJAX, SPAs) may not be captured.
- For dynamic sites, you might need a headless browser like Puppeteer.
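For JavaScript-heavy sites, page fetching could be swapped for a headless-browser render along these lines. This is a sketch only: it assumes Puppeteer is installed separately (`npm install puppeteer`), and `fetchRenderedHtml` is a hypothetical name, not part of this project:

```javascript
// Sketch: fetch a page's HTML after client-side JavaScript has run.
// Assumes the `puppeteer` package is installed (npm install puppeteer).
async function fetchRenderedHtml(url) {
  const puppeteer = require("puppeteer"); // loaded lazily inside the helper
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait until the network is idle so AJAX-loaded content is present.
    await page.goto(url, { waitUntil: "networkidle0" });
    return await page.content(); // serialized DOM, including JS-injected markup
  } finally {
    await browser.close();
  }
}
```

The returned HTML could then be handed to the same link-extraction and saving logic used for static pages.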
## License

This project is open-source and available under the MIT License.
## Contributing

Contributions, issues, and feature requests are welcome.
Feel free to fork this repository and improve the script.