# Welcome

Using ScraperAPI is easy. Just send the URL you’d like to scrape to one of our APIs, along with your API key, and we will return the HTML of that page right back to you.

All ScraperAPI requests must be authenticated with an API key. Sign up to get one and include your unique API key with each request you send to us.

{% hint style="info" %}
If you haven’t signed up for an account yet, [**sign up for a free trial here with 5,000 free API credits**](https://dashboard.scraperapi.com/signup)!
{% endhint %}

You can use ScraperAPI to scrape web pages, API endpoints, images, documents, PDFs, or other files just as you would any other URL.

**Note:** *there is a 50MB request size limit.*

There are **seven ways** you can send requests (GET or POST) to ScraperAPI:

* Via our API endpoint `https://api.scraperapi.com`
* Via our Async API endpoint `https://async.scraperapi.com`
* Via our proxy port `http://scraperapi:APIKEY@proxy-server.scraperapi.com:8001`
* Via our Structured Data Endpoints `https://api.scraperapi.com/structured/`
* Via our DataPipeline service `https://datapipeline.scraperapi.com/api/projects`
* Via one of our [SDKs](https://docs.scraperapi.com/resources/sdks) *(only available for some programming languages)*
* Via our MCP Server - enable prompt-driven scraping by routing requests through [ScraperAPI’s MCP Server](https://docs.scraperapi.com/integrations/llm-integrations/mcp-server)

Choose the method that best fits your scraping needs.

{% hint style="warning" %}
**Important note:** regardless of how you invoke the service, we highly recommend setting a 70-second timeout in your application to get the best possible success rates, especially on hard-to-scrape domains.
{% endhint %}

***

In addition to our scraping APIs, we provide scalable solutions like [DataPipeline](https://docs.scraperapi.com/data-pipeline) for managing bulk jobs and scheduled tasks.
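As a quick illustration of the first option above, here is a minimal Python sketch that builds an authenticated request URL for the API endpoint. It assumes the standard `api_key` and `url` query parameters; the `scraperapi_url` helper name and the example target page are our own, not part of the API:

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # replace with the key from your dashboard


def scraperapi_url(target_url: str) -> str:
    """Build a ScraperAPI request URL that scrapes the given page."""
    # The API key and the page to scrape are passed as query parameters.
    query = urlencode({"api_key": API_KEY, "url": target_url})
    return f"https://api.scraperapi.com/?{query}"


url = scraperapi_url("https://example.com/")
print(url)
# Fetch it like any other URL, e.g. with the standard library
# (using the 70-second timeout recommended above):
#   import urllib.request
#   html = urllib.request.urlopen(url, timeout=70).read()
```

The proxy port works the same way conceptually, but you configure it as an HTTP proxy in your client instead of building a URL by hand.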
Our [Crawler](https://docs.scraperapi.com/scraperapi-crawler-v2.0) makes it easy to extract, follow, and scrape URLs across a target domain, and our [MCP Server](https://docs.scraperapi.com/integrations/llm-integrations/mcp-server) allows you to plug ScraperAPI directly into your LLM, making scraping as easy as writing a prompt.

Speaking of LLMs, we also [integrate with LangChain](https://docs.scraperapi.com/integrations/llm-integrations/langchain-integration), giving AI agents the ability to browse web pages or pull Google and Amazon search results with just a few lines of code. For no-code and low-code workflows, you can also use our [community n8n node](https://docs.scraperapi.com/integrations/automation-and-workflow-integrations/n8n-integration) to connect ScraperAPI directly into your automations.

Our [AI Parser](https://scraperapi.gitbook.io/ai-parser-closed-beta/) allows you to extract structured data from virtually any website using flexible, schema-based definitions.

These tools give you everything you need to scrape, scale, and power your projects with reliable data.