Actions
- Job Actions
Overview
The node integrates with a Google Maps Scraper API to manage scraping jobs that collect data from Google Maps based on specified search criteria. The Create Job operation allows users to start a new scraping job by defining keywords, location, and various parameters controlling the scope and behavior of the scraping process.
This node is beneficial for scenarios such as market research, lead generation, competitive analysis, or any use case requiring bulk extraction of business or place information from Google Maps. For example, a user might create a job to scrape coffee shops in New York City, specifying keywords like "coffee shop" and geographic coordinates to focus the search.
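For the coffee-shop example above, the request body sent to the API might be sketched as follows. The field names mirror the node properties documented below, but they are assumptions for illustration, not the API's documented schema:

```python
import json

# Hypothetical Create Job request body; field names follow the node's
# property list and are assumptions, not a documented contract.
payload = {
    "name": "Coffee shops in New York",
    "keywords": ["coffee shop"],
    "lang": "en",
    "zoom": 15,
    "lat": "40.7128",
    "lon": "-74.0060",
    "fast_mode": False,
    "radius": 10,      # km
    "depth": 1,
    "email": False,
    "max_time": 3600,  # seconds
    "proxies": [],
}

# The node serializes this to JSON before POSTing it to the API.
body = json.dumps(payload)
```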
Properties
| Name | Meaning |
|---|---|
| Job Name Mode | How the job name is set: "Manual" to enter it yourself, or "Auto-generate by Model" to derive it automatically from the keywords and location. |
| Job Name | (Required if Job Name Mode is Manual) The name for the scraping job, e.g., "Coffee shops in New York". |
| Keywords | (Required) One or more keywords to search for on Google Maps, e.g., "coffee shop". Multiple keywords can be added. |
| Language | Language code for the search results, e.g., "en" for English, "es" for Spanish, "fr" for French, etc. Defaults to "en". |
| Zoom Level | Map zoom level controlling the granularity of the search area, ranging from 1 (world view) to 20 (street level). Default is 15. |
| Latitude | Latitude coordinate for the center of the search area, e.g., "40.7128". Optional but recommended for location-specific searches. |
| Longitude | Longitude coordinate for the center of the search area, e.g., "-74.0060". Optional but recommended for location-specific searches. |
| Fast Mode | Boolean flag to enable fast mode scraping, which may speed up the scraping process at the potential cost of completeness or accuracy. Defaults to false. |
| Radius (km) | Search radius around the specified coordinates in kilometers. Minimum value is 1 km. Default is 10 km. |
| Depth | Search depth level indicating how many layers or pages of results to scrape. Minimum value is 1. Default is 1. |
| Include Email | Boolean flag indicating whether to scrape email addresses found in the results. Defaults to false. |
| Max Time (seconds) | Maximum allowed time for the scraping job to run, in seconds. Minimum is 60 seconds. Default is 3600 seconds (1 hour). |
| Proxies | Optional list of proxy server URLs to route scraping requests through, e.g., "http://proxy.example.com:8080". Multiple proxies can be specified to distribute load or avoid IP blocking. |
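The ranges in the table above (zoom 1-20, radius >= 1 km, depth >= 1, max time >= 60 s) can be checked before submitting a job. A minimal validation sketch, with illustrative field names:

```python
def validate_job_params(params: dict) -> list[str]:
    """Check the documented parameter ranges before creating a job.

    Returns a list of human-readable problems; an empty list means
    the parameters look valid. Field names are illustrative.
    """
    problems = []
    if not params.get("keywords"):
        problems.append("at least one keyword is required")
    if not 1 <= params.get("zoom", 15) <= 20:
        problems.append("zoom must be between 1 and 20")
    if params.get("radius", 10) < 1:
        problems.append("radius must be at least 1 km")
    if params.get("depth", 1) < 1:
        problems.append("depth must be at least 1")
    if params.get("max_time", 3600) < 60:
        problems.append("max_time must be at least 60 seconds")
    return problems
```

Running this check client-side surfaces the same errors the node would otherwise raise at request time.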
Output
The output JSON structure for the Create Job operation contains the full response from the Google Maps Scraper API representing the newly created job. This typically includes:
- id: Unique identifier of the job.
- name: The job name.
- keywords: Array of keywords used.
- lang: Language code.
- zoom, lat, lon, radius, depth: Parameters describing the search area and depth.
- fast_mode, email, max_time: Flags and limits applied.
- proxies: List of proxies used, if any.
- Additional metadata such as job status, creation date, and other API-provided details.
No binary data is output during job creation.
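A downstream node would typically read the job ID out of this response for later get, delete, or download operations. The sample response below is hypothetical; the exact fields and status values come from the API and may differ in practice:

```python
import json

# Hypothetical Create Job response body (fields and values illustrative).
raw = """{
  "id": "job_123",
  "name": "Coffee shops in New York",
  "keywords": ["coffee shop"],
  "lang": "en",
  "status": "pending"
}"""

job = json.loads(raw)
job_id = job["id"]  # keep this for later get/delete/download calls
```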
Dependencies
- Requires an API key credential for authenticating with the Google Maps Scraper API.
- The node expects the base URL of the API to be configured in the credentials.
- Network access to the Google Maps Scraper API endpoint.
- Optional proxy servers can be configured per job to route requests.
Troubleshooting
- Invalid Base URL Error: If the configured API base URL is invalid or malformed, the node will throw an error. Ensure the base URL is correctly set without trailing slashes or typos.
- Missing Required Parameters: The node requires certain parameters like keywords and job name (if manual mode) to be provided. Omitting these will cause errors.
- Job ID Required: For operations other than create (e.g., get, delete, download), a valid job ID must be supplied; otherwise, an error is thrown.
- API Request Failures: Network issues, invalid credentials, or API downtime can cause request failures. Check API key validity and network connectivity.
- Proxy Configuration Issues: Incorrect proxy URLs or unreachable proxies may cause scraping failures or slowdowns.
- Timeouts: Setting max time too low may cause jobs to terminate prematurely; ensure it is sufficient for the expected scraping workload.
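The base-URL advice above (no trailing slashes, no typos) can be enforced with a small normalization step before any request is built. A sketch using only the standard library; the exception message is illustrative:

```python
from urllib.parse import urlparse

def normalize_base_url(base_url: str) -> str:
    """Strip trailing slashes and reject obviously malformed base URLs."""
    url = base_url.strip().rstrip("/")
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"Invalid base URL: {base_url!r}")
    return url
```

Normalizing once in the credentials layer avoids double-slash paths (e.g. "https://api.example.com//jobs") that some servers reject.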
Links and References
- Google Maps Scraper API Documentation (Replace with actual URL if available)
- n8n Documentation on Creating Custom Nodes
- General info on Google Maps Platform