Has anyone had a similar problem and knows how to solve it?
]]>title as a required field. However, it is not actually required, and including it in a request returns a 400 error.
Open API Spec excerpt, note the last 2 lines:
post:
  summary: Add a new person
  description: 'Adds a new person. If the company uses the [Campaigns product](Campaigns in Pipedrive API), then this endpoint will also accept and return the `marketing_status` field.'
  x-token-cost: 5
  operationId: addPerson
  tags:
    - Persons
  security:
    - api_key: []
    - oauth2:
        - 'contacts:full'
  requestBody:
    content:
      application/json:
        schema:
          required:
            - title
Including title in the request produces the following response:
{
  "code": "ERR_SCHEMA_VALIDATION_FAILED",
  "error": "Validation failed: title: Parameter 'title' is not allowed for this request",
  "success": false
}
The dealFields endpoint is schema introspection, not a direct map of how the actual deal payload is structured. The value field with only currency as a subfield is Pipedrive's internal representation for field-management purposes, but in actual deal GET/POST/PATCH calls the payload is flat: value as the numeric amount and currency as the ISO code, the same as in v1.
For constructing and reading deal objects, go by what the deal endpoints actually return and expect. The migration guide and the deal API docs are the right reference there. The dealFields response is useful for discovering which custom fields exist and their types, but the subfields representation for value doesn't translate into how you build requests.
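As a sketch of the difference (field names per the migration guide; the title and values are placeholders):

```python
# Flat deal payload as the deal GET/POST/PATCH endpoints expect it:
# `value` is the numeric amount, `currency` is the sibling ISO code.
deal_payload = {
    "title": "ACME renewal",
    "value": 1500,          # numeric amount, not a nested object
    "currency": "EUR",      # ISO 4217 code as a separate top-level field
}

# The nested shape the dealFields introspection response suggests,
# which the deal endpoints do NOT accept:
introspection_shape = {
    "title": "ACME renewal",
    "value": {"value": 1500, "currency": "EUR"},  # rejected by deal endpoints
}
```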
Definitely looks like a docs inconsistency worth flagging to Pipedrive directly.
]]>The concurrent retry problem is the main thing to address.
When multiple processes each get a 429 and independently schedule their own backoff, the retries converge back at roughly the same time and you get another wave. A centralized async queue where all Pipedrive calls for a given API token go through one worker fixes this cleanly. That worker reads the Retry-After header from the 429 response and pauses the entire queue, not just one thread.
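A minimal sketch of that single-worker queue, assuming an injectable `send` callable rather than any real Pipedrive SDK (all names here are illustrative):

```python
import queue
import threading
import time

class PipedriveQueue:
    """One worker per API token: serializes all calls and pauses the whole
    queue on a 429, honoring the Retry-After header."""

    def __init__(self, send):
        self.send = send            # callable(request) -> (status, headers)
        self.q = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def submit(self, request):
        self.q.put(request)

    def _run(self):
        while True:
            req = self.q.get()
            status, headers = self.send(req)
            if status == 429:
                # Pause the entire queue, not just this one request.
                time.sleep(float(headers.get("Retry-After", 2)))
                self.q.put(req)     # re-enqueue the rejected call
            self.q.task_done()
```

Because every call for the token funnels through one worker, concurrent processes can no longer converge into a synchronized retry wave.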
One thing that may be contributing to bursts at lower-than-expected volume: Pipedrive moved to a token-based rate limiting model (see the Rate limiting docs), where the daily budget is shared across the whole company account, but the burst limit is per API token on a rolling 2-second window. If your 4 or 5 sequential calls per form submission are hitting that window across concurrent processes on the same token, you exceed the burst limit well before the daily budget looks concerning in your logs.
For the partial state issue, store a small checkpoint record per submission before the first API call (person_id, deal_id, status) and update it as each step completes. On retry, check the record and skip steps already done.
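A sketch of that checkpoint pattern, with an in-memory dict standing in for a real datastore and hypothetical step names:

```python
# Checkpoint record per submission, created before the first API call and
# updated after each step. Use a real table/row instead of this dict.
CHECKPOINTS = {}

STEPS = ["person", "deal", "note"]   # the dependent call sequence

def sync_submission(submission_id, actions):
    """`actions` maps step name -> callable(record) returning the created ID.
    On retry, steps already recorded in the checkpoint are skipped."""
    record = CHECKPOINTS.setdefault(submission_id, {"status": "pending"})
    for step in STEPS:
        if step in record:           # already completed on a previous attempt
            continue
        record[step] = actions[step](record)   # may raise on a 429
    record["status"] = "done"
    return record
```

If the deal step fails mid-sequence, the person step's result is already checkpointed, so the retry resumes from the deal instead of creating a duplicate person.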
At Stacksync we do the same when chaining dependent Pipedrive calls, since a sequence left partially done without checkpointing turns into a reconciliation problem at any real volume.
]]>With the app https://paipdrive.com/ you can scan business cards directly into your pipeline, and you can dictate or analyse emails to automatically convert them into a deal, contact, org, and task in seconds.
You will need your API keys for Pipedrive and Gemini to process the information. It works amazingly well on mobile devices, and you can ask questions in your own words about your pipeline.
Also available as extension for Chrome: https://chromewebstore.google.com/detail/paipdrive-ai-email-to-crm/ocdmcmmdicadbpmklphkacbhmalnfoho
If you need to know more, please let me know.
Aleks
Make (formerly Integromat) has notified us that the Pipedrive API is being migrated from v1 to v2. Version 1 is scheduled to be discontinued this summer.
There is a “Get a user” module in Make, but we could not find any information in the Pipedrive documentation about the corresponding API endpoint. At the same time, this module is available in Make and is currently working.
In addition, judging by the Users documentation
https://developers.pipedrive.com/docs/api/v1/Users
many user-related endpoints simply do not appear to exist in v2. This raises the question of how these data are supposed to be retrieved going forward. Will some v1 endpoints remain available, or is there another recommended way to access this information?
Could you please clarify how this API endpoint should be handled in this case?
]]>The note field just doesn't show up in the webhook payload on change events, and it's not in the previous object either, so there's really no way to know it changed from the webhook alone.
The workaround we use at Stacksync for our Pipedrive connector is basically what you described: on any activity.change event, do a follow-up GET /api/v1/activities/{id} call to pull the full activity including the note. Not ideal, but it works. You can keep it cheap by only calling back when the webhook fires for activities you actually care about (filter by type or deal_id before making the API call).
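A sketch of that callback pattern, with the HTTP call injectable for testing; the `WATCHED_TYPES` filter and helper names are illustrative, not part of any SDK:

```python
import json
import urllib.request

API_BASE = "https://api.pipedrive.com"   # plus your api_token query param

WATCHED_TYPES = {"call", "meeting"}       # only call back for these types

def handle_activity_webhook(payload, fetch=None):
    """On activity.change, re-fetch the full activity to recover fields
    (like `note`) missing from the webhook body."""
    current = payload.get("current") or {}
    if current.get("type") not in WATCHED_TYPES:
        return None                        # skip the extra API call
    activity_id = current["id"]
    if fetch is None:
        def fetch(aid):
            url = f"{API_BASE}/api/v1/activities/{aid}"
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)["data"]
    return fetch(activity_id)              # full activity, note included
```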
Worth checking if you’re on webhooks v1 or v2 too. The v2 payload dropped a bunch of fields that v1 used to include, similar complaints have come up for organization fields and product fields. Pipedrive’s official guidance is “query the API if a field is missing from v2” ( Breaking change: Webhooks v2 will become the new default version ) which is a bit frustrating when you can’t even tell the field changed.
Might be worth posting this under the Feedback category too so the Pipedrive team tracks it as a feature request.
]]>The /v1/deals/collection response was rather similar to the new /v2/deals endpoint in the sense that it did not contain related objects and used cursor pagination (while /v1/deals contained a lot of extra data and used offset pagination). It was only available to a subset of users, though (“Please note that only global admins (those with global permissions) can access this endpoint. Users with regular permissions will receive a 403 response.”)
These were the documented query parameters for /v1/deals:
user_id (INTEGER): If supplied, only deals matching the given user will be returned. However, filter_id and owned_by_you take precedence over user_id when supplied.
filter_id (INTEGER): The ID of the filter to use.
stage_id (INTEGER): If supplied, only deals within the given stage will be returned.
status (STRING; default: all_not_deleted; values: open, won, lost, deleted, all_not_deleted): Only fetch deals with a specific status. If omitted, all not-deleted deals are returned. If set to deleted, deals that have been deleted up to 30 days ago will be included.
start (INTEGER; default: 0): Pagination start.
limit (INTEGER): Items shown per page.
sort (STRING): The field names and sorting mode separated by a comma (field_name_1 ASC, field_name_2 DESC). Only first-level field keys are supported (no nested keys).
owned_by_you (NUMBER): When supplied, only deals owned by you are returned. However, filter_id takes precedence over owned_by_you when both are supplied.
While some v1 endpoints might work without the api prefix, it is not the recommended way.
NB! v1 Persons endpoint has been deprecated since April 2025 so please switch to /api/v2/persons to avoid your integration breaking in the future.
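For completeness, the /v1/deals offset pagination (start/limit) can be sketched as a loop; `get_page` stands in for the real HTTP call, and the `more_items_in_collection` flag follows the v1 pagination metadata under `additional_data`:

```python
# Iterate all deals via v1 offset pagination. A real `get_page` would call
# https://api.pipedrive.com/v1/deals?start=...&limit=...&api_token=...
def iter_deals(get_page, limit=500):
    start = 0
    while True:
        page = get_page(start=start, limit=limit)
        for deal in page["data"] or []:     # data may be null on empty pages
            yield deal
        more = (page.get("additional_data", {})
                    .get("pagination", {})
                    .get("more_items_in_collection"))
        if not more:
            break
        start += limit                      # advance the offset
```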
]]>We are using the API v1 for the deals entity but we are not able to find the documentation as it has been moved to v2.
We understand that, but due to some PROD issues, we need to figure out:
1. How can we filter by date in the v1/deals endpoint?
2. Is there any difference between the data retrieved from these 2 endpoints?
v1/deals
v1/deals/collection
Thanks.
]]>Sharing notifications are enabled for the app, but there are still no webhook triggers when either a user installs the app or the admin removes users from app sharing.
Thank You
]]>Thanks for your reply.
The isShared flag is actually from Pipedrive itself; it is not something I invented. I thought of going with your suggestion of periodically checking against the app users, but the problem is that, since the admin can select which users to give access to (up to the number of licences they have bought), I cannot find out which users those are.
]]>Feel free to shoot me an email: [email protected]
]]>The V2 Migration Guide shows deals as having a field value and a field currency, which seem to be correct, since they are returned when doing a GET request for all deals in Postman. The API Docs for getting Deals and for adding deals show these fields as well.
However, the API Docs for getting all Deal Fields show an example response where there is a field with a field_code of value with subfields value and currency.
I am wondering why the deal fields don't just consist of a value and a currency field. This is actually how it was in the V1 API (see Postman). When you actually request the deal fields, you get a field with field_code set to value, but the subfields array only contains an object for the currency, which differs from what is shown in the API Docs.
In my opinion, the deal fields should just have currency and value as different fields just like how it was in the V1 API.
For the pricing and timeline side, this forum is really more for technical questions. If you’re looking for someone to build and maintain it, Stacksync handles this kind of external API to Pipedrive sync as part of their workflow automation, or Pipedrive’s partner directory has agencies that do custom CRM integrations.
]]>I am seeking guidance from the Pipedrive developer community on best practices for handling API rate limits in high-concurrency website integrations. Specifically, I would appreciate advice on implementing reliable throttling strategies, designing idempotent workflows that can safely resume after partial failures, and minimizing the number of API calls per transaction without sacrificing data integrity.
Is there anyone who can guide me?
]]>I am currently facing a persistent and highly disruptive issue with my website’s integration with the Pipedrive API, specifically related to intermittent 429 rate limit errors that are causing contact synchronization failures. The website is designed to automatically create or update person and deal records in Pipedrive whenever users submit forms or update their profiles. While the integration works correctly under low traffic conditions, during moderate to high usage periods the API begins returning HTTP 429 responses, which results in failed synchronization attempts. The issue is not constant but occurs in bursts, making it difficult to predict or manage effectively. As a result, some user submissions are not properly reflected in Pipedrive, leading to incomplete CRM data and operational inconsistencies.
The core problem appears to be related to how the website batches and sends API requests. Each form submission can trigger multiple sequential API calls, including checking if a contact already exists, creating or updating a person record, attaching notes, and optionally creating or updating a deal. Although these calls are logically structured and executed in order, the cumulative number of requests during peak usage seems to exceed the rate limits enforced by Pipedrive. What complicates matters is that the 429 errors sometimes occur even when the request volume does not appear excessively high based on my traffic logs, suggesting that the rate limit may be calculated per API token, per company account, or across multiple concurrent processes in ways I may not fully understand.
I have attempted to mitigate the issue by implementing basic retry logic with exponential backoff when a 429 response is detected. While this reduces immediate failures, it does not fully resolve the problem because multiple concurrent processes may still retry simultaneously, effectively compounding the load and triggering additional rate limit responses. Additionally, the retry mechanism introduces delays in processing, which creates a lag between user activity on the website and CRM updates in Pipedrive. In some cases, queued retries eventually fail after reaching maximum retry attempts, resulting in permanent data discrepancies that require manual reconciliation.
Another complicating factor is that some API calls depend on the results of previous calls within the same workflow. For example, creating a deal requires a valid person ID, and attaching notes requires confirmation that both the person and deal records exist. When a 429 error interrupts the sequence, the workflow may be left in a partially completed state. This creates inconsistencies where a person record may exist without the associated deal or notes, or vice versa. Ensuring atomicity across multiple dependent API requests has proven challenging under rate-limited conditions, especially when processing multiple user submissions concurrently.
Monitoring and logging have helped identify the 429 responses, but Pipedrive’s rate limit headers and documentation leave some ambiguity regarding the optimal request pacing strategy. It is unclear whether I should implement stricter global request throttling, queue all API interactions through a centralized worker process, or restructure the synchronization logic to reduce the number of calls per submission. For example, I am considering whether certain operations can be consolidated into fewer API calls or whether webhooks could be leveraged more effectively to reduce outbound request frequency. However, I want to ensure that any architectural changes align with Pipedrive’s intended usage patterns and rate limit policies.
I am seeking guidance from the Pipedrive developer community on best practices for handling API rate limits in high-concurrency website integrations. Specifically, I would appreciate advice on implementing reliable throttling strategies, designing idempotent workflows that can safely resume after partial failures, and minimizing the number of API calls per transaction without sacrificing data integrity. Any recommendations on queue-based processing, batching techniques, or architectural patterns that ensure consistent synchronization with Pipedrive under varying traffic loads would be extremely valuable. My goal is to eliminate intermittent 429 errors and ensure that every user interaction on my website is accurately and reliably reflected in Pipedrive without manual intervention. Sorry for long post!
]]>If your app is primarily about logging WhatsApp conversations on deals/contacts, one workaround is switching to the Notes API (POST /v1/notes) to attach message content directly to deals or persons. You lose the Messaging Inbox UI but at least the conversation history stays linked to the right records. For the actual WhatsApp send/receive you’d keep that on your own backend through the WhatsApp Business API and just push the data into Pipedrive via REST.
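A sketch of that Notes API workaround; the helper name is hypothetical, `post` is injectable for testing, and the real call needs your api_token:

```python
import json
import urllib.request

def add_whatsapp_note(api_token, deal_id, text, post=None):
    """Attach WhatsApp message content to a deal via POST /v1/notes.
    The payload links the note to a record via deal_id (or person_id)."""
    payload = {"content": text, "deal_id": deal_id}
    if post is None:
        def post(body):
            req = urllib.request.Request(
                f"https://api.pipedrive.com/v1/notes?api_token={api_token}",
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    return post(payload)
```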
We’ve had to work around similar CRM API deprecations at Stacksync and the pattern is usually the same: move the messaging layer to your own infrastructure and use the CRM purely as the data store. Not ideal but more stable than depending on endpoints that keep getting pulled.
]]>Your isShared flag workaround is probably the most reliable path for now. If you can't get sharing changes pushed to you in near real time, you could set up a periodic check against the app users endpoint and diff against your last known state. Not elegant, but it's predictable.
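The periodic diff can be as small as this; `previous_ids` would come from your last stored snapshot, and `current_ids` from whatever endpoint exposes per-user app access for you (e.g. the isShared flag):

```python
# Compare the last known set of app users against the current one.
def diff_app_users(previous_ids, current_ids):
    previous, current = set(previous_ids), set(current_ids)
    return {
        "granted": sorted(current - previous),  # newly shared users
        "revoked": sorted(previous - current),  # access removed
    }
```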
The closest you can get is that API-created records (deals, contacts, etc.) will still trigger existing automations if they match the conditions. But there's no way to query what automations exist or what rules they have.
If you need an audit of your current automations the only option right now is the list view in the UI (https://support.pipedrive.com/en/article/automations-list-view) and manually documenting them.
]]>…<br> tags and intentional white space to…
]]>Solutions such as Rocket.Chat, Sendbird, and Apphitect are great if you want a self-hosted base that you can customize. Even if you build from scratch, it's worth studying what these provide as baseline features.
This comparison list https://www.trustfirms.com/best-group-messaging-apis would be useful for exploring.
]]>Is this intentional? Is there any way to preserve white-space / newlines in notes on activities?
]]>I’m unable to reproduce the issue at the moment since my customer changed their field configuration and the issue is no longer happening.
This would have been significantly easier to diagnose if the API identified the fields that were causing validation errors. Frankly, it’s quite surprising that this information is not already returned. I have already spent hours testing fields one-by-one trying to figure out which field is causing an issue. Fixing this seems like an urgent priority for your developer community.
I am considering disabling my Pipedrive integration simply because it is not worth the frustration.
]]>Why isn’t there such an option? And is it possible to add it if such a route already exists in the API?
]]>The person_id field is now read-only. It is set indirectly by adding a primary participant. The simplest way to set it is to use "participants": [ { "person_id": 1, "primary": true } ].
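As a sketch, a full activity payload using participants to set the person indirectly (JSON shown as a Python dict; subject, type, and IDs are placeholders):

```python
# person_id on the activity is read-only; the participant marked
# "primary": true is what determines it.
activity_payload = {
    "subject": "Intro call",
    "type": "call",
    "participants": [
        {"person_id": 1, "primary": True},   # becomes the activity's person_id
        {"person_id": 2, "primary": False},  # additional participant
    ],
}
```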
Thank you for bringing this to our attention, we will look into providing more explicit error responses.
In this case in particular, it seems that you are providing a non-string value to 3 short text (‘varchar’ type) custom fields. Are you perhaps passing the string wrapped in a { value: … } object for these fields instead of a string?
Best regards,
Andreas
]]>We have recently noticed in the Leads documentation that the get all leads method (Get all leads) does not include archived leads, and we now need to consume them via a different method (Get all archived leads).
In order to process the archived leads, we use the Recents method, but there we can't see archived leads in the list of available item types (although we can filter by leads even though it's not included in the documentation).
Our question is: is it possible to filter the data returned from the Recents method by "archived leads" in any way?
Thanks.
]]>I have an application that uses Pipedrive’s App Sharing feature, and I have enabled this for my private development app. Currently, I’m facing an issue: I need to know which users have been assigned access to the app by the Admin within the organization.
According to the documentation, whenever a user installs the app or an admin removes the app from a user, a callback API call should be triggered. However, in my case, neither of these events is triggering any API calls.
For now, I am using the isShared flag to determine which users have access to the app and which do not.
If anyone has encountered this before, I would greatly appreciate any guidance or suggestions. Thank you in advance!
The API response is HTTP 400 with the following body:
{"success":false,"error":"Validation failed: custom_fields: Expected \u0027string\u0027 as short text custom field value, Expected \u0027string\u0027 as short text custom field value, Expected \u0027string\u0027 as short text custom field value","code":"ERR_SCHEMA_VALIDATION_FAILED"}
I am not sending any null values or empty strings.
Tyler
]]>Effective from: April 1st, 2026
See the full post in our Changelog
]]>{
  "success": true,
  "data": [
    {
      "field_name": "ID",
      "field_code": "id",
      "field_type": "int",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Name",
      "field_code": "name",
      "field_type": "varchar",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Phone",
      "field_code": "phones",
      "field_type": "phone",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Email",
      "field_code": "emails",
      "field_type": "varchar",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Person created",
      "field_code": "add_time",
      "field_type": "date",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Update time",
      "field_code": "update_time",
      "field_type": "date",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Organization",
      "field_code": "org_id",
      "field_type": "org",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    },
    {
      "field_name": "Owner",
      "field_code": "owner_id",
      "field_type": "user",
      "options": null,
      "subfields": null,
      "is_custom_field": false,
      "is_optional_response_field": false
    }
  ]
}
For those who’ve evaluated or deployed tools like Teams, Slack, Zoom, Zoho Cliq, MirrorFly, Apphitect, or Troop Messenger, what criteria made the biggest difference at scale—AI usefulness, security, integrations, or long-term cost?
Looking for lessons learned from enterprise rollouts.
]]>Hi! The changelog endpoints are under the recents:read OAuth scope. Please try that.
Works. Thank you, Siim.
]]>We need to set up the Pipedrive app in our Tenant, and for that we need some details, such as the app ID, API permissions, object ID…
The best thing for us would be to receive full documentation on how to set up Pipedrive step by step through a service principal in our Tenant.
We require the configuration prerequisites in App Registrations, not in Enterprise Applications:
we do not want to set up an enterprise app; we want an app registration in our tenant instead.
Thank you for your help.
]]>Ever since the issue "Api.pipedrive.com issue with weak key strength" was resolved, the module worked perfectly.
Now one of our customers ran into a problem.
I request contacts via "https://"+ServerIP+"/v1/persons?start="+Offset+"&limit=500&api_token="+APIKey; in packs of 500, requesting the next 500 by increasing the offset.
One of our customers seems to have some kind of special character in one of the contacts.
In this customer's case the JSON parse now fails with an Unterminated string error:
org.json.JSONException: Unterminated string at 690552 [character 690553 line 1]
I went and looked at the JSON file, and the string looked normal except for a "\u2013" in that specific string:
"Company \u2013 Name"
\u2013 is the en dash "–",
so the value would read "Company – Name".
Despite it looking correct, I had the customer remove the "\u2013" from the string, and the JSON parsing then continued a bit further:
org.json.JSONException: Unterminated string at 692007 [character 692008 line 1]
And it is once again terminated. I don't know what kind of character caused it the second time; I'm awaiting the full JSON file, since I cannot access the customer's PBX myself.
Looking through the forum I haven't found anyone with parsing issues of this kind, so I'm wondering where this is coming from.
The issue started on the 26th of November last year; before that, the module had been running fine for nearly a year.
I don't know how to go about debugging this one.
I'm thinking the terminations at the \ escape character are a red herring rather than the actual issue, and that the parsing stops because of an earlier problem.
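One cheap way to test the "earlier issue" hypothesis is to compare the declared Content-Length with the number of bytes actually received before parsing, since an "Unterminated string" hundreds of kilobytes in often means a truncated read rather than a bad character. A sketch (Python for illustration, though the module above appears to use org.json in Java; the helper name is hypothetical):

```python
import json

def parse_checked(body_bytes, declared_length=None):
    """Parse a JSON response body, but fail loudly if the body is shorter
    than the Content-Length the server declared."""
    if declared_length is not None and len(body_bytes) != declared_length:
        raise IOError(
            f"body truncated: got {len(body_bytes)} of {declared_length} bytes"
        )
    return json.loads(body_bytes.decode("utf-8"))
```

If the truncation check fires, the fix is in the HTTP read loop (read until Content-Length bytes or end-of-chunks), not in the contact data.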
But when I use the same endpoint authenticating with OAuth, I get 403 errors.
I have specified deals:read as the OAuth scope. But maybe I need different permissions for this to work. Or is this endpoint just not allowed under OAuth? And if it isn't allowed under OAuth, where do I find documentation on what works with OAuth and what is forbidden?
]]>If you want faster time-to-market, a white-label solution can help. I personally use Mirrorfly white-label software — it’s secure, customizable, and already supports real-time messaging, offline sync, presence, notifications, and scalability, which saves a lot of development time.
It really comes down to whether you want to build the messaging infrastructure yourself or focus on your product and UX.
]]>The Leads API will remain on v1 for the foreseeable future. There are no plans for a v2 version.
]]>It is currently possible for the headers to be missing for a few requests in a short period of time after a long period of inactivity in your company until our internal cache gets populated.
Could you explain in more detail how your internal cache works? How long does it take to be populated, etc.?
However, it would be great if Pipedrive provided the x-daily-ratelimit-token-limit and x-daily-ratelimit-token-remaining headers in every request, without relying on any caching.