EveryAction / NGP VAN builds and maintains CRMs in conjunction with large, distributed client partners (NGP VAN products are branded as "EveryAction" outside of the political market). While VAN does not own any voter or client data, we play a role in supporting campaigns and vendors with their voter file integrations.
As a developer, it's your responsibility to understand the system architecture behind these platforms and to communicate to your users how the integration you've built interacts with their systems. The voter data in VAN may be managed differently than person-level data you've worked with before, so please don't hesitate to reach out to [email protected] with any questions.
These client platforms and the NGP VAN API support team are "shared resources" across client ecosystems. During high-volume times like election get-out-the-vote (GOTV) periods, database usage and support tickets increase exponentially. Our goal is to ensure that all integration partners and campaigns can sync data quickly and easily, but it's important for vendors to utilize these resources efficiently.
An outline of how high-volume applications should integrate with VAN is below. Though written with high-volume applications in mind, the methods outlined can be used by vendors of any size.
A vendor integrating with VAN should send data continuously, as close to real time as possible (unless using a bulk endpoint or custom file loading system), with throttling limits in place that meet the guidelines below.
Applications should have the ability to replay API calls; if not, the vendor should have a backup plan, such as providing the client with a CSV to bulk upload via the front end if needed.
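One way to satisfy the replay requirement is to persist any failed API calls to a durable log and re-send them later, with a CSV export as the manual fallback. The sketch below is a minimal illustration under those assumptions; the file names and helper functions are hypothetical, and `send` stands in for whatever HTTP client the vendor uses.

```python
import csv
import json
from pathlib import Path

# Hypothetical durable log of failed calls (one JSON object per line).
FAILED_LOG = Path("failed_calls.jsonl")

def record_failure(endpoint, payload):
    """Append a failed call to the log so it can be replayed later."""
    with FAILED_LOG.open("a") as f:
        f.write(json.dumps({"endpoint": endpoint, "payload": payload}) + "\n")

def replay_failures(send):
    """Re-send logged calls via send(endpoint, payload); keep any that still fail."""
    if not FAILED_LOG.exists():
        return
    remaining = []
    for line in FAILED_LOG.read_text().splitlines():
        call = json.loads(line)
        try:
            send(call["endpoint"], call["payload"])
        except Exception:
            remaining.append(line)  # still failing; keep for the next replay pass
    FAILED_LOG.write_text("\n".join(remaining) + ("\n" if remaining else ""))

def export_failures_to_csv(path="fallback.csv"):
    """Manual fallback: flatten logged payloads into a CSV for front-end bulk upload."""
    rows = [json.loads(l)["payload"] for l in FAILED_LOG.read_text().splitlines()]
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

The replay pass and the CSV export read from the same log, so a vendor can attempt automated replay first and only fall back to handing the client a file if calls keep failing.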
Phone or SMS applications must use "phoneId" when syncing any disposition data (wrong number, disconnects, canvassed, opt-in status, etc.) as shown below. More information is available here.
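For illustration, here is a sketch of a canvass-result payload that carries `phoneId` so the disposition attaches to the specific phone record rather than just the person. The field names assume the `POST /people/{vanId}/canvassResponses` payload shape; every ID value below is a placeholder, not a real code from a live database.

```python
import json

def build_phone_canvass_payload(phone_id, result_code_id, date_canvassed):
    """Build a canvass-response body that includes phoneId in its context."""
    return {
        "canvassContext": {
            "contactTypeId": 1,        # placeholder: phone contact type
            "inputTypeId": 11,         # placeholder: API input type
            "dateCanvassed": date_canvassed,
            "phoneId": phone_id,       # ties the disposition to this exact phone
        },
        "resultCodeId": result_code_id,  # placeholder: e.g. wrong number / disconnected
    }

payload = build_phone_canvass_payload(98765, 130, "2024-10-01")
print(json.dumps(payload, indent=2))
```

The key point is that `phoneId` lives inside `canvassContext`: without it, VAN cannot know which of a person's phones the disposition applies to.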
- If an application is not throttling and the volume is impacting end-user functionality, VAN will freeze the application's key until the problem is resolved. It is the vendor's responsibility to be able to replay API calls later or to have a manual solution for their client.
Vendors are expected to utilize API endpoints efficiently and will be notified if they exceed per-second call limits. While exceeding recommended throttling limits may not impact the end user today, it will during peak volume, when dozens of other vendors are utilizing the same databases across clients. If a vendor has questions about how best to optimize their calls, they should reach out to [email protected] and we would be happy to help!
Our hope is that by proactively reaching out to vendors who are having small volume spikes now, we can prevent bottlenecks during high-volume periods. During those periods, support resources will be allocated based on the greatest needs as dictated by our clients. Additionally, vendors should do "dry runs" prior to GOTV in consultation with NGP VAN.
If an application syncs data incorrectly and a backend edit is required, it is the responsibility of the client to reach out with specifics (vendors cannot request backend edits or database rollbacks on behalf of their clients).
No more than 3 concurrent requests per API key/client, throttled by duration.
Though this is considered a more "advanced" method of throttling, meant for high-volume vendors specifically, it can be used by vendors of any size. In general, a partner should be able to rely on the actual duration of each API call as a reasonable proxy for how fast they can operate. So if a partner limits their total number of concurrent requests, throughput will 'automatically' adjust to the cost of the API calls they are using.
Three concurrent workers can get through 60 requests/sec for something that takes 50ms (like posting canvass results) but closer to 6 requests/sec for something that takes 500ms (like match and store person). And if the system is under pressure, a high-volume integration that is set up this way will reduce its load on us automatically. If for some reason a single client requires more than one API key for the same vendor, the limitation should be applied across all of those keys.
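The concurrency cap described above can be sketched with a bounded semaphore: at most three requests are ever in flight, and each call's real latency sets the pace, so slower endpoints (or a database under pressure) automatically yield fewer requests per second. `call_api` is a stand-in for the vendor's actual HTTP client.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 3  # per API key/client, per the guideline above
_slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def throttled(call_api, item):
    # Blocks whenever 3 requests are already in flight; release happens
    # only when the call returns, so duration drives the effective rate.
    with _slots:
        return call_api(item)

def run_batch(call_api, items):
    """Process items with at most MAX_CONCURRENT requests in flight."""
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
        return list(pool.map(lambda item: throttled(call_api, item), items))
```

Because the limit is on in-flight requests rather than on a fixed calls-per-second number, the same code stays within bounds whether it is posting fast canvass results or slow match-and-store calls.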
Assuming normal database utilization, if a partner were using this method of throttling across 50 states, they could post canvass results for more than 85 million records in a 24-hour period.
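As a back-of-envelope check of that claim: three workers on a ~50 ms call sustain about 60 requests/sec per key, and with one key per state that compounds to well over 85 million canvass results per day.

```python
WORKERS = 3        # concurrent requests per key
CALL_MS = 50       # ~50 ms per canvass-result post
KEYS = 50          # one API key per state

per_key_per_sec = WORKERS * 1000 / CALL_MS      # 60 requests/sec per key
per_day = per_key_per_sec * KEYS * 24 * 60 * 60
print(int(per_day))                             # 259200000
```

That is roughly 259 million records per day under ideal conditions, so the "more than 85 million" figure leaves plenty of headroom for slower calls and real-world overhead.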
No more than 2 calls per second per API key for /findOrCreate and no more than 5 calls per second for other endpoints.
These are safe numbers for essentially every standard use case, and it's generally fine for developers to bump from 5 calls per second to 10, with the exception of match and store person endpoints such as findOrCreate, which are more intensive. We may ask a specific developer to throttle back down to 5 during a high-volume period.
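A minimal sketch of this "simple" per-second throttling, assuming the limits above (2/sec for `/people/findOrCreate`, 5/sec elsewhere): before each call, the client sleeps just long enough to keep the per-endpoint spacing. A real integration would wrap its HTTP layer with something like this.

```python
import time

LIMITS = {"/people/findOrCreate": 2}  # calls per second for intensive endpoints
DEFAULT_LIMIT = 5                     # calls per second for everything else

class SimpleThrottle:
    def __init__(self):
        self._next_allowed = {}  # endpoint -> earliest monotonic time for next call

    def wait(self, endpoint):
        """Block until the next call to `endpoint` is within its rate limit."""
        interval = 1.0 / LIMITS.get(endpoint, DEFAULT_LIMIT)
        now = time.monotonic()
        ready = max(self._next_allowed.get(endpoint, now), now)
        if ready > now:
            time.sleep(ready - now)
        self._next_allowed[endpoint] = ready + interval
```

Tracking each endpoint separately matters because the intensive match-and-store endpoints need wider spacing than ordinary calls sharing the same key.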
For smaller vendors or "one-off" custom applications, the "Simple" per-second limits above are a good guide, but for vendors with a productized integration, a concurrent model is highly recommended. Database load is an important metric for the success of your integration, and a concurrent model is the best way to keep it in check. For example, 200 canvass results requests spread across 50 states are fine to handle in one second, but 200 match and store person requests in a single national My Campaign are not.
Using duration as a guide tends to work best. An introspection call will also expose databaseName (though that's not a unique ID of the database, it's close) and could be helpful if a vendor needed to manually intervene and throttle a bulk set of keys based on attributes.
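As a hedged sketch of that idea: the introspection response (e.g. from `GET /apiKeyProfiles`) includes `databaseName`, so a vendor could group keys by that field and throttle a whole group at once. The request itself is stubbed out below; `profiles` mimics an already-parsed response, and the Basic-auth helper reflects the "appName as username, key|mode as password" convention — verify both details against the current API docs before relying on them.

```python
import base64

def database_names(profiles):
    """Collect the distinct databaseName values reported for a set of key profiles."""
    return sorted({p["databaseName"] for p in profiles})

def basic_auth_header(app_name, api_key, mode="1"):
    # Assumed convention: VAN API keys authenticate over HTTP Basic with the
    # application name as username and "key|mode" as password.
    token = base64.b64encode(f"{app_name}:{api_key}|{mode}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

Grouping by `databaseName` is what makes bulk intervention practical: if one database is under pressure, every key pointing at it can be slowed together instead of hunting down keys one at a time.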