
We are fetching quite a lot of data from CDF, both raw and aggregated, and we have received error messages from Postman and from Java requests when the number of aggregated datapoints exceeds the limit (10,000), but we have yet to see this from the Cognite Python SDK for raw data (limit 100,000). Is there an error message implemented for this case? From the SDK code on GitHub it appears that if the data exceeds the limit, it returns an empty list, which is a very dangerous return behaviour.

Best regards

Oliver

Hi Oliver! I would love to clarify whatever in the SDK documentation was unclear. Could you maybe post a screenshot? I am thinking of the part that caused this confusion:

From the SDK code on GitHub it appears that if the data exceeds the limit, it returns an empty list, which is a very dangerous return behaviour.
 

Thanks! 😊


Thanks for the answer! 


Yes, the Python SDK has a really nice DatapointsFetcher implementation that splits the API request into multiple parallel requests, which both gets around the API's per-request datapoint limits and improves performance. The SDKs for .NET and Java seem to lag behind quite a bit in feature set, leaving it up to the SDK user to implement these advanced features.
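
To illustrate what users of those SDKs end up writing themselves, here is a minimal sketch of pagination over a capped datapoints endpoint. Note that fetch_page is a hypothetical stand-in for a single raw API request, not an actual SDK function:

def fetch_all_datapoints(fetch_page, start, end, page_limit=10_000):
    # fetch_page(start, end, limit) is assumed to return a list of
    # (timestamp, value) tuples sorted by timestamp, at most `limit` long.
    points = []
    while start < end:
        page = fetch_page(start, end, page_limit)
        points.extend(page)
        if len(page) < page_limit:
            break  # last (partial) page reached
        start = page[-1][0] + 1  # resume just after the last timestamp
    return points

The Python SDK's fetcher goes further by splitting the time range up front and issuing the requests in parallel, which is where the performance gain comes from.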


Hi Oliver

I believe the Python SDK will do as many API calls as required and does not impose any limit at 100,000 datapoints. I just tried the snippet below using version 4.5.4 of the SDK:

datapoints = client.datapoints.retrieve(id=**, start="50w-ago", end="now", limit=1000000)
df = datapoints.to_pandas()
df.shape

This returned (135324, 1), indicating that the client fetched 135k datapoints.
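
As a side note, if you are worried about a result being silently truncated, one simple defensive check is whether the number of returned points exactly hit the requested limit. A minimal sketch, assuming the same client and query as above (the id is a placeholder):

requested = 1_000_000
datapoints = client.datapoints.retrieve(
    id=123, start="50w-ago", end="now", limit=requested  # id: placeholder
)
if len(datapoints) == requested:
    # Exactly hitting the cap suggests more data may exist beyond it.
    print("Warning: result may be truncated at the requested limit")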

 

