Hi there, I am trying to authenticate the CogniteClient in bluefield using our Azure B2C:

```python
import requests
from cognite.client import CogniteClient

# Request a token from Azure B2C using the ROPC flow
response = requests.post(
    "https://oceandataplatform.b2clogin.com/oceandataplatform.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "password",
        "client_id": client_id,
        "scope": "openid https://westeurope-1.cognitedata.com/user_impersonation",
        "username": username,
        "password": password,
    },
)
creds = response.json()

client = CogniteClient(
    token=creds["access_token"],
    project="oceandata",
    base_url="https://westeurope-1.cognitedata.com",
    client_name="cognite-python-dev",
    debug=True,
)
```

This works perfectly :)

Edit: I initially asked how to get this working before spotting a typo in the code. Everything works fine now. Leaving the post here so that others may use it for reference.
Since last week, I have not been able to trigger any transformations via the Playground API ({{baseUrl}}/api/playground/projects/{{project}}/transformations/:id/run). I am getting a 404 Not Found with the response body {"code":4 (yes, an incomplete JSON). I also noticed that the API has now been included in API v1. Could this error be related to that change? Possibly related: when I run a GET against the new endpoint, I get the same error ({"code":4). Have you been a bit quick in the implementation, running a GET on the v1 API from the playground API? I can also mention that it does work using the new v1 endpoint, so that will solve the problem. My problem is, of course, that I am currently using the Playground API, which is currently failing, so I have to replace the logic and test/release it (taking extra time).

Working example: POST {{baseUrl}}/api/v1/projects/{{project}}/transformations/run

The example from Postman uses externalId, so you will have to replace it.
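For anyone else who has to migrate, here is the v1 call as a minimal Python sketch. The base URL, project, bearer token, and the externalId "my_transformation" are placeholders for your own setup:

```python
import os
import requests

base_url = "https://api.cognitedata.com"  # placeholder; use your cluster
project = "my-project"  # placeholder

# Trigger a transformation by externalId via the v1 endpoint
resp = requests.post(
    f"{base_url}/api/v1/projects/{project}/transformations/run",
    headers={"Authorization": f"Bearer {os.environ['TOKEN']}"},
    json={"externalId": "my_transformation"},
)
resp.raise_for_status()
print(resp.json())  # metadata for the triggered job
```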
What are the alternatives for scoping read/write rights to different users? Datasets are the most intuitive way of limiting access rights to subsets of data, but if I would like to share data that is scattered across different datasets, yet easily identifiable by a common label, are there any options?
In the CDF docs I see there are PI and OPC UA connectors. Many Industry 4.0 community members see MQTT rather than OPC UA as the future of cloud-based Industrial IoT communication, as it is lightweight, reports by exception, is client driven, etc. If I want to onboard a facility that has an IoT MQTT gateway, how does Cognite recommend we integrate? I have used Azure IoT Hub and Azure Functions to connect and transform the data in the past. Do you have a reference architecture for this? Can we connect to the gateway directly with a CDF connector, or do you recommend using Azure IoT Hub and Functions (or similar on GCP/AWS)? Also, Azure has some neat tools like the IoT Hub Device Provisioning Service; it would be nice to know your point of view on if/how to utilize those. Thanks!
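To make the question concrete, this is roughly the glue pattern I have built before, sketched here with paho-mqtt (1.x style) and the Python SDK. The gateway address, topic, and payload shape are all assumptions:

```python
import json
import paho.mqtt.client as mqtt
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials via environment variables

def on_message(mqttc, userdata, msg):
    # Assumed payload shape: {"externalId": ..., "timestamp": <ms>, "value": ...}
    reading = json.loads(msg.payload)
    client.datapoints.insert(
        [(reading["timestamp"], reading["value"])],
        external_id=reading["externalId"],
    )

mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.connect("gateway.example.com")  # hypothetical gateway address
mqttc.subscribe("sensors/#")  # hypothetical topic
mqttc.loop_forever()
```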
Hi there, I am looking for resources on the CDF Templates API. So far the best introduction I have found is the Python SDK documentation, as well as the information at docs.cognite.com, which focuses on using templates in Fusion. Can you provide more information about how to use the Templates API?
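For context, this is the kind of call I have pieced together so far from the Python SDK docs. Treat it as a sketch: the template group external ID, version number, and the fields in the GraphQL query are placeholders for whatever schema you have defined:

```python
from cognite.client import CogniteClient

client = CogniteClient()

# Query a template group with GraphQL (names below are placeholders)
result = client.templates.graphql_query(
    external_id="my-template-group",
    version=1,
    query="""
    {
      wellList {
        name
        pressure { externalId }
      }
    }
    """,
)
print(result)
```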
I am trying to stream data from CDF to Azure Event Hub with the Python SDK and cannot find anything related to streaming datasets. The only option I have found so far is dps = c.datapoints.retrieve_latest(id=184691546499795), which would need a trigger of some sort to keep running. The data in CDF is time series from different types of sensors. Is there any documentation on streaming data from CDF that I could look at, or is streaming actually supported?
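The best I have come up with is polling rather than true streaming: fetch everything since the last poll and forward it to Event Hub. A sketch, where the connection string, hub name, and poll interval are placeholders:

```python
import json
import time
from azure.eventhub import EventData, EventHubProducerClient
from cognite.client import CogniteClient

cognite = CogniteClient()
producer = EventHubProducerClient.from_connection_string(
    "<EVENT_HUB_CONNECTION_STRING>", eventhub_name="<HUB_NAME>"
)

last_ts = int(time.time() * 1000)  # start from "now", in ms
while True:
    # Fetch only datapoints that arrived since the previous poll
    dps = cognite.datapoints.retrieve(id=184691546499795, start=last_ts, end="now")
    if len(dps) > 0:
        batch = producer.create_batch()
        for ts, value in zip(dps.timestamp, dps.value):
            batch.add(EventData(json.dumps({"timestamp": ts, "value": value})))
        producer.send_batch(batch)
        last_ts = dps.timestamp[-1] + 1  # avoid re-sending the last point
    time.sleep(10)  # placeholder poll interval
```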
If a user includes https:// or http:// in the Override Azure Tenant field, the URL to Azure AD will not include a correct tenant and login is prevented. This has caused login issues for at least a few users. The request is simply that the form is modified to strip the protocol scheme, i.e., remove http:// or https:// automatically if present.
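The normalization I have in mind is trivial; a sketch of the intent (not the actual form code):

```python
import re

def normalize_tenant(value: str) -> str:
    """Strip a leading protocol scheme and trailing slashes from user input."""
    return re.sub(r"^https?://", "", value.strip()).rstrip("/")

assert normalize_tenant("https://mytenant.onmicrosoft.com/") == "mytenant.onmicrosoft.com"
```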
Hey, in our current workflow we are expanding the use of Functions as a tool. We are somewhat hampered by how “clumsy” it is to bundle and version-control proprietary dependencies. Is there something in the pipeline to address this issue? Kind regards, Robert
Hey! I have a problem with filling a gap in a time series in CDF from a source.

Problem description: We are filling a hole in a time series from time A to B. There are some datapoints on the edges of the interval in CDF. Data is extracted from the source and prepped in Python for the datapoints API as a list-of-tuples payload. For an arbitrary period I extract 2976 datapoints, which I upload to CDF. Subsequently, I query the time series for the same period of time and receive 2928 datapoints. There are no NaN values in the input for either the date-time or the value. The data is also hourly, so I am wary of it just being an edge effect of a poor timestamp specification in the retrieval. What other PEBCAK things have I missed? Simplified example included below:

```python
payload
>>> [{'externalId': 'ts_externalid', 'datapoints': [...]}]
payload[0]["datapoints"][10]
>>> (1617271200000, 0.0)

client.datapoints.insert_multiple(payload)
meter_data = client.datapoints.retrieve(
    start=dates[0],
    end=dates[-1],
    external_id="ts_externalid",
)
```
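Two things I have been checking myself since posting (a sketch, assuming dates holds epoch-ms integers like the payload tuples): whether the payload contains duplicate timestamps, which collapse on insert because inserting at an existing timestamp overwrites it, and whether the exclusive end of retrieve drops the final point:

```python
# Check 1: duplicate timestamps in the payload collapse to one datapoint on insert
timestamps = [ts for ts, _ in payload[0]["datapoints"]]
print(len(timestamps) - len(set(timestamps)), "duplicate timestamps")

# Check 2: `end` is exclusive in retrieve, so nudge it past the last timestamp
meter_data = client.datapoints.retrieve(
    start=dates[0],
    end=dates[-1] + 1,  # +1 ms to include the final datapoint
    external_id="ts_externalid",
)
print(len(meter_data))
```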
We are looking into cloud-optimized storage formats such as GeoTIFF, Parquet, etc. One of the things we are trying to determine is whether we could fully utilize such cloud-optimized formats with CDF Files. Will the download link returned by CDF Files allow us to do seek operations and only download parts of the file?
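Concretely, this is the probe we would run: fetch a download link from the Files API and check whether the storage backend honors HTTP range requests. A sketch; the base URL, project, token, and file externalId are placeholders:

```python
import os
import requests

base_url = "https://api.cognitedata.com"  # placeholder; use your cluster
project = "my-project"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['TOKEN']}"}

# Ask the Files API for a download link for one file
link = requests.post(
    f"{base_url}/api/v1/projects/{project}/files/downloadlink",
    headers=headers,
    json={"items": [{"externalId": "my_geotiff"}]},
).json()["items"][0]["downloadUrl"]

# Request only the first 1 KiB; 206 Partial Content means seek-style reads work
resp = requests.get(link, headers={"Range": "bytes=0-1023"})
print(resp.status_code, len(resp.content))
```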
Hi. I am currently working on an application where we need to check whether a node (PLC/PC) has updated any time series within x hours. The way we do it now is to get all recorded values for all of the node's time series in a time range and check whether any of them contain values. If there is a value on one time series, we consider the node alive. This can be time consuming, since each node can have several hundred or even thousands of time series. So my question is: is there a way to get the latest recorded value within a time range for a collection of time series? Or just any recorded value in a time range for a collection of time series. In Cognite we have an asset hierarchy, which would preferably look like this:

Rig
  Node 1
    TimeSerie1
    TimeSerie2
    ...
  Node 2
    TimeSerie1
    TimeSerie2
    ...
  Node ...
    TimeSerie1
    TimeSerie2
    ...

But the hierarchy could also be completely flat, where all the time series are connected to the Rig. Our externalIds for the time series are rigNumber.NodeNumber.SignalNumber, so all time series with the same prefix belong to the same node.
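The closest I have found so far is a single retrieve_latest call for the whole collection, then comparing timestamps against a cutoff. A sketch; the externalId prefix "12.34." and the 6-hour window are placeholders:

```python
import time
from cognite.client import CogniteClient

client = CogniteClient()

cutoff_ms = int(time.time() * 1000) - 6 * 3600 * 1000  # "alive within 6 hours"

# All of one node's time series share the rigNumber.NodeNumber. prefix
node_ts = client.time_series.list(external_id_prefix="12.34.", limit=None)

# One call returns the latest datapoint per time series
latest = client.datapoints.retrieve_latest(
    external_id=[ts.external_id for ts in node_ts]
)
alive = any(len(dps) > 0 and dps.timestamp[0] >= cutoff_ms for dps in latest)
print("node alive:", alive)
```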
When exploring the data model in CDF through Fusion, it would be nice to be able to add labels to the table overview, as shown below.
Hi, does anyone know how I can download files from the AssetMeta and/or AssetDocumentsPanel components? Are there other options for retrieving documents related to an asset?
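As a fallback while I wait for an answer, this is the route I am considering via the Python SDK instead of the React components. A sketch; the asset id and target directory are placeholders:

```python
from cognite.client import CogniteClient

client = CogniteClient()

# List every file linked to the asset, then download them all locally
files = client.files.list(asset_ids=[123456789], limit=None)
client.files.download(directory="./documents", id=[f.id for f in files])
```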
Hey, to what extent does CDF handle situations that could trigger race conditions, such as the update of an object from two different systems? The example we are currently considering is whether disjoint sets of metadata on objects (events) can be updated from both systems without regard to timing. /Robert
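To illustrate the pattern we mean: each system patches only its own metadata keys with an add operation rather than replacing the whole dict with set. A sketch; the event id and keys are placeholders:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import EventUpdate

client = CogniteClient()

# System A patches only its own keys...
client.events.update(EventUpdate(id=123456789).metadata.add({"system_a.status": "ok"}))

# ...while system B, at an arbitrary time, patches a disjoint set of keys
client.events.update(EventUpdate(id=123456789).metadata.add({"system_b.checked": "true"}))
```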
[Figure: “Aksels ts” is the original data; the time series average at 04:00 = 0.125, which can only be explained by interpolated values.]

The example above shows that 1h aggregates of type average computed at time point 03:00 take into consideration the interpolated values in the time span 03:00 to 04:00. Is it always like this as long as I choose granularity 1h? I could not find the answer to my question here: Aggregation | Cognite documentation
I wonder if it is possible to use alternative authenticator apps to connect to Cognite HUB? I am asking because I’m already using another app for other sites.