I’m using the Cognite SDK to create files in CDF, but I’m not able to find the files under their data set in the Data Explorer UI. Where will these files be stored, and how can I make sure they are visible?

```python
fileMetadata = FileMetadataWrite(
    name=fileName,
    external_id=fileName,
    data_set_id=dataSetId,
    mime_type="application/json",
)
new_file = client.files.create(fileMetadata, overwrite=True)
```
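For illustration, a minimal sketch of checking the created file's data set and uploading actual content (assuming the `client`, `fileName`, and `dataSetId` from the snippet above):

```python
# Retrieve the metadata that was just created and check its data set
retrieved = client.files.retrieve(external_id=fileName)
print(retrieved.data_set_id, retrieved.uploaded)

# files.create only registers metadata; the file content still has to be
# uploaded (e.g. with upload_bytes) before the file shows as uploaded
client.files.upload_bytes(
    content=b'{"example": true}',
    name=fileName,
    external_id=fileName,
    data_set_id=dataSetId,
    mime_type="application/json",
    overwrite=True,
)
```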
I'm using the Python SDK and want to query instances based on a condition using the `data_modeling.instances.query` method. I have a view "TimeseriesProperty" that has a direct relation to the "TimeseriesPropertyType" view. Is it possible to apply a distinct filter on the result? Multiple "TimeseriesProperty" instances can point to the same "TimeseriesPropertyType" instance, and we don't want duplicated "TimeseriesPropertyType" instances coming back in the response for the query below:

```python
view_id = ViewId(space="slb-pdm-dm-governed", external_id="TimeseriesProperty", version="2_0")
v_id_2_PROP = ViewId(space="slb-pdm-dm-governed", external_id="TimeseriesPropertyType", version="2_0")
query = Query(
    with_={
        "TimeseriesProperty": NodeResultSetExpression(
            limit=10,
            filter=HasData(views=[view_id])),
        "TimeseriesPropertyType": NodeResultSetExpression(
            limit=10000,
            direction="outwards",
            from_="TimeseriesProperty",
            through=view_id.as_property_ref("propertyType"),
            filter=HasData(views=[v_id_2_PROP]))
    },
    sele
```
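For illustration, a rough sketch of de-duplicating on the client side after the query above runs (assuming the `select` clause includes both result sets):

```python
result = client.data_modeling.instances.query(query)

# Collapse duplicates by (space, external_id); the query API itself has no distinct option I know of
unique_types = {
    (node.space, node.external_id): node
    for node in result["TimeseriesPropertyType"]
}
print(len(unique_types), "distinct TimeseriesPropertyType instances")
```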
Hi, As mentioned in the link below, which group should I sign up for to join the early adopters of DirectQuery in Power BI? https://docs.cognite.com/cdf/dashboards/guides/powerbi/dq_why I can’t find the relevant group from what I see here: https://hub.cognite.com/groups Any tips? :) Thanks,
We have customers using InfluxDB as a local on-prem storage mechanism. What’s the best extractor to use to push this data to CDF? I’ve heard OPC-UA is the way to do it, but what I find online are ways to store OPC-UA data in InfluxDB, not ways to get InfluxDB data into an OPC-UA server so I could use the OPC-UA extractor to get it into CDF.
Hello, While testing the per-space data governance feature in the data model service, we noticed that it works on the instance level but not on the data models level. Here are the tests we conducted:

Test 1: We gave users access only to instances in a specific space by applying the following ACL:

```yaml
- dataModelInstancesAcl:
    actions:
      - READ
    scope:
      spaceIdScope: { spaceIds: ['test'] }
```

When requesting the data model, only instances in the test space are returned. The ACL works on the instance level. Perfect!

Test 2: We applied a similar ACL on the data model level:

```yaml
- dataModelsAcl:
    actions:
      - READ
    scope:
      spaceIdScope: { spaceIds: ['not_that_important_data_models'] }
```

In this case, we lose access to all data models, even the data models in the not_that_important_data_models space. We expected to still be able to access the data models in not_that_important_data_models. Could you please check? Thanks
Hi! Is there a way to enable cascading deletes for instances, for both direct relations and edges? Edit: I posted in the wrong group, but I don’t think I can delete.
Hello, We are using the CDF toolkit to deploy our containers, views, and data models. We recently added many new views, and these views are interdependent, i.e. they reference each other in their properties through a source block or via an implements block. Here are examples of these view dependencies:

```yaml
- externalId: View1
  implements: []
  name: View1
  properties:
    test:
      container:
        externalId: Container1
        space: some_space
        type: container
      containerPropertyIdentifier: Test
      name: test
      source:
        externalId: View2
        space: some_space
        type: view
        version: v1
  space: some_space
  version: v1
- externalId: View2
  implements: []
  name: View2
  properties:
    wow:
      container:
        externalId: Container2
        space: some_space
        type: container
      containerPropertyIdentifier: Test
      name: wow
  space: some_space
  version: v1
```

or

```yaml
- externalId: Country
  implements:
    - externalId: CountryAttributes
      space: '{{sp_dm_dap_knowledge_graph}}_wv'
      type: view
      version: '{{dap_version}}'
  name: Country
  properties:
    Wells:
      connectionType: multi_reverse_direct_relation
      name: We
```
In the Cognite Academy Fundamentals: Working With CDF: Contextualize course, I have followed all of the instructions several times for Entity Matching and, in all instances, get “No matches found”, so there is nothing to confirm or write to CDF. What am I missing here?
Using PBI Desktop and Cognite's REST API Connector (beta). When configuring the GraphQL parameters of space, data model, version, and query, there is a problem when pasting a query that was copied from the Query Explorer (or even Notepad) into the query input box: it truncates the query string at the first carriage return (CR) and/or line feed (LF). If all CR and LF characters are removed from the query string, it pastes properly and runs properly.
I am a newbie here. To set up the Cognite file extractor for local file upload to CDF, how do I configure the environment variables in the YAML? According to example-local.yaml I need to provide: COGNITE_BASE_URL, COGNITE_PROJECT, COGNITE_CLIENT_ID, COGNITE_TOKEN_URL, COGNITE_CLIENT_SECRET. I am a little confused: should I create a .env file containing all of those environment variables and put it in the same folder as the YAML config file, or can I inject those variables into the configuration file itself? If so, is there an example? Could someone enlighten me?
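For illustration, a sketch of the pattern in question, based on my understanding that the extractor config supports `${VAR}` environment-variable substitution (the exact key layout should follow example-local.yaml; treat this as unverified):

```yaml
# Sketch of a cognite section using environment-variable substitution
cognite:
  host: ${COGNITE_BASE_URL}
  project: ${COGNITE_PROJECT}
  idp-authentication:
    client-id: ${COGNITE_CLIENT_ID}
    secret: ${COGNITE_CLIENT_SECRET}
    token-url: ${COGNITE_TOKEN_URL}
    scopes:
      - ${COGNITE_BASE_URL}/.default
```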
I got a 401 "Unauthorized" error when using Postman, and sometimes a Bad Request error, even though I did everything as shown in the course. Please help; this has already taken a lot of my time.
Hi there, I know there’s a lot happening in NEAT right now, and we're doing our best to keep up with the changes and learn from all the improvements. That said, I have a question regarding the documentation. The docs refer to core_data_model, which doesn’t seem to exist. Should we be using the example to load the core data model instead? For example, the documentation shows:

```python
neat.read.cdf.core_data_model(
    ["CogniteAsset", "CogniteEquipment", "CogniteTimeSeries", "CogniteActivity", "CogniteDescribable"]
)
```

But in our case, we’re currently using:

```python
# extend the core data model through examples
neat.read.examples.core_data_model()
neat.subset.data_model(["CogniteAsset", "CogniteEquipment", "CogniteTimeSeries", "CogniteActivity"])
```
How can we configure an OPC-UA data node (of string type) to be ingested as an event in Cognite instead of as a time series (the default)? P.S. The existing OPC-UA server does not support configuring it as an event.
I am trying to use the CogniteSdk in C# to build my FDM model nodes and edges. I am able to insert the nodes, but I'm having an issue when trying to define the edges. As you can see below, creating an edge requires the "Type" to be defined. There is no error, but when I check my UserDashboard model, it cannot link to my DashboardItem. Does anyone know how we can insert an edge using the C# SDK? FDM:

```graphql
type UserDashboard @view(version: "c3020ef716088a") {
  userId: String!
  createdDateTime: Timestamp
  lastUpdateDatetime: Timestamp
  dashboard: [DashboardItem]
}

type DashboardItem @view(version: "62c414860c7734") {
  userId: String!
  index: Int!
  id: Int!
}
```
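Not C#, but for comparison, a rough sketch of the same edge written with the Python SDK; the key point is that the edge type's externalId has to match the connection defined in the model (assumed here to default to "UserDashboard.dashboard"; the space and node ids are made up for illustration):

```python
from cognite.client.data_classes.data_modeling import DirectRelationReference, EdgeApply

edge = EdgeApply(
    space="my_instance_space",  # hypothetical instance space
    external_id="dashboard-1-item-1",
    # The edge type must match the connection in the view,
    # which for DML-defined relations is typically "<TypeName>.<fieldName>"
    type=DirectRelationReference("my_model_space", "UserDashboard.dashboard"),
    start_node=DirectRelationReference("my_instance_space", "dashboard-1"),
    end_node=DirectRelationReference("my_instance_space", "item-1"),
)
client.data_modeling.instances.apply(edges=edge)
```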
In the Learn to Use the Cognite Python SDK course for data engineers, I got stuck on the hands-on test. As described in the README, after I created a data set and a root asset, I just do not know how to do this section:
- Read the `all_countries.csv` file as a dataframe and list all the unique regions in the world.
- For each geographical region, create a corresponding CDF asset that is under the "global" root asset and is associated with the "world_info" data set.
- Next, create country-level assets using the data from the CSV and link them to their corresponding region-level assets.
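For illustration, a minimal sketch of one way to approach those steps (assuming `client` is an authenticated CogniteClient; the "Region" and "Country" column names and the external-id patterns are guesses, not taken from the course material):

```python
import pandas as pd
from cognite.client.data_classes import AssetWrite

df = pd.read_csv("all_countries.csv")
regions = df["Region"].dropna().unique()  # assumed column name
print(regions)

data_set = client.data_sets.retrieve(external_id="world_info")

# One asset per region, parented under the "global" root asset
region_assets = [
    AssetWrite(
        name=region,
        external_id=f"region_{region}",
        parent_external_id="global",
        data_set_id=data_set.id,
    )
    for region in regions
]
client.assets.create(region_assets)

# Country-level assets linked to their region-level parents
country_assets = [
    AssetWrite(
        name=row["Country"],  # assumed column name
        external_id=f"country_{row['Country']}",
        parent_external_id=f"region_{row['Region']}",
        data_set_id=data_set.id,
    )
    for _, row in df.iterrows()
]
client.assets.create(country_assets)
```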
Hello Team, I am trying to check how many edges (entities, in my case) are null in the view (Event). Is there a specific way to do so? I am currently trying the SDK query API as below, but it is not working. Please share your insights.

```json
{
  "with": {
    "Event": {
      "nodes": {
        "filter": {
          "hasData": [
            { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Event", "version": "1_7" }
          ]
        },
        "chain_to": "destination",
        "direction": "outwards"
      },
      "limit": 10000
    },
    "Event_2_entities.Entity": {
      "edges": {
        "from": "Event",
        "direction": "outwards",
        "filter": {
          "and": [
            {
              "equals": {
                "property": ["edge", "type"],
                "value": { "space": "slb-pdm-dm-governed", "externalId": "Event.entities" }
              }
            },
            {
              "and": [
                {
                  "hasData": [
                    { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Event", "version": "1_7" }
                  ]
                },
                {
                  "not": {
                    "exists": {
                      "property": ["slb-pdm-dm-governed", "Event/1_7", "entities"]
                    }
                  }
                }
              ]
            }
          ]
        },
        "chain_to": "destination"
      },
      "limit": 10000
    },
    "entities.Entity": {
      "nodes": {
        "from": "E
```
When I try to run cdf deploy or cdf deploy --dry-run, in both cases I get this error:

ERROR (AuthorizationError): Don't have correct access rights to clean spaces. Missing: DataModelsAcl(actions=[<DataModelsAcl Action.Read: 'READ'>], scope=AllScope()) Please click here to visit the documentation and ensure that you have setup authentication for the CDF toolkit correctly

Please note that I am using the toolkit in an Azure DevOps pipeline, so it's the CLI, and I don't see any documentation link there as mentioned in the error. What access rights are required for deploy (with or without dry run)?
In CDF, is the “Domain data model” provided out of the box by the product, or does it need to be configured? My understanding is that source data is first staged in the staging/RAW layer, then mapped to the “domain data model”, and then each customer can derive a “solution data model” from the “domain data model”. Most probably the “domain data model” should be in 3NF. Can someone validate my understanding, please?
Hi. We’re administering CDF deployments from GitHub using GitHub Actions and the Cognite toolkit. I’m setting up a GitHub Action to automatically perform a dry-run for a pull request to main and post the dry-run output as a comment on the PR to assist the reviewer. I’d prefer to use a client with read-only access to CDF for this, but it seems cdf-tk requires full write access even for dry runs; is that so? Performing a dry-run locally with read-only credentials results in:

ERROR (AuthorizationError): Don't have correct access rights to deploy iam.groups(all_scoped). Missing:
GroupsAcl(actions=[<GroupsAcl Action.Create: 'CREATE'>], scope=AllScope())
- GroupsAcl(actions=[<GroupsAcl Action.Delete: 'DELETE'>], scope=AllScope())
Please click here to visit the documentation and ensure that you have setup authentication for the CDF toolkit correctly.

I would expect to see the same error in my GitHub Action, but it stops without much useful information: Run cdf-tk deploy --env=dev --dry-ru
I just got a new phone and I'm facing an issue with the Authenticator app. Please reset my MFA access so that I can set up the authenticator again. Regards
Hi All, I am working with hosted extractors for Kafka, and it works pretty well for me with transformations when we have plain JSON data in the Kafka topics. Now I am trying to check whether we can work with zlib/gzip-compressed data coming in on the topic. I have a JSON string where the MessagePayload attribute holds compressed data (rather than the whole message being compressed). Is it possible to write a transformation for such data? E.g., in the Kafka topic:

```json
{
  "Header": {
    "MessageId": 133367162,
    "MessageType": "DATA_REPORT",
    "Timestamp": 1741122422,
    "PayloadCompression": "Z_LIB_COMPRESSION"
  },
  "MessagePayload": "eJyqVnJJLctMTi1WsoquVvJLzE1VslIyVNJRckksSQxJTIeIhySmQ6WCA3wVfFMTi0uLUlNgqioLQDIu/qFOPq7xYY4+oa5KOkphiTmlMLNCMnNTi0sScwuUrAzNTQwNjYxMjAwtzA11lAJLE3MySyqVrAxqY2tjawEBAAD//x5aKt0="
}
```
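For illustration of what the transformation would have to do, this is how the sample payload decodes outside of CDF with plain Python (base64-decode, then zlib-decompress):

```python
import base64
import zlib

message_payload = (
    "eJyqVnJJLctMTi1WsoquVvJLzE1VslIyVNJRckksSQxJTIeIhySmQ6WCA3wVfFMTi0uLUlNgqioLQDIu"
    "/qFOPq7xYY4+oa5KOkphiTmlMLNCMnNTi0sScwuUrAzNTQwNjYxMjAwtzA11lAJLE3MySyqVrAxqY2tj"
    "awEBAAD//x5aKt0="
)

decompressed = zlib.decompress(base64.b64decode(message_payload))
# Assuming the decompressed bytes are a UTF-8 JSON document
print(decompressed.decode("utf-8"))
```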
As the owner of a canvas, the Canvas UI identifies me as expected, such as in the version history or in comments. When another user makes edits and comments, they are not identified and show up as seen in the attached screenshot. Just in case it matters, this is a Rockwell Automation DataMosaix project. Thanks!
After creating a new calculated time series in Charts, how can I replicate the same calculation across multiple similar assets? Let’s say I calculate [amperes / fluid rate] for one pump; I’m now interested in repeating the same calculation across 10 other pumps (let’s assume the time series have the same name pattern for all pumps, i.e. PumpX:AMP). Am I obliged to go to the SDK and code that? Shouldn’t there be an easier way to do that? Thanks!
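For illustration of the SDK fallback, a rough sketch using synthetic datapoint queries so the calculation runs server-side (the PumpX:AMP / PumpX:FLUID_RATE naming is made up to match the example, and `client` is assumed to be an authenticated CogniteClient):

```python
pumps = [f"Pump{i}" for i in range(1, 11)]

# One synthetic expression per pump: amperes divided by fluid rate
expressions = [
    f"ts{{externalId='{pump}:AMP'}} / ts{{externalId='{pump}:FLUID_RATE'}}"
    for pump in pumps
]

results = client.time_series.data.synthetic.query(
    expressions=expressions,
    start="30d-ago",
    end="now",
)
for pump, dps in zip(pumps, results):
    print(pump, len(dps), "datapoints")
```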
Hi, I have a data model in CDF and I want to run a GraphQL query through the SDK to retrieve specific data from the model. When I try to run

```python
response = client.data_modeling.graphql.query(
    id=("sp_watercourse_data_model", "Watercourse", "v1"),
    query=query,
)
```

I receive the following error message:

CogniteGraphQLError: [GraphQLErrorSpec(message=Could not find data model with space=sp_watercourse_data_model, externalId=Watercourse and version=v1, locations=[], extensions={'classification': 'DataFetchingException'})]

However, when I list my models using

```python
models = client.data_modeling.data_models.list(limit=100).to_pandas()
```

the model is in the dataframe as expected. What causes this problem and how can I fix it? Thanks
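For illustration, a short sketch of double-checking the exact identifiers before the GraphQL call (it just prints the space/externalId/version tuples so they can be compared against the id in the failing call):

```python
for dm in client.data_modeling.data_models.list(limit=-1):
    print(dm.space, dm.external_id, dm.version)

# Retrieve by the exact id used in the GraphQL call
retrieved = client.data_modeling.data_models.retrieve(
    ("sp_watercourse_data_model", "Watercourse", "v1")
)
print(retrieved)
```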
Hi Cognite Team, I’m encountering an issue when trying to install cognite-toolkit (version 0.4.15) using Poetry. The installation fails with the following error:

Installing C:\Users\andre.carvalho\AppData\Local\pypoetry\Cache\virtualenvs\cognite-toolkit-playgroud-gU7xo0oD-py3.11\Lib\site-packages\cognite_toolkit\_builtin_modules\contextualization\cdf_entity_matching\default.config.yaml
- Installing cognite-toolkit (0.4.15): Failed
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\andre.carvalho\AppData\Local\pypoetry\Cache\virtualenvs\cognite-toolkit-playgroud-gU7xo0oD-py3.11\Lib\site-packages\cognite_toolkit\_builtin_modules\contextualization\cdf_entity_matching\extraction_pipelines\ctx_files_entity_matcher.ExtractionPipeline.yaml'

Before running poetry add cognite-toolkit, I ensured my environment was set up correctly. However, it seems like the package is missing required files. Could you confirm if this is a known issue or if there’s a recommended wo