Recently active
Hi, can this feature be added to my company projects? Henrik
Hello, As per the documentation, Pygen exposes a `graphql_query` method. However, I’m not able to find it. Could you please help? Version used: cognite-pygen==1.2.1. Thanks
Hi team, I want to import some functions from a util folder that sits in the same directory as the folder containing handler.py. Because the files in the util folder are used by other Cognite Functions too, I can’t keep them inside function_folder, but when importing from the util folder after deploying the Cognite Function I get an import error. The structure is:

```
repo/
├── function_folder/   # file1 and handler.py
└── util_folder/       # file2 and file3 (plus __init__.py)
```

I want to use methods from file2 inside handler.py, and I have added __init__.py to the util folder.

```
2025-04-04T12:17+05:30 : Traceback (most recent call last):
2025-04-04T12:17+05:30 :   File "/home/site/wwwroot/function/_cognite_function_entry_point.py", line 297, in import_user_module
2025-04-04T12:17+05:30 :     handler = importlib.import_module(handler_module_path)
2025-04-04T12:17+05:30 :   File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
2025-04-04T12:17+05:30 :     return _bootstrap._gcd_import(name[
```
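In case it helps: Cognite Functions only upload the contents of the folder passed at deploy time, so one common workaround is to copy the shared util folder into the function folder just before deploying. A sketch under that assumption; the function name and external ID are placeholders, and the paths mirror the structure above:

```python
import shutil
from pathlib import Path

from cognite.client import CogniteClient

repo = Path(".")

# Copy the shared package into the function folder so it is included
# in the uploaded bundle and importable as `util_folder` at runtime.
shutil.copytree(
    repo / "util_folder",
    repo / "function_folder" / "util_folder",
    dirs_exist_ok=True,
)

client = CogniteClient()
client.functions.create(
    name="my-function",         # placeholder
    external_id="my-function",  # placeholder
    folder=str(repo / "function_folder"),
)
```

handler.py can then import with `from util_folder import file2` (names hypothetical).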
I'm using the Python SDK and want to query instances based on a condition using the `data_modeling.instances.query` method. I have a view "TimeseriesProperty" with a direct relation to the "TimeseriesPropertyType" view. Is it possible to apply a distinct filter on the result? Multiple "TimeseriesProperty" instances can point to the same "TimeseriesPropertyType" instance, and we don't want duplicated "TimeseriesPropertyType" instances coming back in the response for the query below:

```python
from cognite.client.data_classes.data_modeling import ViewId
from cognite.client.data_classes.data_modeling.query import Query, NodeResultSetExpression
from cognite.client.data_classes.filters import HasData

view_id = ViewId(space="slb-pdm-dm-governed", external_id="TimeseriesProperty", version="2_0")
v_id_2_PROP = ViewId(space="slb-pdm-dm-governed", external_id="TimeseriesPropertyType", version="2_0")
query = Query(
    with_={
        "TimeseriesProperty": NodeResultSetExpression(
            limit=10,
            filter=HasData(views=[view_id]),
        ),
        "TimeseriesPropertyType": NodeResultSetExpression(
            limit=10000,
            direction="outwards",
            from_="TimeseriesProperty",
            through=view_id.as_property_ref("propertyType"),
            filter=HasData(views=[v_id_2_PROP]),
        ),
    },
    sele
```
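In case it helps while waiting for an answer: a minimal client-side workaround sketch, assuming the `query` object above and an instantiated `CogniteClient` named `client`; whether the API supports a server-side distinct is exactly the open question here.

```python
# Client-side deduplication: run the query, then keep each
# TimeseriesPropertyType node only once, keyed by (space, external_id).
result = client.data_modeling.instances.query(query)

seen: set[tuple[str, str]] = set()
unique_property_types = []
for node in result["TimeseriesPropertyType"]:
    key = (node.space, node.external_id)
    if key not in seen:
        seen.add(key)
        unique_property_types.append(node)
```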
Hi, I would like to check whether there is any CDF admin UI or available APIs that provide telemetry and usage-related insights such as:

- User activity logs, e.g. logins, user actions, dataset access, frequency of use
- API usage metering, e.g. calls made by service accounts or integrations, volume over time
- Application usage tracking, e.g. which integrations/apps are active, which endpoints are being used, etc.
- Quota and resource usage tracking, e.g. number of API calls, storage/compute loads

Are there any CDF admin dashboards, telemetry APIs, or audit logs available for this purpose? Please advise. Thanks
I am using the [Cognite Data Fusion Python SDK](https://cognite-sdk-python.readthedocs-hosted.com/en/latest/data_modeling.html#delete-instances). While deleting a node, e.g.

```python
client.data_modeling.instances.delete(NodeId(space='some-space', external_id=deleted_node_id))
```

I want to also delete the edge(s) from the parent in order to avoid leaving orphan edges in the database. **Problem**: how can I look up such edges? I know how to get all edges (and then filter by the `end_node` attribute), but that would be very inefficient:

```python
edges = client.data_modeling.instances.list(instance_type="edge", limit=-1)
```

How could I construct the `Filter` for the `end_node` (to match the `deleted_node_id`) and ideally the `type` (to equal the `Child` type)? E.g.

```python
edges = client.data_modeling.instances.list(
    instance_type="edge",
    filter=And(Equals(???), Equals(???)),
    limit=-1,
)
```

Because then I could just delete all those edges with one `client.data_modeling.instances.delete(edges`
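For reference, a sketch of how that filter might be built with the SDK's `filters` module; the `"Child"` type external ID and the assumption that the edge type lives in `some-space` are placeholders taken from the post:

```python
from cognite.client.data_classes import filters
from cognite.client.data_classes.data_modeling import NodeId

# Built-in edge properties are addressed as ["edge", "<property>"].
end_node_filter = filters.Equals(
    ["edge", "endNode"],
    {"space": "some-space", "externalId": deleted_node_id},
)
type_filter = filters.Equals(
    ["edge", "type"],
    {"space": "some-space", "externalId": "Child"},  # placeholder type
)

edges = client.data_modeling.instances.list(
    instance_type="edge",
    filter=filters.And(end_node_filter, type_filter),
    limit=-1,
)
# Delete the node and the matching edges in one call.
client.data_modeling.instances.delete(
    nodes=NodeId("some-space", deleted_node_id),
    edges=edges.as_ids(),
)
```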
Hi, I have been trying to use the sync API to pull changes from a data model - specifically node instances from a view. When testing, some fields within a view were changed/updated, but I was not able to pull anything new using the sync API. This was after I created a baseline of the full dataset using the Query endpoint. The sync endpoint did work when new nodes were added to the view - it would pull only the new row - so I think my query is correct. Is there some sort of parameter to turn on, or another way to use the sync API, to retrieve changes as well as new additions to instances?
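A sketch of the cursor-handling pattern, in case the issue is how the cursors are carried between calls; this assumes `query` is the same `Query` object used for the baseline and that `client` is a `CogniteClient`. It is an assumption about the cause, not a confirmed fix:

```python
# Initial sync: establishes the baseline and returns cursors per result set.
result = client.data_modeling.instances.sync(query)
cursors = result.cursors  # persist these between runs

# Later runs: attach the stored cursors so the sync returns nodes
# created *or updated* since the last call.
query.cursors = cursors
delta = client.data_modeling.instances.sync(query)
```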
I would like to know what to fill in Power BI to be able to connect to CDF. I tried to use what was shown in the course, but it gives an error asking me to add the organization name. publicdata?tenantID=48d5043c-cf70-4c49-881c-c638f5796997
Hello, I’m getting this error while using the latest CDF Spark data source. I was only able to make it work using version 2.1.10 (com.cognite.spark.datasource:cdf-spark-datasource_2.12:2.1.10). Could you please take a look? Thank you!
Use Case: I am building a dashboard in Power BI that will visualize Events from CDF (about one million events per year). To make the solution scale I want to use incremental refresh of the semantic model, so that I only refresh the model with the newest Events since the last refresh. I have followed your tutorial on incremental refresh (which seems to be a copy/paste of Microsoft’s tutorial), but I still have questions:

- What happens if the StartTime attribute of an Event is a date/time/zone when loaded into the model? The attribute needs to be a date/time type, which means I need to convert the type first. How will the incremental refresh work if the model needs to load the data in order to convert the type, and only then apply the RangeStart/RangeEnd filters?
- Can I use incremental refresh with the new REST API Connector? When I load events with the new connector I get the start times as the number of milliseconds since epoch, so the type conversion from above also applies here.

Thanks for your help! Ander
Hello everyone, my name is Matt. I just joined Cognite Hub. I’m an industrial control system enthusiast and recently have been part of meetings at work where Cognite has been brought up for a few different use cases in some of our business units. I’m interested in learning more about it and exploring the capabilities of Cognite/CDF. I just watched the Cognite Product Tour 2024 on YouTube and was really impressed.
It says that the project is not valid and does not give me the Hello World output.
I'm working with the Cognite hosted REST extractor and I'm not able to perform an incremental load; I'm getting a Kuiper HTTP error while making a request. Can someone explain what the key name is when we use query params for an incremental load, and how the value should look in JSON with a conditional statement that picks a constant value on the first execution and last_run from the context after that? (Assume we have to modify the startindex and lastindex query params after the first execution.)
I have a data model with a many-to-many relation between view 1 and view 2. This is modeled as an edge, stored as columnEdgeView2 in view 1, with an edge pointing the other direction in the other view. I have been digging through the Python SDK documentation but cannot figure out how to retrieve this information. Can anyone help me with this? What I need to do is query either of the edge properties and do something similar to instances.list for regular views. If possible, getting columnEdgeView2 included in the result when listing instances the normal way would also work. I have tried to replicate my data model structure below:

```graphql
type View1 {
  column1: String
  columnEdgeView2: [View2]
}

type View2 {
  column1: String
  columnEdgeView1: [View1]
    @relation(
      type: { space: "space", externalId: "View1.columnEdgeView2" }
      direction: INWARDS
    )
}
```

Thank you!
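In case a concrete starting point helps: a sketch that lists the edges behind `columnEdgeView2` directly with the Python SDK, assuming the edge type external ID `"View1.columnEdgeView2"` and space `"space"` from the schema above, and an instantiated `client`:

```python
from cognite.client.data_classes import filters

# Edges created for a relation carry the relation's type; filter on the
# built-in ["edge", "type"] property to get just those edges.
edge_type_filter = filters.Equals(
    ["edge", "type"],
    {"space": "space", "externalId": "View1.columnEdgeView2"},
)
edges = client.data_modeling.instances.list(
    instance_type="edge",
    filter=edge_type_filter,
    limit=-1,
)
# Each edge exposes start_node and end_node, i.e. the View1/View2 pairs.
for edge in edges:
    print(edge.start_node, "->", edge.end_node)
```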
Hello, we're using pygen to generate a full data model and instances. One of our containers can have multiple connections to objects in another container (see the pic below). As a result, definitions are lists of objects. It is then difficult to query data using tuples:

```python
bays = dm_client.bay.list(
    bay_to_line=(space, id),
    retrieve_connections="identifier",
    limit=None,
).to_pandas()
```

We can overcome this by reading all bays with retrieve_connections="full" and then querying the data, but we would like to generate a model where such tricks are not needed and the definitions are not lists. Is that possible? To generate the model we use this workflow:

```python
neat.read.rdf("lines_bays.jsonld")
neat.infer(max_number_of_instance=-1)
neat.prepare.data_model.cdf_compliant_external_ids()
neat.verify()
neat.convert("dms")
neat.set.data_model_id(("lin", "lines", "v1"))
neat.to.cdf.data_model()
neat.to.cdf.instances()
```
I'm using the Python SDK and want to query instances based on a condition using the `data_modeling.instances.query` method. I have a view called TimeseriesPropertyType which has a field (properties: [Property]) that is a reverse direct relation through Property: "propertyType". I just need to check whether there are any "properties" field values associated with each instance of the TimeseriesPropertyType view. For that I'm currently fetching the data in the Property view associated with each TimeseriesPropertyType instance and doing the check manually in code. Is there any direct filter available that I can use? I don't see any filters available on the properties field in the query explorer. Below is the query I'm using:

```python
from cognite.client.data_classes.data_modeling.query import Query, NodeResultSetExpression
from cognite.client.data_classes.filters import HasData

# view_id_ts_prop_type - TimeseriesPropertyType view
# view_id_property - Property view
query = Query(
    with_={
        "TimeseriesPropertyType": NodeResultSetExpression(
            limit=10000,
            filter=HasData(views=[view_id_ts_prop_type]),
        ),
        "Property": NodeResultSetExpression(
            limit=10000,
            direction="inwards"
```
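One possible direction while waiting for an answer, hedged since reverse direct relations don't expose a filter of their own: instead of walking each TimeseriesPropertyType, filter the Property view for instances whose propertyType is set, then collect the distinct targets. A sketch, assuming the `view_id_property` from the comments above and an instantiated `client`:

```python
from cognite.client.data_classes import filters

# Keep only Property instances whose direct relation "propertyType" is populated.
has_property_type = filters.Exists(view_id_property.as_property_ref("propertyType"))

properties = client.data_modeling.instances.list(
    instance_type="node",
    sources=view_id_property,
    filter=has_property_type,
    limit=-1,
)
# Distinct property-type targets, i.e. the TimeseriesPropertyType
# instances that have at least one associated Property.
referenced = {
    (p.properties[view_id_property]["propertyType"]["space"],
     p.properties[view_id_property]["propertyType"]["externalId"])
    for p in properties
}
```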
Hello, We are currently using Pygen (cognite-pygen==1.0.2) to generate an SDK for our data model. While testing a use case where we want to query all Casings of all Onshore Wells, we notice that we get different results using the `list` and `select` methods.

```python
from cognite.client.data_classes import data_modeling as dm

# First method of querying
wells = dm_client.well.list(product_line='Onshore', retrieve_connections='identifier', limit=-1)
direct_relations = []
for w in wells:
    if w.wellbores:
        for wb in w.wellbores:
            wb_ext = wb.external_id
            inst = dm.DirectRelationReference(wb.space, wb_ext)
            direct_relations.append(inst)
wb_sections = dm_client.wellbore_section.list(wellbore=direct_relations, retrieve_connections='full', limit=-1)
casings = wb_sections.casing
print(f'casings: {len(casings)}')  # casings: 645

# Second method of querying
casings = dm_client.well.select().product_line.equals('Onshore').wellbores.wellbore_sections.casing.list_casing(limit=-1)
print(f'casings: {len(casings)}')  # casings: 94
```

Anything we might be missing here? Could you please take a look? Thanks
Hello Team, We have three views:

```
ScalarProperty {
  # other fields
  entity: Entity
}

Entity {
  # other fields
  entitytype: EntityType
}

EntityType {
  externalId
}
```

Given this, from the ScalarProperty view, can we use a groupby on entity.entityType.externalId when querying instances and counting them using the query API, or is there another way to achieve this?
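While waiting for a definitive answer: as far as I know the aggregate endpoint groups by properties of the queried view only, not by nested paths like entity.entityType.externalId, so grouping one hop at a time may be needed. A hedged sketch using the SDK's aggregate call on the Entity view; the view ID is a placeholder, the property names come from the post, and whether grouping on a direct-relation property is supported is an assumption to verify:

```python
from cognite.client.data_classes import aggregations as aggs
from cognite.client.data_classes.data_modeling import ViewId

entity_view = ViewId(space="my-space", external_id="Entity", version="1")  # assumed IDs

# Count Entity instances grouped by the direct-relation property "entitytype".
result = client.data_modeling.instances.aggregate(
    view=entity_view,
    aggregates=aggs.Count("externalId"),
    group_by="entitytype",
)
for group in result:
    print(group.group, group.aggregates)
```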
We have generated an SDK using Pygen on our model. We then requested data from a particular view, in the example below PropertyType, which contains ~138k records. Observe the results below on time taken to fetch:

- Cognite SDK: 44 secs
- Pygen SDK: 82 secs

It's fetching the same data from the same Cognite project, but we still see a performance lag - it takes almost double the time. Both numbers are from my local sandbox.

Cognite SDK code:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import ViewId

config = {
    "client_name": "abcd",
    "project": "slb-odf-qa",
    "base_url": "https://westeurope-1.cognitedata.com/",
    "credentials": {
        "client_credentials": {
            "client_id": "e063088ad3b4548d4911bd4a617990aa",
            "client_secret": "",
            "token_url": "https://p4d.csi.cloud.slb-ds.com/v2/token",
            "scopes": ["9237c91ce1ea434fa5a91262a5ea3646"],
        },
    },
}
cognite_client = CogniteClient.load(config)

view_id_1 = ViewId(space="slb-pdm-dm-governed", external_id="PropertyType", version="1_6")

def _get_timeseries():
    next_cursor = None  # Initialize the cursor as None at first
    all_data = []
```
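For comparison, a sketch that lets the SDK drive the cursor pagination instead of looping cursors manually; `view_id_1` and `cognite_client` are from the snippet above. This is just an alternative to time against, not a claimed fix for the Pygen gap:

```python
import time

start = time.perf_counter()
# limit=None fetches all instances, with the SDK handling cursors internally.
nodes = cognite_client.data_modeling.instances.list(
    instance_type="node",
    sources=view_id_1,
    limit=None,
)
print(f"fetched {len(nodes)} instances in {time.perf_counter() - start:.1f}s")
```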
When I try to run `cdf deploy` or `cdf deploy --dry-run`, in both cases I get this error:

```
ERROR (AuthorizationError): Don't have correct access rights to clean spaces.
Missing: DataModelsAcl(actions=[<DataModelsAcl Action.Read: 'READ'>], scope=AllScope())
Please click here to visit the documentation and ensure that you have setup authentication for the CDF toolkit correctly
```

Please note that I am using the toolkit in an Azure DevOps pipeline, so it's the CLI, and I don't see any documentation link there as mentioned in the error. What are the required access rights for deploy (with or without dry run)?
The current naming convention in the OPC UA server node structure is not intuitive for users trying to find a tag in Cognite. From the example below, we will have three timeseries tags with Message and also Data, and it is not intuitive for a user to dig into the path or external ID to find which node a tag belongs to. How do we configure the OPC UA extractor to construct the timeseries name based on certain criteria, or based on the complete path of the node (e.g. Root\Temperature1\Message as the name)?

E.g. OPC UA node structure:

```
Root
|_ Temperature1
   |_ Message
   |_ Data
|_ Temperature2
   |_ Message
   |_ Data
|_ Pressure1
   |_ Message
   |_ Data
```
I have created a function configured with 5 GB of memory and 2 CPUs to run a Hugging Face AI model (py311). Deploying the function went fine. However, when running the function it throws this error:

```
RuntimeError: [enforce fail at alloc_cpu.cpp:118] err == 0. DefaultCPUAllocator: can't allocate memory: you tried to allocate 9437184 bytes. Error code 12 (Cannot allocate memory)
```
Are there any documented use cases or papers on integrating MLflow with Cognite, or is it something we need to implement ourselves? For example, if we aim to seamlessly integrate the MLflow UI with Cognite to evaluate and select the top-performing models, we could leverage SQLite, which operates on the local file system (e.g., mlruns.db) and comes with a built-in Python client, sqlite3. However, our preference is to integrate it seamlessly with Cognite.
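For the SQLite route specifically, a minimal sketch of what we have in mind, assuming MLflow's standard SQLite tracking backend and CDF Files as the shared storage; the file name and metric are placeholders:

```python
import mlflow
from cognite.client import CogniteClient

# Point MLflow's tracking backend at a local SQLite file.
mlflow.set_tracking_uri("sqlite:///mlruns.db")

with mlflow.start_run():
    mlflow.log_metric("rmse", 0.42)  # placeholder metric

# Share the tracking database through CDF Files so others can pull it.
client = CogniteClient()
client.files.upload("mlruns.db", name="mlruns.db", overwrite=True)
```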
Within our implementation we have an existing hosted extractor reading data from an IoT Hub that contains multiple sites' worth of data. Our hosted extractor mapping template filters for events that have a particular deviceId on them, representing the location the events are coming from. To ingest another site's data feed, I wanted to extend the template with an ELSE IF condition holding the mapping rules for the other location, which are almost identical to the first except for the target datasets - which, I've come to realize, are set in the Sink section of the extractor configuration. The net result is needing to create redundant hosted extractor configurations that change only a filter, rather than having a cascading ELSE IF ruleset that applies to the full stream. For example, this pseudo-template for our existing hosted extractor configuration for one site:

```
if (context.messageAnnotations.`iothub-connection-device-id` == "SITE_A") {
    input.map(record_unpack
```
We are trying to deploy a new project with the CDF toolkit, but get the following error:

```
ERROR (ResourceCreationError): Failed to create resource(s). Error: Session IdP refresh failed | code: 400 | X-Request-ID: 34cc5242-b88e-991e-87ef-c2b00a6df73e | cluster: api
ERROR: build step 0 "cognite/toolkit:0.4.2" failed: step exited with non-zero status: 1
```