Recently active
Hello, we are using the cdf toolkit to deploy our containers, views, and data models. We recently added many new views, and these views are dependent, i.e. they reference each other in their properties through a source block or via an implements block. Here are examples of these view dependencies:

    - externalId: View1
      implements: []
      name: View1
      properties:
        test:
          container:
            externalId: Container1
            space: some_space
            type: container
          containerPropertyIdentifier: Test
          name: test
          source:
            externalId: View2
            space: some_space
            type: view
            version: v1
      space: some_space
      version: v1
    - externalId: View2
      implements: []
      name: View2
      properties:
        wow:
          container:
            externalId: Container2
            space: some_space
            type: container
          containerPropertyIdentifier: Test
          name: wow
      space: some_space
      version: v1

or

    - externalId: Country
      implements:
        - externalId: CountryAttributes
          space: '{{sp_dm_dap_knowledge_graph}}_wv'
          type: view
          version: '{{dap_version}}'
      name: Country
      properties:
        Wells:
          connectionType: multi_reverse_direct_relation
          name: We
I'm working with the Cognite hosted REST extractor and I'm not able to perform an incremental load; I'm getting a Kuiper HTTP error while making a request. Can someone explain what the key name is when we use query params for incremental load, and how the value should look in JSON when it has a conditional statement that picks a constant value on the first execution and last_run from the context after that? (Assume we have to modify the startindex and lastindex query params after the first execution.)
I got a 401 "Unauthorized" error when using Postman, and sometimes also a bad request error, even though I did everything exactly as shown in the course. Please help, this has already cost me a lot of time.
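For reference, a minimal sketch of the token flow Postman performs under the hood, assuming an OAuth2 client-credentials setup against Azure AD; the tenant, client ID/secret, cluster, and project below are placeholders, not values from the course. A 400 from the token endpoint usually points at the request body, while a 401 from CDF usually points at a missing/expired token or a wrong scope:

    import requests

    # Placeholders: replace with your own tenant, app registration, cluster, project.
    TENANT_ID = "<tenant-id>"
    CLIENT_ID = "<client-id>"
    CLIENT_SECRET = "<client-secret>"
    CLUSTER = "api"  # e.g. "api", "westeurope-1"
    PROJECT = "<project>"

    # Step 1: fetch a bearer token with the client-credentials grant.
    token_resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            # The scope must target the CDF cluster you are calling.
            "scope": f"https://{CLUSTER}.cognitedata.com/.default",
        },
    )
    token_resp.raise_for_status()  # a 400 here means the token request is malformed
    token = token_resp.json()["access_token"]

    # Step 2: call the CDF API with the token; a 401 here means token/scope trouble.
    resp = requests.get(
        f"https://{CLUSTER}.cognitedata.com/api/v1/projects/{PROJECT}/assets",
        headers={"Authorization": f"Bearer {token}"},
        params={"limit": 10},
    )
    print(resp.status_code)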
I have a data model with a many-to-many relation between view 1 and view 2. This is modeled as an edge, stored as columnEdgeView2 in view 1, with an edge pointing the other direction in the other view. I have been digging through the Python SDK documentation but cannot figure out how to retrieve this information. Can anyone help me with this? What I need to do is query either of the edge properties and do something similar to the instance.list of regular views. If possible, getting columnEdgeView2 included in the result when listing instances the normal way would also work. I have tried to replicate my data model structure below.

    type View1 {
      column1: String
      columnEdgeView2: [View2]
    }

    type View2 {
      column1: String
      columnEdgeView1: [View1]
        @relation(
          type: { space: "space", externalId: "View1.columnEdgeView2" }
          direction: INWARDS
        )
    }

Thank you!
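In case it is useful, a minimal sketch using the plain Python SDK, assuming the edge type external ID follows the "View1.columnEdgeView2" pattern from the schema above (adjust space and IDs to your model). Edges created for a GraphQL relation are ordinary edge instances, so they can be listed with instance_type="edge" and a filter on the edge type:

    from cognite.client import CogniteClient
    from cognite.client.data_classes import filters

    client = CogniteClient()  # assumes auth is configured

    # Edges written for the relation carry the relation's (space, externalId)
    # pair as their "type", so we can filter on it directly.
    edge_type_filter = filters.Equals(
        ["edge", "type"],
        {"space": "space", "externalId": "View1.columnEdgeView2"},
    )

    edges = client.data_modeling.instances.list(
        instance_type="edge",
        filter=edge_type_filter,
        limit=None,
    )
    for edge in edges:
        # start_node is the View1 instance, end_node the connected View2 instance.
        print(edge.start_node, "->", edge.end_node)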
Hi, I would like to check whether there is any CDF admin UI or any available APIs that provide telemetry and usage-related insights such as:

- User activity logs: logins, user actions, dataset access, frequency of use
- API usage metering: calls made by service accounts or integrations, volume over time
- Application usage tracking: which integrations/apps are active, which endpoints are being used, etc.
- Quota and resource usage tracking: number of API calls, storage/compute load

Are there any CDF admin dashboards, telemetry APIs, or audit logs available for this purpose? Please advise. Thanks
Hello, we're using pygen to generate a full data model and instances. One of our containers can have multiple connections to objects in another container (see the pic below). As a result, the definitions are lists of objects. It is then difficult to query data using tuples:

    bays = dm_client.bay.list(
        bay_to_line=(space, id),
        retrieve_connections="identifier",
        limit=None,
    ).to_pandas()

We can overcome this by reading all bays with retrieve_connections="full" and then querying the data. But we would like to generate a model where such tricks are not needed and the definitions are not a list. Is that possible? To generate a model we use this workflow:

    neat.read.rdf("lines_bays.jsonld")
    neat.infer(max_number_of_instance=-1)
    neat.prepare.data_model.cdf_compliant_external_ids()
    neat.verify()
    neat.convert("dms")
    neat.set.data_model_id(("lin", "lines", "v1"))
    neat.to.cdf.data_model()
    neat.to.cdf.instances()
I'm using the Python SDK and want to query instances based on a condition using the "data_modeling.instances.query" method. I have a view called TimeseriesPropertyType which has a field (properties: [Property]) that is a reverse direct relation through Property: "propertyType". I just need to check whether there are any "properties" field values associated with each instance of the TimeseriesPropertyType view. For that, I'm fetching the data in the Property view associated with each particular TimeseriesPropertyType instance and doing the check manually in code. Is there any direct filter available that I can use? I don't see any filters available on the properties field in the query explorer. Below is the query I'm using:

    # view_id_ts_prop_type - TimeseriesPropertyType view
    # view_id_property - Property view
    query = Query(
        with_={
            "TimeseriesPropertyType": NodeResultSetExpression(
                limit=10000,
                filter=HasData(views=[view_id_ts_prop_type]),
            ),
            "Property": NodeResultSetExpression(
                limit=10000,
                direction="inwards",
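For context, a minimal sketch of the two-step pattern this query seems headed toward, assuming view_id_ts_prop_type and view_id_property are ViewId objects as in the snippet; to my knowledge there is no server-side "exists" filter on a reverse direct relation property, so the usual approach is to traverse the relation in the query and check the result sets client-side:

    from cognite.client import CogniteClient
    from cognite.client.data_classes.filters import HasData
    from cognite.client.data_classes.data_modeling.query import (
        NodeResultSetExpression, Query, Select, SourceSelector,
    )

    client = CogniteClient()  # assumes auth is configured

    query = Query(
        with_={
            "TimeseriesPropertyType": NodeResultSetExpression(
                limit=10000,
                filter=HasData(views=[view_id_ts_prop_type]),
            ),
            # Traverse the reverse direct relation: Property.propertyType
            # points back at TimeseriesPropertyType, so go "inwards" through it.
            "Property": NodeResultSetExpression(
                from_="TimeseriesPropertyType",
                direction="inwards",
                through=view_id_property.as_property_ref("propertyType"),
                limit=10000,
            ),
        },
        select={
            "TimeseriesPropertyType": Select(
                [SourceSelector(view_id_ts_prop_type, ["*"])]
            ),
            "Property": Select([SourceSelector(view_id_property, ["name"])]),
        },
    )
    result = client.data_modeling.instances.query(query)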
Hello, we are currently using Pygen (cognite-pygen==1.0.2) to generate an SDK for our data model. While testing a use case where we want to query all Casings of all Onshore Wells, we notice that we get different results using the `List` and `Select` methods.

    # First method of querying
    wells = dm_client.well.list(product_line='Onshore', retrieve_connections='identifier', limit=-1)
    direct_relations = []
    for w in wells:
        if w.wellbores:
            for wb in w.wellbores:
                wb_ext = wb.external_id
                inst = dm.DirectRelationReference(wb.space, wb_ext)
                direct_relations.append(inst)
    wb_sections = dm_client.wellbore_section.list(wellbore=direct_relations, retrieve_connections='full', limit=-1)
    casings = wb_sections.casing
    print(f'casings: {len(casings)}')  # casings: 645

    # Second method of querying
    casings = dm_client.well.select().product_line.equals('Onshore').wellbores.wellbore_sections.casing.list_casing(limit=-1)
    print(f'casings: {len(casings)}')  # casings: 94

Is there anything we are missing here? Could you please take a look? Thanks!
Hello team, we have three views:

    ScalarProperty {
      # other fields
      entity: Entity
    }

    Entity {
      # other fields
      entitytype: EntityType
    }

    EntityType {
      externalId
    }

Given this, when querying instances from the ScalarProperty view, can we use groupby on entity.entityType.externalId and count the instances with the query API, or is there another way to achieve this?
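For reference, the plain SDK exposes counting with group_by through instances.aggregate, but group_by takes properties of the queried view itself; whether a nested path like entity.entityType.externalId is accepted is exactly the open question. A minimal sketch under that caveat, with placeholder space/version identifiers, grouping on the direct relation entity (the groups then come back as node references that can be resolved to EntityType external IDs client-side):

    from cognite.client import CogniteClient
    from cognite.client.data_classes.aggregations import Count
    from cognite.client.data_classes.data_modeling import ViewId

    client = CogniteClient()  # assumes auth is configured

    # Placeholder identifiers; replace with your space and view version.
    scalar_property_view = ViewId("my_space", "ScalarProperty", "v1")

    # Count ScalarProperty instances per value of the "entity" direct relation.
    result = client.data_modeling.instances.aggregate(
        view=scalar_property_view,
        aggregates=Count("externalId"),
        group_by="entity",
    )
    for group in result:
        print(group.group, group.aggregates)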
We have generated an SDK using pygen on our model. After that, we requested data from a particular view, in the example below PropertyType. It contains ~138k records. Observe the results below for the time taken to fetch:

- Cognite SDK: 44 secs
- Pygen SDK: 82 secs

Both fetch the same data from the same Cognite project, but we still see a performance lag: the pygen SDK takes almost double the time. Both numbers are from my local sandbox.

Cognite SDK code:

    config = {
        "client_name": "abcd",
        "project": "slb-odf-qa",
        "base_url": "https://westeurope-1.cognitedata.com/",
        "credentials": {
            "client_credentials": {
                "client_id": "e063088ad3b4548d4911bd4a617990aa",
                "client_secret": "",
                "token_url": "https://p4d.csi.cloud.slb-ds.com/v2/token",
                "scopes": ["9237c91ce1ea434fa5a91262a5ea3646"],
            },
        },
    }
    cognite_client = CogniteClient.load(config)

    view_id_1 = ViewId(space="slb-pdm-dm-governed", external_id="PropertyType", version="1_6")

    def _get_timeseries():
        next_cursor = None  # Initialize the cursor as None at first
        all_data = []
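Not an answer to the pygen overhead itself, but as a baseline it may be worth comparing both numbers against the SDK's built-in pagination rather than a hand-rolled cursor loop; a minimal sketch, reusing the cognite_client and view_id_1 defined above (instances.list drives the cursor internally, so no next_cursor bookkeeping is needed):

    # Fetch every node in the view; limit=None pages through all ~138k records.
    nodes = cognite_client.data_modeling.instances.list(
        instance_type="node",
        sources=[view_id_1],
        limit=None,
    )
    print(len(nodes))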
When I try to run cdf deploy or cdf deploy --dry-run, in both cases I get this error:

    ERROR (AuthorizationError): Don't have correct access rights to clean spaces.
    Missing: DataModelsAcl(actions=[<DataModelsAcl Action.Read: 'READ'>], scope=AllScope())
    Please click here to visit the documentation and ensure that you have setup authentication for the CDF toolkit correctly

Please note that I am using the toolkit in an Azure DevOps pipeline, so it's the CLI, and I don't see the documentation link mentioned in the error. What access rights are required for deploy (with or without dry run)?
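In case it helps, the capability named in the error can be granted to the pipeline's service principal through a CDF group; a minimal sketch via the Python SDK, under the assumption (not from the error message) that deploy also needs WRITE to apply changes while dry-run needs at least READ, and with a placeholder source_id:

    from cognite.client import CogniteClient
    from cognite.client.data_classes import Group
    from cognite.client.data_classes.capabilities import DataModelsAcl

    client = CogniteClient()  # run as a principal allowed to manage groups

    # Hypothetical group granting the capability the error says is missing.
    group = Group(
        name="toolkit-deploy",
        source_id="<aad-group-object-id>",  # placeholder for your IdP group
        capabilities=[
            DataModelsAcl(
                actions=[DataModelsAcl.Action.Read, DataModelsAcl.Action.Write],
                scope=DataModelsAcl.Scope.All(),
            ),
        ],
    )
    client.iam.groups.create(group)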
Hello, I'm having this error while using the latest CDF Spark datasource. I was only able to make it work using version 2.1.10 (com.cognite.spark.datasource:cdf-spark-datasource_2.12:2.1.10). Could you please take a look? Thank you!
Use case: I am building a dashboard in Power BI that will visualize Events from CDF (about one million events per year). To make the solution scale, I want to use incremental refresh of the semantic model so I only refresh the model with the newest Events since the last refresh. I have followed your tutorial on incremental refresh (which seems to be a copy/paste of Microsoft's tutorial), but I still have questions:

- What happens if the StartTime attribute of an Event is a date/time/zone when loaded into the model? The attribute needs to be a date/time type, which means I need to convert the type first. How will the incremental refresh work if the model needs to load the data in order to convert the type, and only then apply the RangeStart/RangeEnd filters?
- Can I use incremental refresh with the new REST API Connector? When I load events with the new connector I get the start times as the number of milliseconds since epoch, so the type conversion question above also applies here.

Thanks for your help! Ander
The current naming convention in the OPC UA server node structure is not intuitive for users trying to find a tag in Cognite. From the example below, we will have three timeseries tags named Message and also Data. It is not intuitive for a user to look into the path or external ID to find which node a tag belongs to. How do we configure the OPC UA extractor to construct the timeseries name based on certain criteria, or based on the complete path of the node (e.g. Root\Temperature1\Message as the name)?

E.g. OPC UA node structure:

    Root
    |_ Temperature1
       |_ Message
       |_ Data
    |_ Temperature2
       |_ Message
       |_ Data
    |_ Pressure1
       |_ Message
       |_ Data
I have created a function configured with 5G of memory and 2 CPUs to run a Hugging Face AI model (py311). Deploying the function went fine. However, when running the function it throws this error:

    RuntimeError: [enforce fail at alloc_cpu.cpp:118] err == 0. DefaultCPUAllocator: can't allocate memory: you tried to allocate 9437184 bytes. Error code 12 (Cannot allocate memory)
Are there any documented use cases or papers on integrating MLflow with Cognite, or is it something we need to implement ourselves? For example, if we aim to integrate the MLflow UI with Cognite to evaluate and select the top-performing models, we could leverage SQLite, which operates on the local file system (e.g., mlruns.db) and has a built-in Python client, sqlite3. However, our preference is to integrate it seamlessly with Cognite.
Within our implementation we have an existing hosted extractor reading data from an IoT Hub that contains data for multiple sites. Our hosted extractor mapping template filters for events that carry a particular deviceId, representing the location the events come from. To ingest another site's data feed, I wanted to extend the template with an ELSE IF condition containing the mapping rules for the other location, which are almost identical to the first except for the target data sets, which I've come to realize are set in the sink section of the extractor configuration. The net result is needing to create redundant hosted extractor configurations that change only a filter, rather than having a cascading ELSE IF ruleset that applies to the full stream. For example, this pseudo-template for our existing hosted extractor configuration for one site:

    if (context.messageAnnotations.`iothub-connection-device-id` == "SITE_A") {
        input.map(record_unpack
We are trying to deploy a new project with the cdf toolkit, but get the following error:

    ERROR (ResourceCreationError): Failed to create resource(s). Error: Session IdP refresh failed | code: 400 | X-Request-ID: 34cc5242-b88e-991e-87ef-c2b00a6df73e | cluster: api
    ERROR: build step 0 "cognite/toolkit:0.4.2" failed: step exited with non-zero status: 1
We are collaborating with a client who has reported that neither Jupyter Notebook nor Streamlit works in CDF Fusion. Upon investigation, we discovered that CDF Fusion dynamically downloads the Python standard library zip file, python-stdlib.zip, along with additional Python wheel/package files, from the content delivery network cdn.jsdelivr.net. However, the client's security settings block this access. While they do not restrict access to the hosting website itself, they prevent the downloading of the essential Python standard library zip file and Python wheel files. We might be able to convince them to allow downloading packages from the official Python Package Index; however, we're not sure if CDF Fusion can be configured to change the source from which it downloads the needed packages.
Hi, I've encountered a bit of an issue with CDF when using Microsoft Edge: the full screen button doesn't do anything. In Chrome and Firefox it works as expected and opens the document in full screen mode. Regards, Markus
Hi, I have set up a notification to alert when runs fail. However, I noticed that I am still receiving email notifications whenever the extraction pipeline executes successfully. Could you please advise? Thanks
Hi experts, I need your help with the Cognite Python SDK.

My goal: get all linked timeseries of an Asset.

I am using the time_series method of the Asset class object (https://cognite-sdk-python.readthedocs-hosted.com/en/latest/assets.html#cognite.client.data_classes.assets.Asset).

Problem: it is only giving me the directly linked timeseries, not all the linked timeseries. In this case I get only 13 timeseries objects, compared to the 64 linked timeseries of the Asset.

I would appreciate your help with adopting the correct approach. Thanks and regards, Sree
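One possible explanation, in case it matches what you are seeing: Asset.time_series() filters on asset_ids=[asset.id], i.e. timeseries linked directly to that one asset, while a "linked" count of 64 may include timeseries attached to assets in the subtree below it. A minimal sketch comparing the two, assuming a placeholder external ID:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes auth is configured

    # Placeholder external ID; replace with the asset in question.
    asset = client.assets.retrieve(external_id="my-asset")

    # Directly linked timeseries only (what asset.time_series() returns).
    direct = client.time_series.list(asset_ids=[asset.id], limit=None)

    # Timeseries linked anywhere in the asset's subtree (the asset itself
    # plus all of its descendants).
    subtree = client.time_series.list(asset_subtree_ids=[asset.id], limit=None)

    print(len(direct), len(subtree))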
I would like to know the benefits of implementing a UNS (Unified Namespace) architecture in CDF. A detailed guide would be excellent, showing how to implement it in an oil and gas refinery with all the prerequisites and third-party platforms.
In my CDF instance, I am unable to get the Python kernel active/ready to run. The status remains "unknown" even after resetting the kernel several times. Additionally, I have restarted my PC and my browser instances, and tried several different browsers (Edge/Chrome). I understand that this may be a configuration issue with my company or local PC settings, but I am trying to troubleshoot it. Cognite support requested that I create a discussion here to troubleshoot further.
Hi team, I'm looking for a solution to the problem below. Currently we have built our custom model using the Core Data Model, and we have timeseries available in the cdf.timeseries resource. Now I want to provide a timeseries reference in the view below using a transformation. Can anyone suggest how the timeseries concept works in the Core Data Model when using transformations?

    type TimeseriesProperty implements Property & CogniteTimeSeries & CogniteDescribable & CogniteSourceable
      @view(version: "2_1") {
      name: String
      source: CogniteSourceSystem
      entity: Entity
      sourceId: String
      sourceContext: String
      sourceCreatedTime: Timestamp
      sourceUpdatedTime: Timestamp
      sourceCreatedUser: String
      sourceUpdatedUser: String
      isStep: Boolean!
      type: CogniteTimeSeries_type
      sourceUnit: String
      unit: CogniteUnit
      assets: [CogniteAsset]
      equipment: [CogniteEquipment]
      activities: [CogniteActivity]
      acquisitionPeriodTimeZone: String
      isAggregated: Boolean!
      isAggregatedAtSource: Boolean
      aggregationMethod: String
      isString: Boolean @default(value: "false")
    }
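Not a transformation example, but it may help to see that in the Core Data Model a timeseries is just a node instance written through the view; a transformation targeting the TimeseriesProperty view writes the same thing. A minimal sketch with the plain Python SDK, where the spaces, external IDs, and property values are placeholders:

    from cognite.client import CogniteClient
    from cognite.client.data_classes.data_modeling import (
        NodeApply, NodeOrEdgeData, ViewId,
    )

    client = CogniteClient()  # assumes auth is configured

    # Placeholders: adjust space, view version, and property values to your model.
    view = ViewId("my_space", "TimeseriesProperty", "2_1")

    node = NodeApply(
        space="my_instance_space",
        external_id="ts-property-001",
        sources=[
            NodeOrEdgeData(
                source=view,
                properties={
                    "name": "My timeseries",
                    "isStep": False,
                    "isString": False,
                    "type": "numeric",
                },
            )
        ],
    )
    client.data_modeling.instances.apply(nodes=node)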