Just starting my journey at CDF and I'm feeling enthusiastic about this technology. I've been working with databases for 4 years and hope to keep doing so for many more years to come.
I tried to create data models using the examples in the repo with the CDF Toolkit: https://github.com/cognitedata/toolkit/tree/main/tests/data/describe_data/data_models. However, I am not getting the expected result. I got the following result as the data model:

```
type Asset {
  areaId: Int
  categoryId: Int
  createdDate: Timestamp
  description: String
  documents: [File]
  isActive: Boolean
  isCriticalLine: Boolean
  metrics: [TimeSeries]
  parent: JSONObject
  sourceDb: String
  tag: String
  updatedDate: Timestamp
  children: [Asset]
}

type WorkOrder {
  actualHours: Int
  createdDate: Timestamp
  description: String
  dueDate: Timestamp
  durationHours: Int
  endTime: Timestamp
  isActive: Boolean
  isCancelled: Boolean
  isCompleted: Boolean
  isSafetyCritical: Boolean
  percentageProgress: Int
  plannedStart: Timestamp
  priorityDescription: String
  programNumber: String
  startTime: Timestamp
  status: String
  title: String
  workOrderNumber: String
  workPackageNumber: String
  workItems: [WorkItem]
  linkedAssets: [Asset]
}
```
Hello, I am at the end of the first module in the Domain Experts course. In order to complete the knowledge check, I need to access publicdata. However, when I log in to Cognite selecting publicdata as the organization, the Data Explorer tab is missing and therefore no data is available. Has anyone else run into this issue, and how did you correct it? I contacted support@cognite.com several days ago and have had no reply yet, so I thought I would try the community while I wait. Thank you
This year the SAP equipment numbers have changed; the description in the technical ID was also changed and some extra description was added. Is it possible to add an extra SAP ID number so that the same asset hierarchy can be maintained?
When using time series data, there are often situations where you want to get a point-in-time value for each day. For example, for time series A, which records an accumulated value, you want the raw data at 9 o'clock on 1/1, the raw data at 9 o'clock on 1/2, and so on. Even if you specify "granularity=1d" using the SDK, the data returned is aggregated, so it differs from the raw data. Therefore, to get raw data at daily granularity, I currently retrieve raw data at fine granularity and then pick out the reading as of 9 o'clock for each day. This is very tedious and time-consuming, so I would like an easy way to get the raw data at a specific time of day from the SDK options or the UI.
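Until such an option exists, the manual workaround described above can be sketched in plain Python on already-retrieved raw datapoints (`daily_snapshot` is a hypothetical helper for illustration, not an SDK method):

```python
from datetime import datetime

def daily_snapshot(points, hour=9):
    """From raw (datetime, value) points sorted by time, keep the first
    reading at or after `hour` o'clock for each calendar day."""
    snapshots = {}
    for ts, value in points:
        day = ts.date()
        if day not in snapshots and ts.hour >= hour:
            snapshots[day] = (ts, value)
    return snapshots

# Example: an accumulated value sampled every few hours
raw = [
    (datetime(2024, 1, 1, 6, 0), 100.0),
    (datetime(2024, 1, 1, 9, 5), 110.0),   # first reading at/after 09:00 on 1/1
    (datetime(2024, 1, 1, 15, 0), 130.0),
    (datetime(2024, 1, 2, 9, 0), 150.0),   # first reading at/after 09:00 on 1/2
]
snaps = daily_snapshot(raw)
```

This still requires fetching the fine-grained raw data first, which is exactly the overhead the request above asks to avoid.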
Problem statement: Synthetic Time Series in CDF are dynamically calculated based on expressions, but lack persistent identifiers. This limits usability when users need:
- Persistent IDs for discovery and access.
- Simplified queries via the API or SDK.
- Centralized definition of expressions as part of the data model, avoiding the need for each backend to redefine them.

Suggested approach: Introduce a Synthetic Time Series Definition object that:
- Allows defining synthetic series with a persistent external_id.
- Stores metadata such as expressions, descriptions, and units.
- Enables dynamic evaluation without requiring data storage.
- Supports defining expressions as part of a model, enabling reuse across different systems without requiring redundant definitions in backends.

Benefits:
- Usability: persistent identifiers for easier access and queries.
- Consistency: eliminates repetitive expression definitions.
- Scalability: centralized expression definitions simplify updates and maintenance.
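As a purely illustrative sketch of what such a definition object could look like (none of these fields exist in the CDF API today; the expression uses the existing synthetic time series `ts{}` syntax, but everything else is hypothetical):

```json
{
  "externalId": "syn_pump_power_kw",
  "name": "Pump power",
  "description": "Derived power from current and voltage",
  "unit": "kW",
  "expression": "ts{externalId='pump_current'} * ts{externalId='pump_voltage'} / 1000"
}
```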
I have changed phones and am not able to set up Microsoft Authenticator on the new one. Please reset my MFA access so that I can set up the authenticator again.
In Open Industrial Data (OID), we have moved away from API keys. Open Industrial Data currently supports OpenID Connect. You can check out more on how to configure OpenID Connect on Open Industrial Data. If you are planning to use OID and authenticate using the client credentials flow, you will need a client secret from the app registration in Azure Active Directory. Go to Open Industrial Data and you will see there is a widget for generating a new client secret. In the dropdown, you can select two options:
- Other: use this if you are using Postman or the Python SDK.
- JavaScript: select this option if your app is in JavaScript.

Once you click on "Create client secret", the secret will be displayed just once. Make sure to save it somewhere safe. Let me know if you have any questions 🙂
Develop an advanced training program to equip users with skills for contextualizing point cloud data, focusing on both detected and undetected objects. The training should address gaps in traditional modeling approaches by providing practical, hands-on experience with diverse scenarios.

Challenges addressed:
- Limited automation in object detection, requiring significant manual effort.
- Difficulty in contextualizing objects that remain undetected in raw point cloud data.
- Inability to handle diverse and complex industrial scenarios effectively.

Hands-on examples and exercises:
- Detected objects: Import and preprocess a point cloud dataset. Use AI-driven tools to identify and classify detected objects. Automatically link detected objects to an asset hierarchy, metadata, or P&ID diagrams.
- Undetected objects: Demonstrate manual workflows for identifying undetected objects within the point cloud. Tag, classify, and link undetected objects using the training interface. Show how to cr
Hello. We have a flexible data model with an Entity view, in which a property named parent refers back to Entity as an edge. A user sends us a query to get all entities whose name is, say, "SomeString". We construct the query below, but it errors out:

```
{ "with": { "Entity": { "nodes": { "filter": { "and": [
  { "matchAll": {} },
  { "hasData": [
    { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version": "1_7" }
  ]},
  { "or": [ { "and": [
    { "nested": {
        "scope": [ "slb-pdm-dm-governed", "Entity/1_7", "parent" ],
        "filter": { "in
```
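For comparison, a minimal query body that filters nodes of a view by a direct property via the `/models/instances/query` endpoint might look like the sketch below (assuming the space/view/version from the question and a `name` property; adjust to your model — this is not the fix for the nested-edge filter itself, just a known-good baseline shape to build from):

```json
{
  "with": {
    "entities": {
      "nodes": {
        "filter": {
          "equals": {
            "property": ["slb-pdm-dm-governed", "Entity/1_7", "name"],
            "value": "SomeString"
          }
        }
      }
    }
  },
  "select": {
    "entities": {
      "sources": [
        {
          "source": {
            "type": "view",
            "space": "slb-pdm-dm-governed",
            "externalId": "Entity",
            "version": "1_7"
          },
          "properties": ["name"]
        }
      ]
    }
  }
}
```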
We have observed that a particular GraphQL query with a filter that used to work earlier is no longer working. Basically, the GraphQL below used to work:

```
query MyQuery {
  listEntity(filter: {name: {in: ["abc", ""]}}) {
    edges {
      node {
        description
        createdTime
        name
      }
    }
  }
}
```

But now it is breaking with the error:

```
{ "error": { "code": 400, "message": "Invalid in filter with property reference [node, externalId]. Invalid externalId identifier '\"\"'." } }
```

Can you help us understand when and why this was changed? This is a breaking change for us. The same thing is observed for the /models/instances/query endpoint as well.
Do you have the ability to hide left-hand navigation items in the different workspaces? For example, if your organization isn't using Customer Apps (BETA), can you hide that from the menu options?
In the table under Data management > Data models, there is no way to fully view the description that has been provided for each data model: there is no hover text on the line, and the width of the table is limited. This description should also be available on the screen, in the header of the data model, when you click on it.
The December release added tabs for directly & indirectly linked data. I am having trouble finding documentation that explains the difference between the two.
My team has uploaded all the 360° laser scan files, unit-wise, but some files have errors. I want to check a file by its file number, but in the 360 view all file names are shown as "unknown". How can I find a particular file?
Welcome to Data Engineer Basics - Integrations! This discussion is dedicated to helping learners of the Data Engineer Basics – Integrations learning path succeed. If you're struggling with any of the courses in this learning path, post a comment with the challenge you're facing. You can also post your own tips and respond to fellow learners' questions. Cognite Academy's instructors are also here to help.
Hi Team, we are attempting to deploy a Streamlit app using the CDF Toolkit (alpha version), but we've encountered the following error during the process. Could you please assist us in resolving this issue so that we can effectively use the toolkit for deploying the app? Below are some additional details for your reference:

CDF Toolkit version: 0.3.0a7
Target CDF project: slb-pdf
I've also attached the cdf.toml file for your review.

Error snippet:

```
from ._auth_app import AuthApp
File "/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/cognite_toolkit/_cdf_tk/apps/_auth_app.py", line 6, in <module>
from cognite_toolkit._cdf_tk.commands import AuthCommand
File "/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/cognite_toolkit/_cdf_tk/commands/__init__.py", line 1, in <module>
from .auth import AuthCommand
File "/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/cognite_toolkit/_cdf_tk/commands/auth.py", line 34, in <module>
from
```
The Model Revision API is returning the "updated by" field as a GUID or some number. However, the requirement is to show the actual user name on the dashboard. Could you please help us convert the user ID to a username?
Problem: Currently, Cognite's GraphQL API requires specifying the unit system for each attribute individually. This approach can be repetitive and impractical when querying multiple attributes, especially for large-scale use cases.

Proposed idea: Introduce a global unit system parameter at the query level, allowing users to define a default unit system (e.g., metric or imperial) that applies to all attributes in the request. Individual attributes can still override this setting if needed.
Is there a recommended approach for using YAML configurations to automate the creation of spaces, containers, views, and data models in Cognite Data Fusion? How can we incorporate customizable parameters (e.g., space names, descriptions, and container properties) in the YAML files to make the process more flexible?

Example YAML configuration for containers:

```yaml
containers:
  - name: "example_container_1"
    description: "First sample container"
    external_id: "example_container_1_id"
    properties:
      name:
        type: "Text"
        nullable: false
      parent:
        type: "DirectRelation"
        nullable: true
      isValid:
        type: "Boolean"
        nullable: true
    indexes:
      - index_name: "entity_name"
        type: "BTree"
        properties: ["name"]
  - name: "example_container_2"
    description: "Second sample container"
    external_id: "example_container_2_id"
    properties:
      identifier:
        type: "Text"
        nullable: false
    indexes:
      - index_name: "identifie
```
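The CDF Toolkit has its own variable-substitution mechanism for module YAML, which is worth checking first. As a generic, Toolkit-agnostic illustration of parameterizing such a file (the template text and names below are made up for the example), placeholders can be rendered before the YAML is handed to any deployment step:

```python
from string import Template

# Hypothetical parametrized container YAML; ${...} placeholders are
# filled in before the file is passed to a deployment tool.
CONTAINER_TEMPLATE = Template("""\
containers:
  - name: "${container_name}"
    description: "${description}"
    external_id: "${container_name}_id"
    properties:
      name:
        type: "Text"
        nullable: false
""")

rendered = CONTAINER_TEMPLATE.substitute(
    container_name="example_container_1",
    description="First sample container",
)
```

The same rendering step can be run once per environment (dev/test/prod) with a different parameter set, which keeps a single template as the source of truth.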
I cannot update (push) an existing time series with the client.time_series.data.insert() method:

```python
data_points = [
    {
        "timestamp": pd.to_datetime(df5.index[i]).timestamp() * 1000,
        "value": df5["My Feature"].iloc[i],
    }
    for i in range(len(df5))
]
client.time_series.data.insert(data_points, external_id=external_id_feature)
```

The way I followed is: trim the specific date range, then insert the time series. Is there a standard way to update an existing time series?
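One detail worth checking in the snippet above is that CDF expects timestamps as integer epoch milliseconds, while `.timestamp() * 1000` yields a float. Also, inserting a datapoint at a timestamp that already has a value overwrites it, so "updating" is normally just re-inserting. A minimal sketch (plain Python; the actual insert call is left as a comment since it needs a configured CogniteClient, and `to_datapoints` is a made-up helper):

```python
from datetime import datetime, timezone

def to_datapoints(rows):
    """Convert (naive-UTC datetime, value) rows to (epoch_ms, value)
    tuples, a form the Cognite SDK accepts for datapoint insertion."""
    return [
        (int(ts.replace(tzinfo=timezone.utc).timestamp() * 1000), value)
        for ts, value in rows
    ]

points = to_datapoints([
    (datetime(2024, 1, 1, 0, 0), 1.5),
    (datetime(2024, 1, 1, 1, 0), 2.5),
])

# With a configured client, re-inserting at an existing timestamp
# overwrites the old value:
# client.time_series.data.insert(points, external_id=external_id_feature)
```

If datapoints need to be removed rather than overwritten (e.g., the new data has fewer points in the range), a delete of the affected range before inserting is the usual pattern.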