Recently active
Our TA and Projects team is evaluating the 3D tool inside Cognite to be the backbone of how work is planned. I would like to submit a few enhancement requests that would help make the planning process in 3D value-added. One is the ability to add and move 3D objects around a scene so planners can understand what equipment is needed or whether equipment can fit in a working area: cranes, people, scaffolding, etc. Specifically for cranes, it would be valuable to add different sizes and the ability to understand the reach of the crane, so planners can see what work could be done without moving it. I can see how this might pair with the capability already in Maintain to see work orders in 3D.
Hello everyone. My name is Adam and I just registered on Cognite Hub. My ultimate goal is to expand my career in the IT Operations field. But my goal here is to get to know more about CDF and what it offers. Hope to chat with everyone at some point.
Hello, we're using pygen to generate a Full Data Model and instances. One of our containers can have multiple connections to objects in another container (see the pic below). As a result, definitions are lists of objects. It is then difficult to query data using tuples:

bays = dm_client.bay.list(
    bay_to_line=(space, id),
    retrieve_connections="identifier",
    limit=None,
).to_pandas()

We can overcome this by reading all bays with retrieve_connections="full" and then querying the data. But we would like to generate a model where such tricks are not needed and the definitions are not a list. Is it possible? To generate the model we use this workflow:

neat.read.rdf("lines_bays.jsonld")
neat.infer(max_number_of_instance=-1)
neat.prepare.data_model.cdf_compliant_external_ids()
neat.verify()
neat.convert("dms")
neat.set.data_model_id(("lin", "lines", "v1"))
neat.to.cdf.data_model()
neat.to.cdf.instances()
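For reference, the workaround mentioned above might be sketched like this (a sketch only; the attribute names on the generated classes are assumptions about what pygen produces):

# Workaround sketch: read everything with full connections, filter client-side.
# Assumes bay_to_line holds a list of connected line objects after retrieve_connections="full".
all_bays = dm_client.bay.list(retrieve_connections="full", limit=None)
matching = [
    bay
    for bay in all_bays
    if any(
        line.space == space and line.external_id == id
        for line in (bay.bay_to_line or [])
    )
]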
I'm using the Python SDK and want to query instances based on a condition using the data_modeling.instances.query method. I have a view called TimeseriesPropertyType which has a field (properties: [Property]) that is a reverse direct relation through Property: "propertyType". I just need to check whether there are any "properties" field values associated with each instance of the TimeseriesPropertyType view. For that, I'm fetching the data in the Property view associated with those particular TimeseriesPropertyType instances and doing the check manually in code. Is there any direct filter available that I can use? I don't see any filters available on the properties field in the query explorer. Below is the query I'm using:

# view_id_ts_prop_type - TimeseriesPropertyType view
# view_id_property - Property view
query = Query(
    with_={
        "TimeseriesPropertyType": NodeResultSetExpression(
            limit=10000,
            filter=HasData(views=[view_id_ts_prop_type]),
        ),
        "Property": NodeResultSetExpression(
            limit=10000,
            direction="inwards"
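The query above is cut off; for context, a complete shape of that traversal might look like the sketch below. The view ids are placeholders, and it assumes the reverse relation is backed by the direct relation Property.propertyType, so the Property result set holds the inward matches to check against:

from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import ViewId
from cognite.client.data_classes.data_modeling.query import (
    NodeResultSetExpression,
    Query,
    Select,
    SourceSelector,
)
from cognite.client.data_classes.filters import HasData

client = CogniteClient()  # assumes configuration from the environment

# Placeholder view ids - adjust space/version to your model
view_id_ts_prop_type = ViewId("my_space", "TimeseriesPropertyType", "v1")
view_id_property = ViewId("my_space", "Property", "v1")

query = Query(
    with_={
        "TimeseriesPropertyType": NodeResultSetExpression(
            limit=10000,
            filter=HasData(views=[view_id_ts_prop_type]),
        ),
        "Property": NodeResultSetExpression(
            from_="TimeseriesPropertyType",
            # Traverse the direct relation Property.propertyType backwards
            through=view_id_property.as_property_ref("propertyType"),
            direction="inwards",
            limit=10000,
        ),
    },
    select={
        "TimeseriesPropertyType": Select([SourceSelector(view_id_ts_prop_type, ["*"])]),
        "Property": Select([SourceSelector(view_id_property, ["*"])]),
    },
)
result = client.data_modeling.instances.query(query)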
Hello, We are currently using pygen (cognite-pygen==1.0.2) to generate an SDK for our data model. While testing a use case where we want to query all Casings of all Onshore Wells, we notice that we get different results using the `List` and `Select` methods.

# First method of querying
wells = dm_client.well.list(product_line='Onshore', retrieve_connections='identifier', limit=-1)
direct_relations = []
for w in wells:
    if w.wellbores:
        for wb in w.wellbores:
            wb_ext = wb.external_id
            inst = dm.DirectRelationReference(wb.space, wb_ext)
            direct_relations.append(inst)
wb_sections = dm_client.wellbore_section.list(wellbore=direct_relations, retrieve_connections='full', limit=-1)
casings = wb_sections.casing
print(f'casings: {len(casings)}')  # casings: 645

# Second method of querying
casings = dm_client.well.select().product_line.equals('Onshore').wellbores.wellbore_sections.casing.list_casing(limit=-1)
print(f'casings: {len(casings)}')  # casings: 94

Anything we are missing here? Could you please take a look? Thanks!
We are thrilled to announce the launch of our new Microlearning courses designed to empower you with knowledge and skills in Generative AI. These courses are tailored to provide you with practical insights and strategic understanding, helping you stay ahead in the rapidly evolving industrial landscape.

Course Highlights:

1. Introduction to Generative AI for Industry
Explore the evolution of AI in industry. Understand large language models (LLMs) and their applications. Learn about industrial knowledge graphs and their impact on decision-making.

2. Ensuring Safe and Secure Generative AI
Discover how to liberate and contextualize industrial data. Implement hallucination-free and secure AI solutions. Utilize Retrieval-Augmented Generation (RAG) for reliable AI applications.

3. Operationalizing Generative AI
Build and maintain contextualization pipelines. Leverage digital twins for real-time data insights. Develop robust contextualization engines for scalable data management.

4. Practical
Hi everyone! My name is Ahmed Almazrouei, and I am here to learn more about Cognite Platform deployment and how it can support different organisations to achieve the maximum business benefits that can be triggered by AI and Data Analytics.
Hello Team, We have three views:

ScalarProperty {
    # other fields
    entity: Entity
}

Entity {
    # other fields
    entitytype: EntityType
}

EntityType {
    externalId
}

Given this, from the ScalarProperty view, can we use groupby on entity.entityType.externalId when querying instances and counting them using the query API, or is there another way to achieve this?
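As far as I know, the instance aggregation endpoint's group_by only accepts properties on the view itself, so a nested path like entity.entityType.externalId cannot be grouped server-side. A client-side sketch of the two-step alternative (the space/view ids are placeholders, and it assumes direct relations come back as {"space": ..., "externalId": ...} dicts):

from collections import Counter

from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import ViewId

client = CogniteClient()  # assumes configuration from the environment

# Placeholder view ids - adjust to your model
scalar_view = ViewId("my_space", "ScalarProperty", "v1")
entity_view = ViewId("my_space", "Entity", "v1")

# Map each Entity to its EntityType externalId
entities = client.data_modeling.instances.list(
    instance_type="node", sources=entity_view, limit=None
)
type_of = {
    e.external_id: (e.properties[entity_view].get("entitytype") or {}).get("externalId")
    for e in entities
}

# Count ScalarProperty instances per entity.entityType.externalId
scalar_props = client.data_modeling.instances.list(
    instance_type="node", sources=scalar_view, limit=None
)
counts = Counter(
    type_of.get((sp.properties[scalar_view].get("entity") or {}).get("externalId"))
    for sp in scalar_props
)
print(counts)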
We have generated an SDK using pygen on our model. After that, we requested data from a particular view (in the example below, PropertyType). It contains ~138k records. Observe the results below on time taken to fetch:

Cognite SDK - 44 secs
Pygen SDK - 82 secs

It's fetching the same data from the same Cognite project, but we still see a performance lag; it's taking almost double the time. Both numbers are from my local sandbox.

Cognite SDK code:

config = {
    "client_name": "abcd",
    "project": "slb-odf-qa",
    "base_url": "https://westeurope-1.cognitedata.com/",
    "credentials": {
        "client_credentials": {
            "client_id": "e063088ad3b4548d4911bd4a617990aa",
            "client_secret": "",
            "token_url": "https://p4d.csi.cloud.slb-ds.com/v2/token",
            "scopes": ["9237c91ce1ea434fa5a91262a5ea3646"],
        },
    },
}
cognite_client = CogniteClient.load(config)

view_id_1 = ViewId(space="slb-pdm-dm-governed", external_id="PropertyType", version="1_6")

def _get_timeseries():
    next_cursor = None  # Initialize the cursor as None at first
    all_data = []
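The function above is cut off; for what it's worth, a minimal sketch of the same fetch that leaves pagination to the SDK (rather than looping over cursors manually) might look like this:

def _get_timeseries():
    # limit=None lets the SDK page through all ~138k instances internally
    return cognite_client.data_modeling.instances.list(
        instance_type="node",
        sources=view_id_1,
        limit=None,
    )

all_data = _get_timeseries()
print(len(all_data))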
When I am trying to run cdf deploy or cdf deploy --dry-run, in both cases I am getting this error:

ERROR (AuthorizationError): Don't have correct access rights to clean spaces. Missing: DataModelsAcl(actions=[<DataModelsAcl Action.Read: 'READ'>], scope=AllScope()) Please click here to visit the documentation and ensure that you have setup authentication for the CDF toolkit correctly

Please note that I am using the toolkit in an Azure DevOps pipeline, so it's CLI, and I don't see any documentation link there as mentioned in the error. What are the required access rights for deploy (with or without dry-run)?
Currently, the Disciplines are attached to the application, and if one specific site needs to add a new discipline or edit an existing one, this will reflect in all the other locations. Ideally, disciplines would be managed per location, similar to what was built for Observations.
In CDF, is the "Domain data model" provided out of the box by the product, or does it need to be configured? My understanding is that source data is first staged in the staging/raw layer, then mapped to a "domain data model", and then each customer can derive a "solution data model" from the "domain data model". Most probably the "domain data model" should be 3NF. Can someone validate my understanding, please?
My team and I were utilizing Canvas during a hands-on session, with the expectation that everyone would be able to access the link and contribute simultaneously. Unfortunately, we encountered an issue, receiving the following error message: The inability to collaborate in real-time significantly limits the effectiveness of the tool, particularly for team-based activities. Real-time editing is a critical feature for fostering dynamic collaboration, and without it, the user experience is somewhat hindered. Is there any plan to introduce this functionality in the near future? Enabling simultaneous editing would greatly enhance Canvas’s collaborative potential, making it a much more powerful tool for teams working together.
Hello all, I'm Jake. I am new to Cognite, and also new to Data Analytics and Digitalisation, having a background in Electrical Engineering. I am hoping to gain an understanding of CDF and expand my knowledge in Data Analysis in order to deliver quality solutions for our clients.
Hello everyone. My name is Reynald and I just registered on Cognite. My goal is to extend my knowledge of CDF. Previously, I worked on autonomous driving using AI. Hope to chat with everyone at some point.
The new CDF Power BI REST connector has been certified by Microsoft and is now included in the latest Power BI Desktop version and deployed in the Power BI online service.

What's new with the Power BI REST connector:

Flexible authentication: Connect Power BI with any IdP supported by CDF (the legacy OData connector only worked with Azure Entra ID)

Broader data access: Fetch data from OData services (just like the legacy connector), access data from Data Models using GraphQL, and connect to any GA CDF API endpoint

Significant performance boost: Up to 10x faster when using regular REST endpoints compared to fetching the same data via OData

The connector is currently in Beta, and we're eager to hear customer feedback before promoting it to GA. The documentation for the new connector is available here, and we're working on a new set of micro learning modules in Academy based on the new connector.
Hi. We're administering CDF deployments from GitHub using GitHub Actions and the Cognite toolkit. I'm setting up a GitHub action to automatically perform a dry-run for a pull request to main, and post the dry-run output as a comment on the PR to assist the reviewer. I'd preferably like to use a client with read-only access to CDF for this, but it seems cdf-tk requires full write access even for dry-runs - is that so? Performing a dry-run locally with read-only credentials results in:

ERROR (AuthorizationError): Don't have correct access rights to deploy iam.groups(all_scoped). Missing:
GroupsAcl(actions=[<GroupsAcl Action.Create: 'CREATE'>], scope=AllScope())
GroupsAcl(actions=[<GroupsAcl Action.Delete: 'DELETE'>], scope=AllScope())
Please click here to visit the documentation and ensure that you have setup authentication for the CDF toolkit correctly.

I would expect to see the same error in my GitHub action, but it stops without much useful information:

Run cdf-tk deploy --env=dev --dry-ru
Is anyone using Cognite as their main timeseries historian? We are always exploring alternatives and would be interested to hear if Cognite has fit this use case for any users.
I have created a workflow in which I am creating dynamic tasks depending on the input; it creates batches of ids and creates tasks out of them. Below is the workflow definition:

WorkflowVersionUpsert(
    workflow_external_id="test_dynamic-0729",
    version="1",
    workflow_definition=WorkflowDefinitionUpsert(
        description="This workflow has two steps",
        tasks=[
            WorkflowTask(
                external_id="test_sub_tasks",
                parameters=FunctionTaskParameters(
                    external_id="test_sub_tasks",
                    data="${workflow.input}",
                ),
                retries=1,
                timeout=3600,
                depends_on=[],
                on_failure="abortWorkflow",
            ),
            WorkflowTask(
                external_id="test_create_sub",
                parameters=DynamicTaskParameters(
                    tasks="${test_sub_tasks.output.response.tasks}",
                ),
                name="Dynamic Task",
                description="Executes a list of workflow tasks for subscription creation",
                retries=0,
                timeout=3600,
                depends_on=["test_sub_tasks"],
                on_failure="abortWorkflow",
            ),
        ],
    ),
)

As part of this workflow, I have some tasks that need to be executed in parallel and are expected to finish in around similar
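For reference, a dynamic task expands whatever its tasks expression resolves to, so the test_sub_tasks function has to return the task list under the referenced key (output.response.tasks). A sketch of such a Cognite Functions handler, where the batch size and the downstream function external id are assumptions:

def handle(client, data):
    # Split the incoming ids into batches (size 100 is an arbitrary assumption)
    ids = data["ids"]
    batches = [ids[i : i + 100] for i in range(0, len(ids), 100)]

    # One function task per batch; the dynamic task expands these, and tasks
    # with no dependencies between them can run in parallel
    tasks = [
        {
            "externalId": f"create_sub_batch_{i}",
            "type": "function",
            "parameters": {
                "function": {
                    "externalId": "create_subscription",  # hypothetical function
                    "data": {"ids": batch},
                }
            },
        }
        for i, batch in enumerate(batches)
    ]
    return {"tasks": tasks}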
IN SHORT

Last year, Aker BP leveraged Cognite Data Fusion™ (CDF) and AI-powered Atlas Agents to identify technology for streamlining Root Cause Analysis (RCA) processes, aiming to improve process efficiency and reduce cost. The value potential identified has been:

Ability to easily and rapidly find all relevant information
Increased number of performed RCAs with the same resources
Significant improvement in RCA process efficiency (on data-driven tasks)
Better insights through AI-assisted multidisciplinary data analysis
Reduced number of recurring failures and thus increased MTTF

CHALLENGE

Aker BP operates six brownfield assets, which, like assets anywhere in the world, are expected to require an increasing number of Root Cause Analyses (RCA) as they age. However, existing RCA processes have pain points, including:

Lengthy and cumbersome data gathering from multiple sources and systems (PI, SAP, inspection and maintenance reports on local SharePoint sites)
Limited real-time
I just got a new phone and I'm facing an issue with the Authenticator app. Please reset the MFA access so that I can set up the authenticator again. Regards
We are running into merge conflicts when multiple teams are working on features that touch the same files. For example, two teams working on different modules might both need to scaffold permissions in the same user groups, but deploy at different times. It would be really helpful if, when defining a CDF Toolkit module, we could have different configurations defined for the same resource but spread across different modules, so that if one team needed to add dataset permissions to a common persona, they could do so within their module without interfering or conflicting with another module adding to the same common persona. Similar to how in CSS you have a base style and apply styles on top, which are added or overwritten.
The current naming convention in the OPC UA server node structure is not intuitive for users trying to find a tag in Cognite. From the example below, we will have 3 timeseries tags with Message and also Data. It is not intuitive for the user to look into the path or external id to find which node a tag belongs to. How do we configure the OPC UA extractor to construct the timeseries name based on certain criteria, or based on the complete path of the node (e.g. Root\Temperature1\Message as the name)?

E.g. OPC UA node structure:

Root
|_ Temperature1
   |_ Message
   |_ Data
|_ Temperature2
   |_ Message
   |_ Data
|_ Pressure1
   |_ Message
   |_ Data
Hi All, I am working with hosted extractors for Kafka, and it works pretty well for me with transformations when we have plain JSON data in the Kafka topics. Now I am trying to check whether we can work with zlib/gzip-compressed data coming in on the topic. I have a JSON string, and the MessagePayload attribute of the JSON string holds compressed data (rather than the whole message being compressed). Is it possible to write a transformation for such data? E.g., in the Kafka topic:

{
    "Header": {
        "MessageId": 133367162,
        "MessageType": "DATA_REPORT",
        "Timestamp": 1741122422,
        "PayloadCompression": "Z_LIB_COMPRESSION"
    },
    "MessagePayload": "eJyqVnJJLctMTi1WsoquVvJLzE1VslIyVNJRckksSQxJTIeIhySmQ6WCA3wVfFMTi0uLUlNgqioLQDIu/qFOPq7xYY4+oa5KOkphiTmlMLNCMnNTi0sScwuUrAzNTQwNjYxMjAwtzA11lAJLE3MySyqVrAxqY2tjawEBAAD//x5aKt0="
}
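As far as I know, the hosted extractor's mapping language has no zlib primitive, so the payload would need to be decompressed before (or instead of) the built-in transformation. As an illustration, decoding such a message in plain Python looks roughly like this:

import base64
import json
import zlib

def decode_message(raw_value: str) -> dict:
    """Decode a topic message whose MessagePayload is base64-encoded, zlib-compressed JSON."""
    message = json.loads(raw_value)
    if message["Header"].get("PayloadCompression") == "Z_LIB_COMPRESSION":
        compressed = base64.b64decode(message["MessagePayload"])
        return json.loads(zlib.decompress(compressed))
    return message["MessagePayload"]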
As owner of a canvas, the Canvas UI identifies me as expected such as in version history or in comments. When another user makes edits and comments they are not identified and show up as seen in the attached screenshot. Just in case it matters, this is a Rockwell Automation DataMosaix project. Thanks!