Unleashing Innovation: The IMPACT Challenge 2025 Has Landed on Cognite Hub
Recently active
Welcome to the CDF Fundamentals Discussion! This discussion is dedicated to helping learners of the Cognite Data Fusion Fundamentals learning path succeed. If you're struggling with the exercises in this learning path, try the tips & tricks below or post a comment with the challenge you're facing. You can also post your own tips and respond to fellow learners' questions. Cognite Academy's instructors are also here to help.
Hi everyone! 👋

My name is Paul Peña, and I'm currently the Chief Technology Officer at XmaasrT, based in Cali, Colombia. We are working on digital transformation projects for water treatment plants (PTAPs), and we're currently exploring Cognite Data Fusion to centralize, contextualize, and visualize real-time data from multiple sensors and industrial equipment.

🎯 My learning goals:
- Understand the core functionalities of Cognite Data Fusion
- Learn how to contextualize data and build digital twins
- Implement dashboards and alert systems for operations
- Lead successful integration projects using Cognite solutions

💡 Motivation: I want to bring real innovation to the water industry through smart use of data, and I believe Cognite is a key partner to help us reach that goal.

Happy to be here and looking forward to learning from this amazing community!

Best regards,
Paul Peña
CTO – XmaasrT
Hi,

I'm developing a Streamlit app using the standard Streamlit library. However, I've noticed that the app behaves differently on my local machine compared to its deployment in CDF, likely due to additional configurations or dependencies used in CDF. Could someone guide me on how to replicate the CDF environment locally to ensure consistent behavior?

Thanks
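One first step (an assumption on my part, not an official Cognite workaround) is to pin your local environment to the exact package versions the CDF-hosted app reports, since the hosted runtime is Pyodide-based and ships its own pins. A minimal sketch, assuming you first print the versions from the deployed app and record them locally; the pins below are placeholders:

```python
# Compare locally installed package versions against the versions
# the app reports when running inside CDF's Pyodide-based runtime.
from importlib.metadata import version, PackageNotFoundError

# Placeholder pins: collect the real values by printing them from the deployed app.
cdf_pins = {
    "streamlit": "1.32.0",    # hypothetical: replace with the version CDF reports
    "pandas": "1.5.3",        # hypothetical
    "cognite-sdk": "7.43.0",  # hypothetical
}

for package, pinned in cdf_pins.items():
    try:
        local = version(package)
    except PackageNotFoundError:
        local = None
    status = "OK" if local == pinned else "MISMATCH"
    print(f"{package}: local={local}, cdf={pinned} -> {status}")
```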
Hi.

Upon doing either a deploy or a dry-run deploy, the summary displayed by the toolkit includes quite a few resources that have not changed. Like in this example, where the only thing I changed was the handler file for a Cognite Function:

| Resource | Would have Created | Would have Deleted | Would have Changed | Untouched | Total |
|---|---|---|---|---|---|
| [...] | | | | | |
| function schedules | 0 | 0 | 0 | 2 | 2 |
| functions | 1 | 1 | 0 | 1 | 2 |
| raw tables | 1 | 0 | 0 | 37 | 38 |
| resource-scoped groups | 16 | 16 | 0 | 12 | 28 |
| transformations | … | | | | |
I have created one workflow in which I am creating dynamic tasks depending on input: it creates batches of IDs and creates tasks out of them. Below is the workflow definition:

```python
from cognite.client.data_classes import (
    WorkflowVersionUpsert,
    WorkflowDefinitionUpsert,
    WorkflowTask,
    FunctionTaskParameters,
    DynamicTaskParameters,
)

WorkflowVersionUpsert(
    workflow_external_id="test_dynamic-0729",
    version="1",
    workflow_definition=WorkflowDefinitionUpsert(
        description="This workflow has two steps",
        tasks=[
            WorkflowTask(
                external_id="test_sub_tasks",
                parameters=FunctionTaskParameters(
                    external_id="test_sub_tasks",
                    data="${workflow.input}",
                ),
                retries=1,
                timeout=3600,
                depends_on=[],
                on_failure="abortWorkflow",
            ),
            WorkflowTask(
                external_id="test_create_sub",
                parameters=DynamicTaskParameters(
                    tasks="${test_sub_tasks.output.response.tasks}"
                ),
                name="Dynamic Task",
                description=
```
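For context, the `${test_sub_tasks.output.response.tasks}` reference only resolves if the first function's response contains a list of valid task definitions. A minimal sketch of what such a handler might return; the batch size, function name, and task shape are illustrative assumptions, not from the original post:

```python
# Sketch of a Cognite Function handler whose response feeds the
# DynamicTaskParameters reference ${test_sub_tasks.output.response.tasks}.
# Batch size and downstream function name are illustrative placeholders.
def handle(client, data):
    ids = data.get("ids", [])
    batch_size = 100
    batches = [ids[i : i + batch_size] for i in range(0, len(ids), batch_size)]

    tasks = [
        {
            "externalId": f"process_batch_{i}",
            "type": "function",
            "parameters": {
                "function": {
                    "externalId": "process_batch",  # hypothetical function
                    "data": {"ids": batch},
                }
            },
        }
        for i, batch in enumerate(batches)
    ]
    # The workflow picks this list up via output.response.tasks
    return {"tasks": tasks}
```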
We use the DB Extractor to push data from an MSSQL database to RAW, but have run into a problem. Over time, the RAW table grows much larger than the database table because rows that are deleted in the database are not deleted in RAW. How can I set up the config YAML for the DB extractor to also handle rows that are deleted in the database?
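One workaround (an assumption, not a documented DB extractor config option) is a periodic cleanup job that deletes RAW rows whose keys no longer exist in the source table. A minimal sketch with the Python SDK, assuming the RAW row key equals the source primary key; connection details and names are placeholders:

```python
# Periodic cleanup: delete RAW rows whose keys no longer exist in the source.
# DSN, database, and table names are placeholders.
import pyodbc
from cognite.client import CogniteClient

client = CogniteClient()  # configured via environment, e.g. client credentials

# 1. Collect current primary keys from the source table.
conn = pyodbc.connect("DSN=my_mssql_dsn")  # hypothetical DSN
source_keys = {str(row[0]) for row in conn.execute("SELECT id FROM my_table")}

# 2. Collect keys currently in RAW.
raw_rows = client.raw.rows.list(db_name="my_db", table_name="my_table", limit=None)
raw_keys = {row.key for row in raw_rows}

# 3. Delete the rows that disappeared from the source.
stale = list(raw_keys - source_keys)
if stale:
    client.raw.rows.delete(db_name="my_db", table_name="my_table", key=stale)
    print(f"Deleted {len(stale)} stale rows")
```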
In the current state of Charts, we only really see threshold-based alerts: greater than or less than a particular value. This is limiting, since it does not account for time series that change state over time. For example, if I have a time series monitoring pump health as (0 - steady, 1 - transient, 2 - medium criticality, 3 - high criticality), I cannot use this threshold-based logic without creating a bunch of different alerts to track my pump's health. Are there any options within CDF to tackle such issues?
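One pattern worth considering (a sketch, not a built-in Charts feature) is a scheduled Cognite Function that fetches recent datapoints and flags transitions into higher-criticality states; the external ID, lookback window, and threshold below are placeholders:

```python
# Detect state transitions in a health-state time series (0-3 scale).
# External ID and lookback window are placeholders.
from cognite.client import CogniteClient

client = CogniteClient()

dps = client.time_series.data.retrieve(
    external_id="pump_health_state",  # hypothetical time series
    start="2h-ago",
    end="now",
)

values = list(dps.value)
timestamps = list(dps.timestamp)

# Flag every transition into a state of medium criticality or worse.
for prev, curr, ts in zip(values, values[1:], timestamps[1:]):
    if curr >= 2 and prev < 2:
        print(f"State escalated to {curr} at {ts}")  # e.g. hand off to a notifier
```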
I have got a new phone and am not able to restore Microsoft Authenticator from the backup on it. Please reset my MFA access so that I can set up the authenticator again.
Define graph axes for time series data in Industrial Canvas. In the example below, the date range and unit of measure (UoM) are needed to interpret the graph effectively. Submitted at the request of the Cognite project team.
Users need quick access from each CDF resource type back to the source data systems when they need to deep-dive into additional data. This could be achieved through hyperlinks that combine a base URL with a dynamic equipment-tag URL parameter, accessible directly from each resource type tab interface or as a separate calculated data field.

Example: Equipment tag (function #) 10-1107
https://reliability.mgroup.com/RED-FE/FindEquip/equipProfile?Functionno=10-1107

Submitted at the request of the Cognite project team.
Got a great idea for the Impact Challenge? We can't wait to see it! Submitting your idea is easy, and it's the first step toward turning your concept into something real, with the support of the Cognite community and mentors. When you submit, we'll publish just the title and a short description of your idea (anonymously) for the community voting phase.

🙋‍♀️ Got questions or not sure where to start? Just ask - we're here to help and happy to support you throughout the process!

Submit your idea here!
By default, filters in Cognite seem to be set to "AND," meaning that if you have selected two criteria, the target must have both properties to be returned. There are times when the user wants to cast a wider net and use "OR," meaning that the filter will return results if either criterion is met. For example, the user may want results from two different platforms, but not all platforms. In some cases, the filter will not even let the user select a filter element because of this behavior. I suggest standardizing how filters work and allowing the user to select either "AND" or "OR".
Please add a feature where you can select all when filtering on "is equal to," similar to how you would do it in Excel. This is extremely helpful when trying to obtain a specific data set out of your search.
Hello everyone,

I would like to use the OpenAI library or the LangChain library in Cognite Streamlit to implement natural-language data search and chat applications. We are using Azure OpenAI Services on our Azure tenant, but I understand that because the Streamlit app runs on Pyodide, we can essentially only use pure-Python packages due to certain constraints. If there are any best practices regarding this, I would appreciate the guidance.

I recognize that one potential solution is to build our own wheel, but I suspect that this option may not be viable since Cognite Streamlit cannot store built wheel files: https://py.cafe/docs/howto/build

For the OpenAI library, I found that it can be installed by downgrading to version 1.39.0, which I suspect is due to a dependency on the jiter library, but I am looking for other possible solutions (and for LangChain).
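One way to sidestep the package constraint entirely (an assumption on my part, not official guidance) is to call the Azure OpenAI REST API directly with plain HTTP instead of importing the openai package. A minimal sketch; the resource name, deployment, API version, and key handling are placeholders:

```python
# Call Azure OpenAI's chat completions REST endpoint directly,
# avoiding the openai package and its non-pure-Python dependencies.
# Resource name, deployment, and API version are placeholders.
import requests

AZURE_ENDPOINT = "https://my-resource.openai.azure.com"  # hypothetical
DEPLOYMENT = "gpt-4o"                                    # hypothetical
API_VERSION = "2024-02-01"                               # check your tenant

def chat(prompt: str, api_key: str) -> str:
    url = (
        f"{AZURE_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    response = requests.post(
        url,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

Whether `requests` works unmodified depends on the Pyodide setup in Cognite Streamlit; if it does not, the same request can be expressed with Pyodide's own HTTP utilities.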
Our full Cognite Data Fusion Bootcamp calendar is now available through December 2025. This 4-day, in-person training is your opportunity to build end-to-end solutions using Cognite Data Fusion (CDF) - covering data foundation, integration, modeling, and orchestration through hands-on, real-world scenarios. Ready to skill up? Reserve your spot today for our bootcamps scheduled in Oslo and Houston.

| Event | Date | Location |
|---|---|---|
| Cognite Data Fusion Bootcamp in Oslo - June 2025 | June 16-19, 2025 | Oslo, Norway |
| Cognite Data Fusion Bootcamp in Houston - June 2025 | June 23-26, 2025 | Houston, TX, US |
| Cognite Data Fusion Bootcamp in Houston - July 2025 | July 14-17, 2025 | Houston, TX, US |
| Cognite Data Fusion Bootcamp in Oslo - Aug 2025 | August 11-14, 2025 | Oslo, Norway |
| Cognite Data Fusion Bootcamp in Houston - Aug 2025 | August 18-21, 2025 | Houston, TX, US |
| Cognite Data Fus… | | |
I noticed a few misses / potential improvements in the alerts & monitoring section of Cognite Charts - was wondering if anybody else has had similar experiences?

- Email alerts always come in UTC. Since we are based out of India, we may miss important alerts because of this.
- Say I have an asset for which I have generated a chart (in our case, a particular well from a particular lift type showing critical parameters). A filter-based option where I can just switch the well shown, using the same parameters as tagged before, would help.
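As a stopgap for the UTC issue (just a sketch on the consuming side, not a Charts setting), timestamps from alert payloads can be converted to IST before they are surfaced:

```python
# Convert a UTC alert timestamp to Indian Standard Time for display.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_ist(utc_ts: str) -> str:
    # Assumes ISO 8601 timestamps, e.g. "2025-06-16T04:30:00+00:00"
    dt = datetime.fromisoformat(utc_ts).replace(tzinfo=timezone.utc)
    return dt.astimezone(ZoneInfo("Asia/Kolkata")).isoformat()

print(to_ist("2025-06-16T04:30:00+00:00"))  # 2025-06-16T10:00:00+05:30
```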
Thank you for your continued support. The following error occurred during the P&ID contextualization step. Is there any solution?
I recently started using CDF. I started uploading data to CDF using the OPC UA extractor and also authenticated Postman with Entra ID. However, even though I followed Cognite's documentation step by step, I keep encountering a 401 Unauthorized error. Attached below are my Postman settings. Could you help me identify the issue and suggest how to resolve it?
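One way to isolate whether the problem is the token or the Postman setup (a sketch, with the tenant, cluster, project, and credentials as placeholders) is to request a token via the client-credentials flow and call CDF directly:

```python
# Sanity-check Entra ID client-credentials auth against CDF outside Postman.
# Tenant, cluster, project, and credentials are placeholders.
import requests

TENANT_ID = "your-tenant-id"
CLUSTER = "api"            # e.g. "api", "westeurope-1"
PROJECT = "your-project"

token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "your-client-id",
        "client_secret": "your-client-secret",
        # The scope must target the CDF cluster, not Microsoft Graph.
        "scope": f"https://{CLUSTER}.cognitedata.com/.default",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# A 401 here despite a valid token usually points to the project name or group access.
resp = requests.get(
    f"https://{CLUSTER}.cognitedata.com/api/v1/projects/{PROJECT}/assets?limit=1",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(resp.status_code, resp.text[:200])
```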
Hi,

I am looking into using the Toolkit more actively for deploying resources to CDF. One question that came up while researching how to use the Toolkit is what kind of validation actually happens when doing a dry run for deploying data modeling resources. I do not really have a specific issue I want answered; rather, I want to learn more about the tool, so prepare for a lot of questions from my notes :)

- Does it test that the configuration of views and containers work together?
- Does a successful dry run mean that I can be sure that the deployment will always work?
- Is there anything I need to consider even after getting a successful dry run?
- What kind of responses do I get if the dry run finds that something is wrong? Do I get any hints about how to fix an issue?
- Does it consider what is already deployed in the CDF environment?
- Will it tell me about any issues that can happen with new breaking changes?

Appreciate all kinds of insights and experiences around this topic :)

Sebastian
When a CDF transformation fails, the error message could include more detail about which record instance (external ID) caused the failure. Currently the CDF transformation reports why it failed, but gives no information on which record caused the failure. If the transformation is triggered using the Python SDK, the returned object contains detailed info on which instances caused the transformation to fail. Similar details in the UI would help expedite the debugging and resolution process.
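For reference, a sketch of the SDK path mentioned above; the transformation external ID is a placeholder, and the attribute names follow the cognite-sdk TransformationJob class:

```python
# Run a transformation via the Python SDK and inspect the failure details.
# The transformation external ID is a placeholder.
from cognite.client import CogniteClient

client = CogniteClient()

job = client.transformations.run(
    transformation_external_id="my_transformation",  # hypothetical
    wait=True,
)

if job.status == "Failed":
    # The job's error string is where the per-instance detail shows up.
    print(job.error)
```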
Is it possible to add support for combining Name and SourceID as the external ID?

From a data management point of view, the best solution might be to use PointID. But from a consumer point of view, it might be more convenient to use Name. However, with Name we risk overwriting if a new time series is created with a similar name. So how about a combination? The relevant config options today:

- external-id-prefix (string): Enter the external ID prefix to identify the time series in CDF. Leave empty for no prefix. The external ID in CDF will be this prefix followed by either the PI Point name or PI Point ID.
- external-id-source (either Name or SourceId): Enter the source of the external ID. Name means that the PI Point name is used, while SourceID means that the PI Point ID is used. Default value is Name.
Aker Solutions Verdal Production Line (VPL)

Last week I had the chance to visit the Aker Solutions team at their impressive yard in Verdal, which fabricates fit-for-purpose steel substructures and jackets for offshore developments - a great example of industrial innovation done right.

At the heart of the yard is the Verdal Production Line (VPL), a fully robotized line that opened in 2024. With automated welding, sandblasting, and painting, VPL delivers production speeds up to 10x faster than traditional methods, cutting costs and improving safety by keeping people away from hazardous tasks.

To push things even further, the team has developed their own software for weld planning. The robots scan each pipe, create a 3D model, and then autonomously plan and carry out the welding.

Excited to follow VPL further. Aker Robotics 🦾 Aker Solutions - Yards and Fabrication 📸 Johan Arnt Nesgård
Currently, there is no "check all" button in Data Model → Data Management. If I have a long list of data that I would like to export/download to a CSV file, I have to tick the rows one by one.
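As a workaround until that exists (a sketch using the Python SDK, with the space, view external ID, and version as placeholders), the full instance list can be exported to CSV without the UI:

```python
# Export all instances of a view to CSV, bypassing row-by-row selection.
# Space, view external ID, and version are placeholders.
from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import ViewId

client = CogniteClient()

nodes = client.data_modeling.instances.list(
    instance_type="node",
    sources=ViewId(space="my_space", external_id="MyView", version="v1"),
    limit=None,  # fetch everything
)

nodes.to_pandas().to_csv("my_view_export.csv", index=False)
```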
When adding time series data to Industrial Canvas, showing the tag name and the description field in the time series preview window would make selecting the correct reading more effective. Currently, there is often not enough info in the time series tag itself to know which data to select. Submitted at the request of the Cognite project team.
Instead of writing a lot of YAML configuration files for the CDF Toolkit by hand, it would be better to generate them using trained ChatGPT models. Nowadays, Kubernetes deployment YAMLs are often generated rather than configured manually. We can use this GitHub repo to get some ideas about generating YAML files: https://github.com/Luxadevi/yaml-generator-Litellm