We’re excited to announce the launch of the Cognite Data Workflows course, now live on Cognite Academy! 🎉

Data Workflows is an integrated, managed orchestration service within Cognite Data Fusion, designed to automate and streamline data processes. It triggers tasks at the right time, keeps data up to date, and ensures that processes are smoothly managed by handling dependencies between them. With Data Workflows, you’ll experience less hassle managing data pipelines, enjoy improved performance, and gain better visibility and control over your data operations.

This course offers a deep dive into:
- Core concepts of Data Workflows: master the fundamental principles and components of orchestration within CDF.
- Task types: learn about the various tasks, including CDF Transformations, Cognite Functions, and dynamic tasks.
- Creating, running, and scheduling workflows: hands-on practice in designing workflows that automate and optimize your data processes.

With a mix of video tutorials, practical ex…
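For readers who want a feel for the service from code before taking the course, here is a rough sketch of defining and running a two-task workflow with the Cognite Python SDK. The class and method names (WorkflowUpsert, WorkflowVersionUpsert, TransformationTaskParameters, FunctionTaskParameters, client.workflows.executions.run) follow recent cognite-sdk-python releases and may differ in your version, and the external IDs ("daily_ingestion", "my_transformation", "my_function") are made up for illustration.

    from cognite.client import CogniteClient
    from cognite.client.data_classes import (
        WorkflowUpsert,
        WorkflowVersionUpsert,
        WorkflowDefinitionUpsert,
        WorkflowTask,
        TransformationTaskParameters,
        FunctionTaskParameters,
    )

    client = CogniteClient()  # assumes credentials are configured in the environment

    # Register the workflow itself (idempotent upsert).
    client.workflows.upsert(WorkflowUpsert(external_id="daily_ingestion", description="Demo workflow"))

    # Define a version with two dependent tasks: a CDF Transformation followed by a Cognite Function.
    version = WorkflowVersionUpsert(
        workflow_external_id="daily_ingestion",
        version="v1",
        workflow_definition=WorkflowDefinitionUpsert(
            description="Transform source data, then post-process it",
            tasks=[
                WorkflowTask(
                    external_id="run_transformation",
                    parameters=TransformationTaskParameters(external_id="my_transformation"),
                ),
                WorkflowTask(
                    external_id="post_process",
                    parameters=FunctionTaskParameters(external_id="my_function"),
                    depends_on=["run_transformation"],  # runs only after the transformation succeeds
                ),
            ],
        ),
    )
    client.workflows.versions.upsert(version)

    # Trigger an execution of the new version.
    client.workflows.executions.run(workflow_external_id="daily_ingestion", version="v1")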
Hi everyone! My name is Maykon Nogueira, I’m from Brazil and I work as a back-end developer at Radix Eng, currently starting on a Business Analytics project at Celanese. I had never seen anything about Cognite before, so this is going to be a new experience for me, and I’m very excited to see how far my projects and my professional skills will go as I work through all the trainings from Cognite Academy. I’m very excited to see what the future holds for me!
As a point of UI feedback, our work laptops seemingly struggle to render the CDF UI in some cases. This makes resizing the preview with the vertical slider that separates the list of files from the preview window laggy and difficult to work with at times. To reduce user friction when viewing documents in the fullest way possible, I’d like to be able to double-click the vertical slider and have it store the position where it was clicked (so it can return to that spot later), but then move all the way to the leftmost edge of the screen, covering even the data explorer dataset filter panels on the left. For bonus points, this action would also trigger the button between the zoom-in and zoom-out buttons at the bottom right of the preview (the one that looks like a refresh icon) to maximize the preview in the newly sized window. Obviously this can be achieved by dragging the bar manually to the side, but doing so performs extremely poorly on a seemingly newer Microsoft Surface Laptop with 16…
Hi Team, I want to download data from a CDF Data Model (GraphQL) on a schedule. I created a Streamlit app, but I can’t download and place the data on my local machine on a schedule. Please let me know if there is any solution.
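Not an official answer, but one pattern that may help: a hosted Streamlit app cannot write to your local disk, so a small script running on your own machine (scheduled with cron/Task Scheduler, or a simple loop as below) can query the data model and save the result locally. The sketch below assumes recent cognite-sdk-python; the GraphQL query method name, the space/model identifiers and the query itself are placeholders to replace with your own, and if your SDK version lacks the method, the data model's GraphQL endpoint can be called over HTTP instead.

    import time
    import pandas as pd
    from cognite.client import CogniteClient
    from cognite.client.data_classes.data_modeling import DataModelId

    client = CogniteClient()  # assumes auth is configured via environment variables

    QUERY = """
    query {
      listMyType(first: 1000) {
        items { externalId name }
      }
    }
    """

    def download_once() -> None:
        # Query the data model's GraphQL endpoint (method name per recent SDK versions).
        result = client.data_modeling.graphql.query(
            id=DataModelId(space="my_space", external_id="my_model", version="1"),
            query=QUERY,
        )
        # result is expected to be a dict keyed by the query field name.
        items = result["listMyType"]["items"]
        pd.DataFrame(items).to_csv("my_model_export.csv", index=False)

    # Naive scheduler: run once per hour. cron / Windows Task Scheduler is usually a better fit.
    while True:
        download_once()
        time.sleep(3600)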
I am looking at 127 time series linked to one asset and I want to download the list of these time series as shown in the screenshot below, but this doesn’t appear to be straightforward. The download button circled in blue saves a JSON file linked only to the asset “11. QHP”. Is there a way to spare the user the effort of manually selecting and downloading each of the 127 time series and later reassembling them into one table like the one shown in the browser? It is not possible to select and display more than 20 columns in the browser, due to performance issues. This is not critical at this time, but it is still dissatisfying. I want to download everything wholesale and pick what I need from the list locally. Is there a way around this restriction? Solving issue #1 would also remedy #2, as I’d be able to join the tables locally again. Thanks
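One possible workaround, sketched under the assumption that you can use the Cognite Python SDK: list the time series attached to the asset, pull their datapoints into a single wide dataframe, and save it locally. The asset ID, time range and aggregate settings below are placeholders.

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes auth is configured

    ASSET_ID = 1234567890  # placeholder: the internal ID of asset "11. QHP"

    # All time series linked to the asset (no 20-column UI limit here).
    ts_list = client.time_series.list(asset_ids=[ASSET_ID], limit=None)
    external_ids = [ts.external_id for ts in ts_list if ts.external_id]

    # One dataframe with a column per time series (hourly averages as an example;
    # drop aggregates/granularity to get raw datapoints instead).
    df = client.time_series.data.retrieve_dataframe(
        external_id=external_ids,
        start="30d-ago",
        end="now",
        aggregates="average",
        granularity="1h",
    )
    df.to_csv("asset_11_QHP_timeseries.csv")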
Hi, since the release of nested workflows I have been looking at suitable use cases for it in our environment. I have sketched out some possible flows, but there are some cases I don't know how to handle in the best way. Consider this scenario where we have two source systems (source X and Y), where each system has a source model that is populated via transformations. The transformations are divided into multiple workflows based on which project they get data from. I have two solution models (model 1 and 2) that get data from the source systems. Model 2 gets data from both source models, while model 1 only gets data from source X. My issue is that the data from the source systems are ingested into CDF RAW tables once every morning, and the data arrive at different times. If data from source X is in CDF before data from source Y, I want the workflow tasks relevant to solution model 1 to run, but not solution model 2, because the source Y data is not ready yet; however, if both sources have inge…
Dear all, I was attempting to perform data aggregation based on the date. I am retrieving online data into CDF, which is updated every 2-3 minutes. I am trying to aggregate the data so that the date is updated every 24 hours instead of every 2-4 minutes. I used this code to obtain the list of columns in my data frame:

    # Check the structure of the DataFrame, including column names and the first few rows
    print(dp.columns)  # This will show all column names
    dp.head()          # This will show the first few rows of the DataFrame

However, I got only this after running the code. I tried another piece of code as well; the first column is not a date column.
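If the goal is a daily aggregate, one common pandas approach is to convert the timestamp column to a proper datetime index and resample to 24-hour buckets. The column names below ("timestamp", and the mean aggregation) are assumptions; replace them with whatever dp.columns actually shows and with the aggregate you need.

    import pandas as pd

    # Assumed column name; adjust to match your dataframe.
    dp["timestamp"] = pd.to_datetime(dp["timestamp"])
    dp = dp.set_index("timestamp")

    # Daily aggregation: mean of each numeric column per 24-hour period.
    daily = dp.resample("1D").mean(numeric_only=True)
    print(daily.head())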
For time series, I can only see the datapoints themselves, not details and metadata. It is not urgent to have this, and I understand that Industrial Canvas is still under development.
Hello Team, I was trying to create a Plotly chart. While importing, I constantly get an error message stating that the Plotly module was not found. The same was the case for scipy. How can I install these modules?
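It is not clear from the post which environment this is, so treat the following as a sketch: in a Jupyter-style notebook, packages are usually installed into the running kernel with the %pip magic, while in Cognite's hosted Streamlit apps, packages are typically declared in the app's requirements/packages settings rather than installed at runtime.

    # In a notebook cell, install into the running kernel's environment:
    %pip install plotly scipy

    # Then import in a fresh cell:
    import plotly.express as px
    import scipy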
The Cognite PI extractor is currently unable to fill gaps. The only way to force the extractor to re-pull gaps is to:
- turn the extractor off
- change the state table (update first = last)
- change the extractor config to enable backfill
- turn the extractor on

This isn't user-friendly, nor is it documented. It also usually requires an IT admin with access to the extractor's VM and a person who knows where it was installed. It would be amazing to get a no-code feature to do this same thing. Here are two use cases:
- As an SME, I notice that I'm missing data points for a failure or process. I check my source system to verify that the points are missing in CDF. I go to the extraction pipeline that monitors this extractor, and I enter a timeframe and tag that I wish to gap-fill.
- As an SME, my use case has changed to require more historical data. I want to pull another year's worth of data for a subset of all my tags. I enter a list of tags and set it to backfill just those tags.
Hi Team, please help me find the mistakes in my code. Let me show the dummy code.

Main code:

    def main() -> None:
        """Main entrypoint"""
        BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        config_values_vault.set_vara()
        with Extractor(
            name="SAP_Extractor",
            description="An extractor to extract asset hierarchy from SAP based on root node",
            config_class=SapConfig,
            # version=__version__,  # debugger
            version="1.0.0",
            run_handle=run,
            metrics=metrics,
            config_file_path=os.path.join(BASE_DIR, "config.yaml"),
        ) as extractor:
            extractor.run()

I have built a unit test for the above code:

    def test_main():
        with patch('os.path') as path_mock:
            with patch.object(path_mock, 'abspath') as mock_abspath:
                with patch.object(path_mock, 'dirname') as mock_dirname:
                    with patch.object(path_mock, 'join') as mock_join:
                        mock_abspath.return_value = '/p…
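It is hard to pinpoint the mistake from the truncated test, but two things commonly trip up this pattern: patching 'os.path' wholesale also breaks every other library that touches os.path during the test, and the Extractor context manager (and config_values_vault) is not patched, so main() still tries to start a real extractor. A minimal sketch of a narrower approach, assuming main() lives in a hypothetical module named extractor_main that imports os and Extractor:

    from unittest.mock import patch

    import extractor_main  # hypothetical module containing main()

    # Patch only the specific functions, where the code under test looks them up.
    @patch("extractor_main.os.path.join", return_value="/patched/config.yaml")
    @patch("extractor_main.os.path.dirname", return_value="/patched")
    @patch("extractor_main.os.path.abspath", return_value="/patched/main.py")
    def test_main(mock_abspath, mock_dirname, mock_join):
        # Patch the Extractor so the test does not attempt a real extraction run.
        with patch("extractor_main.Extractor") as mock_extractor:
            extractor_main.main()
            mock_extractor.assert_called_once()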
We are working with one of our sites to build out a plan in Maintain for their TA. The TA has ~3,500 work orders, so we are trying to bring those into one plan. We have run into several issues during this, so I'm looking to see if anyone has experienced this or knows of limitations in Maintain. When viewing activities in a plan, we can only scroll down the list to 1,000. We will most likely apply a grouping, but this could prevent mass selection if there are more than 1,000. We have not been able to add all ~3,500 activities to the plan. We have gotten a plan up to 1,950, but then it won't let us add any more. @ryanbooker @Gerard LeBlanc @Andrew Montgomery
Recently the user interface in Charts has changed. I'm not seeing the ability to add time series via a P&ID in a contextualized PDF file. See the image below. Is that capability no longer available? You can see the PDF from the Industrial Canvas UI:
Hi Community, does anyone have information on the full feature and capability documentation for the Cognite Streamlit Reveal Python package? Second, does "cognite-streamlit-reveal" have a zoom-in / zoom-out event trigger? We want to develop a use case where the user can zoom in on the oil rig platform in 3D, and once the zoom stops, an event is sent with the bounding box of that area so that we can load data related to that specific area during the zoom-in and zoom-out interaction. Thanks
Hello, we are trying to delete a unit on a container attribute using the CDF Toolkit. The CDF Toolkit identifies that there is a container change; however, this change is not reflected in the container and view schema after the deployment, and no error is raised. Note: adding and modifying a container attribute's unit works well. Any ideas? Thanks,
Let’s say I have a time series with the behavior seen above. This is a tag that represents the water consumption in my house (it’s a hypothetical). Occasionally I manually reset the sensor measuring it and it goes back to zero, as you can see in the curve, but otherwise this sensor works more or less like your water meter: it measures the total volume of water consumed since the beginning of time, until the meter reaches 999,999 and resets itself to 000,000 automatically. How do I implement a scheduled calculation in Charts that will compute the hourly water consumption at my house? I want to start calculating this at 00:30:00 today, and the calculation should run every hour after that (i.e. 01:30:00, 02:30:00, 03:30:00, 04:30:00, etc., indefinitely). Thanks.
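Charts scheduled calculations are configured in the UI, but the underlying logic (hourly differences of a cumulative counter, with rollover and manual resets handled separately) can be sketched in pandas as below. The 999,999 rollover value comes from the description; treating a large negative jump as rollover and a small one as a manual restart from zero is an assumption for illustration.

    import pandas as pd

    ROLLOVER = 1_000_000  # counter wraps from 999,999 back to 000,000

    def hourly_consumption(cumulative: pd.Series) -> pd.Series:
        """Hourly consumption from a cumulative meter; expects a DatetimeIndex."""
        # One reading per hour (last value seen in each hour).
        hourly = cumulative.resample("1h").last().ffill()
        diff = hourly.diff()  # first value is NaN
        # A drop close to the full range is assumed to be an automatic rollover.
        rollover = diff < -0.5 * ROLLOVER
        diff[rollover] = diff[rollover] + ROLLOVER
        # Any remaining negative jump is assumed to be a manual reset:
        # consumption since the reset is roughly the new counter value.
        manual_reset = (diff < 0) & ~rollover
        diff[manual_reset] = hourly[manual_reset]
        return diff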
Hello, I want to know whether it is possible to implement a loop within a transformation?
I am doing some hands-on exercises, but I am stuck at one point. The task is to add some time series data:
- Create a time series object for each country asset in CDF called <country>_population and associate it with its corresponding country asset. Remember to associate the data as well with the data set that you created. As an example, the time series for Aruba would be called Aruba_population.
- Load the data from populations_postprocessed.csv into a pandas dataframe.
- Insert the data for each country in this dataframe using client.time_series.data.insert_dataframe.
- As a check, retrieve the latest population data for the countries of Latvia, Guatemala, and Benin.
- Calculate the total population of the Europe region using the asset hierarchy and the time series data.

The point I am stuck on: “Insert the data for each country in this dataframe using client.time_series.data.insert_dataframe”. I am not sure what this means, although I had done an earlier exercise …“Insert th…
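For anyone stuck on the same step, a minimal sketch of what insert_dataframe expects: a dataframe whose index holds the timestamps and whose column names are the external IDs of already-created time series, so a single call inserts data for every country at once. The CSV layout (a 'year' column plus one column per country) and the <country>_population naming below are assumptions based on the exercise text.

    import pandas as pd
    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes auth is configured

    # Assumption: populations_postprocessed.csv has a 'year' column plus one column per country.
    df = pd.read_csv("populations_postprocessed.csv")
    df["date"] = pd.to_datetime(df["year"], format="%Y")
    df = df.set_index("date").drop(columns=["year"])

    # Rename country columns to match the time series external IDs, e.g. "Aruba" -> "Aruba_population".
    df.columns = [f"{country}_population" for country in df.columns]

    # One call inserts datapoints for every column; the time series must already exist (previous step).
    client.time_series.data.insert_dataframe(df)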
Running a quick entity matching of time series to an asset, it says: “An error has occurred while processing the prediction job.” Where can we see the root cause?