As a user I don't want to find outdated assets (facilities and wells) inside the PSN Viewer. A trait of the Versioned Asset Hierarchy is that CDF assets that disappear from the source are not removed from CDF but lose their relationships. Therefore, CDF assets to be displayed in the PSN Viewer must have at least one active relationship. At the moment, the implementation of this rule is missing. The PSN Viewer in Cognite Maintain shall feature only wells and facilities that have at least one relationship to a target from the Production System Network. CDF assets from other hierarchy sources (e.g., the I&M Hierarchy) must be removed from the PSN Viewer's search functionality.
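The visibility rule described above can be sketched as a simple predicate. This is a minimal illustration with made-up data shapes and names (`Asset`, `active_psn_relationships`), not the actual Maintain implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    external_id: str
    # externalIds of active relationships targeting a PSN node (assumed shape)
    active_psn_relationships: List[str] = field(default_factory=list)

def visible_in_psn_viewer(asset: Asset) -> bool:
    """An asset is shown only if it keeps at least one active PSN relationship."""
    return len(asset.active_psn_relationships) > 0

assets = [
    Asset("well-1", ["rel-psn-1"]),  # still linked to the PSN -> keep
    Asset("well-2", []),             # orphaned after source removal -> hide
]
visible = [a.external_id for a in assets if visible_in_psn_viewer(a)]
print(visible)  # ['well-1']
```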
Notify an instructor and ask them to give the required consent to the application in AAD (requires admin privileges):
1. Share the link to your Grafana instance (https://<NAME OF DOMAIN>.grafana.net/) with the instructor.
2. The instructor will navigate to the link, check the box next to "Consent on behalf of your organization", and click "Accept".
3. Once the required permissions are granted, sign in to https://<NAME OF DOMAIN>.grafana.net/ in a new session (e.g. incognito mode) and choose "Sign in with Microsoft".

Please assign me the required privileges: https://parthsinha.grafana.net/
Hi, I'm doing the data science modules. I'm on Notebook 2 (List/Search/Retrieve), and I noticed that when I use the LabelFilter on assets, I find a number of assets with a label called 'COLD'. However, when I run `c.labels.list(limit=None)`, I get only 4 labels, and none of them match the 'COLD' label. Are labels on assets different from the labels returned by `c.labels.list`?
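One way to pinpoint this kind of mismatch is to diff the label externalIds found on assets against the label definitions returned by `c.labels.list`. The set logic looks roughly like this; the data below is made up for illustration, not pulled from live SDK calls:

```python
# Label externalIds gathered from asset.labels across the listed assets (example data)
labels_on_assets = {"COLD", "PUMP", "VALVE"}

# externalIds of label definitions returned by c.labels.list(limit=None) (example data)
label_definitions = {"PUMP", "VALVE", "ROTATING", "STATIC"}

# Labels referenced by assets but absent from the definitions listing
undefined = labels_on_assets - label_definitions
print(undefined)  # {'COLD'}
```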
I'm doing the hands-on for the Data Engineer Basics course, "Learn to Use the Cognite Python SDK". I am down to adding events to CDF from events.csv. I noted that the dataframe df has some "NaN" entries wherever the CSV has a blank, and creating events in CDF then fails with an error. Next, I replaced all the blank CSV entries with the text "blank" and read the file into a dataframe again. The latter does not have any "NaN" values, yet it throws the same exception when creating events in CDF. If someone can kindly assist, I would be very grateful. Thank you, Doug
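One common approach here is to drop empty values entirely before building each event, rather than passing NaN or a placeholder string into typed fields. A minimal stdlib sketch (the inline CSV text and column names are made up to stand in for events.csv):

```python
import csv
import io

# Inline stand-in for events.csv; a real run would use open("events.csv")
raw = """externalId,type,description
evt-1,maintenance,Pump inspection
evt-2,shutdown,
"""

events = []
for row in csv.DictReader(io.StringIO(raw)):
    # Omit keys whose value is blank instead of sending NaN/"blank" placeholders
    clean = {k: v for k, v in row.items() if v not in ("", None)}
    events.append(clean)

print(events[1])  # {'externalId': 'evt-2', 'type': 'shutdown'}
```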
Hello! I tried to set up a simple cogex extractor, but I get an error that is not letting me proceed with the course. Please help.

SyntaxWarning: invalid escape sequence '\e'
csv_extractor_2/__main__.py:13: error: Argument "run_handle" to "Extractor" has incompatible type "Callable[[CogniteClient, AbstractStateStore, Config, Event], None]"; expected "Callable[[CogniteClient, AbstractStateStore, Config, CancellationToken], None] | None" [arg-type]
    run_handle=run_extractor,
    ^~~~~~~~~~~~~
Found 1 error in 1 file (checked 4 source files)
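The type error says the framework now passes a CancellationToken, not an Event, as the fourth argument to your run handle, so the fix is to update the handler's signature accordingly. (The unrelated SyntaxWarning usually means a string literal contains an un-escaped backslash such as '\e'; prefixing the string with r silences it.) Below is a minimal sketch; the stub classes stand in for the real cognite-extractor-utils types, so treat this as shape-only, not a working extractor:

```python
# Stubs standing in for CogniteClient, AbstractStateStore, Config, and
# CancellationToken from the Cognite libraries -- illustration only.
class CogniteClient: ...
class AbstractStateStore: ...
class Config: ...

class CancellationToken:
    def __init__(self) -> None:
        self._cancelled = False
    def is_cancelled(self) -> bool:
        return self._cancelled

# The fourth parameter must be a CancellationToken, matching the expected
# Callable[[CogniteClient, AbstractStateStore, Config, CancellationToken], None]
def run_extractor(client: CogniteClient, states: AbstractStateStore,
                  config: Config, stop_event: CancellationToken) -> None:
    # Real extraction work would loop here until cancellation is requested
    while not stop_event.is_cancelled():
        break

run_extractor(CogniteClient(), AbstractStateStore(), Config(), CancellationToken())
```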
When installing pygen in a CDF notebook you may be met with: ValueError: Requested 'typing-extensions>=4.10.0; python_version < "3.13"', but typing-extensions==4.7.1 is already installed. This is currently a known bug, which we are working on solving. For now, the workaround is to manually uninstall `typing-extensions` using micropip. The code to do so is documented in the pygen installation guide, along with other known issues and solutions.
How does Cognite handle data encryption at rest? Is there any documentation available regarding this requirement? Additionally, concerning data encryption in transit, are there alternative approaches to TLS or mTLS?
Hi! I'm trying to create a Power BI Dataflow using CDF time series data. The same data is retrieved absolutely fine in Power BI Desktop using Power Query inside the app, but when I try to use exactly the same query as a Dataflow instead, there is a warning: "The query 'Timeseries' contains columns with complex types that cannot be loaded." Some data is retrieved, but it's incomplete, with big parts of it missing. It looks like the CDF connector is not able to retrieve the data correctly when a Dataflow is used instead of a normal Power BI Desktop dataset. Is there a way to make Dataflows compatible with the CDF connector?
Hey everyone! My name is Sachin. I am a data professional from the data & programming world. I'm excited to be part of this community, and I look forward to finding and giving help with everything related to Data & Analytics. I'm here because I am excited to learn more about the Cognite data journey, products, and services. I've been a data architect & data engineer in the financial technology industry for close to 12 years now and look forward to exploring a new industry to enhance my knowledge.
Hey everyone! I'm excited to jump into this community. I look forward to finding and giving help with everything related to Cognite! It's a cool product that I'm excited to learn more about. I've been a data engineer in the oil & gas industry for a few years now and look forward to building my expertise with Cognite.
Hello community, my name is Dan Riley, Analytics Manager with Interstates, Inc. Looking forward to learning more about CDF!
Hi Everyone! I’m excited to be here to learn more about Cognite’s solutions for industry.
At Cognite, we are committed to continually enhancing the Data Modeling capabilities in the Cognite Data Fusion (CDF) platform while minimizing disruptions for our clients and users. However, we regret to inform you that a recent update to the Data Modeling API has broken older versions of the Python SDK. Specifically, the method used to list views now works exclusively in the latest versions of the SDK. To ensure seamless integration and uninterrupted usage of our platform, we strongly recommend that all users upgrade to the most recent version of the Python SDK, with a minimum requirement of version 7.37.0. We want to reassure our users that this change does not affect any other functionality within the SDK. Your existing workflows and applications should continue to operate as expected. Should you have any inquiries or require assistance, please feel free to reach out to Cognite Support. We're here to help! Thank you for your understanding and continued support as we strive to prov
Hello everyone. We have some checklists where the operators measure the length of something, usually in feet and inches, like 1'10". We have been facing some trouble creating tasks with this type of data in the templates; we could only add decimal numbers. How have you been handling this type of data? Thank you.
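If the templates only accept decimal numbers, one workaround is to parse the operator's feet-and-inches entry into a decimal value before it reaches the task field. A small sketch with a hypothetical helper (`feet_inches_to_decimal_feet` is not part of any Cognite API):

```python
import re

def feet_inches_to_decimal_feet(text: str) -> float:
    """Parse a measurement like 1'10" into decimal feet (1 ft + 10/12 ft)."""
    match = re.fullmatch(r"\s*(\d+)'\s*(\d+(?:\.\d+)?)\"\s*", text)
    if not match:
        raise ValueError(f"unrecognized measurement: {text!r}")
    feet, inches = float(match.group(1)), float(match.group(2))
    if inches >= 12:
        raise ValueError("inches component must be below 12")
    return feet + inches / 12

print(round(feet_inches_to_decimal_feet("1'10\""), 4))  # 1.8333
```

The reverse conversion (decimal back to feet and inches for display) is a similar divmod on the fractional part.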
As beneficial as inspection robots are, they may suffer from limitations if they are not trained in an environment that simulates the industrial environment they will be deployed in. Therefore, Cognite joined forces with Aker Solutions, TESS, and Createc to build an innovative testing and training facility for inspection robots, called Robot Garden, which will help take robotics' impact on industrial safety and efficiency to new heights. Robot Garden is located at Fornebu, Oslo, and acts as a local test and training facility where robots train in a realistic environment, similar to the environment where they will be deployed, thus enhancing their mission efficiency and helping robot users get the most out of their robot deployments. The main driver behind Robot Garden is to test the robustness of the trained AI models in challenging deployment scenarios such as bad weather, bad lighting, uneven terrain, etc. In addition, TESS provides a simulation panel that can simulate vari
Hello, when I try to run the DB Extractor, I get the following error:

"polars/_cpu_check.py:232: RuntimeWarning: Missing required CPU features.
The following required CPU features were not detected: avx, avx2, fma
Continuing to use this version of Polars on this processor will likely result in a crash.
Install the `polars-lts-cpu` package instead of `polars` to run Polars with better compatibility.
Hint: If you are on an Apple ARM machine (e.g. M1) this is likely due to running Python under Rosetta.
It is recommended to install a native version of Python that does not run under Rosetta x86-64 emulation.
If you believe this warning to be a false positive, you can set the `POLARS_SKIP_CPU_CHECK` environment variable to bypass this check."

After doing some googling, I was able to install the polars-lts-cpu package referenced above using Python, but I got the same error. I'm not sure how to make the extractor reference the polars-lts-cpu package when it runs. See the attached screenshot. The extractor i
CDF: the filter option is not working as expected under Common filters on the Data Explorer screen.

Steps to reproduce:
1. Log in to CDF.
2. Click the Data Explorer tab in the CDF menu bar.
3. Click the Files tab on the right side of the panel.
4. Set Data set to 'src:006:documentum:b60:ds' under Common filters on the left side of the screen.
5. Select the 'Before' checkbox under Common filters on the left side of the panel.
6. Click the calendar icon and set the date to, e.g., '10-01-2023'.

Expected result: the document 'Amarjeet_Test_DT.docx' should not be displayed in the results window, because it was created after the set date.
Actual result: the document 'Amarjeet_Test_DT.docx' is displayed in CDF.

Note: the issue exists for all date filters (Created time, Updated time, with Before, After, During) in CDF. The user wants to know which date is used for filtering documents in CDF with these filters.
Currently, in the PI extractor configuration file, you can specify the end timestamp for backfilling a time series by using the "to" parameter in the backfill section. We would expect the backfill to stop relatively close to this set date. However, the docs say that it can overshoot by the number of datapoints specified in the data-point chunk size, which results in over-ingestion of datapoints. We would like a way to control the backfilling more precisely: we currently cannot limit the overshoot of the backfill without also affecting the front-fill, because both depend on the same data-point chunk size parameter (cdf-chunking > data-points).
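For reference, the two knobs involved sit in different sections of the configuration file. This fragment is illustrative only; the key names follow the post, and the exact schema should be checked against the PI extractor documentation for your version:

```yaml
backfill:
  to: "2023-01-01T00:00:00Z"   # desired backfill cut-off timestamp

cdf-chunking:
  # Shared chunk size: backfilling may overshoot `to` by up to this many
  # datapoints, and lowering it to reduce the overshoot also slows down
  # front-filling, since both directions use the same parameter.
  data-points: 10000
```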
Hi, there are some bugs when doing contextualization in the Fusion UI. It should be possible to "Select all" after running a search query in the interactive engineering diagrams contextualization workflow.
Hey everyone! 👋 I've been diving into the world of CDF data modeling for the past couple of weeks, and I thought I'd share what I've learned so far in a way that's easy to understand. First off, let's talk about the term "Data Modeling." It's everywhere in CDF - in API endpoints, UI, docs - but its meaning can vary depending on where you look. After some digging, I've figured out what it means specifically in CDF. Essentially, it's about organizing, storing, and working with data in a way that's perfect for handling complex industrial scenarios. Think of it as the foundation for unleashing the power of Cognite Data Fusion on your industrial data. So, what exactly makes up a Data Model in CDF? Let's break it down: Space: Think of it like a folder where everything in a Data Model hangs out. Containers, views, instances - they all belong to a Space and have their own unique ID. Containers: These are like the storage units for properties. They live within a space and hold sets of proper
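The Space/Container relationship described above can be pictured with plain dataclasses. This is a conceptual illustration only, not the Cognite SDK's actual classes:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Container:
    """Holds a set of property definitions; lives inside a space."""
    external_id: str
    properties: Dict[str, str]  # property name -> property type

@dataclass
class Space:
    """Namespace that everything in a data model belongs to."""
    space_id: str
    containers: List[Container] = field(default_factory=list)

# A space holding one container that defines two properties
pump_container = Container("Pump", {"name": "text", "pressure": "float64"})
my_space = Space("my-space", [pump_container])

print([c.external_id for c in my_space.containers])  # ['Pump']
```

Views and instances follow the same pattern: they also carry a unique ID and belong to a space.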
We are excited to bring you an update on the next release of Cognite InField! In this release, we are adding more flexibility to work order execution with InField. By navigating to the "Work orders" overview, you will now be able to customize the table by adding or removing columns based on the available data. By dragging items up and down the list, you can further customize the placement of the columns. Add & remove additional columns based on the available work order data. By default, new columns will be searchable and filterable, allowing you to further customize the view to find the data relevant to you. Search & filter within columns. Once you've spent the time adding columns and setting your filters, this shouldn't be an exercise you need to repeat every time. Because of this, we've also added the possibility for you to save this view as your own selected filter. By choosing a selected filter in the list, you'll have instant access to this view. As you might w
I am creating a transformation where I am joining data from two different views/containers, where table B has a node reference to table A. I have tried to find documentation for this, but I have not found any so far. Through friends and trial and error I have found two options, and neither seems to perform well. Example:

Type A { Name: String }
Type B { Name: String, A_ref: A }

The possible solutions I have found are:
1. Go through cdf_data_models and pick the externalId from the node reference: `from cdf_data_models(<spc>, <mod>, <ver>, "A") as A join cdf_… as B on A.external_id = B.A_ref.externalId`
2. Same as above, but join on A as a node reference: `on B.A_ref = node_reference(<spc>, A)`

Neither of these options seems to be documented anywhere, and I can't find any other documented approaches. Read performance with these joins seems slow, even though I have set up a few indexes which should cover the different joins. This is also slower than
No questions yet! Just excited to start the CDF Journey. :) Bert Greeby