Is there a tool for planning and scheduling scaffolds?
Hello, I'm trying to consume data from FDM in Power BI, but I'm getting error (408) Request Timeout. I'm already filtering in Power BI for only a few days of data, which brings an amount of data comparable to amounts I have already consumed in Power BI before. How can I solve it?
Hello everyone. I have a use case that I have been trying to figure out, and I'm hoping you could help me. We have a piece of equipment, let's say a pump, and we have a time series for its flow. We consider the pump to be operating as long as the flow is above a threshold.

In PI, if I want to know how long the pump was operating, I can simply use the PITimeFilterVal function in Excel to retrieve the amount of time that the time series was above the threshold. My team and I have been trying to do this with Cognite, with no success. We tried transforming the time series to a 0 or 1 using the "Threshold" function and then integrating it, but we have faced some limitations due to the approximations that are intrinsic to the integration function.

Unfortunately, I cannot show you the actual data, but we checked a few days where the pump starts the day operating - at 00:00 - and at 2 am our integration gives us a value of 1.84 h, when it should be as close to 2 h as possible. This difference, even though slight, i
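One possible alternative to integrating a 0/1 series is to compute the duration directly from the raw data points. The sketch below is a minimal example using the Cognite Python SDK; the external ID, threshold, and time window are placeholders, and it assumes the flow signal can be treated as holding each value until the next timestamp (a step interpretation), which may or may not match how the data should be interpreted.

from datetime import datetime, timezone
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured via environment/config

EXTERNAL_ID = "pump_flow_ts"   # placeholder time series external ID
THRESHOLD = 10.0               # placeholder flow threshold
START = datetime(2023, 1, 1, tzinfo=timezone.utc)
END = datetime(2023, 1, 2, tzinfo=timezone.utc)

# Fetch raw (non-aggregated) data points for the window.
dps = client.time_series.data.retrieve(external_id=EXTERNAL_ID, start=START, end=END)

# Treat each value as holding until the next timestamp (previous-value hold),
# and sum the intervals where the flow is above the threshold.
# Note: the interval after the last data point is ignored in this sketch.
operating_ms = 0
timestamps, values = dps.timestamp, dps.value
for t0, t1, v in zip(timestamps, timestamps[1:], values):
    if v > THRESHOLD:
        operating_ms += t1 - t0

print(f"Operating time: {operating_ms / 3_600_000:.2f} h")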
Hello! I am new to this community but would like to ask the following: does anybody have any experience extracting a list of DCS Process Alerts to Operators, or Operator Overrides, on an hourly or daily basis into CDF? This data resides in the OT domain and is sometimes extracted and made available in reports in the IT domain, e.g., by Yokogawa. As I see it, this is an important indicator of stable and well-managed production operations, and also a good predictor of upcoming threats to production.
I'm trying to use the Update extraction pipelines method to update the extraction pipelines I have created. Here are some issues I face:

1. I'm unable to update contacts for an extraction pipeline; it throws the error below.

Code:

contact_info = [ExtractionPipelineContact(name="sangs", email="sm6@slb.com", role="MAINTAINER", send_notification=True)]
to_update = ExtractionPipelineUpdate(external_id="<PIPELINE-EXT-ID-2>")
to_update.contacts.set(contact_info)
client.extraction_pipelines.update(to_update)

Error:

CogniteAPIError: Unexpected field - items[0].update.contacts.set[0].send_notification - did you mean sendNotification? | code: 400 | X-Request-ID: 918d1113-6a5e-9d38-b850-61e3dc54c220
The API failed to process some items.
Successful (2xx): []
Unknown (5xx): []
Failed (4xx): [<PIPELINE-EXT-ID-2>, ...]

2. I'm unable to set the following fields to None using the SDK: description, source, schedule, documentation, name, and dataset-id. I'm able to set only metadata and raw-tables to None. Wond
I am making comparisons between time series data in CDF and PI, because in our tenants the CDF data is not 100% accurate compared to PI. From my testing, I think that PI performs its aggregations with the timestamps "centered" on the aggregated time periods, while CDF puts the timestamps at the start of each aggregated period. Is it possible to specify how this is done with the Python API? From my study of the docs it appears not to be. The same applies to the PI Web API: I cannot specify how the timestamps are placed. The agreement with PI becomes significantly better if I place the CDF timestamps at the center of the aggregated time periods.

My current workaround is the following (see the sketch below):
1. Fetch RAW data from CDF
2. Shift the timestamps by 0.5x of the granularity
3. Resample to the desired granularity
4. Compute the mean
5. Interpolate any missing values

The issue is that fetching raw data is a lot more time consuming than fetching aggregates. I have been playing with fetching
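For reference, a minimal sketch of that workaround using pandas and the Cognite Python SDK is shown below; the external ID, time range, and target granularity are placeholders, and the half-granularity shift before resampling is what produces the "centered" timestamps described above.

import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured elsewhere

GRANULARITY = "1h"  # placeholder target granularity

# 1. Fetch RAW data points as a pandas DataFrame (one column per time series).
raw = client.time_series.data.retrieve_dataframe(
    external_id="my_ts_external_id",   # placeholder external ID
    start="30d-ago",
    end="now",
)

# 2. Shift the timestamps by half the target granularity.
raw.index = raw.index + pd.Timedelta(GRANULARITY) / 2

# 3-4. Resample to the desired granularity and compute the mean.
resampled = raw.resample(GRANULARITY).mean()

# 5. Interpolate any missing values.
centered = resampled.interpolate(method="time")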
I have solved the BufferGeometry issue that was present earlier; apparently react-app-rewired was not the latest version, which is why the override was not working.

import React, { useEffect } from "react"
import { Cognite3DViewer } from "@cognite/reveal"
import { CogniteClient, CogniteAuthentication } from "@cognite/sdk"
import { getToken } from "../../utils/MsGraphApiCall"
import "./Models.css"

function Models() {
  const appId = "KPI-Dev"
  const project = "celanese-dev"
  const clientId = "my-client-d"
  const tenantId = "my-tenant-id"
  const cluster = "az-eastus-1"
  const modelId = 7575155737800092
  const revisionId = 3624517118008353

  async function start() {
    await client.authenticate(clientId, tenantId, cluster)
    const viewer = new Cognite3DViewer({
      sdk: client,
      domElement: document.querySelector("#your-element-for-viewer"),
    })
    viewer.addModel({ modelId: modelId, revisionId: revisionId })
  }

  const legacyInstance = new CogniteAuthentication({
    project,
  })

  // getToken()
  const
When saving and scheduling a calculation from Charts, the enter-credentials dialog hangs at "checking credentials" and keeps trying to verify in a loop. This happens only if you select the "Use CDF Client ID and Client Secret" option at the top. If you do not choose the top radio button and simply enter the credentials and click Next, it works OK.
We have some people who already have Tableau. Is it possible to use Tableau to visualize data from Cognite Data Fusion? If so, any guidance on setting up the connection?
Hello, my name is Luchiian Alexandru and I am working for Accenture. I have 9 years of experience in Industrial Automation (Emerson DCS - DeltaV and Ovation) and 5 years of experience with the IIoT platform ThingWorx, and I am passionate about electronics and sustainability. I want to learn what we can do with industrial data and change the world by making it SMARTER.
Hi All, I am interested in learning about data lineage during raw data transformation in CDF. Could you please let me know where the transformed data will reside inside Cognite? Will it be stored in a cloud storage (Azure/GCP) location or in a dedicated location inside Cognite? Please share information or a supporting web link related to this question.
Can a Unified Namespace architecture be implemented in Cognite Data Fusion?
It was requested by @ibrahim.alsyed from Celanese that we increase the limit on raw data points for the endpoint /timeseries/data/list (Retrieve data points) to 1 million. Currently, the number of non-aggregated data points returned is limited to 100,000.

For drill-down views spanning more than 7 days, the server-to-client data transfer maximum size is reached due to the number of data points (some time series have more than 1 data point per minute). The solution they have implemented is that for ranges longer than 7 days, they display an interpolated trend using the maximum absolute values for a 30-minute aggregation. Once the user selects a smaller date range, they are unable to display all the values. Hence an increase in the limit was requested.
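For context, the interim workaround described above roughly corresponds to fetching aggregates instead of raw data points, which keeps the payload well below the 100,000-point limit. Below is a minimal sketch with the Cognite Python SDK; the external ID and time range are placeholders, and deriving the "maximum absolute value" from min/max aggregates is an assumption based on the description above.

from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured elsewhere

# For wide date ranges, fetch 30-minute aggregates instead of raw data points.
dps = client.time_series.data.retrieve(
    external_id="my_ts_external_id",   # placeholder external ID
    start="30d-ago",
    end="now",
    aggregates=["max", "min"],
    granularity="30m",
)

# Approximate the maximum absolute value per bucket from the min/max aggregates.
max_abs = [max(abs(lo), abs(hi)) for lo, hi in zip(dps.min, dps.max)]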
Hi community, I have noticed a large discrepancy in runtime between running my handle function as a single Cognite Function call and running the handle function locally. And yes, this is the runtime of the actual CALL, not the creation/deployment of the Cognite Function. The handle function is doing some calculations on a time series with around 500,000 data points. Locally, it runs in under 1 minute, while through a Cognite Function call it takes around 9 minutes. Do you have any idea what could cause the large discrepancy? Thanks!
I'm currently writing my Master's thesis in collaboration with Aize (represented by V. Flovik), where I'm trying to use Generative Adversarial Networks (GANs) to generate synthetic time series data. To do this, I'm using data from Cognite OID. I explored the data in the previous semester, and I found that one of my main challenges is to find longer periods of data that are "normal", meaning periods without too large irregularities and without too many missing data points. Even when I aggregate the samples over 30 seconds, there are still samples missing in some periods. Some of these gaps are hours long and can't really be filled in without messing up the temporal development of the data. I can't find any information about the cause of these periods either. The time series I've been looking at are: I'm no expert in process engineering, and I only chose these 9 sensors because another master's thesis from 2019 had used these sensors in their project. I therefore reach out here t
When I am creating calculations within Charts I often have to click an object twice before it is selected. For example, adding a Function. When I select Operators the list resets and I have to click Operators again. This time it provides the new window. I’ll scroll down and select Round (doesn’t matter what I select). The list resets back to the top and I scroll back down and select Round again. This happens every time independent of browser (Chrome or Edge). Is this common for others or a unique issue for me?
What is the process involved in building the contextualized Industrial Knowledge Graph? Are we referring to the creation of a GraphQL schema for a use case and querying data using GraphQL? Is there any training link to learn more about the Industrial Knowledge Graph? How can I visualize the GraphQL query output (nodes & edges) as shown in many Cognite documents?
Is it possible to call Cognite Data Fusion APIs from the Minitab application? If so, how does the connection work?
I am unable to generate client code for the CDF OpenAPI spec at https://api-docs.cognite.com/20230101/ using deepmap/oapi-codegen for Go, seemingly because the spec contains some errors. Does anyone know of a way to generate an API client in Go for the CDF API spec? Both commands below rely on having the OpenAPI spec downloaded to the current directory from https://api-docs.cognite.com/20230101/

Testing the spec for errors with redocly/cli:

docker run --rm -it -v $PWD/swagger.json:/swagger.json redocly/cli lint /swagger.json

For me this results in "Validation failed with 25 errors and 382 warnings".

Generating the client library using deepmap/oapi-codegen:

go run github.com/deepmap/oapi-codegen/v2/cmd/oapi-codegen@v2.0.0 --config oapi-codegen-config.yaml swagger.json

with this config file:

package: "cdf_api"
output: "cdf_api.gen.go"
generate:
  models: true
  client: true
compatibility:
  circular-reference-limit: 100

For me this results in the message "error generating code: error creating operat
Hi Team, please help me find the mistakes in my code. Let me showcase the dummy code.

Main code:

def main() -> None:
    """Main entrypoint"""
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    config_values_vault.set_vara()
    with Extractor(
        name="SAP_Extractor",
        description="An extractor to extract asset hierarchy from SAP based on root node",
        config_class=SapConfig,
        # version=__version__,  # debugger
        version="1.0.0",
        run_handle=run,
        metrics=metrics,
        config_file_path=os.path.join(BASE_DIR, 'config.yaml'),
    ) as extractor:
        extractor.run()

I have built a unit test for the above code:

def test_main():
    with patch('os.path') as path_mock:
        with patch.object(path_mock, 'abspath') as mock_abspath:
            with patch.object(path_mock, 'dirname') as mock_dirname:
                with patch.object(path_mock, 'join') as mock_join:
                    mock_abspath.return_value = '/p
The buttons have disappeared in the search after one of the releases. It is still usable, but you have to guess the functionalities.