Hello, I am quite new to using REST APIs. My company has a license to use CDF, and I understand that Cognite has developed a .NET REST API SDK. I searched a bit but could not find an example of how to use this SDK for data retrieval, especially for time-series data. Could I get a snippet of code showing how to use it? Thanks.
Hi, I am trying to run transformations-cli locally with the command below (the current directory contains manifest.yaml and transformation.sql files):

transformations-cli deploy .

I am getting this error:

Deploying transformations...
Failed to parse transformation config, please check that you conform required fields and format: Invalid config: can not match type "dict" to any type of "destination" union: typing.Union[cognite.transformations_cli.commands.deploy.transformation_types.DestinationType, cognite.transformations_cli.commands.deploy.transformation_types.DestinationConfig, cognite.transformations_cli.commands.deploy.transformation_types.RawDestinationConfig, cognite.transformations_cli.commands.deploy.transformation_types.SequenceRowsDestinationConfig, cognite.transformations_cli.commands.deploy.transformation_types.AlphaDMIDestinationConfig]

Here is the destination section of the manifest file:

destination:
  viewSpaceExternalId: my-model-space-id
  viewExternalId: CDFTimeSeries
  viewVersion: 0_2
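A parse error like this usually means the keys under destination don't collectively match any destination shape the CLI knows, so the dict falls through the whole union. For comparison, here is a destination block in a shape transformations-cli does document (the raw destination); structured destinations carry a type discriminator, and the exact field names for the data-model destination depend on your transformations-cli version, so treat this as a hedged sketch rather than the definitive schema:

```yaml
# Simple destinations are just a string:
#   destination: timeseries
# Structured destinations carry a "type" discriminator:
destination:
  type: raw
  rawDatabase: my-database   # hypothetical names
  rawTable: my-table
```

If your CLI version supports the (alpha) data-model destination, check its README for the exact keys expected alongside type; the viewSpaceExternalId/viewExternalId/viewVersion trio alone is evidently not matching any of the union members listed in the error.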
I have to build two types of extractors:
1. A PI extractor to connect to the PI server, fetch the data, and ingest it into CDF.
2. A SharePoint Online extractor to extract data from files in SharePoint Online and ingest it into CDF.
I wanted to know the constructs for building extractors, mainly to account for scenarios where the PI server is unavailable and the PI extractor cannot fetch data. How do I handle these situations and incorporate them into the code while building the extractor? Also, how do I handle monitoring while the extractor is running? Are there sample code repositories that can be referenced to get a complete idea of how to build extractors?
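The unavailable-server scenario is usually handled with a retry loop plus exponential backoff and a persisted state store, so that after an outage the extractor reconnects and backfills from the last committed timestamp; Cognite's extractor-utils packages provide building blocks along these lines, and extraction-pipeline runs can carry the monitoring signal. A minimal sketch of the backoff schedule such a loop might use (all names here are illustrative, not taken from a Cognite library):

```python
def backoff_delays(base=5.0, factor=2.0, max_delay=300.0, attempts=6):
    """Exponential backoff schedule (seconds) for reconnect attempts:
    wait a little at first, then progressively longer, capped at max_delay."""
    delays, delay = [], base
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * factor, max_delay)
    return delays

# A reconnect loop would sleep for each delay in turn; on success it would
# report a run to the extraction pipeline and resume from the stored state.
print(backoff_delays())  # [5.0, 10.0, 20.0, 40.0, 80.0, 160.0]
```

The cap on the delay matters: without it, a long outage would push the retry interval out indefinitely and delay recovery once the server comes back.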
Hi there! I have a use case where a file is uploaded by a user to an API, and the API then uploads the file to CDF Files. We want to avoid holding the full file in memory at once, and therefore must stream the file contents from the request handler directly into CDF Files. There are two ways of achieving this:
1. Stream the request body from the request handler directly into CDF Files' upload URL.
2. Chunk the request body and upload each chunk as a separate request.
The first option may be achievable, but I don't believe the second option is possible. Do you have any insight into whether it is possible to chunk a file upload like this in CDF Files?
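On the streaming side, keeping only one chunk in memory at a time is straightforward regardless of which upload style CDF Files ends up supporting; a minimal sketch (plain Python, no Cognite SDK calls) of reading a request body chunk by chunk:

```python
import io

def iter_chunks(stream, chunk_size=4 * 1024 * 1024):
    """Yield fixed-size chunks from a file-like object, so at most one
    chunk of the body is held in memory at a time."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Simulated 10 MB request body
body = io.BytesIO(b"x" * 10_000_000)
sizes = [len(c) for c in iter_chunks(body)]  # three chunks: 4 MiB, 4 MiB, remainder
```

Whether each chunk can then be sent as a separate request depends on the upload URL honoring ranged or multipart puts, which is exactly the open question here; streaming the whole generator into a single request (option 1) avoids that dependency.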
Hi, I have set up Grafana using Azure Managed Grafana from the Azure portal, but I don't have permission to install the Cognite Data Fusion plugin in Grafana from Azure. Please let me know how to install the plugin and what permissions are needed.
Hi, I have created an FDM model with a JSONObject field and referred to that field in the transformation query, for example:

SELECT
  Series_Title as externalId,
  Series_Title as name,
  Overview as description,
  int(Gross) as gross,
  float(IMDB_Rating) as imdbRating,
  int(Runtime) as runTime,
  int(Released_Year) as releasedYear,
  '{"key1":"value1","key2":{"key3":12,"key4":"hello","key5":{"key6":10}}}' as data
FROM movies.movies

The FDM model is:

type Movie {
  name: String!
  description: String
  watchedIt: Boolean
  imdbRating: Float
  releasedYear: Int
  runTime: Int
  gross: Int
  actors: [Actor]
  data: JSONObject
}

The transformation is failing with this error: Unknown property type Json. Maybe I am missing something; can you please help?
I am trying to run the sample at https://github.com/cognitedata/cognite-sdk-js/tree/master/samples/react/msal-browser-react with npm install and then:

REACT_APP_CDF_PROJECT=... REACT_APP_AZURE_TENANT_ID=... REACT_APP_AZURE_APP_ID=... npm start

I got the following error:

node:internal/modules/cjs/loader:1024
  throw err;
  ^
Error: Cannot find module '/Users/kevin.peng/code/cognite/cognite-sdk-js/samples/react/msal-browser-react/start'
    at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1021:15)
    at Function.Module._load (node:internal/modules/cjs/loader:866:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:22:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
I have a question related to 3D models. I would like to explore whether Reveal/Cognite supports programmatic control of animation on a 3D model. I have a 3D model of a pump and would like to change the animation speed depending on a speed I set from JavaScript code (basically a real-time speed that I get from my API). Is this possible? If so, could you guide me to example code showing how to do that? Thanks.
Getting an error on 2_List_Search_Retrieve.ipynb:

c = cauth.create_cognite_client(method="client-secret")

AttributeError Traceback (most recent call last)
c:\Users\tomas.kurten.perez\Downloads\COGNITE\using-cognite-python-sdk\notebooks\2_List_Search_Retrieve.ipynb Cell 7 in <cell line: 1>()
----> 1 c = cauth.create_cognite_client(method="client-secret")
AttributeError: module 'auth' has no attribute 'create_cognite_client'
Can Cognite automatically build and restructure the asset hierarchy to standardize the asset model across multiple manufacturing sites using global industry standards? Can Cognite define templates for asset attributes based on the technical object type, as per global industry standards? (Build: from capital projects, to build the asset hierarchy. Restructure: define the levels of the asset hierarchy as required.)
Hi team, I’m working on the OPC UA Extractor. I have configured metadata-targets and metadata-mapping, but I’m not able to see the metadata along with the time series data. Please find the configuration in the attachment, along with a screenshot of how the time series data looks in CDF. Please let me know if any additional configuration is needed to see the metadata along with the time series data.
The transformation is reporting missing isString columns.
Hi All, I am facing a challenge connecting to CDF through Power BI (with a project instance); attached is the error for your reference. I was able to connect to the Cognite “Learn” instance through Power BI (as described in the training modules), and I am trying a similar method to connect to the tiger instance, but I get the attached error during authentication. The project name I am providing in Power BI is: accenture-tiger-training?tenantID=prjtentnIDhere. It would be great if someone could assist. Thank you in advance! Arati
How do I upload Excel files to CDF Files? I am looking for ways to upload files programmatically and schedule the upload using a cron expression.
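A hedged sketch of the upload side using the Cognite Python SDK (client.files.upload takes a local path; client construction and credentials are omitted, and data_set_id is optional):

```python
from pathlib import Path

def find_excel_files(folder):
    """Return the Excel workbooks (.xlsx / .xls) directly under a folder."""
    return sorted(p for p in Path(folder).glob("*.xls*") if p.is_file())

def upload_all(client, folder, data_set_id=None):
    # client.files.upload reads each workbook from the given local path
    # and creates/overwrites the corresponding file object in CDF Files.
    for path in find_excel_files(folder):
        client.files.upload(str(path), name=path.name, data_set_id=data_set_id)
```

For scheduling, run the script from cron on a host you control, e.g. `0 6 * * 1,4` to upload at 06:00 on Mondays and Thursdays, or wrap the same logic in a scheduled Cognite Function if you would rather not manage a host.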
Hi, we are using the PI Extractor with an extraction pipeline and pushing the config through the pipeline. We noticed that after updating the config in the extraction pipeline, we have to restart the PI Extractor to force it to use the updated config. When we update the config, we can see in the run history that the extractor reports “Seen” on the new config revision, but it does not load and use the new config. Is there a way to force it to use the new config?
Hi, may I know how to schedule a refresh in Power BI for Cognite, as I am facing the error below? When I try to schedule, I see that it is disabled for me. I tried to install the gateway locally as well, but it says installation failed. While establishing the connection with CDF in Power BI Desktop, I get a pop-up about a third-party connector. Does that mean the Power BI service doesn't support scheduling for Cognite? Please help. TIA. Regards, Arati
Hi Team, I am trying to get details of broken files uploaded to CDF. Is there any way to query the actual file size from CDF using the Python SDK without downloading the file? Please add a feature to query file size using the Python SDK so we can find corrupted/broken files. Thanks in advance.
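One workaround until the SDK exposes a size field directly: fetch the file's download URL (recent Python SDK versions expose a download-URL helper, e.g. client.files.retrieve_download_urls; verify the name against your SDK version) and issue an HTTP HEAD request, which returns headers only, so you can read Content-Length without transferring the body. A zero or implausibly small size would flag a broken upload. A sketch of the HEAD side, using only the standard library:

```python
import urllib.request

def content_length(headers):
    """Pull the byte size out of HTTP response headers, if the server sent one."""
    value = headers.get("Content-Length")
    return int(value) if value is not None else None

def remote_file_size(download_url):
    # HEAD: the server answers with headers only; no file bytes are downloaded.
    req = urllib.request.Request(download_url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return content_length(resp.headers)
```

Note that some servers omit Content-Length on HEAD responses, in which case this returns None and you would have to fall back to a ranged GET.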
Hi, I was wondering where I can find information about how to set up the permissions for a file extractor that reads files from only a few sites in SharePoint, i.e., without giving the extractor access to all sites. I have solved it using the Graph API, but this is quite cumbersome, so I was wondering if we have documentation on how this can be set up. I'll post my solution below for clarity on how I solved the issue:

You will need to create two applications: one that is used in the file extractor, and one to grant access to the file extractor.

Register the File Extractor App:
1. Go to the Azure Active Directory portal.
2. Click on "Azure Active Directory", then click on "App registrations".
3. Click on "New registration" to register a new app.

Grant API Permissions:
1. Once your app is registered, click on the app's name to go to its dashboard.
2. Click on "API permissions" in the left panel.
3. Click "Add a permission".
4. Choose "Mic
We are using the online version of the Jupyter notebook from the CDF portal for a client project (DEV) and are able to get the client config / client object and to create and retrieve assets, run transformations, create data sets, etc. The client IT team has created an app, registered it in Azure, and shared the tenant ID, client ID/name, and secret with us. When we use these parameters and run the same code locally in a notebook, it cannot perform certain tasks (such as data set creation). Basically, the online version has all the IAM groups (data engineer, data scientist, data analyst, OIDC-Admin), but when we set the configuration parameters (client ID, tenant, secret, etc.) locally, we don't get those groups; the token only resolves to “Data Integration”. This “Data Integration” group has limited scope and doesn't allow creating data sets, etc. So how should we understand this part of roles and access management in the CDF construct and the applications registered in Azure AD?
Hi Team, I am facing a challenge in creating an asset hierarchy in Power BI. The format of the data is not as required for hierarchy creation, i.e., I do not get the parent-child relation in different columns; instead, all nodes appear in a single column. I do have a ParentID column, but I am unaware of how it can help me form the hierarchy. Any input will be appreciated. Thanks! Regards, Arati
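Power BI handles exactly this single-column-plus-ParentID shape with the DAX parent-child functions: a calculated column `Path = PATH(Nodes[ID], Nodes[ParentID])` builds the full ancestry string, and `PATHITEM(Nodes[Path], n)` splits it into one column per level, which you can then drag into a hierarchy. If you would rather flatten the data upstream before it reaches Power BI, the same logic looks like this in Python (hypothetical four-node example):

```python
def build_paths(rows):
    """rows: mapping of node id -> parent id (None for roots).
    Returns node id -> list of ancestors from the root down to the node."""
    def path(node):
        parent = rows[node]
        return ([] if parent is None else path(parent)) + [node]
    return {node: path(node) for node in rows}

# Hypothetical hierarchy: plant -> area -> pump, with area2 also under plant
rows = {"plant": None, "area": "plant", "pump": "area", "area2": "plant"}
paths = build_paths(rows)
# paths["pump"] == ["plant", "area", "pump"]
```

Each path list can then be expanded into Level1/Level2/... columns, which is the layout Power BI's built-in hierarchy expects.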
Could you please suggest how to limit source?
Hello all, I am having some challenges with the hands-on section of the training material. I have some authorization issues, based on the error message below. Any suggestions on how to fix it?
I am developing an outlier detection model. The model will be accessed by users through a Streamlit app, and at the moment I am assessing whether deploying the model through the Streamlit page in CDF would be a good option. My issue is that I am not sure what the most efficient way is to access and import the dataset into the app. Would it be a good approach to create a data model and import the data from there, or to extract the data straight from RAW? Another idea I had was setting up a scheduled Cognite Function to download the data as a CSV and letting the app access that. I need the data to be updated a few times a week, and the “import” time in the app needs to be fairly quick for the user experience to be good. My dataset is around 1 million rows with three numeric and two non-numeric columns. For now, I have the data as a RAW table in CDF. Thanks!
When we select different date ranges, the granularity changes with the number of days selected: for 2 days it is 3 min; for 3 days, 5 min. Can we get access to a document describing how this pattern is designed?
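The two data points quoted are consistent with a common charting design: pick the smallest “nice” granularity that keeps the number of plotted points under a fixed budget. Under an assumed budget of roughly 1,000 points (an assumption for illustration, not a documented Cognite number), 2 days at 3 min gives 960 points and 3 days at 5 min gives 864, both just under the cap:

```python
NICE_STEPS_MIN = [1, 2, 3, 5, 10, 15, 30, 60, 120, 360, 720, 1440]

def pick_granularity(days, point_budget=1000):
    """Smallest nice step (minutes) keeping the point count within the budget."""
    total_minutes = days * 24 * 60
    for step in NICE_STEPS_MIN:
        if total_minutes / step <= point_budget:
            return step
    return NICE_STEPS_MIN[-1]

print(pick_granularity(2), pick_granularity(3))  # 3 5
```

This reproduces the 2-day/3-min and 3-day/5-min pairs you observed, but only the product documentation can confirm the actual budget and step list used.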
Hi Team, please enable and configure the Air setup in the CDF project below: Accenture-demo-dev, under the EUROPE1-GOOGLE cluster. Please let me know if you need any details.