I have begun the module and followed the instructions down to setting the environment variables. Then authorization was updated: Get New Access Token is successful, and I clicked Use Token. Then, to test, I went: Cognite API collection > "Assets" folder > "List assets" > Send. This gives an error. So I deselect the "minCreatedTime" parameter. (There is nothing in the documentation that talks about this.) Then I press "Send" again, but get another error when testing. So I deselect "maxCreatedTime", and progressively deselect "minLastUpdatedTime" and "maxLastUpdatedTime" as indicated by the error messages. Finally, it still gives an error. Therefore, my attempt to list the assets in the CDF project has failed.

Next, I try to list the time series data: timeseries > list time series > Send. I try deselecting all the query parameters > Send. This time there is a listing of time series.

My questions:
Q1. Why can I not list assets?
Q2. Why can I list time series, but not assets?

Thank you for your support.
Regards,
Dou
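For context on what those checkboxes do: the Postman collection sends each enabled query parameter along with the request, and the error pattern above suggests the time-filter parameters are being sent with empty values, which the API rejects. A hedged sketch of how the equivalent JSON body for the assets list endpoint might be built so that optional time filters are only included when they actually have a value (the endpoint path is from the CDF API docs; the function itself is hypothetical helper code, not SDK code):

```python
import json

def build_asset_list_body(limit=100, min_created_time=None, max_created_time=None):
    """Build a request body for POST /api/v1/projects/{project}/assets/list.

    Optional time filters are included only when they carry a real value;
    sending them blank (as an enabled-but-empty Postman parameter does)
    fails the API's request validation.
    """
    body = {"limit": limit}
    created = {}
    if min_created_time is not None:
        created["min"] = min_created_time
    if max_created_time is not None:
        created["max"] = max_created_time
    if created:
        body["filter"] = {"createdTime": created}
    return body

# With no time filters, the body is just the limit:
print(json.dumps(build_asset_list_body()))  # -> {"limit": 100}
```

Deselecting every optional parameter in Postman achieves the same thing: the request goes out with no half-filled filters.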
Hi All,

I am trying to update data in a RAW table. I am using the code below, but after executing it either the column values are not modified, or sometimes all the columns shrink under one column (shown as `{} columns`) and the column data is not updated in that case.

```csharp
var row = confModel.RawRow as RawRow<Dictionary<string, JsonElement>>;
if (row == null)
{
    return;
}
var currentDate = DateTime.Now;
var utcDate = currentDate.ToUniversalTime().ToString("u").Replace(" ", "T");
row.Columns[UpdatedBy] = GetJsonElement(JsonValue.Create("SYSTEM"));
row.Columns[SimulationStatus] = GetJsonElement(JsonValue.Create(status.ToString()));
row.Columns[UpdatedDate] = GetJsonElement(JsonValue.Create(utcDate));
var rawRowCreateColl = new List<RawRowCreate<RawRow<Dictionary<string, JsonElement>>>>();
var rawRowCreate = new RawRowCreate<RawRow<Dictionary<string, JsonElement>>>();
rawRowCreate.Key = confModel.RowKey;
rawRowCreate.Columns = row;
rawRowCreateColl.Add(rawRowCreate);
tr
```
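One thing worth checking in the snippet above: the generic parameter of `RawRowCreate<...>` is the whole `RawRow<Dictionary<string, JsonElement>>` rather than the columns dictionary, and `rawRowCreate.Columns = row` assigns the entire row object. If the whole row (key, metadata, columns) gets serialized as the column payload, that could plausibly produce the "everything nested under one column" symptom. The RAW rows API ultimately expects each row's `columns` to be a flat JSON object. A sketch of that expected shape in Python, with hypothetical column names and no SDK involved:

```python
from datetime import datetime, timezone

def build_raw_row(key, existing_columns, status):
    """Return a (key, columns) pair shaped the way CDF RAW expects:
    columns must be a flat JSON object, not a wrapper object that
    itself contains a columns field."""
    columns = dict(existing_columns)  # copy the row's current columns
    # ISO-8601 UTC timestamp, e.g. 2024-05-01T12:00:00Z
    utc = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    columns["UpdatedBy"] = "SYSTEM"
    columns["SimulationStatus"] = status
    columns["UpdatedDate"] = utc
    return key, columns

key, cols = build_raw_row("row-1", {"Existing": "value"}, "Done")
```

If that diagnosis holds, assigning `row.Columns` (the dictionary) instead of `row` would be the fix on the C# side.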
To move data from a source to CDF, it is mentioned we might require custom extractors. Can you explain in very simple terms what exactly extractors are and how to create them? Is it just a piece of code connecting two systems? Events is one of the resource types. Do events (like a 2-hour shutdown) need to be created manually in CDF, or are they detected automatically based on the value of time series data?
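In the simplest terms, an extractor is exactly that: a program that reads from a source system and writes the data to CDF through its API or SDK; events are not detected automatically by CDF itself, so something (a transformation, a Cognite Function, or your own code) has to create them, though that logic can of course be automated. A toy extractor skeleton, where the source read and the CDF write are stand-in functions rather than any real SDK:

```python
import time

def read_source():
    """Stand-in for reading from the source system (OPC UA, a DB, a file...)."""
    return [{"externalId": "sensor-1",
             "timestamp": int(time.time() * 1000),
             "value": 42.0}]

def write_to_cdf(datapoints):
    """Stand-in for pushing datapoints to CDF via its SDK or REST API."""
    return len(datapoints)

def run_once():
    # The whole extractor loop in miniature: read, (optionally transform), write.
    points = read_source()
    return write_to_cdf(points)
```

A real extractor adds scheduling, retries, state tracking, and authentication around that loop, which is what the pre-built Cognite extractors package up for common sources.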
I am trying to configure an OPC UA extractor to push a NodeSet2.xml file to CDF. I have it basically set up, and if I run it there are no obvious errors, but I don't see any assets being imported. Am I expecting too much? Is the functionality in place to import an OPC UA NodeSet as assets into my CDF project? If anyone has this working, I would really appreciate a copy of the config file you are using.

Thanks
Adrian
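For reference, the OPC-UA extractor documentation describes a node-set source for reading structure from NodeSet2 files instead of browsing a live server, and assets are only created for nodes under the configured root node. A hedged sketch of the relevant fragment, with the project name hypothetical and key names to be verified against your extractor version:

```yaml
# Hedged sketch -- verify key names against your OPC-UA extractor version.
source:
  node-set-source:
    node-sets:
      - file-name: "Opc.Ua.NodeSet2.xml"   # path to your NodeSet2 file
extraction:
  # Nothing is mapped to assets unless it sits under this root node;
  # i=85 is the standard Objects folder.
  root-node:
    node-id: "i=85"
cognite:
  project: "my-project"                    # hypothetical
  host: "https://api.cognitedata.com"
```

If the extractor runs cleanly but creates nothing, a root node that doesn't contain your instance hierarchy is one of the first things worth ruling out.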
Hello,

In the DB extractor, I see a parameter that can be configured to specify how to treat the timezone. Is there something similar for the OPC-UA extractor as well? If not, how can data in different timezones be handled, or at least data in the local timezone?
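As far as I know, the OPC-UA extractor trusts the server's timestamps, which the OPC UA specification defines as UTC, so there is no timezone knob comparable to the DB extractor's. If a misconfigured server reports local wall-clock time as UTC, the usual workaround is to shift the timestamps yourself with the known offset, for example in a pre-processing step (this helper is illustrative, not part of any extractor):

```python
from datetime import datetime, timedelta, timezone

def local_naive_to_utc(naive_ts: datetime, utc_offset_hours: float) -> datetime:
    """Reinterpret a naive 'local' timestamp with a known fixed UTC offset
    as a proper UTC timestamp.

    Example: a server in UTC+5:30 reporting local wall-clock time as if UTC.
    """
    tz = timezone(timedelta(hours=utc_offset_hours))
    return naive_ts.replace(tzinfo=tz).astimezone(timezone.utc)

# 12:00 local at UTC+5:30 is 06:30 UTC
ts = local_naive_to_utc(datetime(2024, 1, 1, 12, 0, 0), 5.5)
```

Note that a fixed offset does not handle daylight-saving transitions; for that you would attach a real IANA timezone instead.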
Hi, I have an application to read data from, and I have two options: use ODBC or OPC UA directly. What are the benchmarks or limitations of the two? I have 30,000 data points to read, with sample times of 1 s, 5 s, and 10 s.
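Whichever route is chosen, it helps to size the load first, since the aggregate datapoint rate is what stresses both the server and the extractor. A quick back-of-the-envelope, assuming (hypothetically) an even three-way split of the tags across the three sample rates:

```python
# Hypothetical even split of 30,000 tags across the three sample rates
tags_per_rate = 30_000 // 3
rates_hz = [1.0, 1 / 5, 1 / 10]  # datapoints per second per tag

points_per_second = sum(tags_per_rate * r for r in rates_hz)
points_per_day = points_per_second * 86_400
print(points_per_second)  # -> 13000.0
```

Roughly 13,000 datapoints/s, on the order of 1.1 billion per day, so subscription batching (OPC UA) versus polling query cost (ODBC) becomes the deciding factor well before API limits do.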
Hi Team,

I'm using the OPC UA extractor to extract time series data from a Honeywell PHD source system. I have configured history in the configuration to backfill 30 days, but I can see more historian data than expected; please find the screenshot below. I have used the following configuration in history:

start-time: 0
end-time: 30d-ago
max-read-length: 30d

Let me know if this configuration is correct for fetching the previous 30 days of data. Please provide the right configuration if the above is wrong.
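For what it's worth, as I read the extractor docs, `start-time: 0` means "from the epoch" and `end-time: 30d-ago` caps reads at 30 days ago, which is roughly the opposite of a 30-day backfill and would explain seeing far more (and older) historian data than expected. A hedged sketch of what I believe is intended, to be verified against your extractor version:

```yaml
history:
  enabled: true
  backfill: true        # read backwards from the earliest known datapoint
  start-time: 30d-ago   # do not backfill further back than 30 days
  # end-time omitted: read history all the way up to "now"
  max-read-length: 30d
```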
How can I connect to CDF using the Alteryx ETL tool? Is there any documentation available?
How do I load data from Alteryx (an ETL tool) into the CDF RAW database? I don't see any connector in Alteryx for connecting to CDF.
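Without a native connector, one common route is to call the CDF RAW REST API directly from an Alteryx Python tool. The network call itself needs a project, token, and host, so the sketch below covers only the part Alteryx rows feed into: converting a stream of flat records into the body for `POST /api/v1/projects/{project}/raw/dbs/{db}/tables/{table}/rows` (the `key_field` name is hypothetical):

```python
def rows_to_raw_payload(records, key_field):
    """Convert a list of flat records (e.g. rows from an Alteryx data
    stream) into the request body the CDF RAW rows endpoint expects:
    {"items": [{"key": ..., "columns": {...}}, ...]}."""
    items = []
    for rec in records:
        cols = {k: v for k, v in rec.items() if k != key_field}
        items.append({"key": str(rec[key_field]), "columns": cols})
    return {"items": items}

payload = rows_to_raw_payload([{"id": 1, "name": "pump-a"}], key_field="id")
```

From there it is one authenticated POST per batch of rows, using whatever HTTP library the Alteryx Python tool provides.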
Hi team,

I have a Honeywell PHD source system, and I have got a suggestion to use the Cognite OPC Classic extractor. However, I can see the suggested extractor is a beta version. Is it efficient to use the Cognite OPC Classic extractor, or should we go with a custom extractor? If we use the Cognite OPC Classic extractor, it connects over COM+ or DCOM to OPC servers and streams live and historical data into CDF. Can you please elaborate on COM+ or DCOM connectivity to OPC servers? How do we connect to a Honeywell PHD server using either of these ways?

Thanks
I am looking for a way to update the extraction.id-prefix field inside the OPC-UA extractor config file after partially starting data extraction. Basically, I need to be able to store data in CDF RAW with the external ID prefix set to a particular tag value coming from the OPC server. So I would need to read the time series value of that particular tag from the OPC server and set this value as id-prefix in the extractor config file before pushing subsequent data to CDF RAW. Since the pre-built OPC-UA extractor works well for our use case, I am looking for ways to use it as-is without having to implement a custom extractor. What would be the best way to implement the functionality mentioned above using the OPC-UA extractor? (Attn. @Jatin Sablok)
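One low-tech option, short of a custom extractor: keep the config as a template, and have a small wrapper read the tag value (via whichever OPC or CDF client you prefer), render the template, and then launch the standard extractor against the rendered file. A sketch of just the templating step, with the placeholder name and config text purely hypothetical:

```python
def render_config(template_text: str, prefix_value: str) -> str:
    """Fill a {{ID_PREFIX}} placeholder in a config template with the
    tag value read from the server, before starting the extractor."""
    return template_text.replace("{{ID_PREFIX}}", prefix_value)

template = "extraction:\n  id-prefix: '{{ID_PREFIX}}'\n"
rendered = render_config(template, "plant-42:")
```

The extractor itself never needs to change; restarting it with the rendered config applies the new prefix to everything pushed afterwards.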
Does the DB extractor YAML config support execution of stored procedures? If yes, how?
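As I understand it, the DB extractor passes each configured query to the ODBC driver as an SQL string, so whether a stored procedure call works depends on the driver rather than on the extractor itself. Where the driver allows it, the call is just the query text. A hedged sketch with hypothetical names (SQL Server syntax shown; other databases use `CALL my_proc(...)`):

```yaml
queries:
  - name: call-stored-proc      # hypothetical name
    database: my-database
    # The query is handed to the ODBC driver as-is, so this works only
    # if the driver supports executing a procedure as a direct statement.
    query: "EXEC dbo.MyProcedure"
```

Note that incremental loading generally assumes a plain SELECT the extractor can wrap with its own conditions, so a procedure may need to be full-refresh only.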
I've been introduced to CDF in the last 2 weeks; it is very interesting and I am learning as I go. I'm just wondering whether CDF has been used for real-time monitoring use cases, versus predictive/BI-type use cases. For example, how soon can alerts be triggered from the time a new abnormal measurement is detected? I imagine that it depends on a lot of factors, such as network latency, source extractors, and transformation functions and schedules. Is there a response-time threshold (say, 1 hour) beyond which CDF should not be used?
Has anyone configured the extractor to use Visual Studio Code? I need a step-by-step guide.
I need to add a dashboard query that is simple to write with SQL but seems to be complex to write with the CDF API.

Input parameters:
- start and end timestamp range
- list of time series to query

Query steps:
1. For each time series in the list, sum all values that are in the user-provided range
2. Order the time series by the sum
3. Return metadata for the top 10 of this ordered list

Maybe something equivalent to this:

```sql
-- Likely SLOW in a SQL DB with many datapoints.
-- This is a conceptual example, simplified by combining Time Series and
-- Data Points into a single entity.
SELECT TOP 10 TimeSeries.Name, SUM(TimeSeries.Value)
FROM TimeSeries
WHERE TimeSeries.Id IN (@ListOfTimeSeries)
  AND TimeSeries.Timestamp >= @StartTimestamp
  AND TimeSeries.Timestamp < @EndTimestamp
GROUP BY TimeSeries.Id, TimeSeries.Name
ORDER BY SUM(TimeSeries.Value) DESC
```

What's the simplest way to get to this data with the tools provided to a software developer interacting with CDF?
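One approach that seems to map well: the datapoints API can return a `sum` aggregate per time series over the window in a single request, but there is no server-side ORDER BY for datapoints, so the ranking and top-10 cut happen client-side. The client-side step, sketched over a plain dict standing in for the per-series sums an aggregate response would give you:

```python
def top_n_by_sum(sums_by_ts, n=10):
    """sums_by_ts: mapping of time series ID -> sum over the query window
    (e.g. collected from a datapoints request with aggregates=['sum']).
    Returns the n IDs with the largest sums, descending."""
    return sorted(sums_by_ts, key=sums_by_ts.get, reverse=True)[:n]

sums = {"ts-a": 10.0, "ts-b": 42.0, "ts-c": 7.0}
top = top_n_by_sum(sums, n=2)  # -> ["ts-b", "ts-a"]
```

That keeps it to two round trips: one aggregates request covering the whole ID list for the sums, then one retrieve call for metadata on the winning 10.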
Hello!

I am querying for datapoints using the PostgreSQL gateway, which I assume has the same behavior as the DirectQuery feature built on the PostgreSQL gateway. What I am noticing is that I am not able to retrieve any datapoints for future dates. Is this intended, or a bug? Most of the data we produce and manage is in the future, so this would be an important feature for us if it could be supported. Thanks!

Query results returned in Azure Data Studio from the PostgreSQL gateway:

In CDF these time series have datapoints representing predictions far out into the future: