Hello, I am using DB Extractor v3.4.3 to read columns from an Excel file, but it gives me the following error:

    2024-08-20 18:53:06.550 UTC [ERROR ] QueryExecutor_0 - Unexpected error in query1: Could not read file C:\Cognite\SAT-SX4 - 2023-03-21.xls: required package 'xlsx2csv' not found. Please install using the command `pip install xlsx2csv`.

The following is my config file:

    databases:
      - type: spreadsheet
        name: "DSN_FT"
        path: "C:\\Cognite\\SAT-SX4 - 2023-03-21.xls"
    queries:
      - name: query1
        database: "DSN_FT"
        sheet: "P_VSD"
        query:
          #"SELECT STS PreCOM FROM SAT-SX4 - 2023-03-21.xls"
          "SELECT * FROM P_VSD"
        destination:
          type: raw
          database: FTV
          table: AE_MOLYNOR
        primary-key: "{EventID}"

What am I doing wrong? Can you help me?
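The error message itself names the fix: install the xlsx2csv package into the same Python environment the extractor runs under. A minimal, hypothetical sanity check (not part of the extractor) to confirm the package is visible to that environment:

    # Hypothetical check: confirm xlsx2csv is importable in the Python
    # environment the DB extractor uses.
    try:
        import xlsx2csv  # noqa: F401
        print("xlsx2csv is installed")
    except ImportError:
        print("xlsx2csv is missing; run: pip install xlsx2csv")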
When a .docx file is ingested into the Cognite Data Fusion UI, the document's page count increases when the user views it in CDF. This makes it confusing for users to navigate to a particular page number to check the document's details. For example, a Word document has 51 pages in the source, but its page count increases to 69 when the same document is ingested and viewed in CDF.
We use push-down filtering to get our relevant data from CDF into Power BI. We filter on the external ID property, since this is constant across our environments in CDF. Here is an example of what this looks like:

    Source = Cognite.Contents(cdf_env & "/Timeseries/?$filter=ExternalId eq 'an_external_id'", null)

This works great for Time Series, but doing the same thing for Sequences leads to an error:

    DataSource.Error: Microsoft.Mashup.Engine1.Library.Resources.HttpResource: Request failed:
    OData Version: 3 and 4, Error: The remote server returned an error: (400) Bad Request. (The provided filter was invalid.)
    OData Version: 4, Error: The remote server returned an error: (400) Bad Request. (The provided filter was invalid.)
    OData Version: 3, Error: The remote server returned an error: (400) Bad Request. (The provided filter was invalid.)
    Details: DataSourceKind=Cognite DataSourcePath={"project":"akerbp-dev\/Sequences?$filter=ExternalId eq 'val_alarms_top_10_weekly'"}
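Not a fix for the OData filter itself, but for anyone who needs the data while this is unresolved, a rough workaround sketch with the Cognite Python SDK, assuming a configured CogniteClient; the external ID is taken from the error above, and the exact retrieve_dataframe signature may vary between SDK versions:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes authentication is configured

    # Look up the sequence by the same external ID used in the failing filter.
    seq = client.sequences.retrieve(external_id="val_alarms_top_10_weekly")
    print(seq.name)

    # Fetch its rows as a pandas DataFrame.
    df = client.sequences.data.retrieve_dataframe(
        external_id="val_alarms_top_10_weekly", start=0, end=None
    )
    print(df.head())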
Intermittent internal server errors are seen while reading RAW tables. It happens in different functions that may not be part of the same workflows. It blocks workflows from running, as we use these staging tables to get information for other functions. Is there any check that needs to be implemented?
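Not an official remedy, but a common defensive pattern for transient 5xx responses is a retry with exponential backoff. A minimal sketch, assuming a configured CogniteClient and hypothetical database/table names:

    import time
    from cognite.client import CogniteClient
    from cognite.client.exceptions import CogniteAPIError

    client = CogniteClient()  # assumes authentication is configured

    def read_raw_rows_with_retry(db_name: str, table_name: str, retries: int = 5):
        # Retry transient 5xx responses with exponential backoff;
        # re-raise client-side (4xx) errors and the final failed attempt.
        for attempt in range(retries):
            try:
                return client.raw.rows.list(db_name=db_name, table_name=table_name, limit=None)
            except CogniteAPIError as err:
                if err.code < 500 or attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)

    rows = read_raw_rows_with_retry("staging_db", "staging_table")  # hypothetical names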
Hi guys! We are developing a Power BI dashboard that relies on time series and sequences fetched from CDF using the Cognite Extractor. We have set up a deployment pipeline for our dashboards (Dev, Test, and Prod), where each environment in the PBI pipeline corresponds to data sources in the respective CDF tenant. I.e., our Dev dashboard reads data from our dev tenant in CDF, and so on. We get this to work by configuring a parameter in Power BI Desktop with values that match our CDF tenants. Then we refer to these parameters when we define the data sources. Here is an example of how we use push-down filtering via OData queries from the Advanced Editor in Power Query:

    Source = Cognite.Contents(cdf_env & "/Timeseries/?$filter=ExternalId eq 'our_special_external_id'", null)

The variable "cdf_env" evaluates by default to Dev, and we have configured the deployment pipeline such that this parameter updates automatically when we deploy from Dev to Test, and from Test to Prod. But here is the …
We would like to see the ability to roll up documents by type, using metadata, within the new search UI and Industrial Canvas asset browser, as well as in other places where users are looking for documents associated with an asset (similar request to this topic on Cognite Hub). We have seen this in the legacy data explorer view, and it was well received by our users; our document teams have started to shift their business process to tag documents with the appropriate metadata to enable this functionality. Our assets can have hundreds of different documents associated with them, so an ungrouped list will make it hard for users to quickly find what they are looking for. In addition, we have some use cases where documents are snapshotted and stored as part of a project file (e.g. P&IDs). Separating these duplicates into the correct rollup buckets will allow us to eliminate confusion about which files are the most recent to use. Example from the old search:
Filling out on behalf of an engineer super user: When looking through the files of an asset, I only see the P&ID number in the "Tree" view. There should be more information, like the P&ID title or other details, while still in the Tree view, so it's easier to understand what the P&ID contains without having to enter each P&ID/equipment drawing to find out. I scratched the names out in the image below, but say you have a P&ID named "12AB0123" - this doesn't give the user very much information.
Search results in the hierarchy view of Explore Assets are difficult to use. It's almost impossible to find what you are looking for in a very large/deep hierarchy. We almost always resort to the list view and disregard the hierarchy view entirely, since even after searching, one finds oneself scrolling and scrolling to find yellow matches. Because the hierarchy view also shows close matches, the view is littered with non-exact matches. After finding the asset, you can only view its children. There is no way to navigate to the parent from the asset view. It is quite useful for us to find an asset and be able to view the ancestry tree to the root and its children. The ONLY way for us to get the ancestry view of a particular asset goes like:
- Explore Data → Assets
- Search
- Skip Hierarchy view, use List View instead
- View Details of the Asset, copy the External ID
- Open Filters
- Put the External ID in the filter
- Finally have a hierarchy view with only the ancestry of the particular Asset
I ha…
For our site, I know how to filter on our site's data sets to narrow the selection of data. Instead of selecting multiple data sets to search only within our site, would it be possible to use wildcards or another filter to limit the search to our site?
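The UI filter aside, here is a sketch of one way to approximate a wildcard with the Cognite Python SDK: select every data set whose external ID starts with a site prefix, then restrict the search to those. The prefix and names are hypothetical, and it assumes a configured CogniteClient:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes authentication is configured

    SITE_PREFIX = "mysite_"  # hypothetical naming convention for the site's data sets

    # Emulate a wildcard: pick every data set whose external ID starts with the prefix.
    site_data_sets = [
        ds for ds in client.data_sets.list(limit=None)
        if ds.external_id and ds.external_id.startswith(SITE_PREFIX)
    ]

    # Then restrict asset listing to just those data sets.
    assets = client.assets.list(
        data_set_ids=[ds.id for ds in site_data_sets], limit=100
    )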
Hi All, I need to deploy CDF Functions from Azure DevOps using the CDF toolkit. Can you share any example pipeline files that I can take as a reference? Regards, Nidhi N G
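No example pipeline file at hand, but as an illustration of what such a pipeline step ultimately automates, here is a minimal Cognite Python SDK sketch that deploys a function from a local code folder. All names and paths are hypothetical, and this is the SDK route rather than the toolkit: in a real Azure DevOps pipeline, a YAML step would run the toolkit's own build and deploy commands instead:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes auth via environment variables / service principal

    # Deploy a CDF Function from a local code folder.
    fn = client.functions.create(
        name="my-function",               # hypothetical name
        external_id="my-function",        # hypothetical external ID
        folder="functions/my_function",   # local folder with the function code
        function_path="handler.py",       # entry point inside that folder
    )
    print(fn.id, fn.status)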
Can someone tell me about this: what if I want to keep OPC UA real-time data in CDF RAW using the OPC UA extractor?
Hello Cognite experts, I am trying to find some reference material to explore regarding integrating data from RTSP into Cognite. Are there any workflows/POCs/documentation/examples available for this data source?
Hello, I had the Authenticator set up for logging in to Zendesk and other Cognite sites, but somehow it's not on my mobile anymore. Can you help with re-registering? My ID: nbhatewara@slb.com. Neeraj B
I am doing some hands-on exercises, but I am stuck at a point here. The task is to add some time series data:
- Create a time series object for each country asset in CDF called <country>_population and associate it with its corresponding country asset. Remember to associate the data as well with the data set that you created. As an example, the time series for Aruba would be called Aruba_population.
- Load the data from populations_postprocessed.csv into a pandas dataframe.
- Insert the data for each country in this dataframe using client.time_series.data.insert_dataframe.
- As a check, retrieve the latest population data for the countries of Latvia, Guatemala, and Benin.
- Calculate the total population of the Europe region using the Asset Hierarchy and the time series data.
The point I am stuck on: "Insert the data for each country in this dataframe using client.time_series.data.insert_dataframe". I am not sure what this means, although I had done an earlier exercise … "Insert th…
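A sketch of one way to approach the stuck step, assuming a configured CogniteClient and hypothetical variable names (including the assumption that the asset external IDs are the country names): insert_dataframe expects a dataframe whose index holds the timestamps and whose column headers are the time series external IDs.

    import pandas as pd
    from cognite.client import CogniteClient
    from cognite.client.data_classes import TimeSeries

    client = CogniteClient()  # assumes authentication is configured

    # Load the population data; assumes the first column is a timestamp index.
    df = pd.read_csv("populations_postprocessed.csv", index_col=0, parse_dates=True)

    # One time series per country asset, named <country>_population.
    data_set_id = 123  # hypothetical: the data set created earlier
    for country in df.columns:
        asset = client.assets.retrieve(external_id=country)  # assumed asset external IDs
        client.time_series.create(
            TimeSeries(
                external_id=f"{country}_population",
                name=f"{country}_population",
                asset_id=asset.id,
                data_set_id=data_set_id,
            )
        )

    # Column headers must match the time series external IDs before inserting.
    df.columns = [f"{c}_population" for c in df.columns]
    client.time_series.data.insert_dataframe(df)

    # Check: latest population for a few countries.
    latest = client.time_series.data.retrieve_latest(
        external_id=["Latvia_population", "Guatemala_population", "Benin_population"]
    )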
What are the licensing terms and conditions for using Cognite's Open Industrial Data? Could you please provide a link to these terms and conditions? Thank you.
Two questions: What is the limit on the number of Industrial Canvas pages a user can create? Can we auto-delete Canvas pages after some time (e.g., 3 days) to avoid overpopulating the Canvas pages/data?
Hi, I have defined a data model with two views, say Employee and Address, and neither has any relationship to the other. For the instances in the Employee and Address views I have kept the externalIds the same, say emp101, emp102, ... for both views' instances.

    Employee
    externalId  Name   Age
    emp101      Tom    30
    emp102      Harry  28

    Address
    externalId  City    Pincode
    emp101      Pune    123
    emp102      Mumbai  456

I am using the Cognite SDK sync API call to synchronize instances of the Employee view. There is one transformation on Address that updates all the pincode values. After running the transformation, when the sync API is called it always returns all the Employee instances, even if they were not updated. Is there any resolution to this unexpected behavior?
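Possibly relevant context, sketched with the Cognite Python SDK (the space name and view versions are assumptions): two views populated with the same space/externalId pair are two property groups on one underlying node, so a transformation that writes Address properties also modifies the node that the Employee sync cursor is watching.

    from cognite.client import CogniteClient
    from cognite.client.data_classes.data_modeling import NodeApply, NodeOrEdgeData, ViewId

    client = CogniteClient()  # assumes authentication is configured

    SPACE = "my_space"  # hypothetical space
    employee_view = ViewId(space=SPACE, external_id="Employee", version="1")
    address_view = ViewId(space=SPACE, external_id="Address", version="1")

    # emp101 is ONE node; Employee and Address are two property groups on it,
    # which is why an update through either view touches the same node.
    node = NodeApply(
        space=SPACE,
        external_id="emp101",
        sources=[
            NodeOrEdgeData(source=employee_view, properties={"Name": "Tom", "Age": 30}),
            NodeOrEdgeData(source=address_view, properties={"City": "Pune", "Pincode": 123}),
        ],
    )
    client.data_modeling.instances.apply(node)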