Welcome to Our Community!
Join the conversation to shape a safer, more efficient, and sustainable industrial future.
Cognite's first Digital Services and Sales Forum!
Yesterday, we hosted our first Digital Services and Sales Forum for our manufacturing customers who are building new applications on top of CDF. The goal of this forum is to create a space for our customers to learn from each other on key topics relevant to building and launching digital services and software. The focus for this forum was pricing, where we heard from @mortenhuse of Properate and Rune Larsen of Accenture; in the future we'll cover other areas such as recruiting, product positioning, onboarding, and customer success. Thanks to all who attended! The presentations we shared are attached. If you weren't able to attend but would like to in the future, please contact your Customer Success Director. Hope to see you at the next event!
[Image: All attendees of our first Manufacturing Digital Services and Sales Forum]
[Image: Morten Huse and Morten Hveding Juel presenting their Properate solution, built on CDF]
Are you at CERAWeek?
If you're at CERAWeek this week, we hope you'll stop by and say hello! Look out for some of our leaders @John Markus Lervik, @Paula Doyle, and @Francois Laborie in the CERAWeek Executive Conference and Agora sessions, and come find us at the @Cognite and @Aker Horizons house at Innovation Agora.
Nesting of Synthetic Time Series - Combining More than 100 Time Series
Can a Synthetic Time Series reference other Synthetic Time Series in its formula? This is the problem I am trying to solve:
- A Synthetic Time Series is limited to referencing 100 time series.
- Sometimes a unit/facility/system/functional location/site may have more than 100 direct child assets.
- I need to calculate a "Synthetic Time Series" that is the daily average of a time series found on all child assets.
My thought is that, for 300 assets, I would have:
- STS A: sum for assets 1-100
- STS B: sum for assets 101-200
- STS C: sum for assets 201-300
- STS D: sum of STS A, B, and C, divided by 300
Is there a better way to accomplish something like this in CDF?
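For illustration, here is a minimal sketch of the chunked approach above using the Python SDK's synthetic datapoints query; the external IDs are hypothetical, the partial sums are combined client-side rather than in a fourth synthetic series, and real data may need resampling so timestamps align:

```python
import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()

external_ids = [f"asset-{i}/sensor" for i in range(1, 301)]  # hypothetical IDs

# Each synthetic expression may reference at most 100 time series,
# so build one sum expression per chunk of 100.
chunks = [external_ids[i:i + 100] for i in range(0, len(external_ids), 100)]
expressions = [
    " + ".join(f"ts{{externalId='{xid}'}}" for xid in chunk) for chunk in chunks
]

# Query the partial sums, then combine client-side: (A + B + C) / 300.
partials = [
    client.datapoints.synthetic.query(expressions=expr, start="30d-ago", end="now")
    for expr in expressions
]
df = pd.concat([p.to_pandas() for p in partials], axis=1)
average = df.sum(axis=1) / len(external_ids)
```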
Flexible time-series models
Hey! In line with how you're thinking about flexible data models: has Cognite done any exploration around flexible time series modelling? Akin to what you're doing with Charts, a fleet of sensors might be viewed as having the same transformational needs as an Asset does. At Statnett, we have "views" of data (typically some linear combination of time series). We persist these with synthetic time series as an intermediary, then upload to a new TimeSeries. Unfortunately, the data is not immune to updates or backfills, so these computed TimeSeries either have to have a pretty severe lag, or we have to re-compute at an alarming rate, wasting compute. These computed time series are further used in computations that we would like persisted. Do you envision expanding Charts to cover this functionality, i.e. having the configuration as code, with stream logic for re-computes etc.? Or do you have no plans for Managed Data Transformations? Thanks for your response :)
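For context, a minimal sketch of the pattern described above (external IDs and coefficients are hypothetical): compute the linear combination with a synthetic time series query, then materialize it as a regular TimeSeries, which is exactly what breaks under backfills:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import TimeSeries

client = CogniteClient()

# Compute the "view" (a linear combination of two source series) on the fly.
result = client.datapoints.synthetic.query(
    expressions="2 * ts{externalId='sensor-a'} + 0.5 * ts{externalId='sensor-b'}",
    start="7d-ago",
    end="now",
)

# Materialize it as a regular time series. Updates or backfills to the source
# series are not reflected here, hence the lag/re-compute trade-off above.
client.time_series.create(TimeSeries(external_id="computed-view", name="Computed view"))
client.datapoints.insert(
    [(dp.timestamp, dp.value) for dp in result],
    external_id="computed-view",
)
```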
How Are GraphQL Mutations Defined in the CDF Template Syntax
I am seeing how to create Templates used to query data, but I have not yet found information related to defining GraphQL mutations. Do you know where I can find this information? Thanks! Here are some places I've been looking.
Docs:
- Templates | Cognite Documentation
- About template management | Cognite Documentation
- Search results for "templates" | Cognite Documentation
The tests in the Python SDK:
- https://github.com/cognitedata/cognite-sdk-python/blob/master/tests/tests_integration/test_api/test_templates.py
There is some info related to GraphQL mutations and Cognite here, but this doesn't seem related to Cognite's Template functionality:
- https://itg.cognite.ai/docs/tutorials/data-ingestion/ingesting-data
- https://github.com/cognitedata/sample-cdf-graphql-angular-app/blob/9641e4967e0a288241ef68b3e94786b56de456c2/src/app/itg-api.service.ts#L125
Ocean Data and Situated Experiences
Feel free to join this session on 9 March at 7-8pm CET by Inna Zrajaeva about ocean data in situated experiences. Get ideas about how complicated ideas and concepts can be explained directly in their environment and situation. Inna is writing her master's thesis together with UiT Norges arktiske universitet and HUB Ocean 🐳
https://www.eventbrite.com.au/e/ocean-data-and-situated-experiences-tickets-289415408277
#oceandataplatform
New Blog Post: Are Lighthouse Sites Blinding Manufacturers from Long-Term Digital Success?
While many manufacturers have successfully fostered lighthouse sites, very few have been able to replicate this success across their other production sites. The purpose of these lighthouse sites is to act as the guiding model for other production sites, providing a wave of innovation to address use cases that will increase productivity, improve quality, reduce energy and water consumption, and much more. The problem is that lighthouse facilities on average account for only 10-15% of production volume, and manufacturers are struggling to replicate this success across the remaining 85-90% of their production. Read our latest blog to learn why lighthouses aren't scaling, and how to ensure that your lighthouse success can be carried forward to other production sites. Did this article prompt some reflections? We'd love to hear about your experiences.
Are Lighthouse Sites Blinding Manufacturers from Long-Term Digital Success?
Herein lies the challenge: while many manufacturers have successfully fostered lighthouse sites, very few have been able to replicate this success across their other production sites. Lighthouse sites are the ones that are first to install the newest technologies, often have teams with unique technological expertise, and are likely the most productive and agile of all your production sites. The lighthouse concept is best recognized by the World Economic Forum, which started the Global Lighthouse Network in 2018 and currently recognizes 90 manufacturing sites worldwide for "applying Fourth Industrial Revolution technologies to increase efficiency and productivity, along with environmental stewardship." The purpose of these lighthouse sites is to act as the guiding model for other production sites, providing a wave of innovation to address use cases that will increase productivity, improve quality, reduce energy and water consumption, and much more. The problem is that
CDF Client connection string
Hi there,
I was wondering how one would go about representing a CDF client credential as a connection string. The main idea is to have a single string representing the credentials needed to connect to CDF. The connection string itself should be unwrapped by whatever client it is passed to (Python SDK, JS SDK, etc.), and should of course not be sent as part of the request to the API.
For tenants using API keys, something like this may make sense:
cdf://CLIENT_NAME:API_KEY@api.cognitedata.com/PROJECT_NAME
However, using tokens instead of API keys is a bit trickier:
cdf://TOKEN_USERNAME:TOKEN_SECRET@api.cognitedata.com/PROJECT_NAME?token_url=TOKEN_URL&token_scopes=TOKEN_SCOPES
Naturally, it is possible to put anything in the URL parameters, but generalizing the username, password, and path properties would be great. Some challenges that come to mind:
- How to differentiate between different authentication methods (i.e. API keys and tokens)?
- Is it possible to keep a consistent scheme regardless of a
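To make the proposal concrete, here is a minimal sketch of how a client might unwrap such a string; note the cdf:// scheme and parameter names are the poster's proposal, not an existing API:

```python
from urllib.parse import urlparse, parse_qs

def parse_cdf_connection_string(conn: str) -> dict:
    """Split a proposed cdf:// connection string into the pieces a client would need."""
    parsed = urlparse(conn)
    params = parse_qs(parsed.query)
    return {
        "client_name_or_user": parsed.username,
        "secret": parsed.password,  # API key or token secret
        "base_url": f"https://{parsed.hostname}",
        "project": parsed.path.lstrip("/"),
        "token_url": params.get("token_url", [None])[0],
        "token_scopes": params.get("token_scopes", [None])[0],
    }

# Example with the token-based form from the post:
cfg = parse_cdf_connection_string(
    "cdf://TOKEN_USERNAME:TOKEN_SECRET@api.cognitedata.com/PROJECT_NAME"
    "?token_url=TOKEN_URL&token_scopes=TOKEN_SCOPES"
)
```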
Changes to our documentation portal (docs.cognite.com)
Hi Community,
We are excited to announce that we have made some changes to our documentation portal! We have migrated to a new content management system, which will allow for improved search and internationalization. This is the first step in revamping the documentation portal to allow quick access to relevant content, as well as introducing a dedicated developer portal with guides and an overview of all available libraries. As part of this, you might see some small changes to where information is located. If you have any feedback or feature requests, please let us know.
BR, Omar
Show Average KPI For Time Series From Today to End That is Read Often
Hello, thank you for all the great docs and tutorials you've put together on CDF. I would like to get an expert opinion on the best way to accomplish a specific use case I have. We have a daily time series that we calculate for the predicted availability of an asset, over a time range that is usually between 5 and 40 years (usually between 1,826 and 14,610 data points per time series). We have a view where an engineer will want to see this time series graphed, but will also want to see a single KPI value that represents the average of the time series from today to the end of the range. Across all of our assets, this KPI could be read several times a day as people view it. When we were storing this data in Azure SQL, to speed it up, we pre-calculated this KPI average and stored a new time series (throwing away some of the time series points and linearly interpolating). That way, reading a single data point representing the average from today to the end of the range and display
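One possible sketch (not an official recommendation) is to skip the pre-calculation and compute the KPI on demand, since even 40 years of daily points is a modest payload; the external ID and end date here are hypothetical:

```python
from datetime import datetime
from statistics import fmean

from cognite.client import CogniteClient

client = CogniteClient()

# Fetch the remaining horizon of the daily series (at most ~14,610 points)
# and average it client-side each time the KPI is requested.
dps = client.datapoints.retrieve(
    external_id="asset-123/predicted-availability",  # hypothetical
    start="now",
    end=datetime(2062, 1, 1),  # hypothetical end of the prediction range
    limit=None,
)
kpi = fmean(dps.value)
```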
Monitoring the Cognite service status
https://status.cognite.com reports the health of Cognite's services and products. You can sign up to get email and SMS notifications for status changes and upcoming maintenance for specific products and clusters.
Subscribe to webhooks
You can also subscribe to webhook notifications to integrate with your monitoring systems. You will receive notifications for all components and clusters and can filter the information to fit your needs. To subscribe to webhooks, select "Subscribe to updates" and then the webhook option (<>). Specify the URL we should send the webhooks to, and provide an email address we can notify if your endpoint fails. The table below lists the product IDs you can use to filter the notifications. For details about the format of the notifications, see https://support.atlassian.com/statuspage/docs/enable-webhook-notifications/. If you don't know which cluster your CDF project is on, contact Cognite Support or Customer Success.
Cluster names and product IDs
If you're subscribing to
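To illustrate the integration point, here is a minimal sketch of a webhook receiver; the payload field names are assumptions based on the Statuspage format, so check the Atlassian documentation linked above for the authoritative schema:

```python
from flask import Flask, request

app = Flask(__name__)

# Product IDs / component names you care about (hypothetical values).
WATCHED_COMPONENTS = {"Cognite Data Fusion API"}

@app.route("/status-webhook", methods=["POST"])
def status_webhook():
    payload = request.get_json(force=True)
    # Component updates are assumed to carry a "component" object with a name
    # and status; see the Statuspage docs for the exact schema.
    component = payload.get("component") or {}
    if component.get("name") in WATCHED_COMPONENTS:
        print("Status change:", component.get("name"), "->", component.get("status"))
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```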
New release on Monday → we will no longer support the old version of the no-code calculation builder
Hello Charts Early Adopters,
On December 7th, 2021, we released a new and improved version of the no-code calculation builder (see image below). You can read more in this release post or watch the webinar recording. Ever since this release, all new calculations have been created using this new implementation of the no-code calculation builder.
[Image: The "new" no-code calculation builder.]
We will release new features on Monday, February 28th and, as a result, we will no longer be supporting the old version of the no-code calculation builder. Therefore, if you have a calculation in any chart that was created using the "old" calculation builder (see image below), you must recreate it using the new version to avoid losing your work.
[Image: The "old" no-code calculation builder; calculations using this old calculation builder will no longer be supported from Monday, February 28.]
If you have not used Charts before December 7th, 2021, or you have no need for the calculations you created using the old no-code
Thanks for joining our New Features & Feedback Webinar — recording & slides
Thanks to those of you who joined us live for yesterday's New Features & Feedback webinar. You will find the session recording below, and you can view the slides we presented by clicking here. As always, if you have any comments, questions, or feedback on these new features or anything else Charts-related, please do post about it here on Cognite Hub.
Cognite Data Fusion Release: February 2022
Hi! Don't miss the Cognite Data Fusion February release note just published in the Product Updates section here on Hub! We also have a Product Release Spotlight Webinar lined up for you, where our product experts will walk through the main features in the release. Sign up and find previous Product Release Spotlight Webinar recordings here. Let us know if you have comments or questions, small or big; we'd love to hear your thoughts.
Reminder: New Features & Feedback Webinar TODAY 15:00-16:00 CET (see post for MSFT Teams link to join)
This is a friendly reminder that we will be hosting our New Features & Feedback webinar today from 15:00-16:00 CET → You can join the Microsoft Teams call by clicking here ← If you have any trouble joining, please leave a comment in this thread or send an email to firstname.lastname@example.org. We'll be giving a live demonstration of the newest features that were released this week. We'll also have an open discussion about use cases that you – our early adopters – want to solve or have already solved with Charts. This session will also be recorded, and the video will be shared here in the Early Adopter Group afterwards. Looking forward to seeing you soon on the call!
Eric & the Charts team
Fallback for missing values when calculating synthetic timeseries
Hi! If one or more of the source time series are missing data points, those points will also be missing in the synthetic time series output. We would like an option to default to zero for missing values, so that the API will use the available values to calculate a partial result. Example:
TS1: [1, 2, 3]
TS2: [3, undefined, 4]
Output today when calculating the sum TS1 + TS2: [4, undefined, 7]
What we would like: [4, 2, 7]
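Until something like this exists, one client-side workaround sketch (external IDs are hypothetical, and timestamps are assumed to align) is to fetch both series raw, fill the gaps with zero, and sum in pandas:

```python
from cognite.client import CogniteClient

client = CogniteClient()

dps = client.datapoints.retrieve(
    external_id=["ts1", "ts2"], start="1d-ago", end="now", limit=None
)
df = dps.to_pandas()              # one column per series, NaN where points are missing
total = df.fillna(0).sum(axis=1)  # yields [4, 2, 7] for the example above
```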
Release v0.16 of Charts; Join Wednesday's webinar (Jan. 26, 15:00-16:00 CET) for details
The team and I are happy to announce that a new version of Charts (charts.cogniteapp.com) has now been released! You can read the specific details about this release below. Please try these new features for yourself and share your feedback with the community here in the Charts EA group. Remember to join our webinar on Wednesday, January 26th from 15:00-16:00 CET, where we'll be walking through the details of this release, discussing real use cases you're facing, and giving you a sneak peek of some upcoming functionality. For details and joining info, please visit this post.
Release Details
Search for time series related to a specific equipment tag: once you've found the equipment tag you're looking for, click on the equipment tag name to perform a sub-search for time series related to that specific equipment.
Better usability for y-axis zooming and scrolling: adjust the y-axis by simply clicking + dragging or by scrolling while hovering your cursor over the y-axis. No more "Adjust y
3D scanning with robots - what do you want to know?
In heavy-asset industries, 3D scans are used for applications such as quality inspection, project progress reporting, project planning, evaluating the state of a site 'as built', or even evaluating the vegetation around it. The scans span from simple scans performed with a drone in 15 minutes to complex and detailed scans of a plant taking several weeks to complete. In this three-part series about 3D scanning with robots, we will take you through how to capture, transform, analyze, and visualize 3D scan data in order to improve daily operations. What we'd like to know before we get going: do you have any questions or topics you want to learn more about? All questions are greatly appreciated and can span from "How do you set up the robot to capture the data?" to "What robot and data management pipeline would you use for use case X?"
[Image: Point cloud from drone scan with contextualized segmentation]
Webinar: New Features & Feedback (January 26, 2022 from 15:00-16:00)
Hello Charts Early Adopter Community, and happy 2022! We're going to kick off the new year with our second New Features & Feedback webinar, where we in the product team will be sharing the latest and greatest features in Charts.
When: Wednesday, January 26th from 15:00-16:00 CET
Where: Virtually (Microsoft Teams) → Click here to join the call; we'll be re-posting this link on January 26th before the webinar.
The features we'll highlight in this session include an in-depth walkthrough of the latest features and functionalities:
- New list component design
- Better usability for y-axis zooming and scrolling
- New InDSL functions for creating lines and synthetic signals + more
- A sneak peek of the features we're working on next
Like last time, I'll be giving an in-depth walkthrough of these features and how to use them. We'll also have an open discussion between you, our early adopters, and the members of the Charts team about ideas for future improvements – we'll
How do I use OIDC credentials when I create transformations using python SDK
You need to provide "source_oidc_credentials" and "destination_oidc_credentials" to run your transformation via the API/SDK. Below is a code sample showing how to use OIDC credentials in your transformations:

# Import paths assume a recent cognite-sdk; older/experimental SDKs may differ.
from cognite.client.data_classes import (
    OidcCredentials,
    Transformation,
    TransformationDestination,
)

transformations = [
    Transformation(
        external_id="<external_id>",
        name="<name>",
        query="<query>",
        destination=TransformationDestination.raw("<database>", "<table>"),
        conflict_mode="upsert",
        # Credentials used to read from the source project.
        source_oidc_credentials=OidcCredentials(
            client_id="<CLIENT_ID>",
            client_secret="<CLIENT_SECRET>",
            scopes="<scopes>",
            token_uri="<TOKEN_URL>",
            cdf_project_name="<COGNITE_PROJECT>",
        ),
        # Credentials used to write to the destination project.
        destination_oidc_credentials=OidcCredentials(
            client_id="<CLIENT_ID>",
            client_secret="<CLIENT_SECRET>",
            scopes="<scopes>",
            token_uri="<TOKEN_URL>",
            cdf_project_name="<COGNITE_PROJECT>",
        ),
    ),
]
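Assuming your client exposes the Transformations API, the list can then be created in one call; a minimal sketch:

```python
# client is an authenticated CogniteClient with access to Transformations.
client.transformations.create(transformations)
```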
Anatomy of a highly optimized time series database (TSDB) for real-time industrial applications
Contents: Design · Performance · Architecture · Backups · Access control · Roadmap
Design
Time series databases typically come in two flavors: write-optimized and read-optimized. Cognite Data Fusion Time Series Database (CDF TSDB) strikes a balance between the two, ensuring that tens of millions of data points per second can be ingested and read in response to queries simultaneously, reliably, and with ultra-low latency both for input/indexing and querying.
Write-optimized time series databases are useful as historians, constantly ingesting data from industrial equipment. But they are of limited use for large-scale analytics and are a poor choice to power interactive applications, as the stress from unevenly distributed user traffic may interfere with the reliable operation of time series ingestion. Examples include most industrial historians, as well as InfluxDB.
Read-optimized time series databases, on the other hand, are an excellent choice for analytical query loads, but struggle with streaming ingestion.
Generate client secret for OIDC
Please generate a new client secret at least once every 180 days.