Hi @Edel Hvesser-Andersen, the link to the guide for reporting data quality appears to be broken. Would it be possible to share it again?

Thanks in advance,
André
Would it be possible to update this for the latest version, cognite-sdk 7.41.0?
Welcome to the Cognite community, @Luiz Felipe Lima da Cruz! It's great to see the Radix team around here.
Great news indeed! We've been eagerly awaiting this material, which will prove incredibly valuable for our teams involved in data modeling. Thank you for providing it!
Great to hear, @Eniko Farkas. We're also in search of resources on data modeling. Thank you!
Thank you, @Frank Danielsen, for the prompt response on this matter. Your explanation makes complete sense to me, and I will certainly consider these trade-offs when building and deploying extractors for our clients' use cases. This highlights the significance of Cognite Hub; we continue to learn more about Cognite every day with your team.
Hi @Frank Danielsen and @Jan Inge Bergseth ! I thoroughly enjoyed the article; we've implemented some of those steps at the Cognite boot camp. However, I'd like to clarify whether deploying an extractor as a Cognite function in a high-volume scenario is considered a best practice, given the limited resources of the cluster. Could you confirm if my understanding is correct? If not, I'm eager to learn more about the recommended best practices for running ETL. Would utilizing solutions like Azure Data Factory be preferable, or is it more advisable to run the Cognite extractors on a dedicated cluster within our client's cloud provider or their local infrastructure? Looking forward to your insights.
Thanks, @Glen Sykes! I'm always learning from you guys. I really appreciate your assistance.
Hi @Ankit Kothawade, adding a piece of information: time series data typically flows directly into CDF and bypasses the RAW staging area. The Cognite team is welcome to correct me if I am mistaken. If you agree, I'd like you to mark this topic as answered.
Hi @Ankit Kothawade, it's crucial to understand that not all data from sources needs to be copied to the staging area. Techniques like data streaming and selective data extraction can minimize unnecessary data movement, optimizing performance. Additionally, incremental ingestion, focusing on the data required for specific use cases, is highly advantageous, and Cognite offers robust support for incremental ingestion.

Determining the most suitable ingestion approach often involves gathering use case requirements from data consumers and working backward. Key questions to consider include:

- What are the business needs driving the data ingestion?
- What are the expectations for data quality, granularity, and timeliness?
- When does the data need to meet these expectations to support business objectives effectively?

The last time I read about Cognite's performance, it was reported that the largest Cognite Data Fusion® Time Series cluster stored around 15 trillion data points.
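To make the incremental-ingestion idea concrete, here is a minimal, self-contained Python sketch of a high-water-mark cursor: on each run, only rows newer than the last-seen timestamp are extracted. Everything here is hypothetical (the `SOURCE` rows, the function name, the in-memory cursor); a real extractor would persist the cursor between runs and push the new rows to CDF, but the selection logic is the same.

```python
# Hypothetical source rows: (timestamp_ms, value) pairs, as a source
# system might return them. Not tied to any specific Cognite API.
SOURCE = [
    (1_700_000_000_000, 1.0),
    (1_700_000_060_000, 2.0),
    (1_700_000_120_000, 3.0),
]

def extract_incremental(rows, last_seen_ms):
    """Return only rows newer than the stored cursor, plus the new cursor.

    Running this repeatedly with the returned cursor yields each row
    exactly once, which is the essence of incremental ingestion.
    """
    new_rows = [r for r in rows if r[0] > last_seen_ms]
    new_cursor = max((r[0] for r in new_rows), default=last_seen_ms)
    return new_rows, new_cursor

# First run: cursor starts at 0, so every row is extracted.
rows, cursor = extract_incremental(SOURCE, 0)
# Second run: nothing new at the source, so nothing is re-extracted.
rows_again, cursor_again = extract_incremental(SOURCE, cursor)
```

The same pattern scales down network traffic dramatically when the source holds years of history but only the last few minutes are new.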
I would like to piggyback on the previous request and also request access to the training. I had made this request earlier, but since I wasn't using a corporate email, we couldn't proceed with the course approval.
Thank you, @Eniko Farkas. I've had a discussion with Flávio Guimarães regarding the matter, as you suggested.
Hi @MattH, @Eira Monstad, I apologize if my understanding of the request is unclear; this is merely a suggestion to consider alternative approaches. It's quite common to address scenarios like this by configuring a reverse proxy and load-balancing solution such as NGINX (NGINX Plus or NGINX Open Source, for example). NGINX can also continually test TCP or UDP upstream servers, avoid failed servers, and gracefully incorporate recovered servers into the load-balanced group. This type of solution lets us sidestep adding network dependencies to applications and enhances connection resilience.

In an enterprise environment, a reverse proxy like NGINX can provide load balancing, SSL/TLS termination, caching, and TCP/UDP load balancing, with high concurrency and low resource usage.
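For illustration, a minimal nginx configuration sketch of the setup described above. All hostnames, ports, and certificate paths are placeholders, not real endpoints. The `max_fails`/`fail_timeout` parameters shown give passive health checking in open-source NGINX; the active `health_check` directive mentioned in the docs is an NGINX Plus feature.

```nginx
# Sketch only: reverse proxy + load balancing across two hypothetical
# upstream API servers. Adjust names, ports, and cert paths for your setup.
upstream backend_api {
    server app1.internal:8080 max_fails=3 fail_timeout=30s;  # passive health checks
    server app2.internal:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass http://backend_api;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

A failed upstream is temporarily taken out of rotation and reintroduced once it recovers, so client applications never need to know about individual backend servers.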
Hi @Yewtaah, welcome to the vibrant community of Cognite! I'm excited to have you here. As someone deeply engaged in the realm of Cognite, I'm available to exchange knowledge and insights to help foster growth and understanding within our community.

For anyone looking to deepen their expertise or start their journey with Cognite, I highly recommend exploring the Cognite Academy training paths. These paths are designed to provide a comprehensive understanding of Cognite's tools, technologies, and methodologies, ensuring a strong foundation for leveraging its capabilities effectively.

Feel free to reach out if you have any questions or need guidance along your Cognite journey. Together, let's explore and harness the potential of this innovative platform!
Hi @fernandolara It's great to see you here. Could you please provide the curl request that you are using?
Hello @Faris,

There is already a feature request open for this issue. However, as indicated in the response, there are currently no plans from the Cognite team to implement it. You can find more details and check if it aligns with your request by visiting https://hub.cognite.com/ideas/cdf-auditing-feature-is-required-2235.

Hope this information is helpful!

Cheers, mate!
André
Welcome to the Cognite Hub community, @Vinod Arya!
Thank you very much for your help, @Que Tran. That is what I was looking for; great news.
Thank you very much @Glen Sykes
I asked this question a few months ago when I was just starting with Cognite, but unfortunately, I didn't receive a response. However, based on my understanding, you can utilize Cognite data to create your ML model. Subsequently, to run your model, for example, you might deploy it on a compute cluster and then call an Azure Machine Learning endpoint to execute it. I'm certain that there are other possibilities available. If anyone can assist in clarifying whether all of this can be achieved within the Cognite Platform, please feel free to correct me.
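To illustrate the pattern described above, here is a minimal Python sketch of shaping datapoints (e.g. retrieved from CDF) into a request body for an external scoring endpoint. Everything here is an assumption for illustration: the payload schema, the function name, and the endpoint shown in the comment are hypothetical, not an official Cognite/Azure ML integration.

```python
import json

def build_scoring_payload(datapoints):
    """Shape (timestamp, value) pairs into a JSON body for a hypothetical
    Azure ML online endpoint. The schema here is illustrative only; a real
    endpoint defines its own expected input format."""
    return json.dumps({
        "input_data": [{"timestamp": ts, "value": v} for ts, v in datapoints]
    })

body = build_scoring_payload([(1, 0.5), (2, 0.7)])

# In a real deployment you would POST this body to the endpoint, e.g.:
#   requests.post(ENDPOINT_URL, data=body,
#                 headers={"Authorization": f"Bearer {token}",
#                          "Content-Type": "application/json"})
```

The point is simply that Cognite acts as the data source, while the model itself runs wherever it is deployed; the glue is an ordinary HTTP call.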
Hi all,

As no one has responded, I presume this feature is not available and may not be needed in Cognite right now. Unfortunately, this client has an ongoing use case that requires accessing assets from P&IDs, as they lack another data source for it.
Thank you very much, @Dilini Fernando. I really appreciate your assistance.
André
The training material references a project named 'ds-cognitefunctions.' However, when I tried to access this project, I received the message 'Not a valid organization name.' I am presuming that this project is no longer available.
Hi @Kristian Nymoen, here's another reference that can assist you in using the Cognite API to upload your 3D models:
https://developer.cognite.com/dev/guides/upload-3d-models/upload-3d/
Hi @Raluca Bala, I'm currently using the repositories listed below. However, both of them are similar, and the last update was four years ago.

https://github.com/cognitedata/interacting-with-open-industrial-data
https://github.com/cognitedata/open-industrial-data

If this is for training purposes, I would assume that the only publicly available data is within the 'publicdata' project in Cognite Data Fusion. If you happen to find something else, please share.

Regards, Andre