I am looking for information about 3D models. Does Reveal/Cognite support programmatically changing the animation on 3D models uploaded to CDF? Specifically, I have a 3D model of a pump and would like to change the animation speed based on the real-time speed I get from my API. Is that possible? If so, could you point me to example code showing how to do it? Also, the 3D model uploaded to CDF does not display like the original model did: after uploading, it loses its colors and animations. Could you tell me why that is? Thanks
Hello! I was looking around and cleaning some data, and realized I can't find out whether you can access Cognite resources that are not contained in a data set. That is, is there a way to find/fetch/list Cognite resources that are "floating free"? I have tried setting data_set_ids and data_set_external_ids to None, but that is the same as not specifying them at all.
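For now the only workaround I can think of is filtering client-side — a minimal sketch, assuming the resource objects expose a data_set_id attribute (the SDK call at the bottom is commented out since it needs credentials):

```python
from types import SimpleNamespace

def without_dataset(resources):
    """Keep only resources whose data_set_id is unset, i.e. 'floating free'."""
    return [r for r in resources if getattr(r, "data_set_id", None) is None]

# Stand-in objects to show the behaviour (in practice these would be Asset/Event/File objects):
demo = [
    SimpleNamespace(external_id="in-a-dataset", data_set_id=42),
    SimpleNamespace(external_id="floating", data_set_id=None),
]
floating = without_dataset(demo)

# With the Python SDK (assumed usage, requires credentials):
# from cognite.client import CogniteClient
# client = CogniteClient()
# floating_assets = without_dataset(client.assets.list(limit=None))
```

Of course this means listing everything first, which is exactly the overhead I was hoping the API could avoid.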
I seem to have the same problem as JON L had a few months ago: I cannot see the time series for the above training course. I've tried finding the Charts data set but only have options for publicdata, cdf-fundamentals, de-transformation and infield-training. I've tried each of these and cannot find 41-PT-1002_VALUE. Can you point me in the right direction, please?
Hi, I'm trying to add more fields to a data set's properties and would like to display them in the UI. Besides the existing fields (Labels, Owners, External ID, etc.), is it possible to display custom fields in the UI (similar to the metadata fields consoleLabels and consoleOwners)? A good example would be how metadata is displayed in a file's details.
Looking at the API documentation (https://pr-411.docs.preview.cogniteapp.com/api/v1/), it seems one cannot build an automated pipeline to create and update 3D models using only the Python SDK? I aim to build a pipeline using Cognite Functions and the Python SDK that runs nightly, reading files from SharePoint and uploading or updating 3D models in CDF. If this is possible, where can I find the guiding documentation?
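To check feasibility, is the nightly job essentially something like the sketch below? The helper is plain Python; the commented lines show the three_d API calls as I understand them from the SDK reference, but the names may differ between versions, so treat them as assumptions:

```python
from types import SimpleNamespace

def find_model_id(models, name):
    """Return the id of an existing 3D model with this name, or None if it must be created."""
    for model in models:
        if model.name == name:
            return model.id
    return None

# Demo with stand-in objects (in practice `models` would come from client.three_d.models.list()):
existing = [SimpleNamespace(id=7, name="pump-station")]
model_id = find_model_id(existing, "pump-station")

# Hedged sketch of the nightly job (requires credentials; verify names against your SDK version):
# from cognite.client import CogniteClient
# from cognite.client.data_classes import ThreeDModelRevision
# client = CogniteClient()
# mid = find_model_id(client.three_d.models.list(limit=None), "pump-station")
# if mid is None:
#     mid = client.three_d.models.create(name="pump-station").id
# file_id = client.files.upload("pump-station.fbx").id
# client.three_d.revisions.create(model_id=mid, revision=ThreeDModelRevision(file_id=file_id))
```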
I have provided the essential credentials in the config.yml file in the OpcUaExtractor config folder; however, when I run it, I get this error. How can I fix it?
Hello team, I am working on creating a POC for the PROSPER connector, and I have checked the documentation for the Cognite PROSPER connector. The connector is provided by Cognite, but where can I get a simulator for POC purposes? Also, in the connector configuration, where do I provide the simulator details or simulator name?
If there are a large number of instances in a view, won't that slow down loading in the Fusion UI? Couldn't this be fixed by trimming to some maximum limit and still displaying the view?
Hi team, I'm working with the OPC UA extractor. I have configured metadata-targets and metadata-mapping, but I'm not able to see the metadata along with the time series data. Please find the configuration in the attachment and a screenshot of how the time series data looks in CDF. Please let me know if any additional configuration is needed to see the metadata along with the time series data.
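For context, my metadata-targets section is shaped roughly like the sketch below (retyped from memory in case the attachment does not come through; the exact keys should be checked against the extractor documentation for your version):

```yaml
metadata-targets:
  clean:
    assets: true
    timeseries: true   # intended to write the mapped metadata onto the time series in CDF
```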
Hello, what are the best practices for setting max_workers in the SDK? If I do not set the global config, the default values for workers and retries are picked up, correct? Can you give any reference on how to set GlobalConfig? I did not see any way to pass a GlobalConfig to ClientConfig.
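For reference, the pattern I understood from the docs is that these knobs live on a module-level global_config object that must be set before the first client is created, rather than being passed into ClientConfig — a sketch, assuming a recent SDK version (please correct me if the attribute names are off):

```python
from cognite.client import CogniteClient, global_config

# Set before the first CogniteClient is instantiated; these are process-wide settings.
global_config.max_workers = 10
global_config.max_retries = 5

client = CogniteClient()  # picks up the global settings above
```

Is this the intended way, and is there any documented guidance on sensible max_workers values?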
Hello, is it possible to test CogniteAuthError using the monkeypatch context manager? Regards, Hakim Arezki
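To clarify what I am after, here is the pattern with unittest.mock (pytest's monkeypatch.setattr does the same job inside a test); FakeAuthError is a local placeholder standing in for cognite.client.exceptions.CogniteAuthError, since all I want to verify is that my code reacts to the exception:

```python
from unittest import mock

class FakeAuthError(Exception):
    """Placeholder for CogniteAuthError (assumed import path: cognite.client.exceptions)."""

def fetch_assets(client):
    # The code under test: any call that can fail with an auth error.
    return client.assets.list(limit=10)

# Patch the client so the call raises, simulating an expired/invalid token.
fake_client = mock.Mock()
fake_client.assets.list.side_effect = FakeAuthError("token rejected")

try:
    fetch_assets(fake_client)
    raised = False
except FakeAuthError:
    raised = True
```

With pytest this would be `monkeypatch.setattr(client.assets, "list", raiser)` plus `pytest.raises(CogniteAuthError)` around the call — is that the recommended approach?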
Does CDF support direct ingestion of time series with non-numeric values (enums)? If the workaround is to convert enums to integers for ingestion, that is unnecessary overhead and a limitation; you also have to remember to convert the values back for visualization.
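For reference, the workaround I mean is a two-way mapping like this (the enum labels are made up for illustration):

```python
# Forward mapping used at ingestion time; the inverse is used at read/visualization time.
STATUS_TO_INT = {"OFF": 0, "IDLE": 1, "RUNNING": 2}
INT_TO_STATUS = {v: k for k, v in STATUS_TO_INT.items()}

def encode(states):
    """Enum labels -> integers, the form ingested into a numeric time series."""
    return [STATUS_TO_INT[s] for s in states]

def decode(values):
    """Integers read back from CDF -> enum labels for display."""
    return [INT_TO_STATUS[v] for v in values]
```

Having to maintain both dictionaries everywhere the data is consumed is the overhead I'd like to avoid.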
We are delving into the specifics of using Cognite for certain use cases and have identified the need to automatically extract asset data from engineering data sources such as P&ID documents. Are there any features available to facilitate this? For instance, our clients possess numerous P&ID documents and require automatic generation and structuring of asset hierarchies.
I want to create a new time series in CDF based on a calculation on another time series. This can be done by saving the calculation in a schedule. However, I would like to run the calculation on the full input signal (which may span multiple years back in time), and it seems the scheduling limit is 30 days (can anyone verify?). Is it possible to run a single calculation in Charts (spanning the entire period) and save the result to a time series object without going via a schedule? Or do I need to use the Python SDK for this? Thanks in advance! Vetle.
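If the 30-day limit is real, what I am hoping to avoid writing myself with the Python SDK is roughly this: split the full history into windows, run the calculation per window, and insert the results. The windowing is plain Python; the SDK calls are left as hedged comments since they need credentials:

```python
from datetime import datetime, timedelta

def windows(start, end, max_days=30):
    """Split [start, end) into consecutive windows of at most max_days each."""
    out, cur, step = [], start, timedelta(days=max_days)
    while cur < end:
        nxt = min(cur + step, end)
        out.append((cur, nxt))
        cur = nxt
    return out

spans = windows(datetime(2020, 1, 1), datetime(2020, 3, 1))

# Per window (assumed SDK usage; my_calculation stands in for the Charts formula):
# for lo, hi in spans:
#     dps = client.time_series.data.retrieve(external_id="input-ts", start=lo, end=hi)
#     client.time_series.data.insert(my_calculation(dps), external_id="output-ts")
```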
I am trying to create a time series subscription and am receiving the error below: "Must have access to WRITE time series subscription." My question is: if the subscription is a listener, why does it need WRITE access?
Ensuring the security of PI credentials is critical in the Cognite PI extractor. We want to store sensitive information, such as usernames and passwords, in Azure Key Vault and have the extractor read these credentials from the vault when needed, giving us a secure, centralized way to manage PI authentication details and minimizing the exposure of sensitive information, in line with best practices for credential management in cloud environments. We have implemented this in other custom extractors. Can you share the documentation for implementing it in the Cognite PI extractor?
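In our custom extractors the config is shaped roughly like the sketch below; I am not sure the PI extractor uses the same keys, so treat every name here as an assumption to verify against its documentation:

```yaml
# Sketch only — key names must be checked against the PI extractor docs.
key-vault:
  keyvault-name: my-keyvault
  authentication-method: default   # e.g. managed identity

pi:
  username: !keyvault pi-username   # names of the secrets in the vault
  password: !keyvault pi-password
```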
As per the picture below, we need to click "Create dataset" after navigating to "Use the Data Catalog". But I am unable to find this option for creating a data set; can someone help?
I'm trying to run a GraphQL query in my Python notebook, using this snippet from the SDK docs:

```python
from cognite.client import CogniteClient

c = CogniteClient()
res = c.data_modeling.graphql.query(
    id=("mySpace", "myDataModel", "v1"),
    query="listThings { items { thingProperty } }",
)
```

However, the data_modeling.graphql.query() method is not available; I get this error:

```
AttributeError: 'DataModelingGraphQLAPI' object has no attribute 'query'
```

I'm using SDK version 7.5.1. How do I run my GraphQL query?
I am getting the following error when executing the `poetry add pandas` command in the Data Engineer Basics - Integrate course. I have attached the complete output for your reference. Previously I got an error saying to install the latest Microsoft Visual Studio C++; after installing it, I got this error.

```
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.38.33130\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2

at ~\AppData\Roaming\pypoetry\venv\Lib\site-packages\poetry\installation\chef.py:164 in _prepare
    160│
    161│     error = ChefBuildError("\n\n".join(message_parts))
    162│
    163│     if error is not None:
  → 164│         raise error from None
    165│
    166│     return path
    167│
    168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a pro
```
To move data from a source system to CDF, it is mentioned that we might require custom extractors. Can you explain in very simple terms what exactly extractors are and how to create them? Are they just pieces of code connecting two systems? Also, events are one of the resource types. Do events (like a 2-hour shutdown) need to be created manually in CDF, or are they detected automatically based on time series values?
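To check my understanding: is an extractor essentially just a piece of code like the sketch below, i.e. read from the source, reshape into CDF's format, and push via the API? The parsing helper is runnable; the push is a commented, assumed SDK call:

```python
import csv
import io
from datetime import datetime, timezone

def rows_to_datapoints(csv_text):
    """Parse 'timestamp,value' rows into (epoch-ms, float) pairs,
    the shape a CDF datapoint insert expects."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.fromisoformat(row["timestamp"]).replace(tzinfo=timezone.utc)
        out.append((int(ts.timestamp() * 1000), float(row["value"])))
    return out

datapoints = rows_to_datapoints("timestamp,value\n2024-01-01T00:00:00,13.7\n")

# Push to CDF (assumed SDK usage, requires credentials; the external_id is made up):
# from cognite.client import CogniteClient
# client = CogniteClient()
# client.time_series.data.insert(datapoints, external_id="pump-speed")
```

If that is roughly right, then my remaining question is only about the events part above.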
In the video on contextualizing engineering diagrams (say P&IDs), can you explain in more detail what data is fetched from them and how it is contextualized? Is it just the tag names, or is much more done than that? Do you use vision algorithms to understand which equipment is connected to which, or do you just extract the tag names from the image file with a parser?
Are the calculations provided in Charts (originating from the `indsl` Python library) available to all CDF projects? I'm wondering if it's possible to contribute new calculations to Charts that are written in `indsl` but deployed only to the specific CDF project you're working in (i.e., not accessible to people outside this project)? I want to write new calculations using `indsl` that SMEs on my team can use in Charts for specific time series analysis, but I think it would be best to keep them isolated within our CDF project so as not to "overflow" the global Charts with lots of new calculations that are mostly relevant only to my team/company. Any response is appreciated, thanks :)
In Charts you can import your own calculations, but I can't find any documentation on the required format. It seems to be a JSON file, but how should this JSON be structured? If anyone has examples of imported calculations, sharing them would be very much appreciated. Thank you.
I am following the CDF learning exercises. All was good until I got to the transformation of time series data, where I get this error. I don't know how to fix it; I have gone back and repeated the steps a few times.