Yes! We offer this through endpoints under `/api/v1/projects/{your-project}/hostedextractors`.

You can read about some of the high-level concepts of that API here: https://docs.cognite.com/cdf/integration/guides/extraction/hosted_extractors

The reference docs for the API can be found here:
https://api-docs.cognite.com/20230101-beta/tag/Sources
https://api-docs.cognite.com/20230101-beta/tag/Jobs
https://api-docs.cognite.com/20230101-beta/tag/Destinations

For MQTT, you would add a source with type `mqtt`, then create a destination with a set of credentials, and finally create jobs (which is where you subscribe to topics, etc.).
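To make the three-step flow concrete, here is a hedged sketch of the request bodies involved. All field names below are illustrative assumptions, not the authoritative schema — check the Sources, Destinations, and Jobs reference pages above for the exact shapes.

```python
# Hypothetical payload builders for the three hosted-extractor calls
# described above (source -> destination -> job). Field names are
# illustrative assumptions; consult the API reference for the real schemas.

def mqtt_source(external_id: str, host: str, port: int = 1883) -> dict:
    """Body for POST .../hostedextractors/sources (type 'mqtt')."""
    return {"externalId": external_id, "type": "mqtt", "host": host, "port": port}

def destination(external_id: str, client_id: str, client_secret: str) -> dict:
    """Body for POST .../hostedextractors/destinations."""
    return {
        "externalId": external_id,
        "credentials": {"clientId": client_id, "clientSecret": client_secret},
    }

def job(external_id: str, source_id: str, destination_id: str, topic: str) -> dict:
    """Body for POST .../hostedextractors/jobs — this is where you
    subscribe to MQTT topics."""
    return {
        "externalId": external_id,
        "sourceId": source_id,
        "destinationId": destination_id,
        "config": {"topicFilter": topic},
    }

# CDF APIs generally wrap request bodies in an "items" list:
payload = {"items": [mqtt_source("my-broker", "mqtt.example.com")]}
```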
The raw-metadata config option simply means that the metadata should go into a table in CDF RAW. You can read more about the config parameters here: https://docs.cognite.com/cdf/integration/guides/extraction/opc_ua/opc_ua_configuration/
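As an illustrative sketch only, the option sits in the extractor's YAML config along these lines. The field names here are assumptions — the configuration page linked above has the authoritative schema:

```yaml
# Hypothetical fragment — verify names and structure against the
# OPC UA extractor configuration docs linked above.
raw-metadata:
  database: opcua-metadata   # RAW database to write metadata to
  assets-table: assets
  timeseries-table: timeseries
```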
I assume you’re referring to Azure Key Vault? Support for reading secrets from Key Vault is a planned feature, but it has not yet been added to the PI extractor. It’s something we are currently working on, and it will first be rolled out to the DB and File extractors before we add it to OPC UA and PI.
In terms of testing, it’s just a normal Python codebase, so there is nothing special you need to do. Just follow pytest’s documentation as usual.
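For example, a minimal pytest-style test looks like this. The function under test here is a made-up stand-in for your own code:

```python
# test_example.py — run with `pytest` from the project root.

def scale(values, factor):
    """Stand-in for a function from your own codebase."""
    return [v * factor for v in values]

def test_scale():
    assert scale([1, 2, 3], 2) == [2, 4, 6]

def test_scale_empty():
    assert scale([], 10) == []
```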
To check whether the server allows connections without encryption, or whether its certificate chains to a trusted root, simply run the extractor without configuring a custom root certificate. If the connection fails with SSL-related errors, the server requires a custom certificate.
Data set external IDs are not supported for the Documentum extractor. You can see a list of the supported config parameters on our documentation page.
This is an error coming from the operating system. The file is locked by another process, meaning no other process (such as the file extractor) can access it. This can, for example, happen when another process is writing to the file.

You can read more about file locking on Wikipedia; the article also has a few examples of scenarios where this might happen.

We will look into whether we can implement automatic retries when this happens, but there isn’t much to do here except trying again at a later time.
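Until then, if you control the process that hands files over, a simple retry loop is one workaround. This is a generic sketch, not extractor code:

```python
import time

def read_when_unlocked(path, attempts=5, delay=2.0):
    """Try to read a file, retrying if the OS reports it as locked.

    On Windows a locked file typically surfaces as PermissionError
    (a subclass of OSError); we retry a few times before giving up.
    """
    for attempt in range(attempts):
        try:
            with open(path, "rb") as f:
                return f.read()
        except OSError:
            if attempt == attempts - 1:
                raise  # still locked after all attempts: re-raise
            time.sleep(delay)
```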
The PI extractor should detect a new config and restart itself within a few minutes. How long are you waiting before restarting it manually?
The Cognite PI Extractor is built on top of the PI AF SDK, which also means the PI AF SDK needs to be installed on the computer running the extractor.
If you want more info, you can check the documentation page: https://docs.cognite.com/cdf/integration/guides/extraction/documentum/documentum_configuration#documentum
See the `host` field under the `documentum` section.
Which mode is the Documentum extractor running in here?

If you are running in D2 mode, you can configure which document repository the extractor runs against by changing the `documentum`/`host` parameter.
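In the YAML config that would look something like this (the host value is of course a placeholder for your own server):

```yaml
documentum:
  # Point the extractor at the document repository to use (D2 mode)
  host: https://my-d2-server.example.com
```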
When you download the extractor from CDF, it ships with several example configuration files for different source systems. These also contain comments explaining what the config parameters do, and which ones are optional or required.
Correct, for something like OSIsoft PI, it should be relatively plug-and-play. When you download the extractor from CDF you are also given an example configuration file you can use as a starting point for your own setup.
The extractor will reconnect automatically on connectivity issues. It also keeps track of extraction state so it can resume from where it left off if it is shut down or crashes. From the docs page:

"If the extractor reruns after a period of downtime, it resumes the backfill task and starts a frontfill task to fill in the gap between when the extractor stopped and the current time. When the frontfill task has caught up, the extractor returns to streaming live data points.

The extractor maintains an extraction state for the time range between the first and last data point inserted into CDF. Only the streaming task can insert data points in CDF within this range. Any changes to historical values already existing in CDF will only be updated in CDF when the extractor is streaming data."
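As a simplified model of that recovery logic (an illustration only, not the extractor's actual code), the task planning after a restart can be thought of like this:

```python
def plan_tasks(state, now):
    """Decide which tasks to run after a restart.

    state: (first, last) timestamps of the stored extraction state,
    or None if nothing has been extracted yet. Timestamps are
    simplified to integers with 0 as the start of history.
    """
    if state is None:
        # Fresh start: backfill history, frontfill up to now, then stream.
        return ["backfill", "frontfill", "stream"]
    first, last = state
    tasks = []
    if first > 0:
        tasks.append("backfill")   # resume the unfinished backfill
    if last < now:
        tasks.append("frontfill")  # fill the gap caused by downtime
    tasks.append("stream")         # then return to live streaming
    return tasks
```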
We have pre-built extractors for both of these systems. You can read about them on our docs page:

- PI extractor
- File extractor (which supports SharePoint)

The extractors themselves can be downloaded from fusion.cognite.com after you have logged in.
This is a response from the CDF API that usually means the RAW service itself is overloaded. It generally should not happen, and when it does it is usually only an intermittent error.

The DB extractor will retry failing requests towards CDF a number of times. That’s why this is logged as a warning and not an error (at least not yet): the extractor will retry the request several more times. If it eventually gives up, you will see a log line on the `ERROR` level telling you so. If that happens, the extractor may pick up that data again the next time it runs, depending on how you have configured state stores and incremental fields in your configuration file.
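As a generic illustration of the retry behavior described above (a sketch, not the DB extractor's actual implementation), retry-with-exponential-backoff looks roughly like this:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a retryable failure, e.g. an overloaded-service response."""

def with_retries(request_fn, max_attempts=5, base_delay=1.0):
    """Call request_fn, retrying transient failures with exponential
    backoff and jitter. Generic sketch only."""
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up: this is where an ERROR would be logged
            # exponential backoff with jitter between attempts
            time.sleep(base_delay * (2 ** (attempt - 1) + random.random()))
```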