The API docs have examples and list the available query parameters: https://api-docs.cognite.com/20230101/tag/Assets/operation/getAssets. As you can see, they have restrictions and must be in the correct form. You don't show in your post what the actual values of these query parameters are, but I'm assuming they are empty? The errors you are getting are because of this. I would advise you to start with no query parameters, then add the ones you need.
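As a rough sketch of that approach (the project name, cluster URL, and access token below are placeholders you would replace with your own), this first lists assets with no query parameters at all, then adds a single well-formed one:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ListAssetsExample
{
    static async Task Main()
    {
        // Placeholder project and cluster; substitute your own values.
        var project = "my-project";
        var baseUrl = $"https://api.cognitedata.com/api/v1/projects/{project}/assets";

        using var client = new HttpClient();
        // Placeholder token; however you authenticate, the header must be set on every request.
        client.DefaultRequestHeaders.Add("Authorization", "Bearer <access-token>");

        // Step 1: no query parameters at all.
        var plain = await client.GetAsync(baseUrl);
        Console.WriteLine(await plain.Content.ReadAsStringAsync());

        // Step 2: add only the parameters you need, in the form the API docs describe.
        var limited = await client.GetAsync(baseUrl + "?limit=10");
        Console.WriteLine(await limited.Content.ReadAsStringAsync());
    }
}
```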
Hi, try /api/v1, as in API version 1. The path you've pasted is vi, not v1.
This documentation is clearly a bit outdated. It should work fine on Azure.
You are explicitly creating a `RawRowCreate<RawRow<Dictionary<string, JsonElement>>>`, in other words, a raw row where the columns are a full `RawRow`. You want to make it a `RawRowCreate<Dictionary<string, JsonElement>>`, and move only the `Columns` part of the retrieved row over.
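Roughly, and assuming the create type exposes the usual `Key` and `Columns` properties as in the .NET SDK (double-check the exact names against the SDK), the fix looks like this:

```csharp
using System.Collections.Generic;
using System.Text.Json;
using CogniteSdk; // assumed namespace for RawRow<T> and RawRowCreate<T> in the .NET SDK

static class RawRowCopy
{
    // Build a create object from a row retrieved from CDF RAW,
    // carrying over only the Columns payload instead of nesting the whole row.
    public static RawRowCreate<Dictionary<string, JsonElement>> ToCreate(
        RawRow<Dictionary<string, JsonElement>> retrievedRow)
    {
        return new RawRowCreate<Dictionary<string, JsonElement>>
        {
            Key = retrievedRow.Key,
            Columns = retrievedRow.Columns
        };
    }
}
```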
That's possible for attribute values; you can store anything in those. But the relevant value here is the timestamp on data points, and those must be sent as UTC. In general, the server should send a timezone along with any non-UTC dates.
OPC-UA strictly defines how timestamps are transferred, and we adhere to that, so it shouldn't be necessary. Any server where timezones are an issue is non-compliant, and we don't support non-compliant OPC-UA servers.
An extractor is a piece of software that reads data from one place and writes it to another. You can use the Cognite extractor-utils library to help write one: https://cognite-extractor-utils.readthedocs-hosted.com/en/latest/?badge=latest
Events are a distinct resource type; you have to create them using our SDKs or APIs.
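As a minimal sketch with the .NET SDK (the external ID, timestamp, and type below are placeholders, and the `Client` is assumed to already be authenticated against your project; check the SDK docs for the exact API), creating an event looks roughly like this:

```csharp
using System.Threading.Tasks;
using CogniteSdk;

static class EventExample
{
    // Create a single event in CDF through the .NET SDK.
    public static async Task CreateEventAsync(Client client)
    {
        var ev = new EventCreate
        {
            ExternalId = "my-extractor/first-event", // placeholder external ID
            Description = "Event written by my extractor",
            StartTime = 1700000000000,               // milliseconds since the Unix epoch
            Type = "maintenance"                     // placeholder type
        };

        await client.Events.CreateAsync(new[] { ev });
    }
}
```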
You can find a NodeSet2.xml file for the base OPC-UA node hierarchy at https://files.opcfoundation.org/schemas/UA/1.04. You will always need to reference this XML file, since all other NodeSet files depend on it. It does not matter where the file is located on the extractor machine.
I showed you how to configure it above; the file needs to either be available at a URL reachable from the extractor, or be present locally. There is no limit on the number of nodes in the schema, but the extractor will load everything into memory, so if the file is very large you may need a lot of memory on the machine the extractor is running on.
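For reference, the relevant part of the configuration would look roughly like the sketch below. The key names are written from memory of the extractor's node-set-source option and should be checked against the configuration documentation; the server address and file name are placeholders:

```yaml
source:
  endpoint-url: "opc.tcp://my-server:4840"   # placeholder server address
  node-set-source:
    node-sets:
      # The base OPC-UA node hierarchy; every other NodeSet file depends on it.
      - url: "https://files.opcfoundation.org/schemas/UA/1.04/Opc.Ua.NodeSet2.xml"
      # A server-specific NodeSet file; it can live anywhere on the extractor machine.
      - file-name: "MyServer.NodeSet2.xml"
```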
Hi, the OPC-UA extractor is specialized for reading large amounts of data points, more so than the DB extractor. That said, this is often mostly dependent on the source system. It is likely that the OPC-UA interface for this solution just reads from the underlying database, in which case it may be much less efficient. I can only advise that you try OPC-UA, and switch to ODBC if it is too slow.
Hi, you can use the Cognite File Extractor to upload files to CDF on a schedule.
I think you are either looking at a node with id `i=62` in the wrong namespace, or there is something wrong with the server. According to the OPC-UA standard, `ns=0;i=62` is the BaseVariableType node.
max-read-length does appear to be missing from the docs; that's an oversight. The docs on end-time are also no longer accurate, and we will address this.
It is a workaround for servers that do not support ContinuationPoints: it limits the maximum length of each read request to the server. So if you request a year of history and set max-read-length to 30 days, it will take at least 12 requests to the server to cover that range, even if there are no data points in that time series at all. There is no reason at all to use it on servers that support ContinuationPoints for history, and setting it to 30 days is unlikely to work well even if you had such a server.
Hi, are you certain the nodes have metadata at all? This is absolutely not a guarantee in OPC-UA systems. Also, I don't know what made you set the root node to `i=62`, but this will almost certainly not produce the result you want. `ns=0;i=62` is the node BaseVariableType. If your server has time series, they will not be found under that node.
Hi, please read the documentation found here: https://docs.cognite.com/cdf/integration/guides/extraction/opc_ua/opc_ua_configuration. History is read from `start-time` to `end-time`. Your current configuration means that history will be read from time 0 (the beginning of server history) to 30 days ago. Also, `max-read-length` does not do what you think it does. It is a workaround for specific server issues. If you do not need it, I highly recommend turning it off.
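To illustrate, the relevant section of the configuration file is sketched below. The key names are as I recall them from the configuration schema and the values are placeholders (the exact timestamp format depends on the extractor version); `max-read-length` is shown only to make the point that it should normally be left out:

```yaml
history:
  enabled: true
  # History is read from start-time to end-time; leaving end-time unset reads up to "now".
  start-time: "..."
  # end-time: "..."
  # Only a workaround for servers without ContinuationPoint support. Leave it out otherwise.
  # max-read-length: "30d"
```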
I understand that, but what, exactly, did you do, and how is it going wrong?
What have you tried, and how is it failing?
Hi, this was answered here:
Hi, this isn't something we've really looked at, and we have no official guidelines for this source system. From reading the documentation it looks like there may be ways to develop custom plugins, but this isn't something we are able to help much with.
Here's the Alteryx SDK: https://pypi.org/project/ayx-python-sdk/
Here's the Cognite SDK: https://pypi.org/project/cognite-sdk/
You may be able to use these together to build a custom connector.
I don't know for sure. My googling seems to only indicate support for OPC Classic, not OPC-UA. Presumably you have access to the source system and related documentation and are better placed than me to determine what it supports and doesn't support. It looks to me like OPC Classic is how you connect to Honeywell PHD. If so, then are you sure that applications connecting to it aren't already using COM/DCOM? You can use COM+ or DCOM safely provided you make sure to run it on an isolated network, using a dedicated user with limited permissions.
If the server supports OPC-UA then they can use that. I'm not familiar with the source system. If the server only supports OPC Classic then there is no way around DCOM. It is indeed insecure, but that's the consequence of using technology that was deprecated over a decade ago.
If your extractor is running on the same machine you will use COM+; if they are on different machines it's DCOM. Configuring DCOM is hard, see for example https://campus.barracuda.com/product/archiveone/doc/46206124/how-to-configure-the-firewall-to-allow-dcom-connections/; we will publish similar documentation of our own. If you have a server on the same machine, you should be able to configure the extractor quite easily with little extra work. When connecting to an OPC Classic server with both live and historical data you are technically connecting to multiple different servers, so you need to tell the extractor which interfaces on the server to connect to. You will need to look up what versions of DA/HDA your server supports.
Documentation on this is in the works. It is non-trivial to connect over DCOM; COM+ should just work if you are on the same machine as the server.
I would really recommend trying the OPC Classic Extractor before you start trying to make your own, as it is non-trivial to do. The extractor connects over either COM+ or DCOM, though configuring your machines to properly let it through can be difficult.
This is too specific a feature for the OPC-UA extractor to support through configuration. If you really need this functionality and can't work around it, you might be best off modifying the extractor code itself. The OPC-UA extractor is open source: https://github.com/cognitedata/opcua-extractor-net. At https://github.com/cognitedata/opcua-extractor-net/blob/2eb0613313f1a866c8c5141740282b7f009c5205/Extractor/UAExtractor.cs#L343, or so, the extractor is in a state where you could read from a node in OPC-UA using uaClient.ReadRawValues, and update the Config object.