Got an error when deleting some files:

```
{ "code": 500, "message": "Internal server error. Please report this error to support@cognite.com and provide us with your request id: 277d50c2-7bb4-98eb-850f-91738b07d7e3 and cluster name: westeurope-1"}
```

Here's another instance of the error:

```
{ "code": 500, "message": "Internal server error. Please report this error to support@cognite.com and provide us with your request id: db1f2315-cb72-9ca4-aab8-075f7d18203f and cluster name: westeurope-1"}
```
Can we have a text-editing mode in Cognite Hub where we can type Markdown? We developers love Markdown.
I have implemented block-updates to work with CDF Files. This by itself works quite well, as documented here.

However, for very large files there are some problems. After some time, typically around 4 minutes, one of the block updates fails with a simple `ReadError`. There is no response from the API.

Some logs from our application:

```
DEBUG:odp.odcat.storage.cdffs.async_spec:Uploaded 133120 bytes to https://westeurope-1.cognitedata.com/api/v1/files/storage/cognite/6723545981847516/7406155183547467/wod_gld_2013.nc?sig=REDACTED&rsct=application%2Foctet-stream&sp=cw&sr=b
DEBUG:odp.odcat.storage.cdffs.async_spec:Total bytes uploaded: 2660989807
DEBUG:odp.odcat.storage.cdffs.async_spec:Data: len=5376000, MD5=6a0c0db1b66dac18ee28e8b082830667
DEBUG:odp.odcat.storage.cdffs.async_spec:Buffer cursors: start=0, end=5242880, final=False
DEBUG:httpx._client:HTTP Request: PUT https://westeurope-1.cognitedata.com/api/v1/files/storage/cognite/6723545981847516/7406155183547467/wod_gld_2013.nc?sig=RED
```
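A mitigation we are considering, sketched below: wrap each block PUT in an explicit timeout and retry on `ReadError`. This assumes the signed upload URL behaves like a block-blob URL where re-uploading the same block is safe; the function name and retry policy are our own, not part of any SDK.

```python
import httpx

# Hypothetical retry wrapper around a single block upload. `url` is the
# signed upload URL and `data` is one block of the file, both coming from
# our existing cdffs-based uploader.
def put_block_with_retry(url: str, data: bytes, retries: int = 3) -> None:
    # Generous read/write limits, since the failures show up after ~4 minutes.
    timeout = httpx.Timeout(30.0, read=300.0, write=300.0)
    for attempt in range(1, retries + 1):
        try:
            response = httpx.put(
                url,
                content=data,
                headers={"Content-Type": "application/octet-stream"},
                timeout=timeout,
            )
            response.raise_for_status()
            return
        except httpx.ReadError:
            # The connection died without a response; assuming per-block
            # idempotency, retrying the same block should be safe.
            if attempt == retries:
                raise
```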
Hi there!

I have a use case where a file is uploaded by a user to an API. The API then uploads the file to CDF Files. We want to avoid holding the full file in memory, and therefore must stream the file contents from the request handler directly into CDF Files.

There are two ways of achieving this:

1. Stream the request body from the request handler directly into CDF Files' upload URL
2. Chunk the request body and upload each chunk as a separate request

The first option may be achievable (a sketch follows below), but I don't believe the second option is possible.

Do you have any insight into whether it is possible to chunk a file upload like this in CDF Files?
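For reference, here is a rough sketch of the first option, assuming `files.create` returns the file metadata together with a one-time upload URL (as in the Python SDK), and that `request_stream` is a file-like object exposed by our request handler (a hypothetical name):

```python
import requests
from cognite.client import CogniteClient
from cognite.client.data_classes import FileMetadata

client = CogniteClient()

# Create the file in CDF without content; this also returns an upload URL.
file_metadata, upload_url = client.files.create(FileMetadata(name="upload.bin"))

# `request_stream` is a file-like object from the request handler (hypothetical).
# requests streams the body chunk by chunk when given a file-like object,
# so the full file is never held in memory at once.
response = requests.put(
    upload_url,
    data=request_stream,
    headers={"Content-Type": "application/octet-stream"},
)
response.raise_for_status()
```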
It would be awesome if we could explore CDF Geospatial from the Fusion API, similar to how we explore assets or sequences.
CDF Templates currently does not support geospatial; however, this would be a very natural integration as Geospatial is reaching general availability.

In the case of the Ocean Data Platform, we would like to be able to track data from moving equipment, for example a floating sensor called an Argo Float that drifts around in the ocean collecting temperature and other interesting measurements. The measurement series could be expressed as a time series, which could then be tied to geospatial locations.
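To make the request concrete, here is a rough pseudo-schema (plain Python; the property names and the idea of linking via a time series external id are our own, not an existing Templates/Geospatial integration) for the kind of feature type we have in mind: each observation carries a position plus a pointer to the time series holding the measurements.

```python
# Illustrative pseudo-schema for an Argo Float feature type.
argo_float_observation = {
    "externalId": "argo_float_observation",
    "properties": {
        "float_id": {"type": "STRING"},
        "position": {"type": "POINT", "srid": 4326},
        "observed_at": {"type": "TIMESTAMP"},
        # Link to the CDF time series holding e.g. temperature measurements.
        "temperature_ts_external_id": {"type": "STRING"},
    },
}
```

A template could then join each feature to the referenced time series, giving a single queryable view of where and when each measurement was taken.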
Hi there,

I was wondering how one would go about representing CDF client credentials as a connection string.

The main idea is to have a single string representing the credentials needed to connect to CDF. The connection string itself should be unwrapped by whatever client it is passed to (Python SDK, JS SDK, etc.), and should of course not be sent as part of the request to the API.

For tenants using API keys, something like this may make sense:

```
cdf://CLIENT_NAME:API_KEY@api.cognitedata.com/PROJECT_NAME
```

However, using tokens instead of API keys is a bit trickier:

```
cdf://TOKEN_USERNAME:TOKEN_SECRET@api.cognitedata.com/PROJECT_NAME?token_url=TOKEN_URL&token_scopes=TOKEN_SCOPES
```

Naturally, it is possible to put anything in the URL parameters, but generalizing the username, password, and path properties would be great.

Some challenges that come to mind:

- How to differentiate between different authentication methods (i.e. API keys and tokens)
- Is it possible to keep a consistent scheme regardless of a
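A minimal sketch of how a client could unwrap such a string using nothing but the standard library (the field names follow the proposal above and are illustrative):

```python
from urllib.parse import urlparse, parse_qs

def parse_cdf_connection_string(conn: str) -> dict:
    parsed = urlparse(conn)
    assert parsed.scheme == "cdf", "not a CDF connection string"
    params = parse_qs(parsed.query)
    return {
        "client_name": parsed.username,   # CLIENT_NAME or TOKEN_USERNAME
        "secret": parsed.password,        # API_KEY or TOKEN_SECRET
        "base_url": f"https://{parsed.hostname}",
        "project": parsed.path.lstrip("/"),
        # Only present for token-based credentials.
        "token_url": params.get("token_url", [None])[0],
        "token_scopes": params.get("token_scopes", [None])[0],
    }

creds = parse_cdf_connection_string(
    "cdf://my-client:my-key@api.cognitedata.com/my-project"
)
```

One wrinkle: values such as `token_url` would need to be percent-encoded to survive inside the query string.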
Hi there,

I am trying to authenticate the CogniteClient in bluefield using our Azure B2C:

```python
import requests
from cognite.client import CogniteClient

# client_id, username and password are defined elsewhere.
response = requests.request(
    "POST",
    "https://oceandataplatform.b2clogin.com/oceandataplatform.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "password",
        "client_id": client_id,
        "scope": "openid https://westeurope-1.cognitedata.com/user_impersonation",
        "username": username,
        "password": password,
    },
)
creds = response.json()

client = CogniteClient(
    token=creds["access_token"],
    project="oceandata",
    base_url="https://westeurope-1.cognitedata.com",
    client_name="cognite-python-dev",
    debug=True,
)
```

This works perfectly :)

Edit: I initially asked how to get this working before spotting a typo in the code. Everything works fine now. Leaving the post here so that others may use it for reference.
Hi everyone,

We in C4IR Ocean are running a series where we are challenging the community to model a certain problem in CDF. The aim of this series is to facilitate discussion and invite community members to share interesting solutions and techniques.

In this second entry of the series we will focus on the SpatioTemporal Asset Catalog (STAC).

The SpatioTemporal Asset Catalog (STAC) specification provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered. A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time. The goal is for all providers of spatiotemporal assets (Imagery, SAR, Point Clouds, Data Cubes, Full Motion Video, etc.) to expose their data as SpatioTemporal Asset Catalogs (STAC), so that new code doesn't need to be written whenever a new data set or API is released. (source)

In C4IR Ocean, we are primarily dealing with geospatial data. The data originates from mult
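For reference, a minimal STAC 1.0.0 item can be sketched as a plain Python dict; the id, geometry, and asset href below are made up for illustration:

```python
# A minimal STAC item: a GeoJSON feature with a datetime and an asset link.
stac_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "wod_gld_2013",
    "geometry": {"type": "Point", "coordinates": [3.2, 60.1]},
    "bbox": [3.2, 60.1, 3.2, 60.1],
    "properties": {"datetime": "2013-06-01T00:00:00Z"},
    "links": [],
    "assets": {
        "data": {
            "href": "https://example.com/wod_gld_2013.nc",
            "type": "application/x-netcdf",
        }
    },
}
```

The modeling question is then how such items, and the catalogs/collections that group them, map onto CDF resources.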
Hi everyone,

We in C4IR Ocean are starting a series where we are challenging the community to model a certain problem in CDF. The aim of this series is to facilitate discussion and invite community members to share interesting solutions and techniques.

The first challenge focuses on OpenLineage.

OpenLineage is an open standard for metadata and lineage collection designed to instrument jobs as they are running. It defines a generic model of run, job, and dataset entities identified using consistent naming strategies. The core lineage model is extensible by defining specific facets to enrich those entities.

In C4IR Ocean, we are dealing with data from multiple providers. Some datasets are open and publicly available, others are closed. It is therefore important to keep track of where data is coming from, and what transformations have been applied to the data after it is read from the provider.

We are seeing a good landscape of data lineage solutions, both open and closed source. At the sam
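For reference, an OpenLineage run event boils down to a small JSON document tying a run of a job to its input and output datasets; a minimal example, sketched as a Python dict with made-up names:

```python
# A minimal OpenLineage run event; ids, namespaces and names are illustrative.
run_event = {
    "eventType": "COMPLETE",
    "eventTime": "2021-11-03T10:53:00Z",
    "run": {"runId": "d46e465b-d358-4d32-83d4-df660ff614dd"},
    "job": {"namespace": "odp", "name": "ingest_wod_profiles"},
    "inputs": [{"namespace": "noaa", "name": "wod_gld_2013"}],
    "outputs": [{"namespace": "cdf", "name": "oceandata.wod_profiles"}],
    "producer": "https://example.com/our-ingest-pipeline",
}
```

The challenge is then how to represent these runs, jobs, and datasets, and their relationships, in CDF.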
Hi there,

I am looking for resources on the CDF Templates API.

So far, the best introduction I have found is the Python SDK documentation, as well as the information at docs.cognite.com, which focuses on using templates in Fusion.

Can you provide more information about how to use the Templates API?
We are looking into cloud-optimized storage formats such as GeoTIFF, Parquet, etc. One of the things we are trying to determine is whether we could fully utilize such cloud-optimized formats with CDF Files.

Will the download link returned by CDF Files allow us to perform seek operations and only download parts of the file?
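A quick way we could test this ourselves, assuming the returned link points at an object store that honors the HTTP `Range` header (Azure Blob and GCS both do):

```python
import requests

# `download_url` is assumed to be a link already obtained from the Files API.
def read_range(download_url: str, start: int, end: int) -> bytes:
    response = requests.get(
        download_url,
        headers={"Range": f"bytes={start}-{end}"},
    )
    # 206 Partial Content means byte-range seeks work; a plain 200 means the
    # store ignored the header and returned the whole file.
    if response.status_code != 206:
        raise RuntimeError(f"Range requests unsupported: {response.status_code}")
    return response.content

download_url = "https://example.com/signed-download-link"  # placeholder
header = read_range(download_url, 0, 16383)  # e.g. just the GeoTIFF header
```

If this returns 206, libraries that rely on byte-range access to cloud-optimized formats should work directly against the link.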