@Mugless Durry Since it is a learning course, I suppose the solution would be to just create the data set with write_protected=False in this case.
That could be the case, but it seems weird. Try creating another write_protected=True dataset and try writing some mock data to it. This should test your theory.
Hi @Mugless Durry The write_protected field means: to write data to a write-protected data set, you need to be a member of a group that has the “datasets:owner” action for the data set. The weird behaviour you have observed could be due to someone else archiving the dataset. Try adding a check in your code to see if the data set exists first:

```python
ds_names = [ds.name for ds in c.data_sets.list(limit=-1)]
if "world_info_myname" not in ds_names:
    raise Exception("Dataset does not exist in project.")
```

Let me know if anything is unclear.
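To make the access rule above concrete, here is a minimal, offline sketch of the decision it describes. The dict shape and action string mirror the rule quoted above; this is a simplification for illustration, not the full CDF access-control model.

```python
def can_write(data_set, user_actions):
    """Return True if a user with the given dataset actions may write to data_set.

    data_set is a plain dict with a boolean "writeProtected" field; the
    field and action names are assumptions that mirror the rule above.
    """
    if data_set.get("writeProtected") and "datasets:owner" not in user_actions:
        return False
    return True
```

So a write-protected data set only rejects writers who lack the “datasets:owner” action; unprotected data sets accept writes regardless.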
Hi @Lavanya Kannan You should use “cognite-sdk”, not simply “cognite”. Meaning, either “pip install cognite-sdk” or “poetry add cognite-sdk”. Also, if you are in the root directory (where the pyproject.toml file is), you should be able to just run “poetry install”. Let me know how it goes.
@Karina Saylema You can mark this question as answered, if you believe it is. :)
Hi @Karina Saylema Your code seems correct. Could you make sure that “row_number” is unique? If you try the code below, are you encountering the same issue?

```python
batch_size = 10000
external_id = 'TEST_UPLOAD4'
start_row = 0  # Initialize a starting row number

for chunk in pd.read_csv(file_input, sep=';', chunksize=batch_size):
    chunk.drop(columns=["row_number"], inplace=True)
    chunk.index = range(start_row, start_row + len(chunk))
    client.sequences.data.insert_dataframe(chunk, external_id=external_id, dropna=False)
    start_row += batch_size
```
Hi, Code 403 means 'Insufficient rights'. In your specific case you need to add the raw:read capability to the CDF group you are part of, or ask your admin to do so if possible. That should fix the issue. :)
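If it helps with debugging, here is a small offline sketch of how you might scan a group's capability list for that grant. The capability shape ({"rawAcl": {"actions": [...], "scope": ...}}) is an assumption about the CDF capability JSON; verify it against your project's actual group definitions.

```python
def group_has_raw_read(capabilities):
    """Check a list of group capabilities for a raw:read grant.

    The "rawAcl" key and "READ" action name are assumptions about the
    CDF capability format; adjust to what your project actually returns.
    """
    for cap in capabilities:
        acl = cap.get("rawAcl")
        if acl and "READ" in acl.get("actions", []):
            return True
    return False
```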
Hi Sumit. So:

- assets → Create/Update Assets
- asset2children → Create/Update Asset hierarchy
- workorder → Create Events
- workorder2asset → Update Events (connect them to assets)
- workorder2items → not sure, maybe Relationships
- workitems → Create Events
- workitem2asset → Update Events (connect them to assets)
- timeseries2asset → Update Time Series (connect them to assets)
- timeseries → Create Time Series
- dataPoints → Create Data Points (connect them to time series)
To my knowledge, no, it is not possible to change. Someone please correct me if I am wrong. This is one of the biggest restrictions of Cognite Functions! What you can do is push your code to foundational infrastructure if it can be an internal service managed by Cognite. See documentation here.
Hi Vetle. It could be the CPU cores per function call and/or the RAM per function call.
Hi @Sumit Bondlewad This error indicates that the parent asset externalId you are referencing does not exist. To solve it, you can either:

- make sure that the parent asset exists in CDF with that externalId, or
- in your SQL, include an if/else clause to check whether the parent asset exists before assigning the parent externalId.
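If you are preparing the data in Python rather than SQL, the same check can be sketched as a small helper. The set of existing externalIds would typically be collected up front (e.g. from a listing call); here it is just a plain set so the logic stays self-contained, and the fallback of leaving the asset unparented is one possible choice, not the only one.

```python
def resolve_parent_external_id(candidate, existing_external_ids):
    """Return candidate only if that parent asset already exists.

    existing_external_ids is a set of externalIds already present in
    CDF; returning None leaves the asset unparented instead of letting
    the API reject the request with a missing-parent error.
    """
    if candidate in existing_external_ids:
        return candidate
    return None
```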
Hi :) To my knowledge, there is no provision for that. If I am wrong, someone please correct me. However, I would like to offer a suggestion on how to work around this.

My suggestion is to have two identical transformations (except for the name and id, of course), scheduled every 30 minutes but starting at different times, such that there is always at least one of them running every 15 minutes. For example, “transformation_1” would run at 15:00, 15:30, 16:00, etc., and “transformation_2” would run at 15:15, 15:45, 16:15, etc.

Note: This might lead to both of them running at the same time (overlapping), so one needs to account for that. For example, before starting a transformation, add a step to check whether the other transformation is currently active. This can be done by querying a status table or a log where the start and end times of each transformation are recorded, but that requires setting up such a table first.

Let me know what you think. :)
Without knowing the contents of the files, my suggestion would be the following:

- You could add “workorders” as “events” in CDF. Then you can use “workorders2assets” to connect them to assets via the “assetIds” field in “events”. Alternatively, you can use “relationships” (see documentation).
- For “asset2children”, you can use it to create the asset hierarchy (see structure an asset hierarchy). If it doesn’t fit, you can use “relationships” as well.
- For “timeseries” and “timeseries2assets”, similar to “events”, you can connect them to “assets” via the “assetId” field in “timeseries” (timeseries).
- Use a similar approach for “workitems2assets”, based on the data in the file: try to map “workitems” to a CDF resource (my guess would be “events”), then use the data from “workitems2assets” to map them to assets via the “assetId(s)” field or “relationships”.
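The “connect via assetIds” step above can be sketched with plain dicts, independent of the SDK. The column names ("id", "workorder_id", "asset_id") are assumptions about the CSV contents, so adjust them to your files; each workorder dict gains an "assetIds" list ready to be used when creating the events.

```python
def attach_asset_ids(workorders, workorder2asset):
    """Fold the mapping file into the workorder rows.

    workorders: list of dicts with an "id" field (assumed column name).
    workorder2asset: list of dicts with "workorder_id" and "asset_id"
    fields (also assumed). Returns workorders with an "assetIds" list
    added to each row.
    """
    links = {}
    for row in workorder2asset:
        links.setdefault(row["workorder_id"], []).append(row["asset_id"])
    for wo in workorders:
        wo["assetIds"] = links.get(wo["id"], [])
    return workorders
```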
Hi. The only required field in the target schema is “name”, which you can set to “tag” for example, or “externalId” as well, if it is a good asset identifier (i.e. asset title). Other fields in the target schema are not mandatory. However, I would recommend that you create a data set first and assign the assets to that dataSetId; see this documentation for why to use data sets. Other source fields I would include as “metadata”: {"areaId": value, "isActive": value, "updatedDate": value, "createdDate": value}. As for “categoryId”, if you can find names for each category, I would create labels (see documentation) so you can filter, group, list, etc. with them. If not, you could also include them as metadata. Does that help? :)
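Putting the mapping above together, here is a hypothetical sketch of shaping one source row into an asset payload. The source field names follow the discussion (tag, areaId, isActive, updatedDate, createdDate, categoryId), but they and the label-naming scheme are assumptions about your data; metadata values are stringified because CDF metadata is string-to-string.

```python
def build_asset_row(src, data_set_id, category_names):
    """Shape one source row into an asset payload dict.

    category_names maps categoryId -> human-readable label name; when a
    category has no known name, categoryId is kept as metadata instead,
    as suggested above. All field names are assumptions to adapt.
    """
    row = {
        "name": src["tag"],
        "externalId": src["tag"],
        "dataSetId": data_set_id,
        "metadata": {
            "areaId": str(src["areaId"]),
            "isActive": str(src["isActive"]),
            "updatedDate": str(src["updatedDate"]),
            "createdDate": str(src["createdDate"]),
        },
    }
    label = category_names.get(src["categoryId"])
    if label is not None:
        row["labels"] = [{"externalId": label}]
    else:
        row["metadata"]["categoryId"] = str(src["categoryId"])
    return row
```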
Hi Vetle. If I understand you correctly, this should do it. Let me know :) Note: The data presented in this chart is from the “cognite-learn” project.
As Håkon mentioned in the comment above, the error indicates that your data contains NaNs (Not a Number) or infinite values (+/- Inf), which are not allowed in JSON. The problem probably lies in df['region'].unique(). Before creating assets, ensure that the df['region'] column does not contain any NaNs or infinite values. Here is an example of a solution:

```python
# Handle infinite values
df = df.replace([np.inf, -np.inf], np.nan)

# Replace NaNs with a placeholder string
df['region'] = df['region'].fillna('Unknown')

# Convert 'region' values to string if they are not already
df['region'] = df['region'].astype(str)

assets_list = []
for region in df['region'].unique():
    asset = Asset(name=region, parent_id=14499569942375, data_set_id=621288550636820)
    assets_list.append(asset)

client.assets.create(assets_list)
```