Doing the hands-on for the Data Engineer Basics course, "Learn to Use the Cognite Python SDK."

The instructions said, "Make sure that the data set is write-protected," so my Step 1 code created the data set with write_protected=True. I then ran the second block of commands without error. That was YESTERDAY.

TODAY, the same second block of commands threw an error:

"CogniteAPIError: Resource not found. This may also be due to insufficient access rights. | code: 403 | X-Request-ID: f9083e61-8310-9462-9fb6-69f8da1299f3"

So, after trying this and that, I changed the generated data set to write_protected=False in the first block of code. The second block of commands now works, just like YESTERDAY.

Q1. Why did write_protected=True work YESTERDAY but not TODAY, giving error 403?
Q2. Why does write_protected=False work TODAY, but not write_protected=True?

Where does this inconsistency come from? Someone? Thank you!

Douglas
Hi everyone, I'm loading sequence rows from a file that has 11 million rows, and I would like to load it in batches, where each batch is a DataFrame. As I load each batch, the rows the sequence already had are deleted, leaving only the last batch. How can I load each batch without deleting the previous ones? This is the code I use to read the file in batches and insert rows into the sequence (it assumes the sequence has already been created).

Regards,
Karina Saylema

@Aditya Kotiyal @HanishSharma @Jason Dressel @Jairo Salaya @Liliana Sierra
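A likely cause (an assumption, since the original code was not preserved) is that every batch reuses the same row numbers, so each insert overwrites the rows of the previous batch. A minimal sketch of keeping a global, ever-increasing row number across chunks, using a small in-memory CSV and a plain list in place of the actual sequence; the column names and chunk size are made up for the example:

```python
import pandas as pd
from io import StringIO

# Hypothetical stand-in for the 11-million-row file.
csv_data = StringIO("col_a,col_b\n" + "\n".join(f"{i},{i * 2}" for i in range(10)))

chunk_size = 4
row_offset = 0   # global row number across ALL batches
all_rows = []    # stands in for the rows accumulated in the sequence

for chunk in pd.read_csv(csv_data, chunksize=chunk_size):
    # Give every row a unique, monotonically increasing row number so a new
    # batch never reuses (and thus overwrites) the numbers of a previous batch.
    rows = [(row_offset + i, values.tolist())
            for i, values in enumerate(chunk.to_numpy())]
    row_offset += len(chunk)
    all_rows.extend(rows)
    # With the Cognite SDK the insert per batch would be something like
    # (sketch, names assumed):
    # client.sequences.data.insert(rows=rows,
    #                              column_external_ids=["col_a", "col_b"],
    #                              external_id="my_sequence")

print(len(all_rows), all_rows[0][0], all_rows[-1][0])  # 10 rows, numbered 0..9
```

The key point is that `row_offset` survives across iterations, so batch 2 starts at row 4, batch 3 at row 8, and so on, rather than every batch starting again at row 0.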
I am doing an asset transformation. My source-to-target column mapping is:

Source → Target
key → categoryId
sourceDb → source
parentExternalId → parentExternalId
updatedDate → createdDate
externalId → externalId
isCriticalLine → description
description → tag
areaId → isActive

I don't have all the columns needed to map to the target schema. These are the remaining target columns: parentId, metadata, name, dataSetId, labels. What should I do if a target column has no mapping column, and where should I add the remaining additional columns from the source?
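One common pattern (a sketch, not the exact transformation syntax) is to rename the source columns you do have, fill unmapped target columns with a default such as null, and pack leftover source columns into metadata. A minimal pandas illustration, with a made-up source row and the mapping reconstructed from the table above:

```python
import pandas as pd

# Hypothetical source row; the column names follow the mapping in the post.
source = pd.DataFrame([{
    "key": "K1", "sourceDb": "db1", "parentExternalId": "P1",
    "updatedDate": "2024-01-01", "externalId": "A1",
    "isCriticalLine": True, "description": "pump", "areaId": 10,
}])

# Source column -> target column mapping from the post.
mapping = {
    "key": "categoryId", "sourceDb": "source",
    "parentExternalId": "parentExternalId", "updatedDate": "createdDate",
    "externalId": "externalId", "isCriticalLine": "description",
    "description": "tag", "areaId": "isActive",
}

target = source.rename(columns=mapping)

# Target columns with no source counterpart: fill with a default (None here),
# or derive them in the transformation logic.
for col in ["parentId", "metadata", "name", "dataSetId", "labels"]:
    if col not in target.columns:
        target[col] = None

print(sorted(target.columns))
```

In an actual CDF SQL transformation the same idea applies: select and alias the columns you have, and emit literals (e.g. null or a fixed dataSetId) for the target columns with no source equivalent.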
Hi there, we have multiple CDF transformations running on a 15-minute schedule. Sometimes these jobs take more than 15 minutes, and the next scheduled run fails with the error 'A job already runs for this transform'. Is there any provision in CDF to not start a job for a transformation while it is already running?
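If no built-in option fits, one workaround (a generic sketch, not a CDF feature) is to wrap the trigger in a skip-if-running guard: a non-blocking lock is taken before starting a run, and a trigger that arrives while a run is in progress is skipped rather than failing. The names below are all made up for the illustration:

```python
import threading
import time

_running = threading.Lock()
results = []

def run_transformation(job_id: int, duration: float) -> bool:
    """Return True if the job ran, False if it was skipped."""
    if not _running.acquire(blocking=False):   # a run is already in progress
        results.append((job_id, "skipped"))
        return False
    try:
        time.sleep(duration)                   # stands in for the real job
        results.append((job_id, "ran"))
        return True
    finally:
        _running.release()

t = threading.Thread(target=run_transformation, args=(1, 0.2))
t.start()
time.sleep(0.05)
run_transformation(2, 0.0)   # fires while job 1 is still running -> skipped
t.join()
print(results)               # [(2, 'skipped'), (1, 'ran')]
```

A scheduler wrapper around the Transformations API could apply the same guard by checking the latest job's status before triggering a new run, so an overrunning job simply delays the next run instead of producing a failure.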
Hi, is it possible to display a stepwise connection between datapoints in Charts rather than the standard linear interpolation? Thanks!
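For clarity on what the two drawing modes mean, here is a small illustration (made-up datapoints): linear interpolation draws a sloped line between datapoints, while stepwise ("step-after", i.e. previous-value) interpolation holds the last observed value until the next datapoint arrives.

```python
# (time, value) pairs, made up for the example.
points = [(0.0, 10.0), (5.0, 20.0)]

def linear(t, pts):
    # Straight line between the two surrounding datapoints.
    (t0, v0), (t1, v1) = pts
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def step_after(t, pts):
    # Hold the most recent value seen at or before time t.
    value = pts[0][1]
    for ti, vi in pts:
        if ti <= t:
            value = vi
    return value

print(linear(2.5, points))      # 15.0 -> halfway up the slope
print(step_after(2.5, points))  # 10.0 -> still holding the previous value
```

Step-after rendering is the natural choice for state-like signals (e.g. a setpoint or on/off status), where the value really does stay constant between updates.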