No, that is not a supported use case. If you need different values per extractor machine locally, you could use environment variables in the config stored in the extraction pipeline.
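As a sketch (the `source.host` key and `LOCAL_SOURCE_HOST` variable name are hypothetical), the remotely stored config could reference an environment variable that each extractor machine sets to its own value:

```yaml
# Hypothetical fragment: each machine exports LOCAL_SOURCE_HOST before
# starting the extractor, so the shared remote config resolves differently
# per host.
source:
  host: ${LOCAL_SOURCE_HOST}
```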
What are you trying to achieve here? You cannot have local and remote settings in the same config. If you have a local config with the extractor, it will only read the local config file. If you have `type: remote` with the extractor, the local file should only contain details for connecting to CDF (the `cognite` section). The extractor will then connect to CDF and read all config from the extraction pipeline.
It’s the same as with the Cognite-released extractors. You have a local file with `type: remote` which refers to the extraction pipeline that contains the config. Sample local config:

```yaml
type: remote

cognite:
  # Read these from environment variables
  host: ${COGNITE_BASE_URL}
  project: ${COGNITE_PROJECT}
  idp-authentication:
    token-url: ${COGNITE_TOKEN_URL}
    client-id: ${COGNITE_CLIENT_ID}
    secret: ${COGNITE_CLIENT_SECRET}
    scopes:
      - ${COGNITE_BASE_URL}/.default
  extraction-pipeline:
    external-id: ${COGNITE_EXTRACTION_PIPELINE}
```
Hi Chris - we have acknowledged this as a bug and our engineering team is working on a fix. Will keep you posted.
The format changed a little bit in v3 of the extractor:

```yaml
# Required when running the extractor in continuous mode.
# There are two schedule types supported:
# - `cron`: Regular cron expression. Check [cronguru](https://crontab.guru/) for further info
# - `interval`: Interval-based schedule. For example, `1h` will repeat the query every hour
#   and `5m` will repeat the query every 5 minutes.
# schedule:
#   type: interval
#   expression: 5m
```

You could try:

```yaml
schedule:
  type: cron
  expression: "0 * * * *"
```
In the attached config file you have commented out the schedule section:

```yaml
#schedule: "* * * * *"
```

Without a schedule the extractor will only run once and exit.
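If the intent is to run it continuously, a minimal change is to uncomment that line (`* * * * *` is a cron expression meaning every minute):

```yaml
schedule: "* * * * *"
```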
We don’t have any plans to deploy Jupyter or Streamlit outside Pyodide in the Cognite platform. What are you trying to do that does not work in Pyodide?
Microsoft is in dialogue with Grafana, but there is nothing to test now. We are following up with Microsoft regularly, and what we have currently been told is Q1 2024, but that is just an estimate and not a commitment.
Jupyter notebooks in CDF run inside your browser and are based on Pyodide. Only a subset of packages works with Pyodide, and psutil seems like something that wouldn’t work there. The workaround is to install Jupyter / Python locally on your computer. CDF Functions run in a normal Python environment, so that’s different.
If you initialise the client, Sauth will be called. The name is just used for reporting purposes in the API. If you initialise only at the start, Sauth will only be called when the token needs a refresh (after one hour). The error you pasted above is not related to Sauth; it seems to have been a small hiccup on the API side. You could try to tune the retry settings in the SDK using `client.config.GlobalConfig`: https://cognite-sdk-python.readthedocs-hosted.com/en/latest/cognite_client.html#cognite.client.config.GlobalConfig.max_retries_connect
In RAW, all column values are stored as strings.
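This means numeric values come back as strings and must be cast on the client side. A minimal sketch with a made-up row (the column names are hypothetical):

```python
# Hypothetical row as returned from a RAW table: every value is a string.
row = {"temperature": "21.5", "count": "3"}

# Cast explicitly to the numeric types you need.
temperature = float(row["temperature"])
count = int(row["count"])

print(temperature, count)  # 21.5 3
```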
When is this happening? This is Sauth-specific; it seems Sauth is doing rate limiting. We haven’t experienced this error with Azure AD.
Hi - is this happening with Azure AD or Sauth?
It’s not implemented in the Python SDK yet; there has not been a need for it (most customers only have 1-2 projects). You can, however, reach it from the Python SDK with the generic get function against the REST API:

```python
client.get("/api/v1/projects")
```
Hi, this isn’t possible since there are no fixed columns in a RAW table. You can have different columns per row.
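To illustrate, here is a hypothetical pair of rows from a single table (the row keys and column names are made up), where the set of columns differs per row:

```python
# Hypothetical rows from one RAW table; the columns vary per row.
rows = {
    "pump-a": {"name": "Pump A", "pressure": "2.1"},
    "pump-b": {"name": "Pump B", "location": "deck 3"},  # no "pressure"
}

# The union of columns across rows is therefore not fixed.
all_columns = set()
for columns in rows.values():
    all_columns.update(columns)

print(sorted(all_columns))  # ['location', 'name', 'pressure']
```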
Can you also share the code and your requirements.txt?
This setup is not supported by our pipeline action. It will build the zip file and create the function from the same codebase. What’s the value of doing it with a single zip file? The contents of the zip file will be the same when run on the same source code.
We haven’t tested that Azure application. Is the Cognite data source available there or not?
https://grafana.com/pricing/
Azure Managed Grafana does not include the Cognite Data Fusion data source. It only has a limited set of data source plugins which have been whitelisted by Microsoft. We have requested to get the CDF data source whitelisted, but Microsoft has not yet done so. If you want to try out Grafana, you could also try the free tier at grafana.com, which many of our customers are using.
Another one will be added for Power BI (this is a Microsoft app) once it is approved. See the documentation link provided above. It needs to be approved by an accenture.com AAD admin.
You have done it correctly, but as you can see from the popup, Power BI has to be approved by the Azure AD admin. You can contact @niladrimondal for more information on how to get that done internally in Accenture.
When using synthetic time series you can do a mapping from string to number to get out results, which might help your use case. Map the strings OPEN → 1 and CLOSED → 0:

```
map(TS{externalId='stringstate'}, ['OPEN', 'CLOSED'], [1, 0], -1)
```

Doc: https://docs.cognite.com/dev/concepts/resource_types/synthetic_timeseries/#convert-string-time-series-to-doubles
Looking at your scopes parameter, it’s a bit wrong; it should only contain .default and not user_impersonation:

```python
scopes = [f"{cognite_base_url}/.default"]
```
I have created https://github.com/cogniteAIR/accenture-cpg-demo-2, pushed the code to that repo, and invited you as admin. You need to invite rdhande1992 and ask him to add the same secrets to this new repo.