I am running some ETL jobs in Azure Databricks and have successfully used Cognite's Spark Data Source to read and write time series, data points, etc. to and from CDF. I know that Databricks itself is a cloud platform, but during development it would be useful to run some or all of the jobs locally. Is it still possible to test-run these Spark jobs locally? The configuration does not seem trivial. I experimented with PySpark and got it running on my Mac, but I could not create a connection to "cognite.spark.v1" to read or write data. Do you know if this is possible? If not, what would you suggest?
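One way to approach this is to start a `local[*]` Spark session and let Spark pull the Cognite connector jar from Maven via `spark.jars.packages`, then read with `format("cognite.spark.v1")` exactly as in Databricks. The sketch below assumes OIDC client-credentials authentication; the Maven coordinates, connector version, project name, and all credential values are placeholders that must be replaced with your own, and the exact artifact/version should be checked against the current cdf-spark-datasource release.

```python
# Sketch: running a CDF read locally with PySpark and the Cognite Spark
# Data Source. All credential values and the Maven version below are
# hypothetical placeholders -- substitute your own.

# Options passed to the connector. Option names follow the pattern used
# by the Cognite Spark Data Source; verify them against its documentation.
CDF_OPTIONS = {
    "type": "timeseries",          # resource type: "timeseries", "datapoints", ...
    "project": "my-cdf-project",   # hypothetical CDF project name
    "tokenUri": "https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token",
    "clientId": "<client-id>",
    "clientSecret": "<client-secret>",
    "scopes": "https://api.cognitedata.com/.default",
}


def build_local_session():
    """Create a local Spark session that fetches the connector from Maven."""
    # Deferred import so the module loads even without pyspark installed;
    # locally you need `pip install pyspark` and a Java runtime.
    from pyspark.sql import SparkSession

    return (
        SparkSession.builder
        .master("local[*]")        # run entirely on this machine
        .appName("cdf-local-dev")
        # Assumed Maven coordinates; pin the version you actually use.
        .config("spark.jars.packages",
                "com.cognite.spark.datasource:cdf-spark-datasource_2.12:<version>")
        .getOrCreate()
    )


def read_cdf(spark):
    """Return a DataFrame over the configured CDF resource type."""
    return spark.read.format("cognite.spark.v1").options(**CDF_OPTIONS).load()


# Typical dev loop (needs network access and valid CDF credentials):
#   spark = build_local_session()
#   read_cdf(spark).show(5)
```

The first run downloads the connector and its dependencies into the local Ivy cache, so it is slow; subsequent runs reuse the cache. The same pattern works for writes by switching to `df.write.format("cognite.spark.v1")` with matching options.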