
Handling Property Schema Evolution in CDF Without Breaking Existing Data Models


Hello

 

As our industrial data evolves, we’re facing challenges with schema versioning, specifically when modifying or extending the properties of existing asset or time series types in Cognite Data Fusion (CDF). For instance, adding new fields or changing data types (e.g., integer to float) in metadata often breaks downstream pipelines or applications that expect the original structure. CDF doesn’t enforce a strict schema, so how do others handle backward compatibility in real-world use?
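
On the consuming side, we’ve had some luck parsing metadata defensively instead of assuming exact types, since CDF metadata values arrive as strings anyway. A minimal Python sketch (the `flow_rate` field and the helper name are hypothetical, not from any SDK):

```python
from typing import Optional


def read_flow_rate(metadata: dict) -> Optional[float]:
    """Defensively read a numeric metadata field whose declared type
    may have widened from integer to float across schema versions."""
    raw = metadata.get("flow_rate")  # tolerate the field being absent
    if raw is None:
        return None
    try:
        # float() accepts both "12" (old integer schema) and "12.5"
        return float(raw)
    except ValueError:
        return None  # log and fall back instead of crashing the pipeline


# Works for both the v1 (int) and v2 (float) shapes:
print(read_flow_rate({"flow_rate": "12"}))    # 12.0
print(read_flow_rate({"flow_rate": "12.5"}))  # 12.5
print(read_flow_rate({}))                     # None
```

This only papers over widening changes, of course; it does nothing for renamed or removed fields.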

We’ve experimented with versioned types and custom labels to signal changes, but this quickly becomes hard to manage at scale. Some of our consumers rely on fixed field names, and introducing breaking changes results in unexpected behavior in the SDKs or Fusion apps.
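
For context, this is roughly how we’ve been signalling versions with labels. A sketch assuming the cognite-sdk for Python, with a pre-created `schema-v2` label definition; the external IDs and metadata fields are made up:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset, Label

# Assumes client configuration via environment / global config
client = CogniteClient()

# Tag the asset with a label signalling which metadata schema it follows.
# The "schema-v2" label definition must already exist in the project.
asset = Asset(
    external_id="pump-42",      # hypothetical external ID
    name="Pump 42",
    labels=[Label(external_id="schema-v2")],
    metadata={
        "schema_version": "2",  # duplicated in metadata for easy filtering
        "flow_rate": "12.5",    # v2: widened from integer to float
    },
)
client.assets.create(asset)
```

The pain point is that every consumer now has to branch on the label, which is exactly what stops scaling.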

Is there a recommended practice for managing schema evolution that preserves data integrity while enabling flexibility for growth?

Looking to hear how other developers are addressing this challenge: do you version your CDF data models explicitly? Use separate types? Or rely on CI/CD validation checks, like the rough sketch below? Let’s discuss a scalable approach to schema governance in Cognite’s flexible but loosely structured data world.
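
On the CI/CD angle, here is the kind of check we’ve been experimenting with: a plain-Python sketch (the schemas and field names are hypothetical; in practice they’d be loaded from versioned files in the repo) that diffs two versions of a metadata schema and fails the build on breaking changes, while allowing additive or widening ones:

```python
import sys

# Hypothetical schema descriptions: field name -> declared type.
OLD_SCHEMA = {"flow_rate": "integer", "serial_number": "string"}
NEW_SCHEMA = {"flow_rate": "float", "serial_number": "string", "vendor": "string"}

# Type changes we treat as safe widenings; everything else is breaking.
SAFE_WIDENINGS = {("integer", "float")}


def breaking_changes(old: dict, new: dict) -> list:
    """Return a list of human-readable breaking changes between schemas."""
    problems = []
    for field, old_type in old.items():
        if field not in new:
            problems.append(f"field removed: {field}")
        elif new[field] != old_type and (old_type, new[field]) not in SAFE_WIDENINGS:
            problems.append(f"type changed: {field} {old_type} -> {new[field]}")
    # Fields only present in `new` are additive, hence allowed.
    return problems


if __name__ == "__main__":
    issues = breaking_changes(OLD_SCHEMA, NEW_SCHEMA)
    for issue in issues:
        print(f"BREAKING: {issue}")
    sys.exit(1 if issues else 0)  # non-zero exit fails the CI job
```

It catches removals and narrowings before they ship, but it obviously can’t help consumers already coupled to the old shape.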


Thank you!
