Graph DB for PLM: Schema evolution
Posted by Yoann Maingon
One of the characteristics of PLM is the need for flexible data models. You can argue with that and tell me that everyone should follow standards, so you never have to build your own model or make any change. But believing that is letting Excel and other spreadsheets win: rely on a fixed schema and you will end up with an Excel-based underground PLM.
Therefore, once you acknowledge that you need a flexible data model, we can make a technology comparison to see what best supports these concepts. Today I want to compare the same data model change carried out twice: once with an SQL database and once with a graph database.
One use case that I had to handle in the past is a change of type for a collection of objects. The customer used to handle all its parts with a “part” class, which makes sense. A classification defined whether the part was mechanical, electronic or software. But at some point the management of software information became very different from the management of part information, so we were asked to take these software items and migrate them to a new “software” class.
This is not an exact process because each PLM system has its own mechanisms and tools to manipulate data. But in terms of database data manipulation, you basically need to create the new “software” table, copy the software-classified rows into it, repoint every relationship and foreign key that referenced those rows, delete them from the “part” table, and then revisit the indexes, views and queries that assumed a single “part” table.
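To make the relational side concrete, here is a minimal sketch of those steps, written in Python on top of sqlite3 so it runs standalone. The table and column names (part, software, part_usage, classification) are assumptions for illustration, not the schema of any particular PLM system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Starting point: everything lives in one "part" table, with a
# classification column and a usage (BOM) relationship table.
conn.executescript("""
    CREATE TABLE part (
        id INTEGER PRIMARY KEY,
        number TEXT,
        classification TEXT  -- 'mechanical', 'electronic', 'software'
    );
    CREATE TABLE part_usage (
        parent_id INTEGER REFERENCES part(id),
        child_id  INTEGER REFERENCES part(id)
    );
    INSERT INTO part VALUES (1, 'P-100', 'mechanical'),
                            (2, 'P-200', 'software');
    INSERT INTO part_usage VALUES (1, 2);
""")

# The migration itself.
conn.executescript("""
    -- 1. Create the new "software" table.
    CREATE TABLE software (
        id INTEGER PRIMARY KEY,
        number TEXT
    );

    -- 2. Copy the software-classified rows across.
    INSERT INTO software (id, number)
        SELECT id, number FROM part WHERE classification = 'software';

    -- 3. Relationships that pointed at a part must now point at a software
    --    row, which in practice means new link tables or extra columns.
    CREATE TABLE part_software_usage (
        parent_id   INTEGER REFERENCES part(id),
        software_id INTEGER REFERENCES software(id)
    );
    INSERT INTO part_software_usage (parent_id, software_id)
        SELECT u.parent_id, u.child_id
        FROM part_usage u JOIN part p ON p.id = u.child_id
        WHERE p.classification = 'software';
    DELETE FROM part_usage
        WHERE child_id IN (SELECT id FROM part WHERE classification = 'software');

    -- 4. Remove the migrated rows from the original table.
    DELETE FROM part WHERE classification = 'software';

    -- 5. Indexes, views and application queries built around a single
    --    "part" table all have to be revisited as well.
    CREATE INDEX idx_software_number ON software (number);
""")

print(conn.execute("SELECT * FROM software").fetchall())              # [(2, 'P-200')]
print(conn.execute("SELECT * FROM part_software_usage").fetchall())   # [(1, 2)]
```

Every arrow in that sketch is a place where application code, reports and integrations can break, which is why such migrations are usually project work rather than a configuration change.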
For the graph database, we will take Neo4j as the example.
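Here the whole change essentially boils down to re-labeling the nodes. Below is a minimal sketch using the official Neo4j Python driver; the connection details, the classification property and the index/property names are assumptions for illustration.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# The migration is essentially one statement: add the new label, drop the
# old one. Existing relationships stay attached because the nodes themselves
# do not move.
RELABEL = """
MATCH (p:Part)
WHERE p.classification = 'software'
SET p:Software
REMOVE p:Part
RETURN count(p) AS migrated
"""

# Index the new label (Neo4j 4.x syntax); the old :Part index simply no
# longer covers the relabeled nodes.
NEW_INDEX = """
CREATE INDEX software_number IF NOT EXISTS
FOR (s:Software) ON (s.part_number)
"""

with driver.session() as session:
    migrated = session.run(RELABEL).single()["migrated"]
    session.run(NEW_INDEX)
    print(migrated, "nodes migrated to the Software label")

driver.close()
```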
That’s the beauty of it. The indexing work is then to create a new index for software and update the one for parts, but that’s it. Many software vendors will claim that this is too technical and never happens. Well, talk to the integrators: if they’ve worked with a customer for multiple years, they know it happens.