Part Proliferation/Data Cleaning
Part proliferation is an ongoing problem for many of today’s large manufacturing companies. With hundreds of thousands or even millions of parts to manage, companies have struggled, and will continue to struggle, to control the quality and efficacy of their product data. Combined with design practices that encourage duplicating existing parts to save time, companies face an uphill battle against the unnecessary costs of part proliferation.
A study by the Parts Standardization and Management Committee found that a single part can cost $20,000 over its lifetime. Additionally, according to the Aberdeen Group, 30-40% of a typical manufacturer’s part data consists of duplicates or acceptable substitutes. For a company with 1 million parts, the burden of maintaining unidentified, proliferated parts may reach $200 million. A manufacturer benefits from clean data: costs fall, manageability improves, and the approach to product data becomes streamlined.
Source: Parts Standardization and Management Committee
The traditional approach to a part proliferation and data cleansing effort is difficult and time-consuming.
First, the data must be harnessed. Because data is spread across different products, folders, PLM systems, and even company subsidiaries, time and resources must be dedicated to pulling the complete data set together and making it accessible to all parties involved.
Second, the data must be analyzed. Each part’s shape and function must be understood in comparison to other similar parts, but these details are not readily available from a file name or location. Thorough research is required to understand each part, and that research must then be referenced when evaluating similar parts.
Third, a company must enable effective decision making to determine which of the similar parts is preferable. This involves weighing additional factors, such as which part offers better cost efficiency, reliability, quality, manufacturability, and more.
Even after valuable time and resources have been spent to properly identify proliferated parts and eliminate unnecessary data, a fourth step remains: implementing new tools, standards, and training to avoid having to undergo this arduous task again.
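The first three steps above can be sketched as a small pipeline. This is a purely illustrative toy, assuming a hypothetical set of part records and a simplistic cheapest-part rule; it is not how any particular PLM tool implements the workflow.

```python
# Toy sketch of the cleansing workflow: harness, analyze, decide.
# All part IDs, sources, shape keys, and the cost-only scoring rule
# are hypothetical illustrations.
from collections import defaultdict

# Step 1: harness -- parts pulled together from several sources.
parts = [
    {"id": "A-100", "source": "PLM-EU", "shape_key": "bracket-7f3", "unit_cost": 4.20},
    {"id": "B-205", "source": "PLM-US", "shape_key": "bracket-7f3", "unit_cost": 3.90},
    {"id": "C-310", "source": "PLM-US", "shape_key": "shaft-22a", "unit_cost": 9.10},
]

# Step 2: analyze -- group parts that share the same shape signature.
groups = defaultdict(list)
for part in parts:
    groups[part["shape_key"]].append(part)

# Step 3: decide -- within each group, keep the preferred part.
# Here "preferred" is simply the cheapest; a real effort would also
# weigh reliability, quality, and manufacturability.
keep, retire = [], []
for members in groups.values():
    members.sort(key=lambda p: p["unit_cost"])
    keep.append(members[0])
    retire.extend(members[1:])

print([p["id"] for p in keep])    # parts to standardize on
print([p["id"] for p in retire])  # duplicate candidates for review
```

In practice the grouping key would come from a geometric similarity engine rather than a precomputed label, but the decision structure is the same.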
The one factor that consistently and accurately identifies a part as a duplicate is its shape. Rather than embark on traditional, tedious, manual methods for identifying and removing proliferated parts, advances in shape-based search technology can dramatically reduce the time and effort needed to clean data.
Using our powerful shape similarity algorithm, Bingo! determines both the duplicated and similarly shaped objects for each individual part. The location and concentration of proliferated parts across different products, folders, and PLM systems are presented in powerful, visual, and detailed reports. In addition to shape-based matches, Bingo! provides tools to compare part attributes and identify where parts are used in assemblies, helping a company make informed decisions about eliminating proliferated parts. Furthermore, tools are available to actively prevent part proliferation from recurring after a data cleansing effort has been completed.
Shape-based search is nameless: spelling, language, and file-naming rules do not matter. The shape search technology of Bingo! is simply the most ubiquitous, accurate, and intuitive search key for product data. Bingo! will enhance your data cleansing initiative by enabling shape-based part and product similarity analysis and by linking geometric data with attribute data from non-CAD systems, providing a central repository for part proliferation decisions.
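The "nameless" idea can be illustrated with a deliberately minimal shape key: two files with unrelated names match because their geometry matches. Real shape-search engines use far richer 3-D descriptors; the bounding-box signature below is a hypothetical stand-in.

```python
# Minimal sketch of a name-independent shape key. The signature and
# tolerance scheme are illustrative only, not Bingo!'s algorithm.

def shape_signature(dims_mm, volume_mm3):
    """Orientation-independent signature: sorted dimensions plus volume,
    rounded to absorb small modeling noise."""
    return tuple(sorted(round(d, 1) for d in dims_mm)) + (round(volume_mm3, 1),)

# The same physical part saved under two unrelated file names,
# in different orientations and with tiny numerical noise.
sig_a = shape_signature((40.00, 12.02, 25.00), 8100.00)  # "bracket_left_v2.prt"
sig_b = shape_signature((12.00, 25.01, 40.04), 8100.04)  # "halterung_final.prt"

print(sig_a == sig_b)  # True -- a match despite completely different names
```

A signature this crude would of course produce false matches on real data; the point is only that the key is derived from geometry, so naming conventions and language never enter into it.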
Bingo! capabilities include:
- Identifying duplicate and similar parts.
- Automatically organizing and classifying part data into useful catalogs using shape-based technology.
- Providing powerful and intuitive shape search capabilities.
- Analyzing and reporting on the data to enable effective decision making.
- Integrating with external attribute-based systems for additional filtering and consolidation of data in a single, similarity-based system.
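The last capability, combining shape matches with external attribute data, might look like the following sketch. The ERP records, field names, and filter rule are all hypothetical illustrations.

```python
# Hypothetical sketch: narrow a set of shape-similar parts using
# attribute data from an external (non-CAD) system before review.

shape_matches = ["P-001", "P-002", "P-003"]  # flagged as similar by shape

# Attribute records exported from a hypothetical ERP system.
erp = {
    "P-001": {"material": "steel", "status": "active"},
    "P-002": {"material": "steel", "status": "obsolete"},
    "P-003": {"material": "aluminum", "status": "active"},
}

# Keep only active, same-material candidates for consolidation review.
candidates = [
    pid for pid in shape_matches
    if erp[pid]["status"] == "active" and erp[pid]["material"] == "steel"
]
print(candidates)  # ['P-001']
```

Filtering like this is what turns a raw similarity report into a short, actionable consolidation list.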