The Role & Impact of Automation in an Effective Quality Management Program
Sally B. Keil, President, AcquiData, Inc.
The hallmarks of a good quality management program are many, but there is only one ultimate goal: to ensure that each time a consumer buys your product, they have exactly the same experience with that brand. Quality means consistency and reproducibility.
An effective quality management program should therefore enable the production plant to consistently produce the same product with the same characteristics, year over year and irrespective of the location of production. The most important first step, then, is to identify those parameters that affect the end product and can be controlled, and to enable the accurate measurement of those parameters in a repeatable, consistent way. The greater the number of these identified parameters, the tighter the quality control can be.
What role can automation play to help quality departments be truly effective? Today, clipboards and data entry sheets simply won’t do anymore if you intend to have an effective quality program. Why is that? Let’s begin by looking at some of the hallmarks of a good quality management program and then see how automation can directly impact each of these characteristics.
Hallmark #1. A good quality management program understands the key quality parameters of its process and has established metrics for each.
As discussed at the outset, the first step in building an effective quality management program is to understand the key parameters that affect final product quality.
Determine what needs to be measured, with what frequency, how best to measure it, and what metrics, or yardsticks, you will use to evaluate those measurements against a standard.
Be sure to distinguish production parameters that affect cost or other aspects of the business from those that affect quality. By doing so you will establish ‘information boundaries’ between production information and product quality information. For example, capturing the name of a raw material supplier is a potential quality parameter, as some suppliers deliver ingredients of more consistent quality than others. But capturing the weight of a specific shipment received by the plant is not a quality parameter: important for cost accounting, perhaps, but not for quality. Working to reduce change-in-process times is an important profitability measurement but not necessarily an important quality one.
Hallmark #2. A good quality management program has standardized sampling procedures, standardized testing procedures and has fixed the frequency of that testing.
If you are producing the same product across multiple locations, establishing an instrumentation standard that dictates common measurement principles is a good first step. Round robin testing – where a known sample is tested by the different labs/locations and the results are compared – will thereafter ensure repeatability across those labs/locations. AACC or other governing bodies’ test methods should be adopted to ensure standardization of sample preparation, instrument calibration, instrument setup, and the test procedures followed. Again, round robins come into play here.
Hallmark #3. A good quality management program strives for absolutely accurate data.
There is no point in spending all of the time and money to hire lab techs, buy instruments, and do all of the work of capturing measurements if the accuracy of those measurements can be challenged. The data must be demonstrably accurate, or it will forever be suspect.
Hallmark #4. A good quality management program captures the different types of quality data throughout the entire production process.
A product passes through many different stages before it becomes a final product! For example, incoming raw material quality affects finished product quality. It should therefore be considered a key quality parameter that can be controlled: we can choose different suppliers, we can buy different materials, we can measure what we purchase before committing it to the production process, and so on. By performing quality checks on raw materials, our quality management program appropriately ‘begins at the beginning’.
Quality data on incoming receiving would probably be tagged by supplier name, date received, item name or number, and perhaps the purchase order number. We may take a number of measurements on the received materials, covering a number of different quality parameters.
Once we begin the production process, we might capture measurements such as temperature/time, pressure/time, color/time, etc. In-line, or in-process, measurements are typically single-point values identified by the location of the sensor (production unit #1, for example) and the date/time the point is acquired.
As the product passes through the manufacturing process we ultimately come to finished product testing, at which point we will know the brand name and perhaps a lot number or customer order number.
As can be seen, the labels we apply to the various key quality parameters change across the process. Some are supplier name/date received, some are date/time-stamped single-point readings, and others are multiple measurements per unit of finished product.
Hallmark #5. A good quality management program recognizes the importance of data security.
Product quality information is highly proprietary: it should not enjoy easy distribution! It is important to the optimization of the production process; therefore it should be tamper-proof. It represents the product of the company and therefore is truly a corporate asset: as such it should be audit-protected and secured. The capability to make decisions based on the quality data presented is only as good as the protection afforded that data from unauthorized tampering.
Hallmark #6. A good quality management program can analyze and compile quality data into timely, meaningful reports for production management.
Measurements are data. Production supervisors need information to make decisions. Management needs to see trends and relationships, accumulations over time…not sheets and sheets of numbers. A good quality management program can generate meaningful information out of all of the data it collects.
How do computers come into play?
What tools can computerization bring to a quality program? How can we best employ PCs and PC-based software to make our quality management program truly effective?
Everyone is familiar with Excel and, to a lesser extent, Access. These two PC-based tools will carry a small quality department forward from pencil and paper alone to some measure of computerization very nicely. However, care should be taken to recognize the point in time when the company’s needs are no longer being properly served by these programs alone. Oftentimes these programs are used to manage the quality department simply because they are so readily available and there is usually a ‘technically savvy’ worker or summer intern who can build some Excel macros or set up a few data tables and reports. The challenge, however, is to avoid shortchanging your quality management program by demanding that its automation requirements be met by the computerization Excel and/or Access make available. In other words, don’t fit the needs to the tool at hand…look at it the other way around.
To define this more fully, let’s take each of the hallmarks that have been identified and see what they need in terms of automation.
Hallmark #1. A good quality management program understands the key quality parameters of its process and has established metrics for each.
The first step in building an effective quality management program – identifying the key quality parameters that can be controlled – is a step that only the company can take. You have to know your process first and foremost! PCs can’t help you if you don’t possess a fundamental understanding of how you make your product!
In setting the metrics for those parameters, however, PC software can be a tremendous aid. Quality specifications are best stored in a computer-based data table rather than on a sheet of paper in a 3-ring binder or in a spreadsheet. Why?
First and foremost is the support for multi-user access to a single set of controlled specifications. Many different people want to see the specs at various points in time; no one wants to have to walk to the location of the 3-ring binder or the PC that ‘owns’ the Excel spreadsheet. Having multiple people make their own copies for their own ease of use rapidly becomes a control nightmare. A PC database solves the access problem very effectively, allows for easy updates that everyone will see immediately, and also permits controlled access: you may not want just anyone to be able to bring up the spec tables or change them in any way. Password authorities are part and parcel of virtually any well-written piece of software, and nowhere does this functionality play a more important role than in the security of your product quality standards.
We want these metrics to be available to us as data is being collected: real-time analysis of actual measurements against their targets. We want to bring the appropriate specifications to the data entry program, and computer programs are great at bringing multiple data sources together when required by the application. We want our specs to appear on our finished product quality reports so we can see how close to our targets we have come. We may also want to use those specs as filters, to produce reports of only our out-of-spec production, for exception-type reports.
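As a concrete illustration, here is a minimal sketch of what that real-time comparison might look like. The parameter name and spec limits are hypothetical; the point is simply that a measurement is judged against its stored specification at the moment of entry, not at report time.

```python
# Minimal sketch: judging a measurement against stored spec limits at
# data entry time. Parameter names and limits are illustrative only.
from dataclasses import dataclass

@dataclass
class Spec:
    parameter: str      # e.g. "moisture %"
    target: float       # desired value
    low_limit: float    # lower spec limit
    high_limit: float   # upper spec limit

def check_measurement(value: float, spec: Spec) -> str:
    """Compare a measurement to its spec as it is entered."""
    if value < spec.low_limit:
        return f"{spec.parameter}: {value} BELOW spec limit {spec.low_limit}"
    if value > spec.high_limit:
        return f"{spec.parameter}: {value} ABOVE spec limit {spec.high_limit}"
    return f"{spec.parameter}: {value} within spec (target {spec.target})"

moisture = Spec("moisture %", target=12.0, low_limit=11.5, high_limit=12.5)
print(check_measurement(12.7, moisture))   # flagged immediately
```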
Hallmark #2. The importance of standardized sampling procedures, standardized testing procedures, and a standardized frequency of testing.
One very important benefit computerization can bring is common access to procedures and method instructions, with a single, central source for editing/updating. Central repositories of AACC test methods (they are available on CD-ROM) and/or company-specific test methods may be maintained on a central server; remote access to the same documents can be made available to multiple locations, and testers may consult them at the time of testing if they are unsure of how to perform a certain test. Updates, when received, are therefore automatically ‘distributed’ to all. Again, avoid paper sheets in 3-ring binders whenever possible! As with the management of product specifications, updating test methods can be a nightmare if they are maintained on paper.
Expanding on this ‘nightmare’ theme, one key point needs to be made: using PC software alone doesn’t necessarily solve this multi-location problem. You can have an Excel spreadsheet installed on ‘Mary’s PC’ because Mary works in the lab and does the data collection there, so everyone has to know that the data and/or reports they may want are on her machine. If Joe wants a copy too, he may copy the spreadsheet and then, using his own Excel program, proceed to change or alter the data as he feels he needs to. Or he makes a copy, but Mary later spots some entry errors and corrects her spreadsheet, which does nothing for Joe’s now-incorrect report…and we begin to see the proliferation problem rearing its ugly head. Therefore, the current technology of web-based software becomes an important consideration, to eliminate ‘islands’ of potentially different data that may evolve over time due to changes/updates.
In referring to ‘web-based’ I do not mean Internet-based; I am referring to software that runs in the PC’s Internet Explorer browser environment but that accesses another PC within the production plant, over the company’s internal network. It is intranet, not Internet. By avoiding standard PC client applications that are installed on one PC – which require that you know where to go to run the program and/or update its information – ongoing use and maintenance of a web-based system can be a breeze.
To again guide testers to the correct testing procedures, data entry screens can be created that present exactly what a tester is to test, perhaps in the order it is to be tested (if there is one). In other words, PC-based data entry screens can present a different ‘face’ to the tester depending on where in the process they are testing, what type of sample they are testing, and so on. In effect, you can use a software program to ‘select’ the appropriate data entry form for the tester and guide them correctly, as sketched below. The manual alternative is to hope the tester grabs the right clipboard holding the right form with the right math calculations for that particular point.
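A minimal sketch of that form selection follows. The sample points and field lists are hypothetical; the idea is that software, not the tester, picks the ‘right clipboard’.

```python
# Sketch: selecting the data entry form by sample point so the tester
# sees only the tests that apply there. Field lists are hypothetical.
ENTRY_FORMS = {
    "receiving":         ["supplier", "item_no", "moisture", "protein"],
    "production_unit_1": ["temperature", "pressure", "color"],
    "finished_product":  ["lot_no", "weight", "moisture", "taste_score"],
}

def form_for(sample_point: str) -> list[str]:
    """Return the ordered list of fields the tester should fill in."""
    try:
        return ENTRY_FORMS[sample_point]
    except KeyError:
        raise ValueError(f"No entry form defined for '{sample_point}'")

print(form_for("receiving"))  # the right form, chosen by software
```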
Testing frequencies can be monitored by a software program; if data is not collected the program will know this and management can be notified. Computers date and time stamp everything…one of their tremendous benefits to management control of the quality department…so you can be sure when the testing is done. With a manual data entry worksheet, someone can fill in the columns whenever they choose, whether or not testing has actually taken place and whether or not the numbers written down are the actual numbers displayed by the instrument, determined by the calculation, or tasted/observed by the technician.
With a computer program, ‘to do’ sheets can be prepared to guide testers to what they need to do. In other words, PC software offers the opportunity to be proactive…to guide. In some cases testing frequency can be adjusted automatically if quality issues require an increase over standard frequencies. Computer programs can easily follow rules that you set to adjust frequency upwards or downwards depending on the status of the production process, as in the sketch below.
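Here is a minimal sketch of such a rule. The intervals and the "two recent out-of-spec results" trigger are assumed policy values, not prescriptions; each plant would set its own.

```python
# Sketch: monitoring testing frequency and tightening it after
# out-of-spec results. Intervals and the trigger rule are assumptions.
from datetime import datetime, timedelta

STANDARD_INTERVAL = timedelta(hours=4)   # assumed normal schedule
TIGHTENED_INTERVAL = timedelta(hours=1)  # assumed escalated schedule

def next_test_due(last_test: datetime, recent_out_of_spec: int) -> datetime:
    """Shorten the interval when recent results have been out of spec."""
    interval = TIGHTENED_INTERVAL if recent_out_of_spec >= 2 else STANDARD_INTERVAL
    return last_test + interval

def is_overdue(last_test: datetime, recent_out_of_spec: int) -> bool:
    """True if testing has lapsed -- a condition management can be told about."""
    return datetime.now() > next_test_due(last_test, recent_out_of_spec)

last = datetime.now() - timedelta(hours=2)
print(is_overdue(last, recent_out_of_spec=3))  # True: tightened schedule missed
```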
The number of measurements to be taken can be controlled and standardized, which is very important if you want to statistically analyze different data sets. It’s important that you have some commonality among your data subgroup sizes; by ensuring at the outset that testers capture a sufficient quantity of data each and every time, a project-driven analysis of quality data doesn’t have to begin with a data collection phase: you can use the data that is being collected every day.
Any math calculations or ‘look-up chart’ type analyses can be standardized by a software program, so errors are eliminated and the correct calculation or look-up chart is always used.
Instrument calibrations can be monitored by a software program to ensure that this important activity takes place according to a schedule. With PC programs able to easily date and time stamp events, we can use that capability to keep track of the time elapsed between calibrations and pop up a note to a tester when it is time to re-calibrate. Perhaps we can also prevent the use of an out-of-calibration instrument, to enforce quality management standards. By signing on to an instrument calibration log, quality management can see exactly when this activity has taken place. A minimal sketch follows.
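The 30-day interval and the instrument name below are illustrative assumptions; the sketch shows only the enforcement idea of refusing to collect data from a past-due instrument.

```python
# Sketch: tracking the calibration interval and blocking use of an
# out-of-calibration instrument. The interval is an assumed policy.
from datetime import datetime, timedelta

CALIBRATION_INTERVAL = timedelta(days=30)  # assumed plant policy

def calibration_ok(last_calibrated: datetime) -> bool:
    return datetime.now() - last_calibrated <= CALIBRATION_INTERVAL

def start_test(instrument: str, last_calibrated: datetime) -> None:
    """Refuse to collect data from an instrument that is past due."""
    if not calibration_ok(last_calibrated):
        raise RuntimeError(f"{instrument} is out of calibration -- recalibrate first")
    print(f"{instrument}: calibration current, test may proceed")

start_test("NIR analyzer #2", datetime.now() - timedelta(days=12))
```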
Hallmark #3. Striving for absolute accuracy in the collection and reporting of data.
This is one area where automation has an immediate and profound effect and using a PC becomes almost essential: whenever possible always opt for automatic data collection over the manual entry of quality data. Many instruments today offer some sort of electronic output representing the measurement values…typically an RS232 serial output, perhaps a file output if the instrument itself is PC-based, maybe an Ethernet output so it can be put right onto the company’s network.
If an instrument offers an output and the need is to get data into a PC based program, it’s easy to see the goal: get the person with their pencil and clipboard data entry sheet out of the middle! This will not only significantly improve lab productivity by eliminating all of those essentially clerical tasks but will also filter out user transcription and/or math errors as well as any other inappropriate user ‘influences’. Avoid clipboards and data entry forms…they are productivity-draining clerical work that introduces error.
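As an illustration, here is a minimal sketch of reading one measurement over RS232 using the third-party pyserial package. The port name, baud rate, and the assumption that the instrument emits one ASCII reading per line are all illustrative; a real instrument’s output format comes from its interface manual.

```python
# Sketch of automatic capture from an RS232 instrument via the
# third-party 'pyserial' package (pip install pyserial). Port name,
# baud rate, and the one-reading-per-line format are assumptions.
import serial

def read_measurement(port: str = "COM3", baud: int = 9600) -> float:
    """Read one measurement line straight from the instrument --
    no clipboard, no transcription."""
    with serial.Serial(port, baud, timeout=5) as ser:
        line = ser.readline().decode("ascii").strip()  # e.g. b"12.34\r\n"
        return float(line)

value = read_measurement()
print(f"Captured {value} directly from the instrument")
```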
If the test and/or the test instrument does not lend itself to automatic data collection every effort should be made to have the quality information entered directly into the PC program immediately, as opposed to writing it down to type it in later. Tablet PCs, with wireless connections back to a central server, are highly portable devices and are very suitable to this type of application.
Hallmark #4. Logic to link quality data across the production process.
As discussed previously, the tags used to identify quality data change along the way as it transitions from incoming raw material, through the production process, to finished product. Automation becomes essential if we are ever to harness this spectrum of information and be able to see product genealogy.
Product genealogy is being able to trace back from the finished product to the quality of that product in process, and before that to its incoming raw materials. While not always exact, there is a continuum that we can look at, and we really need a computer and some logic to make that happen. Automation is critical to achieving product genealogy! You can’t begin to link and associate the data on clipboards or in Excel spreadsheets with any degree of ease. A true relational database is needed.
Given the different types of quality data that we need to associate with each other, we need to understand the logic, or association, that ties the different sample identifiers together. Links can be time-based: for example, we can say that finished product sampled just after packaging can be traced back to in-process data if we subtract ‘x’ amount of time. Identifiers linked to quality data downstream can be embedded in identifiers upstream; for example, the second and third characters in a finished product sample ID might represent an in-process tank or production unit. Other approaches can be identified per your specific environment.
Report tools may be written to query data tables with one type of sample identifier (a product code number, for example) yet be able to jump over to the data table of incoming raw material quality measurements for the ingredients that went into that product, using the logic links just mentioned. The terminology typically used is ‘parent/child’ relationships: a finished product, as a ‘parent’, may have one or more ‘children’ to which it is linked by the logic described earlier. Only computer software can help us quickly and effectively navigate through parent/child relationships to bring data together into a meaningful report, as the sketch below illustrates.
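The following is a minimal sketch of such a genealogy query, combining both link types described above: characters 2–3 of the finished product sample ID encode the production unit, and a time offset reaches back to the in-process readings. The table layout, sample IDs, two-hour offset, and 30-minute window are all assumptions for illustration.

```python
# Sketch of a parent/child genealogy query. Schema, IDs, the 2-hour
# offset, and the 30-minute matching window are illustrative.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE finished (sample_id TEXT, tested_at TEXT, moisture REAL);
CREATE TABLE in_process (unit TEXT, measured_at TEXT, temperature REAL);
INSERT INTO finished VALUES ('F07A123', '2007-06-01 14:00', 12.1);
INSERT INTO in_process VALUES ('07', '2007-06-01 12:05', 88.0);
""")

def genealogy(sample_id: str, offset_hours: float = 2.0):
    """Trace a finished-product 'parent' back to its in-process 'children'."""
    unit = sample_id[1:3]  # chars 2-3 encode the production unit
    (tested_at,) = conn.execute(
        "SELECT tested_at FROM finished WHERE sample_id = ?", (sample_id,)
    ).fetchone()
    t = datetime.strptime(tested_at, "%Y-%m-%d %H:%M") - timedelta(hours=offset_hours)
    lo = (t - timedelta(minutes=30)).strftime("%Y-%m-%d %H:%M")
    hi = (t + timedelta(minutes=30)).strftime("%Y-%m-%d %H:%M")
    return conn.execute(
        "SELECT measured_at, temperature FROM in_process "
        "WHERE unit = ? AND measured_at BETWEEN ? AND ?", (unit, lo, hi),
    ).fetchall()

print(genealogy("F07A123"))  # the in-process readings behind this sample
```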
Another type of linkage can be built to allow for ongoing monitoring of in-line sensor accuracy. We can acquire the measurement data from the in-line sensor, give it a sample ID, and then, when we test for that same property using the lab bench instrument equivalent, use the same sample identifier. By doing so we can ask the computer to immediately compare the two. Further, by setting a metric that says the variance between the two should not exceed ‘x’ percent, we can automatically alarm the lab technician if the in-line sensor is straying and needs immediate attention and/or re-calibration.
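A minimal sketch of that comparison follows; the 2% tolerance is an assumed value of ‘x’, and the lab bench value is treated as the reference.

```python
# Sketch: comparing an in-line sensor reading to its lab bench
# equivalent for the same sample ID. The 2% tolerance is an assumption.
def check_sensor_drift(in_line: float, lab_bench: float,
                       max_variance_pct: float = 2.0) -> bool:
    """True if the in-line sensor agrees with the lab within tolerance."""
    variance_pct = abs(in_line - lab_bench) / lab_bench * 100.0
    if variance_pct > max_variance_pct:
        print(f"ALARM: in-line sensor off by {variance_pct:.1f}% -- recalibrate")
        return False
    return True

check_sensor_drift(in_line=12.9, lab_bench=12.3)  # ~4.9% variance -> alarm
```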
Hallmark #5. The importance of security.
Secured access to the various functions in a quality system via passwords and user IDs is essential. As has been stated before, product quality data is highly proprietary and so being able to see this data carries a requirement for security. Computer programs easily provide this important feature.
Being able to change or edit this data carries an even higher demand for security and adds a second level: audit logging. Computers can distinguish user authorities among the various program functions, so that some users may see but not change data while others have full access and control over the data; meanwhile, audit logs are built of all of this activity so that edits can be traced back to the original data point, all date/time stamped with the user ID used to perform the edits. A product quality database is a corporate asset, and should be protected as such.
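A minimal sketch of such an audit entry follows: who edited, when, and the before/after values. The field names are illustrative; in practice the log would live in a protected database table, not an in-memory list.

```python
# Sketch of an append-only audit log entry for a data edit.
# Field names are illustrative assumptions.
from datetime import datetime

audit_log: list[dict] = []   # stand-in for a protected database table

def edit_value(user: str, sample_id: str, field: str,
               old: float, new: float) -> None:
    """Record every edit so it can be traced back to the original point."""
    audit_log.append({
        "timestamp": datetime.now().isoformat(),
        "user": user,
        "sample_id": sample_id,
        "field": field,
        "old_value": old,
        "new_value": new,
    })

edit_value("jsmith", "F07A123", "moisture", old=12.1, new=12.2)
print(audit_log[-1])   # the edit, date/time stamped and attributed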
Hallmark #6. Creation of meaningful reports.
Little needs to be said here as it is so obvious: to really analyze data you need a good PC-based program. There are many great statistical analysis programs available that can really be helpful…IF you can get your data into them electronically. You do NOT want to use the manual data entry screens these programs may offer: if you’ve got an automated quality system in place, you don’t have to…you already have a wealth of product measurements in your quality database that can be imported into virtually any statistical analysis package of interest. With your quality data already stored in a database (your highly accurate, audit-protected database of correctly sized subgroups!) you’re ready to move quickly into any analysis program and take some quantum leaps forward. Now you’re leveraging the asset: your product quality information database.
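As a small illustration, here is a sketch of exporting quality data to a CSV file, a format virtually any statistical package can import. The in-memory table and its columns are stand-ins for the plant’s actual quality database.

```python
# Sketch: exporting quality data to CSV for a statistical package.
# The table and its contents are illustrative stand-ins.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE finished (sample_id TEXT, tested_at TEXT, moisture REAL)")
conn.execute("INSERT INTO finished VALUES ('F07A123', '2007-06-01 14:00', 12.1)")

with open("moisture_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sample_id", "tested_at", "moisture"])
    writer.writerows(conn.execute("SELECT sample_id, tested_at, moisture FROM finished"))
# moisture_export.csv now loads straight into the analysis package --
# no manual re-entry of the data.
```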
Summary
A truly effective quality program is based on 1) a solid understanding of the key quality parameters that drive finished product results, 2) establishment of metrics for those parameters, with a standard set of procedures for measuring them accurately, 3) central collection and reporting of those measurements, and 4) flexible analysis and secure storage of that information. The information the quality department builds out of its measurement data may reliably be used by production supervisors to alter or modify the process to ensure finished product consistency, by those same personnel to learn more about the process itself and expand the number of understood quality parameters, and by company management to see overall plant performance and to support specific sales/customer activities.
A good quality automation system lays the foundation for a good quality management program. It provides the tools for standardizing many different aspects of a testing program, it ensures absolutely accurate quality measurements wherever automatic data collection can be employed, it date and time stamps everything for management control, and it offers a relational database underneath all of this data to permit product genealogy studies, a broad array of statistical analyses, and the performance reporting required by production plant management.
© Copyright AcquiData, Inc. 2007