Producing quality: Combining efficiency and effectiveness in manufacturing
Quick navigation: Effectiveness vs. Efficiency | Producing Quality | Prejudices | CAQ & Core Tools | Deming’s Philosophy | Fear-Free Failure-Culture | Method Competence | Increasing Efficiency | REFA Time Fundamentals | Overall Equipment Effectiveness | Connected Manufacturing | Goal: Zero Defects | Conclusion
Effectiveness vs. efficiency in production

In manufacturing, a distinction is often made between effectiveness (doing the right things) and efficiency (doing things right).
Effectiveness aims to achieve set quality and performance targets – in other words, it is about whether the measures taken actually have the intended effect.
Efficiency, on the other hand, evaluates the effort required to achieve these goals, i.e., how economical and time-saving processes are.
Both aspects are crucial:
It is not enough to be effective (e.g., producing high quality) if this is achieved at disproportionately high cost.
Conversely, maximum efficiency is useless if the result is not the right one – namely, flawless products.
In this article, we look at how quality improvement (effectiveness) and productivity enhancement (efficiency) can go hand in hand to achieve the overarching goal:
Quality is not merely tested, but produced from the outset.
Produce quality instead of testing it: Quality improvement as a gain in effectiveness
A central principle of modern quality management is:
Quality cannot be tested out, it must be produced.
Instead of testing only the end product and sorting out defective parts, the focus should be on preventing failures from occurring in the first place.
W. Edwards Deming, a pioneer of quality management, formulated it as one of his management points:
“End dependence on 100% inspection and ensure quality from the outset.”
Complete final inspections (100% inspections) are therefore an admission that the preceding process was not capable of delivering error-free parts.
The result is time-consuming sorting and rework processes that are neither efficient nor effective.
It is much better to shift quality control into the manufacturing process itself – for example, through process-integrated testing, statistical process control (SPC), and failure prevention methods – so that defective products are not produced in the first place.
Prejudices against quality measures
Nevertheless, quality improvement initiatives are often dismissed as cost drivers that slow down productivity, particularly in small and medium-sized enterprises.
In many companies, the fear of high costs and a perceived loss of productivity deters investment in quality systems. But this view is short-sighted.
In fact, good quality management is not a luxury, but improves lean processes and avoids waste, rework, and scrap.
In other words:
Quality management is not a cost driver, but an investment with a high return on investment, as fewer errors and rework ultimately save costs.
Studies show, for example, that effective quality management enables companies to
- reduce waste of time and resources,
- make processes more efficient,
- and thus work more productively overall.
Quality costs can also be reduced by investing preventively in “good quality costs” (e.g., for training, testing equipment, or CAQ systems)
in order to avoid the much higher “costs of poor quality” (consequential costs of failures, scrap, warranty claims).
Classic CAQ components and core tools
In practice, computer-aided quality management (CAQ) systems with various modules are used to improve quality – from inspection equipment management and incoming goods inspection to statistical process control (SPC) and complaint management.
In strictly regulated industries in particular (automotive: IATF 16949; aviation: EN 9100; medical technology, etc.), the use of certain quality methods and so-called core tools is mandatory.
Deming’s philosophy in manufacturing
“Quality is made in the boardroom” – systems thinking instead of blame
Deming coined the phrase:
“Quality is made in the boardroom,”
meaning that quality is the result of systems and management – not just the diligence of workers.
He estimated that 94% of all problems originate in the system (and are thus a management responsibility) and only 6% are attributable to special causes such as individual employees.
This rethinking is crucial:
Instead of sorting out failures through final inspection or punishing employees for mistakes, the system must be improved so that failures do not occur in the first place.
One example is the expansion of preventive measures such as Statistical Process Control (SPC):
SPC and intelligently set control limits – based on process capability and statistics – enable processes to be monitored continuously.
This allows deviations to be detected and corrected early on, long before they result in faulty parts en masse.
Automation and inline testing technology (keywords: automated process control and 100% inline testing) can also help to eliminate the need for manual full inspections.
For example, if camera-based inspections check each part at regular intervals and automatically evaluate the data, you receive immediate feedback and can readjust the process without drastically reducing throughput.
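To make this concrete, here is a minimal SPC sketch in Python (all measurement values are hypothetical): control limits are derived from a stable reference run, and later samples are flagged as soon as they leave the statistically expected band.

```python
# Minimal SPC sketch with hypothetical data: derive X-bar control limits
# from a stable reference run, then flag new samples outside the band.
import statistics

reference_run = [10.02, 9.98, 10.01, 10.00, 9.97, 10.03, 9.99, 10.01]  # mm
mean = statistics.mean(reference_run)
sigma = statistics.stdev(reference_run)

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

new_samples = [10.00, 10.04, 9.91, 10.02]
for i, value in enumerate(new_samples, start=1):
    if not (lcl <= value <= ucl):
        # In a real system this would trigger an alarm and a process adjustment
        print(f"Sample {i}: {value} mm outside control limits [{lcl:.3f}, {ucl:.3f}]")
```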
The PDCA cycle in everyday manufacturing

Another aspect of Deming’s approach is continuous improvement:
“Improve every system constantly and continuously” is one of his famous 14 points.
This is often illustrated by the Deming cycle (PDCA) – an iterative four-step process:
Plan – Do – Check – Act.
In manufacturing, this means, for example:
- Plan: Plan quality objectives and processes (e.g., define how a critical measure is to be monitored with SPC).
- Do: Implement the planned measures (carry out production, collect data).
- Check: Analyze process data and quality metrics (e.g., evaluate test results or SPC control charts).
- Act: Derive measures (adjust process parameters, conduct training, initiate corrective and preventive measures).
This principle of continuous improvement ensures that quality and productivity are not viewed as static – there is always potential for better quality at even lower costs.
Deming emphasized:
There is no final optimum; continuous improvement is possible and necessary.
Data plays a key role in this cycle because it systematizes intuition:
It makes it possible to confirm or refute gut feelings and identify causal relationships that would remain hidden without measured values.
As Deming said:
“Without data, you’re just another person with an opinion.”
Decisions on quality improvement should therefore be based on facts and measured data, not on mere assumptions.
A fear-free failure culture
To enable quality in the process, the culture must also be right.
Deming demanded:
“Eliminate fear so that everyone can work effectively for the organization.”
Fear among the workforce—such as fear of punishment for mistakes—leads to mistakes being covered up or glossed over.
This prevents learning from mistakes and damages both quality and productivity.
An open, fear-free culture, on the other hand, encourages employees to report problems immediately instead of hiding them.
This allows causes to be analyzed and remedied before major damage occurs.
Deming held management responsible for the failure culture:
Only when managers deal openly with mistakes and focus on process improvement rather than assigning blame do employees dare to report honestly.
Numerous studies confirm Deming’s rule that most mistakes are systemic—i.e., ultimately the responsibility of management—and not due to individual incompetence.
Consequently, improvement measures should first address the system rather than simply replacing people.
Digitalization is an integral part of this:
A transparent data flow (e.g., through a manufacturing execution system that links quality data, machine data, and order data) can help promote an open failure culture.
When objective data shows where bottlenecks or failures occur, it is possible to discuss them objectively—instead of resorting to personal recriminations.
Shop floor employees then contribute to process improvement by collecting data, identifying problems, and working out solutions as a team.
Methodological competence before tool use
Another Deming theorem:
Only adopt methods once you understand the theory behind them.
In practice, this means:
Software alone does not solve problems – “A fool with a tool is still a fool.”
Without an understanding of the underlying quality methodology, even the best CAQ or MES system will not be successful.
Well-known concepts such as SPC, Six Sigma, zero defect programs (e.g., according to Philip B. Crosby), and TQM all have a common core:
They are based on statistical thinking, process orientation, and continuous improvement.
Only when a company has internalized these principles—such as
- how a process capability study works,
- what 3 Sigma means,
- or how Ishikawa analyses are carried out –
can it make meaningful use of the supporting tools.
It is therefore essential to invest in training and qualifying employees before or during the introduction of new tools.
An example:
The best SPC software is useless if no one knows how to interpret the control charts or if machine operators are not empowered to take the right action when a rule is violated.
To produce quality, you need
- the right attitude (quality before quantity),
- know-how (methodological knowledge),
- and tools (software, measuring equipment) – in that order.
Quality over quantity – and yet achieving both:
Interestingly, it is precisely this focus on quality that leads to higher productivity in the long term.
Deming put it this way:
“Emphasize the quality of performance, not the quantity. Do away with arbitrary piecework quotas.”
This is because rigid quantity targets that do not take quality into account tempt people to ignore problems or “push through” half-finished parts in order to meet target figures – which later leads to rejects or customer complaints.
Instead, managers should set qualitative goals – such as:
- Reducing the failure rate
- Increasing the first-pass yield (parts right the first time)
- Reducing the complaint rate
The effect is that the number of units produced increases indirectly as well:
“When organizations focus on quality, quality increases and costs decrease.
When they focus on costs (or pure quantity), costs increase and quality decreases.”
This Deming chain reaction is confirmed in many companies:
Quality improvement increases productivity, reduces rework, and lowers unit costs.
In manufacturing, this means specifically:
A combination of high process stability and low failure rates ensures that systems can produce good parts at optimal speed without interruption – which in turn results in maximum output with minimum waste.
This brings us to the topic of efficiency,
because that is precisely how productivity is measured.
Increase efficiency: Measure productivity and reduce losses
The 3 OEE factors at a glance

Efficiency in manufacturing can be measured using the OEE (Overall Equipment Effectiveness) metric.
OEE looks at how well a production plant actually performs compared to its theoretical maximum performance.
It is made up of three factors—availability, performance, and quality—which are multiplied together to give the OEE result (in percent).
An OEE value of 100% would mean that a plant is running without any losses:
it is in operation 100% of the planned time, produces at 100% of the maximum possible speed, and all parts are flawless.
In reality, of course, the OEE is significantly lower – typically between 60% and 85%, depending on the industry and degree of maturity.
The value makes it clear how efficiently machines are being utilized and where the greatest reserves lie dormant.
To improve OEE, you need to understand where the losses occur.
Traditionally, three questions are asked:
- Is the machine running at all?
→ If not, what stopped it?
(Availability losses due to unplanned downtime such as malfunctions, setup times, or breaks)
- If it is running, is it running at its rated capacity?
→ If not, what is slowing it down?
(Performance losses due to reduced speed, cycle time extensions, or minor interruptions)
- Is the machine producing only good parts?
→ If not, what went wrong?
(Quality losses due to scrap and rework)
These questions correspond exactly to the OEE components of availability, performance, and quality.
Anything that prevents a machine from producing good parts at full speed is considered a loss.
Multiplying the three factors (availability rate × performance rate × quality rate) gives the proportion of time during which the plant runs at target capacity, adding value.
Example:
An OEE of 74% means that only 74% of the theoretically possible production time was used effectively, while 26% was lost due to downtime, speed reduction, or scrap.
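As a hedged illustration, the following Python sketch (all shift figures are hypothetical) shows how the three factors combine into the OEE value:

```python
# OEE sketch with hypothetical shift data: OEE = availability x performance x quality.
planned_time_min = 480       # planned production time per shift
downtime_min = 55            # unplanned stops, setup, faults
ideal_cycle_time_min = 0.5   # theoretical time per part
total_parts = 790
good_parts = 760

run_time_min = planned_time_min - downtime_min

availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_min * total_parts) / run_time_min
quality = good_parts / total_parts

oee = availability * performance * quality
print(f"Availability {availability:.1%}, Performance {performance:.1%}, "
      f"Quality {quality:.1%} -> OEE {oee:.1%}")
```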
OEE helps companies identify specific weaknesses:
- If the main loss occurs in availability, efforts must be made to reduce downtime—for example, through better maintenance, faster setup, or higher plant availability.
- If the main loss is in performance, the focus should be on eliminating bottlenecks or minimizing speed losses—e.g., through process optimization, employee training, or improved material supply to eliminate micro-stops.
- If the quality rate is low, the focus is on improving quality – i.e., finding the causes of rejects, reducing process variation, ensuring supplier quality, etc.
It is important to note that
each of these variables also influences the others.
For example, quality improvements increase the available production time (fewer stops for rework) and often also the output (fewer interruptions due to quality checks).
Conversely, performance increases that come at the expense of quality—such as faster machine cycles with a higher scrap rate—do not lead to sustainable productivity gains.
That is why it is so crucial to optimize efficiency not against quality, but with quality.
The “Six Big Losses”
A well-known concept for analyzing efficiency losses is OEE’s “Six Big Losses.”
These include:
- Availability losses:
- Malfunctions (longer unplanned downtime)
- Setup and planned stops (planned but non-value-adding downtime, e.g., changeovers, maintenance, or material shortages)
- Performance losses:
- Minor interruptions (short downtimes, idling)
- Speed losses
- Quality losses:
- Scrap
- Rework
Categorizing losses in this way allows very specific countermeasures to be defined.
For example, scrap requires different solutions—such as process optimization, employee training, or machine capability studies—
than malfunctions, which can be reduced through TPM programs, spare parts management, or condition monitoring.
The OEE approach therefore ensures that productivity bottlenecks are discussed objectively and based on data.
Instead of a vague feeling that “something isn’t quite right,” you have specific percentages and times that you want to improve in a targeted manner.
REFA time bases
In addition to OEE, which provides a more comprehensive view of the entire plant, it is also worth taking a look at classic time management per order.
According to REFA, the order time (T) can be divided into
- setup time (tr) and
- execution time (ta),
so that T = tr + ta.
Setup time includes all preparatory activities before the series runs – such as machine changeover, tool change, etc.
Execution time is the time during which parts are actually manufactured.
It is proportional to the number of pieces (n) and the piece time (te), i.e., the time per unit:
ta = n · te
The piece time (te) indicates how long it takes to process a single workpiece.
- For manual work, it consists of basic time, recovery time, and distribution time.
- For machine work, it consists of the pure machine running time per piece.
This subdivision helps to analyze productivity at the order and workplace level:
- Large batches spread the setup time over many parts, lowering its share per unit.
- Small batches reduce productivity due to frequent setup.
Improvement methods such as SMED (Single Minute Exchange of Die) aim to minimize setup time (tr) –
bringing the order time (T) closer to the actual processing time.
Similarly, the piece time (te) can be reduced through
process improvements, better tools, or employee training.
Productivity is then expressed as output per unit of time.
So if you either
- reduce setup times or
- produce more units per unit of time (reduce te) –
without compromising on quality,
then output per shift increases.
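A small sketch with hypothetical values illustrates the REFA relationship T = tr + n · te and why a setup-time reduction (e.g., via SMED) shortens the order time:

```python
# REFA order time sketch (hypothetical values): T = tr + ta, with ta = n * te.
def order_time(tr_min: float, n: int, te_min: float) -> float:
    """Order time T = setup time tr plus execution time n * te (minutes)."""
    return tr_min + n * te_min

n, te = 200, 1.2  # batch size and piece time in minutes
before = order_time(tr_min=90, n=n, te_min=te)  # before SMED
after = order_time(tr_min=15, n=n, te_min=te)   # after a SMED setup-time reduction
print(f"T before: {before:.0f} min, T after: {after:.0f} min")
```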
Overall equipment effectiveness as a bridge
What is interesting about OEE is that it directly incorporates quality.
The quality factor (good parts/total parts) means that scrap reduces effectiveness—
just as if the machine had not produced anything at all during that time.
This shows that scrap causes double damage:
- It costs production time, and
- it does not deliver a saleable product.
In the OEE formula, a scrap rate of, for example, 5% can also be imagined
as if the machine had been idle 5% of the time.
Thus, the effectiveness of the quality processes flows directly into the efficiency of the entire plant.
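Because OEE is a simple product of the three factors, the arithmetic behind the 5% example can be verified directly (availability and performance values hypothetical):

```python
# In the OEE product, a 5% scrap rate has the same arithmetic effect
# as 5% additional downtime (hypothetical availability and performance).
availability, performance = 0.90, 0.95

oee_with_scrap = availability * performance * 0.95          # quality factor = 95%
oee_with_idle = (availability * 0.95) * performance * 1.00  # 5% extra downtime, no scrap

print(f"OEE with 5% scrap: {oee_with_scrap:.1%}")
print(f"OEE with 5% extra downtime: {oee_with_idle:.1%}")   # identical result
```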
This once again illustrates the key message:
Quality and productivity are inextricably linked.
A factory with a high scrap rate cannot be highly productive—
it wastes material, time, and capacity.
Conversely, better quality (right-first-time) directly leads to higher usable plant capacity.
Now that we have considered both quality assurance (effectiveness)
and productivity (efficiency),
the question arises:
How can both goals be pursued simultaneously, rather than treating them as conflicting objectives?
This is where our central thesis comes into play.
“Light into darkness”: Using networked manufacturing to achieve data-driven excellence
The key to taking both quality and efficiency to a new level
lies in the digitalization and networking of manufacturing.
Many production areas today still resemble “black boxes,”
where problems can only be detected through experience and intuition.
However, the main thesis is:
Data can be used to systematize intuition.
By linking all relevant production data—
i.e., machine data, operating data (e.g., order progress, unit numbers),
and quality data—you can literally shed light on
hidden correlations.
Transparency through networked systems (MES/CAQ)

Let’s imagine that all machines are equipped with sensors and provide real-time data on their status—
such as operation, downtime, malfunctions, output speed, and product quality (e.g., inline measured characteristic values).
All this information is collected in a central system (MES/CAQ) and can be evaluated together.
Suddenly, it becomes clear why, for example, a line is falling behind schedule:
Perhaps it turns out that a particular piece of equipment has frequent short stops (→ availability problem).
Or that a test break is taken after every 50th workpiece (quality control that could be optimized or automated).
Or the data reveals that the failure rate always increases for order XY on machine Z – an indication of possible process problems with this particular combination.
This transparency makes it possible to derive targeted quality and productivity measures
instead of groping in the dark.
A practical example:
In an integrated CAQ-MES system, each production order is automatically assigned inspection orders
that sample parts during the process (classic SPC inspection plans).
This inspection data is stored together with machine and order data in a central database.
Now, at the touch of a button, you can evaluate:
- Which quality result was recorded for which work process (operation)?
- And on which machine?
In a networked system, you can see, for example, for each work step (AVO, from the German “Arbeitsvorgang”)
- how many test specimens were OK or NOK (not OK),
- or whether tests were omitted.
This traceability concept—the traceability of each part and its quality status—offers enormous advantages:
- If failures occur, you can immediately narrow down which lot numbers or batches are affected.
- You can systematically analyze in which process step failures occur.
The interdependence of product and process becomes clear:
Quality results can be assigned to each process step and each machine,
enabling correlations to be identified – e.g.:
- Higher failure rate on machine A compared to machine B,
- or dependencies on certain material suppliers.
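As a sketch of such a cross-evaluation, the following snippet (pandas; all column names are hypothetical placeholders) computes NOK rates per work step and machine from joined inspection data:

```python
# Sketch of a cross-evaluation over joined inspection and order data
# (all column names are hypothetical placeholders).
import pandas as pd

inspections = pd.DataFrame({
    "order_id":  [101, 101, 101, 102, 102, 103],
    "avo":       [10, 20, 20, 10, 20, 10],  # work step number
    "machine":   ["A", "B", "B", "A", "B", "A"],
    "result_ok": [True, True, False, True, False, True],
})

# NOK rate per work step and machine
nok_rate = (inspections
            .assign(nok=~inspections["result_ok"])
            .groupby(["avo", "machine"])["nok"]
            .mean())
print(nok_rate)
```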
This allows, for example, machine capabilities to be verified and compared:
Machine A may have more stable dimensional accuracy (higher Cpk value) than machine B over 100 batches –
a clear indication to examine the poorer machine more closely (e.g., maintenance, calibration, operator training, etc.).
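A minimal sketch of such a capability comparison, assuming approximately normally distributed measurements and hypothetical tolerance limits:

```python
# Cpk sketch (hypothetical data): Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
import statistics

def cpk(samples, lsl, usl):
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mean, mean - lsl) / (3 * sigma)

lsl, usl = 9.90, 10.10  # tolerance limits in mm
machine_a = [10.01, 9.99, 10.00, 10.02, 9.98, 10.01]
machine_b = [10.03, 9.95, 10.06, 9.97, 10.05, 9.96]

print(f"Cpk machine A: {cpk(machine_a, lsl, usl):.2f}")  # stable process
print(f"Cpk machine B: {cpk(machine_b, lsl, usl):.2f}")  # needs attention
```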
Retrofitting and upgrading as an entry point into digitalization
One obstacle in existing factories is often that older machines do not have digital interfaces.
But this is where retrofitting comes into play:
Even decades-old systems can be digitally upgraded with relatively simple means.
Sensors can monitor machine conditions—such as vibration, power consumption, temperature, etc.—
and transmit the data to the central system via IoT devices.
This provides measurement data at a glance
where previously there was no visibility.
Large technology companies such as Bosch and ABB offer sensor solutions that enable “silent” old machines to “speak.”
This is particularly attractive for small and medium-sized enterprises because, instead of investing millions in new machines, they can network their existing equipment at a significantly lower cost.
A study by KfW (2016) found that only around 20% of medium-sized manufacturing companies in Germany have implemented Industry 4.0 networking – six out of ten cited high investment costs as an obstacle.
In practice, however, it has been shown that even low-cost retrofit projects can bring about significant improvements in productivity and quality.
Real-time data collection allows downtime to be avoided proactively (condition monitoring, predictive maintenance)
and processes to be optimally controlled.
Example:
A retrofitted sensor on a motor continuously measures vibrations, temperature, and performance.
The data is analyzed and alerts are issued early on in the event of anomalies, allowing maintenance to be planned proactively.
This greatly reduces unplanned downtime—the biggest availability loss—and thus delivers efficiency gains through digitalization.
At the same time, the same sensors also monitor quality indicators (e.g., torque curves that provide information about process quality)—
which in turn benefits quality assurance.
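A deliberately simple sketch of the alerting logic (readings and the 20% margin are hypothetical; real condition-monitoring systems use far more sophisticated models):

```python
# Condition-monitoring sketch: compare live vibration readings against a
# healthy baseline and raise a maintenance alert well before failure.
import statistics

baseline = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1]  # vibration in mm/s, healthy operation
alert_threshold = statistics.mean(baseline) * 1.2  # 20% above healthy level

live_readings = [2.1, 2.2, 2.6, 3.1]
for t, value in enumerate(live_readings):
    if value > alert_threshold:
        print(f"t={t}: {value} mm/s exceeds {alert_threshold:.2f} mm/s "
              "-> schedule maintenance proactively")
```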
Data-driven process optimization through analytics and AI
When all data from machines, processes, and quality comes together, modern analysis methods—from big data analytics to machine learning—can be used to identify patterns that even experienced experts might miss.
For example, an analysis could reveal that
- a certain combination of environmental conditions and material batches leads to higher scrap rates, or
- that a certain operator achieves significantly shorter setup times (because they have a better method) – which can then be transferred to other shifts as best practice.
Digitalization thus creates an objective factual basis that can be used to systematically improve both efficiency and effectiveness.
It is important to feed this data into the CIP (continuous improvement process) – e.g., through:
- regular reviews of key performance indicators,
- shop floor boards with current OEE values and quality key figures,
- interdisciplinary rounds for root cause analysis.
This turns the abstract Industry 4.0 concept into a very concrete benefit:
The company gets to know itself better and better and can decide where its priorities lie based on real figures.
Understanding and leveraging product-process interdependence
All of this highlights how closely product quality and process performance are interlinked.
An improvement in the process—e.g., more stable temperature control in a furnace—increases both product quality (less warping, higher yield) and efficiency (less energy waste, more consistent throughput).
Conversely, more complex product designs often require more powerful processes—for example, tighter tolerances require more precise machines and measuring systems.
In practical terms, this interdependence means that
quality and productivity cannot be viewed in isolation; the entire system must always be optimized.
Modern shop floor management systems take this into account by displaying key figures for quality and productivity together –
e.g., in a dashboard that shows OEE factors as well as ppm failure rates, Cpk values, etc.
This ensures that improvement measures have a holistic effect.
A classic example:
- The introduction of an Andon system (light signals in case of malfunctions) increases uptime by enabling a faster response to machine downtime while also reducing quality risks — the machine does not continue to run incorrectly for minutes until someone notices.
- Or the implementation of an inline inspection station at a critical point prevents defective parts from clogging up the next process – which both reduces scrap and saves throughput time.
Goal: Zero defects and maximum efficiency
Ultimately, it all boils down to achieving the ambitious but worthwhile goal of
100% good parts – 0% waste.
This “zero defect” goal may sound utopian at first, but it serves as a benchmark for manufacturing improvements.
This is because every step toward zero-defect production (e.g., through poka-yoke—technical failure prevention—or automated inspection systems) usually also brings efficiency gains:
Less scrap = less rework = more usable production capacity.
And vice versa:
Any increase in efficiency that reduces setup or waiting times, for example, also reduces the opportunity for failures and makes production more predictable – which in turn benefits quality.
Final thoughts: Effectiveness and efficiency – two sides of the same coin

In summary, it is clear that quality improvement and productivity enhancement are not opposites, but rather work synergistically together.
Quality should be produced, not just checked at the end.
This requires a shift in thinking towards
- proactive process control,
- data utilization, and
- a culture of continuous improvement.
If quality is built in from the outset—through capable processes, well-trained employees, clear standards, and digital monitoring—then failure rates and rework are reduced, which immediately frees up capacity and thus increases efficiency.
On the other hand, efficiency methods such as OEE provide clear priorities where improvements are most urgently needed – and often these are precisely the quality bottlenecks (e.g., high scrap rates or unstable processes).
Deming’s core statement still applies:
Increase quality → Productivity increases → Costs decrease.
Modern, networked manufacturing takes this principle to the next level by using data as an amplifier:
What you measure, you can improve.
Digital transparency makes manufacturing controllable and predictable – intuition is supplemented by evidence.
Ultimately, it’s about holistic process excellence that encompasses both product and process quality.
Companies that master this achieve a state in which high quality and high productivity are mutually dependent.
Their processes run smoothly, waste is systematically eliminated, and employees can take pride in the quality of their work—without being subject to unrealistic output targets.
The manufacturing of the future—in the spirit of Deming and fueled by Industry 4.0 data—is a manufacturing process in which effectiveness and efficiency are inextricably linked.
Producing quality means both:
- doing the right thing—and
- doing it right.