What is meant by saying a process is capable?

When the process mean is not centered between the specification limits, the index used is Cpk, defined as the lesser of the two one-sided (upper and lower) capability indices.

Cp is the capability the process could achieve if it were perfectly centered between the specification limits. Cpk, on the other hand, is the capability the process is actually achieving, whether or not the mean is centered between the specification limits.
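
As a quick numeric illustration of that distinction (a minimal sketch with invented specification limits and process values, not figures from the excerpt above):

```python
# Minimal sketch: Cp vs Cpk for a centered and an off-center process.
# The specification limits and process values below are illustrative only.
def cp(usl, lsl, sigma):
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

usl, lsl, sigma = 10.0, 4.0, 0.5

print(cp(usl, lsl, sigma))          # 2.0, independent of where the mean sits
print(cpk(usl, lsl, 7.0, sigma))    # 2.0 when the mean is centered
print(cpk(usl, lsl, 8.5, sigma))    # 1.0 when the mean drifts toward the USL
```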

URL: https://www.sciencedirect.com/science/article/pii/B9780128110355000180

Six Sigma Improvements in Business and Manufacturing

M. Joseph Gordon Jr., in Six Sigma Quality for Business and Manufacture, 2002

PROCESS-CAPABILITY

Before deciding whether your company and all of its processes need to operate at Six Sigma process capability, it is important to consider what you can gain for the effort and how to evaluate your processes to get them there. What is a capable process? Some quality assurance experts define a capable process as one having and maintaining a CpK index of at least 1.33, which equates to a maximum defect rate of about 63 ppm; others say a maximum of 3.4 ppm is the true meaning of Six Sigma process capability.

The difference between a quality goal of CpK 1.33 and one of CpK 1.5 (Six Sigma) is only about 60 defects per million. If your production run is large, more than 20,000 parts, you can anticipate random defects from a process just meeting CpK 1.33. When your production runs reach 200,000 parts and above, even a process at Six Sigma (CpK 1.5) will still experience random defects due to random variability in the material, equipment, and processing variables. Therefore, for the small runs that many companies see in their daily manufacturing, it is more important to know the methodology for attaining an acceptable process capability of CpK 1.33 or 1.5 than to debate which CpK value should be selected as your company's manufacturing goal.
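
To see where figures such as 63 ppm and 3.4 ppm come from, the following sketch converts a capability index into an expected defect rate, assuming a normally distributed characteristic; the 3.4 ppm Six Sigma figure follows the usual convention of a Cp = 2 process whose mean has shifted 1.5σ (giving CpK = 1.5):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ppm_centered(cpk):
    """Defects per million for a centered, normally distributed process."""
    z = 3.0 * cpk  # distance from the mean to either spec limit, in sigmas
    return 2.0 * (1.0 - norm_cdf(z)) * 1e6

def ppm_with_shift(cp, shift=1.5):
    """Defects per million when the mean has drifted `shift` sigmas off center."""
    z = 3.0 * cp
    return ((1.0 - norm_cdf(z - shift)) + (1.0 - norm_cdf(z + shift))) * 1e6

print(round(ppm_centered(4 / 3), 1))   # ~63 ppm for CpK = 1.33
print(round(ppm_with_shift(2.0), 1))   # ~3.4 ppm for Six Sigma (Cp = 2, CpK = 1.5)
```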

No matter what the run size and CpK goal, manufacturing and quality assurance will be working together to improve processes, whether those processes are substantially below your selected goal or are ongoing daily processes whose CpK has dipped below the goal because a process variable has gone out of control.

An example best explains the concept of process capability. A molding shop that has produced a product for years holds a specified minimum CpK of 1.33 on the operation, attainable and repeatable each time the job is run on the same machine. This is shown in Figure 3, a process-capability study of a process running at an overall cycle time of 60 seconds. The process, running at 60 seconds from mold open to mold open, is set to produce the required number of parts in a specified time period. This was consistent with the machinery demands and utilization of the current customer base, and the customer was very satisfied with the quality of the products shipped weekly.

Figure 3. In a process-capability study of cycle adjustments, the Gaussian curve indicates a CpK of 1.36.

(Adapted from reference [3])

Manufacturing engineering and quality assurance performed a process-capability study on this job and others so they would know whether the process and other variables were as capable as possible after the machine had been overhauled. The floor supervisor and manufacturing engineer were satisfied with the machine's cycle and operation. The following data on pin length, a key customer product characteristic, were evaluated on parts from a four-cavity tool. A part from each cavity was initially evaluated to show that the balanced-runner mold was capable of producing repeatable parts cycle to cycle from each cavity. After this was confirmed, only one part from a specific cavity was measured for the process study. After 30 cycles the data points were accumulated and used for the analysis.

Data:

Specification = 1.250 in. ± 0.005 in.

σ = 0.0011 in.

μ = 1.2495 in.

The sigma (process standard deviation) and μ (process mean) were based on measurements taken from four individual cavities. Five parts were collected from each cavity over five cycles. When these were confirmed to be within specification, a single cavity, cavity one, was selected, and a total of 30 parts per hour were collected and measured. These parts were considered representative of the production process.

Process-capability formulas were used to determine the process had a CpK = 1.36. As a result of the study the manufacturing process was deemed satisfactory and monitored as was typical during the manufacturing cycle. The process was considered to be capable and was in statistical control. This information was confirmed by the customer since product defects were not reported.

The customer requested additional product, and a new manufacturing engineer was assigned the task of meeting this demand from sales. The initial process decision was to reduce the cycle time by 50%, to 30 seconds, to meet the new order volume. The shop floor supervisor believed this was too great a decrease, even though the material processed successfully and the ejected parts looked good. There was no visual change except that part weight, which was being used as the process stability indicator, was lower by three tenths of a gram.

On the first sample taken after the reduced cycle time was set, the R chart indicated that the process was out of control, as confirmed by the change in (lower) part weight. The cycle time was then increased by 50%, from 30 seconds to 45 seconds, and on the third sampling group the process again yielded an R-chart reject. This is shown in Figure 4 on the range control chart.

Figure 4. Cycle time change sends process out of control.

(Adapted from reference [3])

With time running out and material being scrapped, the engineer, with help from quality assurance, ran a DOE (design of experiments) to determine the critical factors in the process. Not surprisingly, cycle time was found to be the most important factor, along with screw speed, which affected melt temperature and setup time in the mold cavity. These were the critical variables, and it was decided that mold cavity temperature would not be adjusted to help reduce the cycle because of the critical pin length.

Since there had not been any problem running at a 60-second cycle, the manufacturing engineer wanted to characterize the process at the 45-second cycle to see whether the R-chart rejection was a random event and the cycle simply not yet in equilibrium. Quality assurance agreed to this request with the restriction that all product be 100% inspected until the cycle stabilized and proved satisfactory for repeatable product manufacture. Samples were taken as before until sufficient data points were obtained.

Analysis showed the mean had increased to 1.2497 and the standard deviation had increased to 0.0022. After calculating the process capability, the manufacturing engineer found a Cp of only 0.81 and realized that, even with perfect centering, the process could not produce at a quality level sufficient to meet the company and customer requirement of CpK 1.33. As a result the cycle was reset to 60 seconds, equilibrium was again obtained, and the CpK was recalculated. The control charts again resembled those of the initial process, and manufacturing was extended into a second shift to meet the customer's quantity requirements.

Process-capability analysis is a very valuable tool and goes along with the “do it right the first time” manufacturing philosophy. When correctly used it will aid in the selection of equipment, materials, speeds, and other variables that can affect ongoing product quality. The process-capability formulas and terms for calculating CpK are listed below.

Cp = (USL − LSL) / (6σ)

Cp = Inherent process-capability index

USL = Upper specification limit

LSL = Lower specification limit

σ = Process standard deviation (obtained from a representative, random sample of at least 20 parts).

CpL = (μ − LSL) / (3σ)

CpL = Lower process-capability index

μ = Process mean (obtained from a representative, random sample of at least 20 parts).

CpU = (USL − μ) / (3σ)

CpU = Upper process-capability index

CpK = min {CpU, CpL}

CpK = Process-capability index

Notes:

If μ is at the nominal dimension, Cp = CpK

CpK is always equal to or less than Cp.
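
Applying these formulas to the pin-length data given earlier (specification 1.250 ± 0.005 in., σ = 0.0011 in., μ = 1.2495 in.) reproduces the CpK of about 1.36; a minimal sketch:

```python
# Sketch of the Cp / CpL / CpU / CpK formulas listed above, applied to the
# pin-length data from the molding example (1.250 +/- 0.005 in.).
def capability_indices(usl, lsl, mean, sigma):
    cp  = (usl - lsl) / (6 * sigma)   # inherent capability
    cpu = (usl - mean) / (3 * sigma)  # upper process-capability index
    cpl = (mean - lsl) / (3 * sigma)  # lower process-capability index
    return cp, cpu, cpl, min(cpu, cpl)

cp, cpu, cpl, cpk = capability_indices(usl=1.255, lsl=1.245,
                                       mean=1.2495, sigma=0.0011)
print(f"Cp={cp:.2f}  CpU={cpu:.2f}  CpL={cpl:.2f}  CpK={cpk:.2f}")
# Cp=1.52  CpU=1.67  CpL=1.36  CpK=1.36
```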

URL: https://www.sciencedirect.com/science/article/pii/B9780444510471500082

Quality

John R. Wagner Jr., ... Harold F. Giles Jr., in Extrusion (Second Edition), 2014

26.2 Process Capability

Process capability is a measure of the inherent process performance. It is defined by sigma (σ), the standard deviation. Different σ levels are used to determine process capability, depending on the customer's needs and specifications. The data included in different standard deviation ranges are as follows:

±1σ includes 68.2% of the total area under a normal distribution curve. If the process is run at ±1σ capability, 317,300 parts out of every million fall outside the specification limits.

±2σ includes 95.45% of the total area under the normal distribution curve, with 45,500 parts out of a million falling outside the specification limits.

±3σ includes 99.73% of the total area under the normal distribution curve or virtually the entire area. At ±3σ, there are still 2700 defective parts out of each million produced.

±6σ includes 99.9999998% of the area under the normal distribution curve, and 0.002 parts per million are expected to be defective.

In a 6σ process, statisticians allow for a 1.5σ shift. This adjustment results in 3.4 defects per million parts produced [3].
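
These coverage and defect figures follow directly from the normal distribution; a short sketch (standard library only) that reproduces them, including the 1.5σ-shift convention, is shown below:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ppm_outside(k, shift=0.0):
    """Parts per million outside +/- k sigma limits when the mean is
    shifted by `shift` sigmas from the target."""
    below = norm_cdf(-k - shift)
    above = 1.0 - norm_cdf(k - shift)
    return (below + above) * 1e6

for k in (1, 2, 3, 6):
    print(f"+/-{k} sigma: {ppm_outside(k):,.3f} ppm outside")
# ~317,311 ppm, ~45,500 ppm, ~2,700 ppm, ~0.002 ppm

print(f"6 sigma with 1.5 sigma shift: {ppm_outside(6, shift=1.5):.1f} ppm")
# ~3.4 ppm
```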

As an example, assume that a sheet product is being shipped to customer RSQ, who requires the impact strength to be 13 ± 3 ft-lbs at a ±3σ level. To supply samples to RSQ, some sheet is produced and impact properties are measured. Thirty-seven data points are gathered and plotted, giving the normal distribution shown in Figure 26.3. Based on the data, can your company supply product to RSQ that meets the customer requirements 100% of the time? The average impact value is 13 with a standard deviation of 1.25. At 3σ, the data are anticipated to range over 13 ± 3(1.25), or 9.25–16.75. Without process improvements to lower the standard deviation, it is therefore impossible to satisfy RSQ's impact requirements: the process is incapable of producing product at 13 ± 3 ft-lbs and a ±3σ level 100% of the time. If the order is accepted based on the current operation, product outside the specification limits will be produced and sent to the customer.

Figure 26.3. Normal distribution—example with impact data.
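
A quick hedged check of the conclusion above, using the stated mean of 13, standard deviation of 1.25, and specification of 13 ± 3 ft-lbs:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mean, sigma = 13.0, 1.25
usl, lsl = 16.0, 10.0          # 13 +/- 3 ft-lbs

cp = (usl - lsl) / (6 * sigma)
out_of_spec = (norm_cdf((lsl - mean) / sigma)
               + 1.0 - norm_cdf((usl - mean) / sigma))

print(f"Cp = {cp:.2f}")                             # 0.80 < 1: not capable
print(f"fraction out of spec = {out_of_spec:.4f}")  # roughly 1.6% of product
```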

Process capability measures the process repeatability relative to the customer specifications. Figure 26.4 shows two normal distribution curves defining product property profiles with specification limits. Process A is capable of producing a product that meets the customer's specifications 100% of the time, whereas process B is an incapable process.

Figure 26.4. Comparison of capable and incapable processes.

Process capability is measured through a capability index, Cpk, defined by Eqns (26.4) and (26.5), where USL is the upper specification limit and LSL is the lower specification limit. A Cpk value <1 indicates that the process is not capable of meeting the specifications.

(26.4) Cpk = Allowable spread in specifications / Actual spread in specifications (measurements)

(26.5) Cpk = (USL − Mean) / (3σ) or (Mean − LSL) / (3σ)

Cpk values of 1.33 and 2 indicate, respectively, that six parts out of every 100,000 and two parts out of every 1,000,000,000 are defective, or outside the allowable spread in specifications. With a Cpk of 1.33, 99.994% of parts are within the tolerance limits.

The final concept in process capability is the drive for continuous process improvements and zero defects. Zero defects is a quality system goal to remove all defects from the product [4].

Review Questions

1. What are some of the functions of the Quality Assurance Department?

2. What is Cpk? What does it measure? How is it used?

3. Using control charts, define five situations in which a process is out of control and how it is recognizable on a control chart.

4. What are some possible procedures for checking incoming raw materials?

5. What are the purposes of using control charts, and how can they improve quality and productivity?

URL: https://www.sciencedirect.com/science/article/pii/B9781437734812000260

Use Cases for Subcontractors and Fabricators

Mohammad Nahangi, Minkoo Kim, in Infrastructure Computer Vision, 2020

7.4.1 Compatibility between tolerances and process capabilities

Process capabilities define the expected variation of a given process (e.g., steel frame welding, rebar placement, concrete pouring, component alignment, etc.) which in turn can be used to determine its probability of not exceeding required tolerances. Compatibility between processes and tolerances is important for ensuring that an assembly can be fabricated, assembled, and installed on-site correctly. For instance, if the length of a steel beam must have a tolerance of 3 mm (1/8″) to fit on-site properly, the processes affecting the length of that beam (e.g., cutting, measuring, grinding, etc.) must cumulatively have a variation less than 3 mm (1/8″). In this case, the tolerance can be divided (or absorbed) between compounding processes; however, because each process has its own specific capability (i.e., DV), the net variation of processes must be less than the specified tolerance for the length of the beam (Fig. 7.19A). This example describes a design approach referred to as tolerance allocation, where overall assembly tolerances are distributed to the underlying processes of the assembly. The reverse design approach is referred to as tolerance analysis and occurs where process capabilities are analyzed to derive a suitable overall assembly tolerance. For instance, to determine the amount of adjustability (i.e., tolerance) required for the connection of a prefabricated curtain wall system, the variability of the underlying building substrate as well as positional variability of the curtain wall must be analyzed to derive suitable tolerances (Fig. 7.19B). Both cases of tolerance analysis and tolerance allocation require some knowledge about the capabilities of processes in terms of their DV. While it may be difficult to determine the DV of construction processes, Milberg and Tommelein (Milberg et al., 2002) demonstrate that failure to consider process capabilities can result in severe conflicts during installation on-site.

Figure 7.19. Examples of tolerance design approaches: (A) tolerance allocation for the variability on the size of a steel beam and (B) tolerance analysis for the design of a connection for a curtain wall system.
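
As a hedged illustration of the tolerance-allocation check described above, the sketch below assumes the contributing processes vary independently and roughly normally, so their standard deviations combine as a root sum of squares; the per-process values are invented for illustration:

```python
import math

# Hypothetical per-process standard deviations (mm) affecting beam length.
process_sigmas = {"cutting": 0.7, "measuring": 0.3, "grinding": 0.4}

tolerance = 3.0  # mm, allowed +/- on the finished beam length

# Assuming independence, variances add: the net sigma is the root sum of squares.
net_sigma = math.sqrt(sum(s ** 2 for s in process_sigmas.values()))

# Require the +/- 3 sigma spread of the combined processes to fit the tolerance.
capable = 3.0 * net_sigma <= tolerance
print(f"net sigma = {net_sigma:.2f} mm, 3*sigma = {3 * net_sigma:.2f} mm, "
      f"fits +/-{tolerance} mm tolerance: {capable}")
```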

URL: https://www.sciencedirect.com/science/article/pii/B9780128155035000073

Process Capability Requirements

Günter Jagschies, Karol M. Łącki, in Biopharmaceutical Processing, 2018

Abstract

“Process capability” is the process’s ability to deliver product of the required quality in the quantity necessary for each step in the life cycle of the product. In order to be of any practical value, a process intended to make biopharmaceuticals needs to meet the standards set by local and regional regulatory authorities around the world. It needs to reproducibly and economically manufacture the specified quality of the drug or vaccine it is designed for. This aspect involves the removal of any risks to the patients that may originate with the product. Beyond the processes themselves, the ability to cope with the production of an ever-increasing number of drug candidates in the same facility is an additional focus. Appropriate operational flexibility in coping with different process requirements is an element of company production strategy that helps to save time and effort. A basic requirement of essentially every process nowadays can be described as "reduced development time and effort." Process developers must aim for process designs that cover most (preferably all) of the molecules of the same category entering development. Such designs form what is commonly referred to as platform processes.

URL: https://www.sciencedirect.com/science/article/pii/B9780081006238000049

Software System Safety

Nancy G. Leveson, Professor, Kathryn Anne Weiss, Ph.D., in Safety Design for Space Systems, 2009

15.3 Current Practice

How is software safety handled today? The practices differ in Europe and the U.S. and between those who treat safety as a component reliability problem and those who treat it as an emergent system property.

First, it is important to note the dichotomy in the definition of safety throughout the aerospace industry, even within NASA itself. NASA centers and aerospace projects that focus on the human exploration of space tend to adopt the definition of safety used throughout the System Safety community, i.e., safety refers to a loss event, whether that loss refers to humans or equipment. The distinction used in this definition of safety to separate the loss of human life and the loss of equipment is the classification of the hazard that leads to the loss. In other words, a hazard that can lead to a loss of human life would have a much greater assigned severity than that of a hazard with lesser consequences. However, at the JPL and other organizations that focus on the robotic exploration of space, safety only refers to human safety. In these organizations, the safety team often focuses on assembly, test, and launch operations, where there are individuals who can be adversely impacted by the system entering a hazardous state. All other hazards fall under the category of mission critical, where the result of a hazardous state can lead to loss of equipment or loss of mission.

This section first describes the safety-related approaches and tools currently used in industry, including the SPICE (Software Process Improvement and Capability dEtermination) project, CMMI (Capability Maturity Model Integration), dependability/safety cases, and ARINC 653. Next a discussion of the implementation of System Safety as a discipline at NASA and the NASA Software Safety Standard is presented.

The SPICE project and CMMI are both approaches to standardizing process improvement for software. The SPICE project is an initiative to support the development of an international standard for software process assessment and is supported by the International Committee on Software Engineering Standards through its Working Group on Software Process Assessment (Dorling et al. 1995). The project has three principal goals:

1. To develop a working draft for a standard for software process assessment.

2. To conduct industry trials of the emerging standard.

3. To promote the technology transfer of software process assessment into the software industry worldwide.

CMMI is a process improvement approach from the Software Engineering Institute that identifies the essential elements of effective processes for both systems and software engineering. It is used to guide process improvement across a project, a division, or an entire organization (Software Engineering Institute 2007).

Both SPICE and CMMI focus on the standard evaluation of an organization with respect to its software engineering processes. The objective is to help ensure software quality through rigorous adherence to standard software process. Unfortunately, organizations often focus on the maturity levels themselves as outlined in SPICE and CMMI instead of their true process capability (Humphrey et al. 2007). This misplaced focus can result in high maturity ratings that do not lead to better organizational performance or product quality.

Another approach to safety analysis, used primarily in Europe, is called safety cases, often referred to in the United States as dependability cases by the Software Engineering Institute. In fact, safety cases are considered a subset of the overall dependability or quality case. A safety case is a demonstration of software product safety. The demonstration typically takes the form of a presentation of evidence with argumentation linking the evidence to the claim of software safety. Safety cases have a postdesign focus, which is the opposite of the system safety approach discussed below. The emphasis of a safety case is on proving that the current design is safe; therefore, considerable emphasis is placed on creating a proof of system safety whether the design is safe or not. In the system safety approach, safety is designed into the system from the beginning making the safety case trivial to provide.

Finally, there are several attempts at creating avionics and software standards, as well as reusable avionics interfaces, that enforce some safe design principles, the most notable of which is the ARINC 653 specification for system partitioning and scheduling. This standard is often required in safety- and mission-critical systems, particularly in the commercial aviation industry. ARINC 653 defines an Application Executive (APEX) for space and time partitioning that can be used wherever multiple applications need to share a single processor and memory in order to guarantee that one application cannot bring down another in the event of application failure. Each partition in an ARINC 653 system represents a separate application and makes use of memory space that is dedicated to it. Similarly, the APEX allots a dedicated time slice to each, thus creating time partitioning. Each ARINC 653 partition supports multitasking (ARINC 2003).
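
As a purely conceptual sketch of time partitioning (this is a toy model, not the ARINC 653 APEX API; the partition names and slice lengths are invented), each partition gets a fixed slice of a repeating major frame regardless of what the other partitions do:

```python
import time

# Invented example schedule: (partition name, time slice in milliseconds).
# In a real ARINC 653 system this schedule is fixed in configuration data.
MAJOR_FRAME = [("flight_control", 50), ("telemetry", 30), ("payload", 20)]

def run_partition(name, budget_ms):
    """Stand-in for dispatching one partition's processes; a misbehaving
    partition cannot overrun its window because the window length is fixed."""
    deadline = time.monotonic() + budget_ms / 1000.0
    while time.monotonic() < deadline:
        pass  # the partition's own tasks would execute here

def run_major_frames(n):
    for _ in range(n):
        for name, budget_ms in MAJOR_FRAME:
            run_partition(name, budget_ms)

run_major_frames(2)  # two 100 ms major frames; each partition keeps its slice
```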

15.3.1 System Safety

As argued above, treating safety as a component reliability problem and, for software, attempting simply to get the software correct has limitations with respect to safety. Traditionally in NASA and the U.S. defense industry, safety has been treated as an emergent system property, and special approaches labeled system safety have been used. These same approaches apply to software, although the application to software is not always done well.

Although many of the basic concepts of system safety, such as anticipating hazards and accidents and building in safety, predate the post-World War II period, much of the early development of system safety as a separate discipline began with flight engineers immediately after the war and was developed into a mature discipline in the early ballistic missile programs of the 1950s and 1960s. It was developed in response to the same changes that are being seen more widely today, i.e., increased complexity and use of software and computers that led to accidents and potentially devastating near misses in very expensive and inherently dangerous systems.

The American space program was the second major application area to apply system safety approaches in a disciplined manner. After the AS 204 (Apollo 1) fire in 1967, NASA hired Jerome Lederer, the head of the Flight Safety Foundation, to head manned spaceflight safety, and later, all NASA safety activities. Through his leadership, an extensive program of system safety was established for space projects, much of it patterned after the Air Force and Department of Defense programs.

In contrast to the reliability engineering focus on preventing failures, system safety is concerned primarily with the management of hazards: their identification, evaluation, elimination, and control through analysis, design, and management procedures. As a subdiscipline of system engineering, it deals with systems as a whole rather than with subsystems or components. In system safety, safety is treated as an emergent property that arises at the system level when components are operating together. System safety not only considers the possibility of accidents related to component failures, but also the potential damage that could result from successful operation of the individual components.

Another unique feature of system safety is its inclusion of non-technical aspects of systems. Jerome Lederer wrote in 1968:

System safety covers the entire spectrum of risk management. It goes beyond the hardware and associated procedures to system safety engineering. It involves: attitudes and motivation of designers and production people, employee/management rapport, the relation of industrial associations among themselves and with government, human factors in supervision and quality control, documentation on the interfaces of industrial and public safety with design and operations, the interest and attitudes of top management, the effects of the legal system on accident investigations and exchange of information, the certification of critical workers, political considerations, resources, public sentiment and many other non-technical but vital influences on the attainment of an acceptable level of risk control. These non-technical aspects of system safety cannot be ignored. (Lederer 1986)

While, like most everyone else, NASA first attempted to treat system safety for software and software intensive systems in the same way as it treated it in hardware systems, the agency has recognized the differences between software and hardware and has recently adopted a revised NASA Software Safety Standard that takes standard system safety practices and adapts them for software (NASA 2004).

The activities in the software safety standard start in the concept phase and prior to the start, or in the early stages, of the acquisition or planning for the software. These activities implement a systematic approach to software safety as an integral part of the project's overall system safety program, software development, and software assurance processes. The goal is to design safety into the software and maintain safety throughout the software and system life cycle.

The first step in the NASA standard is to perform system and software safety analyses to determine if the software is safety critical, and how it can impact the safety of the system. Software is classified as safety critical if it:

Can cause or contribute to a system hazard.

Provides control or mitigation for hazards.

Controls safety-critical functions.

Processes safety-critical commands or data.

Detects and reports, or takes corrective action, if the system reaches a specific hazardous state.

Mitigates damage if a hazard occurs.

Resides on the same processor as safety-critical software (in which case methods of separating or partitioning the critical from the noncritical components may be required).

Processes data or analyzes trends that lead directly to safety-related decisions.

Provides full or partial verification or validation of safety-critical systems, including hardware or software subsystems.

System safety and software safety analyses are used to identify potentially hazardous software behavior, and specific software requirements and design constraints are imposed on the software to eliminate or mitigate any such hazardous behaviors, e.g., the software must not turn off the descent engines before the lander reaches the surface, or the software must ignore any inputs from the landing leg sensors while the lander is more than 12 m above the planet's surface. The standard also specifies requirements for tracing the flow of safety requirements and constraints from the system hazards down to the software requirements, and from there down to the specific software design features that implement or resolve them and to tests and other verification activities.

This standard differs from some other approaches to software system safety by focusing (in specification, design, verification, and assurance) only on the potentially hazardous behavior of the software. The most popular alternative approaches identify safety-critical software and then put more effort into the standard software engineering activities used to implement and verify all the software requirements for those software components deemed safety critical, and not just the safety-critical behaviors. These alternatives, thus, take a reliability engineering approach to the problem.

System safety approaches to software safety (and the new NASA standard) also involve integrating the software system safety analysis with system safety analyses and activities to ensure that the software does not compromise any safety controls or processes and that it maintains the system in a safe state during all modes of operation. The activities continue throughout the operational use of the software to detect any unsafe behavior before it causes an accident (and to perform a root cause analysis if a loss or near miss does occur) and to ensure that routine upgrades and reconfigurations do not compromise safety. Change analysis is used to evaluate whether a proposed software or system change could invoke a hazardous state, affect a hazard control, increase the likelihood of a hazardous state, adversely affect safety-critical software, or change the safety criticality of an existing software element.

Procedures and methods for implementing the new standard are still being developed and validated, but now are starting to be used on NASA projects.

URL: https://www.sciencedirect.com/science/article/pii/B9780750685801000154

EFFECTS OF STATIC AND DYNAMIC MEAN VARIATION ON THE PROCESS CAPABILITY

H.H. Mohammed, in Current Advances in Mechanical Design and Production VII, 2000

ABSTRACT

Measures of process capability are used widely to assess a process's ability to meet the quality specifications. The capability is quantified by dimensionless indices that provide common and easy principles for evaluating the performance of a process. In calculating a process's capability indices, the process must be statistically in control, i.e., its mean and standard deviation are consistent over time. However, some processes are characterized by unavoidable and repeated shifts and drifts in their mean values. These are called processes with mean variation. Such mean variation can be static, dynamic, or simultaneously static and dynamic. The reasons for this variation may be tool wear, differences in raw material, or a change of suppliers. In this research, processes with dynamic mean variation (DMV) and those with simultaneous static and dynamic mean variation are analyzed. Their capability indices are developed and the corresponding production yields are calculated. Other useful practical utilities are investigated. The analyses and the developed capability indices are verified through simulation. The results of this research are applied to an assembly process at the Helwan Company for Metallic Appliances.

URL: https://www.sciencedirect.com/science/article/pii/B9780080437118500635

General design topics

Hugh Jack, in Engineering Design, Planning, and Management (Second Edition), 2022

Problems

10.68 What is process capability and how is it used?

10.69 What is acceptance sampling and when should it be used?

10.70 Describe the difference between Cp and the control limits.

10.71 Assume a process has a mean of 102 and a standard deviation of 1. What are the Cp and Cpk values for a tolerance of 100 ± 5?

10.72 Assume a process has a mean of 102 and a standard deviation of 1. What tolerances would give a Cp = 1.5 and Cpk = 1.5?

10.73 What is the difference between Cp and Cpk?

10.74 When selecting tolerances, what values should be used for fewer than 3.4 failures per million?

10.75 How can SPC data be used to calculate Cp and Cpk for a new design?

URL: https://www.sciencedirect.com/science/article/pii/B9780128210550000104

Selection of quality assurance methods

Peter Scallan, in Process Planning, 2003

Cp index

The use of Cp for process capability indicates that the process is considered stable and that the process mean is centred on the nominal value of the tolerance band. It is the ratio of the tolerance spread to the process spread. The process spread is taken as the equivalent of six standard deviations or 6σ. This can be stated as (Oakland and Followell, 1990):

Cp = (UTL − LTL) / (6σ)

where UTL − LTL equals the width of the tolerance band, 2T.

How can you tell if a process is capable?

All the data points fall well within the specification limits and follow a normal distribution. A process in which almost all the measurements fall inside the specification limits is deemed a capable process.

What does it mean if a process is not capable?

This means the process cannot consistently produce output within the specification limits, even if it is statistically stable.

What is process capability with example?

Process Capability Example 1: The ice cream served in an ice cream parlor must be kept between −15 degrees Celsius and −35 degrees Celsius. The refrigeration process that maintains this temperature has a standard deviation (SD) of 2 degrees Celsius, and its mean value is −25 degrees Celsius.
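
Plugging the stated values into the standard index formulas (a minimal check, using the same definitions given elsewhere on this page):

```python
# Ice cream example: spec limits -15 to -35 deg C, mean -25 deg C, sigma 2 deg C.
usl, lsl = -15.0, -35.0
mean, sigma = -25.0, 2.0

cp  = (usl - lsl) / (6 * sigma)
cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # both about 1.67: a capable, centered process
```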

What does it mean to say that a process is capable but not stable and what does it mean to say that a process is stable but not capable?

A process that operates within its control limits is a stable process, while one that operates within its specification limits is capable. For a process to be deemed capable, it needs to be consistently capable, and for it to be consistent, it needs to be stable.