As a technical support engineer for a solder manufacturer, I like to joke that nobody ever calls with easy questions. One common question from current and prospective customers is, "How do I test and evaluate new solder pastes?"
Although that seems like a straightforward question with an easy answer, it is anything but. The method of evaluation is of vital importance to an assembler looking to adopt a new solder paste because the resulting decision will impact factory operations for years to come. However, the best method of evaluation differs from one user to the next based on several factors. More simply put, the best answer to the question of how to evaluate a solder paste is also the last answer most people who ask want to hear: “It depends.” Perhaps a better way to frame the answer is, “What is important to you?”
So, if the best answer is different for everyone, how does one start to develop a test plan? A good start is to understand the level of resources that will be allocated to the evaluation, which is typically a function of organization size. If the evaluation is being performed by a small manufacturer with one SMT line and will be undertaken by a single engineer, then the focus should be limited to only the solder paste performance factors most important to that manufacturer. On the other hand, if the evaluation is being performed by a large multi-site (or even multinational) manufacturer and the testing is carried out by a team under the guidance of a subject matter expert, then the test plan should encompass all possible performance factors and be very wide in scope. Larger evaluations may also be able to utilize custom or purpose-designed test vehicles and may replicate tests at more than one location, whereas smaller organizations may be limited to testing on current or prototype product designs during breaks in the production schedule. The depth and breadth of any test plan should engage the available resources efficiently without overtaxing them, lest results take an inordinate amount of time to obtain.
With the general scope of the test plan understood, the next step is to determine the specific focus of the testing to be performed. There are two key areas that any evaluation should investigate: quality and reliability.
Quality is defined by the American Society for Quality (ASQ) as “the characteristics of a product… that bear on its ability to satisfy stated or implied needs.” In this case, the focus is on the key process output variables (KPOVs) from the solder print and reflow processes that are used to ensure the quality of the resulting printed circuit board assembly (PCBA). These outputs vary by product function and design, so the key factors differ from one assembler to the next. The best place to find the KPOVs for any process is the process control plan, if one exists. These are the factors that are controlled or inspected in the current process and represent the effort to build the product to specification.
The most obvious KPOV comes from solder paste inspection (SPI) systems: print volume, which is typically normalized as transfer efficiency (actual print volume divided by the theoretical aperture volume, expressed as a percentage). Other SPI-based outputs include area coverage and height, which are best used as supplements to volumetric measurements. It is important to analyze SPI data as a function of stencil area ratio (A/R) for each aperture, because the distribution of transfer efficiencies will itself be a function of area ratio. Pooling all the data together produces an overall distribution that is really a combination of many different sub-distributions. For example, if testing a paste with a Type 4 powder size distribution (per IPC/ANSI-J-STD-005), the transfer efficiency for apertures with an A/R above 0.8 should be very close to 100%, with low variation. The same paste, tested on an aperture with an A/R of 0.50, will show a very different distribution, with lower expected transfer efficiency and higher variation. Combining data across aperture area ratios can mask the true level of performance, especially when too many data points come from locations where it is easy for all pastes to perform well.
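As a rough sketch of this kind of analysis, the per-aperture arithmetic can be expressed in a few lines of Python. The aperture dimensions, stencil thickness, and SPI volumes below are hypothetical, chosen only to illustrate grouping transfer efficiency by area ratio rather than pooling all prints into one distribution.

```python
from statistics import mean

def area_ratio(length, width, thickness):
    """Area ratio of a rectangular aperture: opening area / aperture wall area."""
    return (length * width) / (2 * (length + width) * thickness)

def transfer_efficiency(measured_vol, length, width, thickness):
    """Printed volume as a percentage of the theoretical aperture volume."""
    return 100.0 * measured_vol / (length * width * thickness)

# Hypothetical SPI readings: (measured volume in mil^3, L, W, stencil t in mils)
prints = [
    (3600, 20, 20, 10),   # A/R = 0.50 aperture
    (3100, 20, 20, 10),
    (10100, 32, 32, 10),  # A/R = 0.80 aperture
    (10300, 32, 32, 10),
]

# Group transfer efficiencies by area ratio so the wide, low-TE distribution
# of small apertures is not masked by easy, near-100% large apertures.
by_ar = {}
for vol, l, w, t in prints:
    ar = round(area_ratio(l, w, t), 2)
    by_ar.setdefault(ar, []).append(transfer_efficiency(vol, l, w, t))

for ar, tes in sorted(by_ar.items()):
    print(f"A/R {ar:.2f}: mean TE {mean(tes):.1f}%, n={len(tes)}")
```

With real SPI exports, the same grouping is usually done per pad or per component family, and variation (standard deviation or Cpk) per A/R bin is at least as informative as the mean.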
Other factors related to print performance that can also be included are slump performance, performance after pauses in production, and stencil life. That list is certainly not comprehensive, and any factor that is important to an assembly process is a candidate for testing during an evaluation. Smaller evaluations may rely on the testing performed by the manufacturer to standardized test methods such as hot and cold slump, whereas evaluations with higher resource allocation may choose to replicate these tests themselves. Larger evaluations can also develop unique tests or test vehicles to reflect specific issues encountered or the unique needs of their application; the limits to evaluation test development are imagination and resource availability.
Reflow is another area where quality measures can be applied as an evaluation test. Voiding is probably the most obvious and relevant quality test that can be performed after reflow. There are also means to test the wetting and spread of a paste, resistance to graping, solder ball performance, and head-in-pillow and non-wet open defects. The focus during test development should be to identify the key reflow outputs pertinent to the process, just as was done for print quality, and with an eye toward resource limitations.
Reliability is the second major factor that any evaluation should include. Reliability is defined by ASQ as “the probability of a product’s performing its intended function under stated conditions without failure for a given period of time.” In the case of solder paste, there are two areas of focus: mechanical reliability, which is driven by the solder alloy, and electrochemical reliability, which is driven by the flux chemistry.
Reliability testing requires an understanding of the service environment of the end product, which is why the proper tests depend very heavily on the design of the assembly and how the customer uses it (“performing its intended function”), where the customer plans to use it (“under stated conditions”), and the length of the warranty or the customer’s expected product life (“without failure for a given period of time”). These are defined by the design function during product development, so they are generally easy to determine for any organization performing both design and manufacturing. Contract manufacturers, on the other hand, rarely have visibility into these product factors and must either choose representative tests or consult with their customers when developing any reliability test plan.
Unfortunately, reliability tests are neither inexpensive nor fast, so this is an area that can be tempting to cut out in smaller evaluations—but at the peril of those who choose to do so. Reliability factors are ones that cannot be easily observed at the time of manufacture, but manifest themselves as poor customer satisfaction over time, long after the decision to adopt a new material has been made. The Microsoft® Xbox 360 “Red Ring of Death” is an excellent example of a reliability problem that wasn’t detected during initial testing but became widespread in the service environment, demonstrating how expensive it can be if key reliability tests are not performed when appropriate.
To read the full version of the article, which appeared in the July 2018 issue of SMT007 Magazine, click here.