Long before it was built, it looked to me (MJMcC) as if the proposed helium leak tester would be close to the limits of what it could detect in the available cycle time, if the devices being tested were made well enough (that is, if they leaked very little). The cycle time was fixed by the production cycle of the automated production line, so there was little scope for change. Despite the warning, the tester was built.
The method was to pressurize the small devices in a helium atmosphere on the first index table, then transfer them to a closed chamber on the second index table in which the helium could diffuse out again, assuming some had gone in and would come out via the same hole. At the vacuum connection the gas in the test chamber was checked for helium content using a mass spectrometer. (See diagram.)
Of course, that meant that the smaller the hole, the less helium there was inside after the pressure cycle, ready to leak out in the test chamber, and the more slowly what was inside came back out. The detectable gas level was thus doubly dependent on the hole size, and acceptable holes were very tiny holes indeed.
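That double dependence can be sketched with a single-cavity fill-and-release model. All the numbers here (cavity volume, overpressure, soak and test times) are illustrative assumptions, not values from the real tester; the point is only what happens to the reading when the hole shrinks:

```python
import math

def helium_signal(C, V=0.01, P=5.0, t_press=10.0, t_test=10.0):
    """Crude single-cavity model with invented numbers.
    C: hole conductance (cc/s per atm), V: cavity volume (cc),
    P: pressurization overpressure (atm)."""
    tau = V / C                                   # fill/empty time constant (s)
    # internal He partial pressure after the pressurization soak
    p = P * (1.0 - math.exp(-t_press / tau))
    # helium released into the evacuated test chamber during the test window
    released = V * p * (1.0 - math.exp(-t_test / tau))
    return released

# A smaller hole stores less helium AND releases it more slowly:
small = helium_signal(C=1e-5)
smaller = helium_signal(C=5e-6)
```

For small holes both exponentials are nearly linear in C, so the signal scales roughly as C squared: halving the hole size gives about a quarter of the reading, which is why acceptable holes sat so close to the detection limit.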
First trials of the machine, itself a fairly complex arrangement of index tables and moving arms with grabbers, showed that at slow speed it could work, but it was erratic. The levels of helium they were obliged to look for were not far above the naturally occurring level in the atmosphere (which is a good rough universal calibration standard).
What we did
This was the point at which I (MJMcC) got involved again. Careful observation of the sequence of events during the cycling of the machine showed that the released helium in the atmosphere around the test machine was actually getting into the test chambers as they closed, upsetting the measurements. The first job was to fix the air flow and the extractor fans. That gave a big immediate improvement.
We had a calibrated helium source, so we could calibrate the mass spectrometer, which was attached to the vacuum pumping system that drew gas from the test chambers when they came into position.
Now the question was to verify that the test acceptance level given by the machine makers really corresponded to the acceptable leak rate (e.g. 10E-6 cc He/s·atm).
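For a sense of scale, that sort of leak rate can be converted into atoms per second. I'm taking the quoted 10E-6 to mean 1×10⁻⁶ cc·atm/s and referring it to standard conditions at 0 °C; both are my assumptions, not the machine makers' definitions:

```python
# How many helium atoms per second is a leak of 1e-6 cc·atm/s?
K_BOLTZMANN = 1.380649e-23      # J/K
T_STP = 273.15                  # K (assumed reference temperature)
P_ATM = 101325.0                # Pa

leak_cc_atm_per_s = 1e-6        # assumed reading of the quoted spec
# convert cc·atm/s to m³·Pa/s (1 cc = 1e-6 m³), then apply n = PV / kT
leak_m3_pa_per_s = leak_cc_atm_per_s * 1e-6 * P_ATM
atoms_per_second = leak_m3_pa_per_s / (K_BOLTZMANN * T_STP)
```

That works out to a few times 10¹³ atoms per second: a huge number of atoms, yet a minute quantity of gas, which is why the test sat so close to the atmospheric helium background.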
It turned out that the design of the tester had been predicated on some assumptions about the interior of the device under test that were not certain. In particular, there was doubt about the effective size of the interior cavity into which helium could be forced and from which it would be released on de-pressurization. Indeed, no one could say with certainty whether we had to deal with a small outer cavity coupled through a second leak path to a larger one, or simply with the outer cavity and the outer leak alone. As it turned out, that wasn't the only problem.
My model was a dynamic simulation of the flow of helium into and out of the device: first while it was in the pressurized container on the first index table, then some release to atmosphere during the transfer process, then release into the confines of the test chamber on the second index table. The leakage there was the input to the mass spectrometer.
The device was seen as a pair of connected interior cavities. With this model, we could see the effect of cycle times, pressures, temperatures, relative sizes of interior spaces and interior leakage paths.
We found the range of leakage holes that could be detected, from those just big enough to show up, to those big enough not to be noticed. If that sounds strange, it's because big holes fill and then empty the interior spaces very quickly: by the time the device gets to the test, all the helium has escaped during transfer! So there is a range of detectable hole sizes (for given interior cavity volumes, of course). We could therefore define a range of acceptable readings, related to the assumptions about the devices, some of which we could verify.
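A model of this kind can be sketched as a small forward-Euler simulation of two coupled cavities over the three stages of the cycle. Every volume, conductance, pressure and time below is an invented round number, not a figure from the real machine; the sweep simply reproduces the qualitative result that both very small and very large holes give weak readings:

```python
import math

def cycle_signal(C, V1=0.002, V2=0.02, P=5.0,
                 t_press=10.0, t_transfer=5.0, t_test=10.0, dt=0.005):
    """Helium partial pressures in two coupled cavities over one cycle.
    All volumes (cc), conductances (cc/s per atm) and times (s) are
    illustrative guesses. Returns He delivered to the test chamber."""
    C_int = C                  # assume the internal path matches the outer leak
    p1 = p2 = 0.0              # He overpressure in outer / inner cavity (atm)
    signal = 0.0               # He reaching the test chamber (cc·atm)
    stages = [(t_press, P, False),        # pressurize in the He atmosphere
              (t_transfer, 0.0, False),   # transfer: vents to ordinary air
              (t_test, 0.0, True)]        # sealed, pumped test chamber
    for t_stage, p_out, counting in stages:
        for _ in range(round(t_stage / dt)):
            f_ext = C * (p_out - p1)      # flow through the hole under test
            f_int = C_int * (p2 - p1)     # flow between inner and outer cavity
            p1 += dt * (f_ext + f_int) / V1
            p2 -= dt * f_int / V2
            if counting:
                signal += dt * C * p1     # He escaping into the chamber
    return signal

# Sweep hole sizes across five decades of conductance.
holes = [10.0 ** e for e in range(-6, 0)]      # 1e-6 .. 1e-1 cc/(s·atm)
signals = [cycle_signal(C) for C in holes]
```

The sweep shows the reading rising with hole size and then collapsing again once the hole is large enough to empty the device during the transfer stage, which is exactly the window of detectable holes described above.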
Well, it seemed we had it under control, but another complication arose. A variant device design had some glass-to-metal seals that were exposed to the helium atmosphere in the pressurization stage. Micro-cracks in the seals took in helium and later released it. This wasn't relevant to the required test, but the stray helium in the test chambers messed up the procedure. The modelling and testing showed that there was a dynamic difference between the micro-crack behaviour and the behaviour of the leak we cared about, so, guided by the modelling analysis, the transfer time was extended to let the micro-cracks release their helium first. Of course, the machine had to be rebuilt!
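The fix exploits the difference in time constants. Treating both the micro-cracks and the device interior as simple exponentially emptying reservoirs (the time constants and the unit initial charges below are invented for illustration), extending the transfer time suppresses the crack signal by many orders of magnitude while only modestly reducing the device signal:

```python
import math

def remaining_at_test(Q0, tau, t_transfer):
    """He charge still inside when the test chamber closes, for a
    reservoir emptying exponentially with time constant tau (seconds)."""
    return Q0 * math.exp(-t_transfer / tau)

TAU_CRACK, TAU_DEVICE = 1.0, 40.0   # assumed: cracks outgas much faster
SHORT, LONG = 5.0, 20.0             # original vs extended transfer time (s)

crack_short = remaining_at_test(1.0, TAU_CRACK, SHORT)
crack_long = remaining_at_test(1.0, TAU_CRACK, LONG)
device_short = remaining_at_test(1.0, TAU_DEVICE, SHORT)
device_long = remaining_at_test(1.0, TAU_DEVICE, LONG)
```

With these assumed constants the extended transfer leaves the micro-crack contribution utterly negligible while the device still retains most of its helium for the test, which is the separation the rebuilt machine relied on.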
On later reflection, I (MJMcC) thought that a radioactive tracer gas, forced in under pressure and then detected through the device walls, would have been better: such a test would not depend on cavity sizes to any significant extent, and only one-way flow was needed. However, that had apparently been ruled out on safety (politically correct?) grounds.