Tolerances of Optical Thin-Film Coatings

The question of how accurately we must control the thickness of layers in the deposition of a given multilayer is surprisingly difficult to answer and has attracted a great deal of attention over the years.

Nowadays, we immediately think of the computer when we wish to carry out numerical studies, but this is a relatively recent innovation. Earlier studies lacked this luxury, and so were greatly influenced by the need to limit the volume of calculation. Nevertheless, the results were, and still are, of great value.

One of the earliest approaches to the assessment of errors permissible in multilayers was devised by Heavens, who used an approximate method based on the alternative matrix formulation in Equation 3.16. His method, useful mainly when calculations must be performed manually, consisted of a technique for recalculating the performance of a multilayer with a small error in thickness in one of the layers. He showed that the final reflectance of a quarter-wave stack is scarcely affected by a 5% error in any one of the layers.

Lissberger developed a method for calculating the performance of a multilayer in terms of the reflectances at its interfaces. In multilayers made up of quarter-waves, the expressions took a fairly simple form that permitted the effect of small errors, in any or all of the layers, on the phase change of the light reflected by the multilayer to be estimated.

Lissberger’s results, applied to the all-dielectric single-cavity (Fabry–Perot) filter, show that the most critical layer is the cavity, or spacer, layer. The layers on either side of the cavity are the next most sensitive, and the remaining layers become progressively less sensitive the further they are from the cavity.

We mentioned in previous tutorials the paper by Giacomo et al., who examined the effects on the performance of narrowband filters of local variations in thickness, or “roughness,” of the films. This involved the study of the influence of thickness variations in any layer on the peak frequency of the complete filter.

The treatment was similar in some respects to that of Lissberger. For the conventional single-cavity filter, layers at the center had the greatest effect. If all layers were assumed equally rough, the design least affected by roughness would have all the layers of equal sensitivity, and attempts were made to find such a design. A phase-dispersion filter gave rather better results than the simple conventional single-cavity filter, but still fell short of ideal.

Baumeister introduced the concept of sensitivity of filter performance to changes in the thickness of any particular layer. The method involved plotting sensitivity curves over the whole range of useful performance of a filter, curves that indicated the magnitude of performance changes due to errors in any one layer.

His conclusions concerning a quarter-wave stack were that the central layer is the most sensitive and the outermost layers the least. An interesting feature of these sensitivity curves for the quarter-wave stack is that the sensitivity is greatest near the edge wavelength. This is confirmed in practice with edge filters, where errors usually produce more pronounced dips near the edge of the transmission zone than appear in the theoretical design.

Smiley and Stuart adopted a different approach using an analog computer. There were some difficulties involved in devising the analog computer but, once it was constructed, it possessed the advantage at the time that any of the parameters of the thin-film assembly could be varied easily. One particular filter that they examined was:

\[
\text{Air} | 4H \, L \, 4H | \text{Air}
\]

with \(n_H = 5.00\) and \(n_L = 1.54\). This is a multiple-cavity filter of simple design. Errors in one of the \(4H\) layers and in the \(L\) layer were investigated separately. They found that errors greater than 1% in one \(4H\) layer had a serious effect; errors of 5%, for example, caused a drop in peak transmittance to 70% and errors of 10% a drop to 50%, together with considerable degradation in the shape of the pass band.

Errors of up to 10% in the \(L\) layer had virtually no effect on either the shape of the pass band or the peak transmittance. This is entirely in line with what we would nowadays expect of a multiple-cavity filter.

Heather Liddell, as part of a study reported by Smith and Seeley, investigated some effects of errors in the monitoring of infrared single-cavity filters of the designs:

\[
\text{Air}|HLHL \, HH \, LHLHL|\text{Substrate}
\]

and

\[
\text{Air}|HL \, HH \, LHL|\text{Substrate}.
\]

A computer program was used to calculate the reflectance of a multilayer at any stage during deposition. Monitoring was assumed to be at or near four times the peak frequency of the completed filter (i.e., at a quarter of the desired peak wavelength).

It was shown that, if all layers were monitored on one single substrate, then, provided the form of the reflectance curve during deposition was predicted and it was possible to terminate layers at reflectances other than turning values, there could be an advantage in choosing a monitoring frequency slightly removed from four times peak frequency.

If no corrections were made for previous errors, then a distinct tendency for errors to accumulate in even-order monitoring (i.e., monitoring frequency an even integer times peak frequency) was noted.

The major problem in tolerancing is that real errors cannot be treated as small; that is to say, first-order approximations are unrealistic. The error in one layer interacts nonlinearly with the errors in other layers, and it is not realistic to treat them as though their effects can be calculated in isolation and then linearly combined.

In recent years, the most satisfactory approach for dealing with the effects of errors and the magnitude of permissible tolerances has been the use of Monte Carlo techniques. In this method, the performance of the filter is calculated, first with no errors, and then a number of times with errors introduced in all the layers.

In the original form of the technique, introduced by Ritchie, the errors are thickness errors and are completely random and uncorrelated. They belong to the same infinite population, taken as normal with prescribed mean and standard deviation. The performance curves of the filter without errors and of the various runs with errors are calculated.

Although statistical analyses of the results can be made, it is almost always sufficient to simply plot the various performance curves together, allowing a visual assessment of the effects of errors of the appropriate magnitude.

The method essentially provides a set of traces that reproduce, as far as possible, what would actually be achieved in a succession of real production batches. The characteristics of the infinite normal population can be varied and the procedure repeated. It is sufficient to calculate some 8 or perhaps 10 curves for a set of error parameters. The level of error at which a satisfactory process yield would be achieved can then readily be determined.
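In outline, such a run is straightforward to script. The sketch below is a minimal illustration, not taken from the original text: it assumes a nine-layer quarter-wave stack at normal incidence, computes reflectance with the standard characteristic-matrix method, and repeats the calculation for eight simulated batches with independent thickness errors of 2% standard deviation.

```python
import numpy as np

def reflectance(wavelengths, indices, thicknesses, n_sub=1.52, n_inc=1.0):
    """Normal-incidence reflectance of a layer stack via characteristic matrices."""
    R = np.empty(len(wavelengths))
    for i, wl in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(indices, thicknesses):
            delta = 2.0 * np.pi * n * d / wl      # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])         # emergent-medium boundary condition
        rho = (n_inc * B - C) / (n_inc * B + C)   # amplitude reflection coefficient
        R[i] = np.abs(rho) ** 2
    return R

# Illustrative quarter-wave stack: 9 alternating layers, reference wavelength 1000 nm
wl0 = 1000.0
indices = np.array([2.35 if k % 2 == 0 else 1.45 for k in range(9)])
ideal_d = wl0 / (4.0 * indices)                   # error-free quarter-wave thicknesses
wls = np.linspace(800.0, 1300.0, 501)

rng = np.random.default_rng(1)
curves = [reflectance(wls, indices, ideal_d)]     # the error-free curve first
for _ in range(8):                                # some 8-10 error runs, as in the text
    errors = rng.normal(0.0, 0.02, size=ideal_d.shape)   # 2% standard deviation
    curves.append(reflectance(wls, indices, ideal_d * (1.0 + errors)))
# Plotting the curves together gives the visual yield assessment described above.
```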

In the earliest version of the technique, the various errors were drawn manually from random number tables and converted into members of a normal population using a table of the area under the error curve. (The procedure is described in textbooks of statistics—see Yule and Kendall, for example.)

Later versions of the technique simply generate the random errors by computer. Although the errors are usually drawn from a normal population, the precise type of population has little effect on the general character of the results.

Normal distributions are convenient to program, and there is no strong reason to avoid them; indeed, errors made up of a number of uncorrelated effects are well represented by a normal distribution. Most error analyses therefore make use of them.

The level of permissible errors depends, to some extent, on the index contrast in the filter. Figure 13.11 shows some examples of plots where the errors are simple independent thickness errors of zero mean. From these and similar results, we find that the thickness errors tolerated in simple edge filters and antireflection coatings are normally around 2% standard deviation.

This correlates quite well with the accuracy usually achievable by normal optical or quartz-crystal monitoring. Narrowband filters require rather better accuracy when random errors in thickness are involved.

The two-cavity filter of Figure 13.11 shows unacceptable passband distortion with random thickness errors as small as 0.5% standard deviation. This filter has a roughly 2% half-width. For narrower filters or filters with greater numbers of cavities, the tolerances must be tighter still.

In a single-cavity filter, the main effect of random errors is a peak wavelength shift, with the shape of the passband being scarcely affected even by errors as large as 10%. The standard deviation of the scatter in peak wavelength is slightly less than the standard deviation of the layer thickness errors, indicating that some averaging process is operating, although the orders of magnitude are the same.

A system of monitoring in which the thickness errors in different layers are uncorrelated requires that each layer should be controlled independently of the others. In this type of monitoring, we cannot expect high precision in the centering of narrowband single-cavity filters and we foresee great difficulties in producing narrowband multiple-cavity filters at all.

This monitoring arrangement is what we have called indirect monitoring. Systems where each layer is controlled on a separate monitoring chip are of this type. There are difficulties with monitoring low-index layers on a fresh glass substrate because of the small changes in transmittance or reflectance. The monitoring chips are usually changed after a low-index layer and before a high-index layer, with two or four layers per chip being normal.

Sometimes these layers are monitored to turning values, but more frequently a method sometimes called level monitoring is used. Here the layer is terminated at a point removed from the turning value, where the reflectance or transmittance signal is still changing, which gives inherently greater accuracy.

This approach involves what is essentially an absolute measurement of reflectance or transmittance. Thus, the termination point is frequently chosen to be after a turning value rather than before, so that the extremum can be used as a calibration. This usually implies a shorter wavelength for monitoring or the introduction of a geometrical difference between batch and monitor, such as placing the monitor nearer the source or using masks in front of the batch.

Narrowband filters are not normally monitored using the aforementioned methods. Instead, all the layers are monitored on the same substrate, usually the actual filter being produced, a system known as direct monitoring. At the peak wavelength of the filter, the layers should all be quarter-waves or half-waves, and so we can expect a signal that reaches an extremum at each termination point.

The achievable accuracy for any individual layer in this method cannot, therefore, be particularly high, and, at first glance, it may appear that the achievable accuracy would fall short of the required precision. Since each layer is deposited over all previous layers on the monitor substrate, an interaction occurs between the errors in any layer and those in the preceding layers. These interactions are not accounted for in the tolerancing calculations described earlier.

To address this, we require a technique that models the actual process as accurately as possible. This is a straightforward computing operation in which each layer is considered to be deposited on a surface of optical admittance corresponding to that of the multilayer that precedes it, rather than on a completely fresh substrate.
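The heart of such a simulation can be sketched as follows, assuming error-free detection of the turning values (function names are illustrative; a fuller simulator would add signal noise and the detection logic discussed later). Each layer grows on the input admittance left by the layers beneath it, and deposition stops once the simulated monitoring signal passes its next extremum:

```python
import numpy as np

def grow_to_turning_value(Y_start, n, wl_mon, n_inc=1.0, step_nm=0.05):
    """Grow a layer of index n on an existing stack of input admittance Y_start,
    stopping when the monitored reflectance passes through its next extremum.
    Returns the terminated physical thickness and the new input admittance."""
    def refl(Y):
        rho = (n_inc - Y) / (n_inc + Y)
        return np.abs(rho) ** 2

    d, prev_R, direction = 0.0, refl(Y_start), 0
    while True:
        d += step_nm
        delta = 2.0 * np.pi * n * d / wl_mon
        # Input admittance of the growing layer on top of Y_start
        Y = (Y_start * np.cos(delta) + 1j * n * np.sin(delta)) / \
            (np.cos(delta) + 1j * (Y_start / n) * np.sin(delta))
        R = refl(Y)
        new_dir = np.sign(R - prev_R)
        if direction and new_dir and new_dir != direction:
            return d, Y            # signal has reversed: turning value just passed
        direction = new_dir or direction
        prev_R = R

Y = complex(1.52)                  # bare substrate
for n in [2.35, 1.45, 2.35, 1.45, 2.35]:
    d, Y = grow_to_turning_value(Y, n, wl_mon=1000.0)
    print(f"n = {n}: terminated at {d:.1f} nm (ideal quarter-wave {1000.0/(4*n):.1f} nm)")
```

Because a turning value always occurs where the layer locus crosses the real axis, each termination brings the admittance back to the real axis even when the starting admittance carries the errors of earlier layers, which is precisely the compensation mechanism described next.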

The results of such a simulation are shown in Figure 13.12, demonstrating the powerful error-compensation mechanism that has been found to exist. This compensation was also confirmed, independently and simultaneously, by Pelletier and his colleagues.

The nature of this compensation mechanism is perhaps best explained using an admittance diagram. Figure 13.13 shows such a diagram drawn for two quarter-wave layers. Since both the isoreflectance contours and the individual layer loci are circles centered on the real axis, the turning values must always occur at the intersections of the loci with the real axis, regardless of what has been deposited earlier. At the termination point of each layer, there is the possibility of restoring the phase to zero or to π.
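In symbols, the argument is brief. With \(y_0\) the admittance of the incident medium and \(Y\) the input admittance of the growing stack, the reflection coefficient seen by the monitor is

\[
\rho = \frac{y_0 - Y}{y_0 + Y}, \qquad R = \rho\rho^{\ast}.
\]

Since each layer locus is a circle centered on the real axis, \(R\) is stationary exactly where the locus crosses that axis, that is, where \(Y\) is real. There \(\rho\) is real, so its phase is restored to 0 or \(\pi\) whatever thickness errors have accumulated beneath.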

In this arrangement, an overshoot or undershoot in one layer affects the termination of the next: if the previous layer is too thick, the following one tends to be terminated thinner in compensation, and vice versa. It is impossible to cancel the effects of an error in a layer completely, but the process effectively transforms thickness errors into errors of reflectance at each stage. Since these reflectance changes are of second order, the compensation mechanism keeps the peak wavelength of the filter at the desired value, that is, the monitoring wavelength.

The remaining error, a residual error in reflectance, manifests itself as changes in peak transmittance and half-width. The peak transmittance drops because of the unbalanced reflectances on either side of the spacer layer, but this reduction is generally less serious than the accompanying increase in bandwidth.

This self-compensating behavior explains why even large thickness errors in individual layers do not necessarily preclude the production of a functional filter. For example, in Figure 13.12, thickness errors of up to 50% occur in some layers, yet the resulting filter characteristics remain useful.

In this monitoring arrangement, the thickness error in any individual layer is thus a combination of the compensation of the error in the previous layer and the error introduced in the layer itself. The magnitude of the thickness errors alone is therefore a misleading guide to whether the filter can be successfully produced: as Figure 13.12 demonstrates, large thickness errors can still yield acceptable filter characteristics because of this compensation mechanism.

The important factor is not the thickness error itself but the error in reflectance or transmittance made in determining the turning values. Theoretical expressions relate these reflectance or transmittance errors to the reduction in filter performance. Such analyses also assess the sensitivity of each layer to errors, identifying the layers where the highest monitoring accuracy is required. This sensitivity can differ from the thickness sensitivity described by Lissberger.

For example, filters with high-index cavity layers are most sensitive to errors in the low-index layers immediately following the cavity, whereas filters with low-index cavities are most sensitive to errors in the cavity layer itself. The analysis also shows that, as the number of layers increases, a point is reached beyond which no further improvement in half-width is achievable.

At this point, the effect of errors increases more rapidly than the theoretical bandwidth is reduced. Further reductions in bandwidth then require second- or higher-order spacers, which agrees with practical experience.

From a monitoring perspective, high-index cavities are preferred over low-index cavities. High-index spacers reduce angular sensitivity and provide greater tuning range. However, in visible-region filters, where high-index layers may have greater absorption losses, low-index cavities are often used despite their monitoring challenges.

Formulae exist for calculating the errors in reflectance, half-width, and peak transmittance as functions of random turning-value errors, but for most purposes computer simulation suffices. It is noteworthy that the compensation is effective in first-order monitoring, whereas second-order monitoring (at the wavelength where all the layers are half-waves) is much less effective at preserving the peak wavelength. Third-order monitoring is likewise less effective than first-order, although the scatter in peak wavelength is still smaller than with second-order monitoring.

Multiple-Cavity Filters

Multiple-cavity filters exhibit similar behavior but include additional complexities. The coupling layers between Fabry–Perot sections of the filter are particularly sensitive to errors in unique ways. Preliminary investigations using admittance diagrams for multiple-cavity filters may not immediately reveal these sensitivities.

However, closer analysis reveals that a specific transition between layers near the central coupling layer often results in false compensation, where thickness errors are of the same, rather than opposite, sense (as shown in Figure 13.14).

In this scenario, an increase in thickness in one layer is followed by a similar increase in the next, and vice versa. The result is an overall change in the relative spacing of the cavities, producing multiple peaks in the filter’s characteristic curve.

These peaks become more pronounced as the relative errors in spacing increase. One peak corresponds to the normal control wavelength and is close to the theoretical transmittance, while the additional peaks appear on either side of it, depending on the nature of the errors.

This false compensation can be mitigated by independent control of the second layer in each pair, using a separate monitor plate, quartz-crystal monitoring, or timing. However, the layer must also be monitored on the regular substrate to preserve compensation for the full filter.


Telecommunications Filters and Precision

Narrowband filters for telecommunications applications, such as dense wavelength division multiplexing (DWDM) filters, present unique challenges. Figure 13.15 illustrates a DWDM filter specification alongside the impact of random thickness errors drawn from a normal population with a standard deviation of 0.003%. The errors perturb the filter’s performance to the limit of the acceptable specification.

Uniformity in these filters is achieved by rotating the substrate above an offset source, which evens out the deposition over each complete turn. Since the filters are small (often 1.4 mm square), they are typically produced on larger disks and diced into individual units after coating. However, the instantaneous deposition rate varies with position across the disk, so a layer terminated partway through a turn carries a random thickness error. This error is proportional to the fraction of a turn outstanding at termination and is larger farther from the center.

To keep such errors within the 0.003% standard deviation quoted above, the disk must complete a sufficiently large number of rotations during each layer. For example, assuming a worst-case random error equivalent to 25% of a full turn, the disk would require 8333 complete rotations per layer. At a deposition rate of 5 minutes per quarter-wave, this translates to a rotational speed of roughly 1700 rpm, which is typical of telecommunications filter production equipment.
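The arithmetic is quickly verified (a sketch using only the numbers quoted above):

```python
tolerance = 0.003 / 100          # permissible thickness error: 0.003% of a layer
worst_fraction_of_turn = 0.25    # assumed worst-case termination error, in turns
minutes_per_layer = 5.0          # deposition time for one quarter-wave layer

turns_per_layer = worst_fraction_of_turn / tolerance  # 8333.3 rotations per layer
rpm = turns_per_layer / minutes_per_layer             # 1666.7, i.e. roughly 1700 rpm
print(round(turns_per_layer), round(rpm))
```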

Advanced Monitoring Systems for Narrowband Filters

Pelletier and colleagues conducted theoretical studies on maximètre monitoring systems for narrowband filters. As expected, these systems exhibit superior accuracy when monitoring individual layers compared to single-wavelength systems. However, in monitoring all layers on a single substrate, the error compensation process operates more intricately than in turning value methods.

For very small errors, the system works effectively. However, larger errors—particularly in critical layers—can accumulate, drastically broadening the bandwidth of a single-cavity filter or even causing a complete collapse in a multiple-cavity filter. Pelletier introduced two key concepts to describe this behavior:

1. Accuracy: Represents the error in an individual layer without considering the multilayer system as a whole.
2. Stability: Represents the cumulative effect of errors as the multilayer deposition progresses.

While the maximètre system’s accuracy surpasses that of the turning-value method, its stability in the control of narrowband filters is poor: errors can accumulate until the system becomes unstable. To take full advantage of the accuracy of such systems, subsidiary measurements are necessary to ensure stability.

Broadband Monitoring Techniques

The challenges of maintaining accuracy and stability in narrowband filter production have led to the development of broadband monitoring techniques. These systems perform simultaneous measurements at multiple wavelengths across a wide spectral range. A merit function is calculated, representing the difference between the actual and desired signals. The deposition process is terminated when the merit function reaches a minimum.
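The termination rule itself is simple. The sketch below assumes measured and target spectra sampled on a common wavelength grid and uses a root-mean-square merit function, one common convention (the text does not prescribe a particular form):

```python
import numpy as np

def merit(measured, target):
    """Root-mean-square difference between the measured and desired spectra."""
    measured, target = np.asarray(measured), np.asarray(target)
    return float(np.sqrt(np.mean((measured - target) ** 2)))

def passed_minimum(history):
    """True once the merit function has begun to rise again, i.e. the minimum
    has just been passed and the layer should be terminated."""
    return len(history) >= 2 and history[-1] > history[-2]

# During deposition: append merit(spectrum_now, target_spectrum) to a list at
# each sampling instant and stop the layer when passed_minimum(...) is True.
```

In practice the sampled merit history would need smoothing before this test, since noise can produce spurious local minima.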

Although perfect deposition would theoretically yield a zero merit function, inevitable errors in layer index and homogeneity perturb the result. Computer simulations of broadband monitoring for components like beam splitters have demonstrated error compensation. However, the underlying theoretical basis for such compensation remains qualitative and may apply only to specific cases.

Extensions of broadband monitoring include systems that reoptimize the design in real time on the basis of measured errors in the earlier layers. However, this requires accurate characterization of the errors, whether in thickness, optical constants, or both. If the errors are mischaracterized, reoptimization can worsen performance.

Computational Manufacturing

Advances in computational power have enabled the modeling of production processes using techniques like Monte Carlo simulations. These simulations allow realistic studies of errors and tolerances. The approach, referred to as computational manufacturing by Tikhonravov et al., has been applied to diverse optical filter designs.

For instance, consider a longwave-pass filter for the visible and near-infrared spectrum, consisting of 31 layers of silica and tantala. A simulation of this filter incorporates signal noise with a standard deviation of 0.4% in transmittance. Figure 13.16 shows the theoretical performance, while Figure 13.17 illustrates the noisy monitoring signal for the first two layers.

To interpret the noisy monitoring signal, a mechanism analogous to mechanical backlash is introduced. The system registers an extremum only after the signal has reversed by a prescribed amount, which delays recognition. Once the extremum is detected, a specified overshoot is applied to determine the termination level (illustrated in Figure 13.18). The net effect of the noise is thus late detection of extrema and early termination at prescribed levels.
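That backlash logic can be sketched directly on a sampled signal (the threshold names are illustrative, and a real controller would treat maxima and minima symmetrically):

```python
def terminate_with_backlash(signal, backlash, overshoot):
    """Scan a sampled monitoring signal rising toward a maximum. An extremum is
    registered only after the signal has reversed by 'backlash' (hence late),
    and termination occurs once it has moved 'overshoot' beyond that extremum."""
    best = signal[0]
    extremum = None
    for i, s in enumerate(signal[1:], start=1):
        if extremum is None:
            if s > best:
                best = s                 # still rising: track the running maximum
            elif best - s >= backlash:
                extremum = best          # reversal confirmed: register the extremum
        elif extremum - s >= overshoot:
            return i                     # termination sample index
    return None                          # no termination within this record
```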

Optimizing Monitoring Procedures

To address these challenges, the monitoring procedure can be optimized. For example, dividing the first four layers into two groups and monitoring them on separate chips significantly improves performance (Figure 13.20). This approach prevents error accumulation in early layers, which would otherwise degrade the filter’s pass region.

Despite these improvements, residual issues like layer shortening due to noise can shift the spectral edge. Adjusting the monitoring wavelength can correct this.

Quartz-Crystal Monitoring

Quartz-crystal monitoring measures mass thickness rather than optical thickness. While this method lacks robust error compensation, simulations comparing quartz-crystal and optical monitoring for broadband antireflection coatings indicate comparable performance. Published results for quartz-crystal monitoring are impressive.

For narrowband filters, where peak wavelength control is critical, direct optical monitoring remains essential. However, quartz-crystal monitoring is well-suited for most other filter types, especially in high-volume production of identical components.

Comparative Analysis of Monitoring Methods

Quartz-crystal and optical monitoring methods each have strengths and weaknesses. Quartz-crystal monitoring excels in stability for batch production but may lack the flexibility of optical monitoring. Conversely, optical monitoring is preferred for varied coating types and applications requiring precise control, such as far-infrared filters with large material thicknesses.
