


OPTICAL ANALYSIS AND OPTO-MECHANICAL DESIGN FOR MINIATURIZED LASER ILLUMINATION MODULE IN 3D AREAL MAPPER

by Ming Luo

Thesis submitted to the Faculty of Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering

Anbo Wang, Chairman
Ahmad Safaai-Jazi
A. Lynn Abbott

May 9, 2000 Blacksburg, Virginia

Keywords: Structured light, Spatial light modulator, Hologram

Copyright 2000, Ming Luo

Optical Analysis and Opto-Mechanical Design for Miniaturized Laser Illumination Module in 3D Areal Mapper

Ming Luo

(ABSTRACT)

A miniaturized spatial light modulator (SLM)-based structured-light illumination module with an optical fiber input is designed to generate a coded 256 × 256 spot pattern for 3-D areal mapping applications. The projector uses light from a He-Ne laser coupled into a polarization-maintaining (PM) fiber to illuminate a specially made hologram, so that four virtual point sources are regenerated. The interference pattern of the four sources is filtered and modulated by an SLM. The output intensity can thus be encoded to form an arbitrary pattern, at high speed, through the electronic input applied to the SLM. In this thesis, a complete optical diffraction analysis of the system is presented to provide guidelines for the optimal design of the system parameters. Through the theoretical analysis of square beam array generation, the important parameters for fabricating the hologram are given. The final system optical design and arrangement based on the optical analysis are described. The detailed opto-mechanical construction of the LIM, the associated alignment, the computer simulations, and the preliminary test results of the developed LIM are also provided.

ACKNOWLEDGEMENTS

I would like to express my deep appreciation and gratitude to my advisor and committee chairman, Dr. Anbo Wang, for giving me the opportunity to work on exciting projects in my area of interest and for his support and expertise throughout the projects. I would also like to thank Dr. Safaai-Jazi and Dr. Lynn Abbott for serving on my committee and reviewing my thesis. I am thankful to all my lab partners and fellow students for their valuable company and support. Special thanks to Hai Xiao and Dinesh Subramani, my colleagues on this project, for their help and good suggestions. With much love, I would like to thank my parents for their unconditional love and encouragement. Finally, I would like to thank my husband for his love, encouragement, and support.


TABLE OF CONTENTS

ABSTRACT .......... ii
ACKNOWLEDGEMENTS .......... iii
TABLE OF CONTENTS .......... iv
LIST OF FIGURES .......... vii
LIST OF TABLES .......... viii
CHAPTER 1. Introduction .......... 1
  1.1 Overview of Computer Vision .......... 1
  1.2 Structured Light Based 3D Mapping Technology .......... 2
  1.3 High Speed Structured Light Illumination .......... 4
  1.4 Motivation .......... 4
    1.4.1 Miniaturized Light Illumination Module .......... 6
  1.5 Contribution of the Research and the Outline of the Thesis .......... 7
CHAPTER 2. LIM System Configuration .......... 9
  2.1 System Configuration .......... 9
  2.2 Optical Elements of the LIM .......... 9
    2.2.1 Laser Source .......... 10
    2.2.2 Polarization Maintaining Optical Fiber .......... 11
    2.2.3 Hologram .......... 11
    2.2.4 Spatial Light Modulator (SLM) and Beam Splitter .......... 12
    2.2.5 Output Lens .......... 15
CHAPTER 3. System Diffraction Analysis .......... 16
  3.1 The Principle of Diffraction Analysis .......... 16
  3.2 Overview of LIM System Diffraction Analysis .......... 17
  3.3 Point Source Diffraction .......... 19
  3.4 Pass the SLM and Diffraction Towards the Lens .......... 19
  3.5 Phase Transformation of the Lens .......... 20
  3.6 Diffraction to the Source Plane .......... 21
  3.7 Diffraction Towards the Observe Plane .......... 22
  3.8 System Output Pattern .......... 24
  3.9 Conclusion .......... 25
CHAPTER 4. Theoretical Analysis for Square Beam Array Generation .......... 26
  4.1 Spherical Distortion .......... 26
    4.1.1 Analysis .......... 26
    4.1.2 Computer Simulation .......... 30
  4.2 Overlap of the Four Point Sources' Illuminations .......... 32
  4.3 Phase Error Analyses .......... 34
    4.3.1 Out-of-Plane Induced Phase Error .......... 34
    4.3.2 Computer Simulations of the Out-of-Plane Phase Error .......... 36
    4.3.3 Non-Square Source Positioning Induced Phase Error .......... 37
    4.3.4 Computer Simulations of the Non-Square Phase Error .......... 38
  4.4 Requirements for Hologram Fabrication .......... 39
CHAPTER 5. System Implementation and Test Results .......... 40
  5.1 Optical Implementation .......... 40
    5.1.1 Hologram Specifications .......... 41
    5.1.2 Lens Specifications .......... 44
  5.2 Construction of the LIM .......... 47
    5.2.1 Light Source Support and Alignment .......... 48
    5.2.2 Hologram Support and Alignment .......... 49
    5.2.3 SLM Support and Alignment .......... 50
    5.2.4 Output-Coupling Lens Support and Alignment .......... 51
    5.2.5 LIM Box .......... 51
  5.3 Preliminary Test and Results .......... 51
CHAPTER 6. Conclusions and Suggestions for Future Work .......... 54
  6.1 Conclusions .......... 54
  6.2 Suggestions for Future Work .......... 55
REFERENCES .......... 56
APPENDIX .......... 58
VITA .......... 71


LIST OF FIGURES
Figure 1.1 Configuration of 3-D computer vision system for object mapping .......... 3
Figure 1.2 3-D Areal Mapper .......... 5
Figure 2.1 Schematic configuration of the light illumination module (LIM) .......... 10
Figure 2.2 Regeneration of four virtual point sources by using a hologram .......... 11
Figure 2.3 Operating principle of the spatial light modulator .......... 14
Figure 2.4 Structure of the spatial light modulator (SLM) .......... 14
Figure 3.1 System optical layout for diffraction analysis .......... 18
Figure 4.1 The position relationship between the point sources and any interference point .......... 27
Figure 4.2 3-D plot of the spherical distortion of the interference pattern (a = 2 mm) .......... 31
Figure 4.3 2-D contour plot of the output beamlets at the SLM plane (a = 2 mm) .......... 31
Figure 4.4 The relationship between the point source position and illumination area .......... 32
Figure 4.5 Out-of-plane induced phase error .......... 34
Figure 4.6 Computer simulation of the out-of-plane phase error .......... 37
Figure 4.7 Non-square position geometry of the four point sources .......... 37
Figure 4.8 Simulation results of the non-square phase error .......... 39
Figure 5.1 Interception of the beams on the holographic medium .......... 42
Figure 5.2 The simulation result for the lens aperture .......... 45
Figure 5.3 Optical layout of the LIM system .......... 47
Figure 5.4 Photograph of the developed light illumination module .......... 52
Figure 5.5 Output stripe patterns from the LIM .......... 53


LIST OF TABLES
Table 2.1 The specifications of the Displaytech SLM .......... 13
Table 4.1 Maximum position errors for different source separations .......... 30
Table 5.1 Hologram specifications .......... 43
Table 5.2 Critical design parameters of the LIM .......... 46
Table 6.1 LIM specifications .......... 55


CHAPTER 1. Introduction
1.1 Overview of Computer Vision

The goal of a computer vision system is to create a model of the real world from its two-dimensional images. As an interdisciplinary field, computer vision combines computer technology, semiconductor technology, electronic design, optics, automation, and information theory. Since images are two-dimensional projections of the three-dimensional world, a central challenge of computer vision is to recover three-dimensional information based on knowledge about the objects in the scene and the projection geometry. Several decades of research and development on various computer vision systems have resulted in a dramatic improvement of the techniques used in the field. The applications of computer vision have been extended to almost every major field of modern technology, including industry, agriculture, the military, medical science, aerospace, and geographic studies.

Based on the information recovered and used, computer vision systems can generally be categorized into three major areas: three-dimensional surface mapping for quality inspection, image enhancement and analysis to extend the ability of human vision, and autonomous navigation for guiding vehicles and robots.

Three-dimensional surface mapping of an object is a very important research area of computer vision technology. Through the quantitative measurement of the geometric properties of objects, computer vision systems can be used for quality control of various products ranging from pizza to turbine blades, from submicron wafer structures to auto-body panels, and from apples to oranges [1]. Many techniques have been developed for recovering three-dimensional geometric information from the real world. These include stereo imaging [2, 3], focus or blur estimation [4, 5], interferometry and Moiré techniques [8, 9], and structured light with triangulation [6, 7].


Stereo imaging uses two cameras to view the same object from different angles. Three-dimensional information about the object can then be obtained by correlating the two images, much as the human brain fuses the views from two eyes.

Interferometry and Moiré techniques have also been commonly used in three-dimensional surface mapping. By extracting the phase information or the spatial frequency of the interference or Moiré fringes, the shape of a three-dimensional object can be mapped with very high accuracy.

Focus or blur estimation has also been used in computer vision systems to obtain the surface characteristics of an object. Because of the finite depth of field of the optical system, only objects at the proper distance from the camera appear focused in the image, whereas those at other depths are blurred in proportion to their distance. Algorithms based on the convolution of the estimated out-of-focus blur with the point spread function, determined by the camera and the distance of the object from the camera, can then be used to recover the shape of the object.

Structured light and triangulation map a three-dimensional object by projecting a known optical pattern onto the object surface. The light projector, the camera, and the object to be mapped form a triangle. After a calibration procedure that determines the geometric properties of the triangle, the offset of the optical pattern viewed by the camera at the image plane can be used to retrieve the three-dimensional shape of the object.

1.2 Structured Light Based 3D Mapping Technology

Structured light illumination has been used for several decades to extract three-dimensional information from surface topology [10]. Most commercially available 3D vision systems employ the structured light method because it offers great potential for fast, accurate, and inexpensive high-resolution 3D mapping.
As shown in Figure 1.1, in a typical 3-D computer vision mapping system, the structured light projector, the CCD camera, and the object form a triangle. A computer is also needed to process the image so that the three-dimensional information can be retrieved. The system views the object, which is illuminated by a predetermined optical pattern, and through advanced image processing techniques the three-dimensional surface topology can be determined by measuring the corresponding lateral offset in the received image.

Various light structures have been used for different applications. Among these, single lines [11, 12], multiple parallel lines [13, 14], crosses [15], and 2-dimensional grids [16] are the most commonly used. Encoding a surface with a single line has advantages such as the avoidance of fringe ambiguity and very high resolution in the measurement of surface height as well as lateral structure. However, to obtain the three-dimensional surface information, a single stripe must be scanned over the entire surface and processed at each location of the scan. This requires a very accurate scanning mechanism. Moreover, the mapping can be a very slow process, both because of the limited speed of available scanning devices and because multiple images have to be taken and processed at each interval of the scan.
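The offset-to-depth relation behind triangulation can be sketched with a toy calculation. In a simplified parallel-axis geometry with baseline b between projector and camera and camera focal length f, similar triangles give the depth z = f·b/d for a measured lateral image offset d. The numbers below are hypothetical illustrations, not the calibration of any system described here:

```python
def depth_from_offset(baseline_m: float, focal_length_m: float,
                      image_offset_m: float) -> float:
    """Depth of a projected spot from its lateral offset in the image plane.

    Simplified parallel-axis triangulation: z = f * b / d by similar
    triangles. Real systems calibrate the full triangle geometry instead.
    """
    if image_offset_m <= 0:
        raise ValueError("offset must be positive in this simplified geometry")
    return focal_length_m * baseline_m / image_offset_m

# Example: 10 cm baseline, 25 mm lens, 5 mm image offset -> 0.5 m depth.
z = depth_from_offset(0.1, 0.025, 0.005)
```

Note how the sensitivity grows with baseline and focal length: the same depth change produces a larger, easier-to-measure image offset, which is why the projector-camera geometry is chosen carefully.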


Figure 1.1 Configuration of a 3-D computer vision system for object mapping.

The projection of multiple light stripes onto the object's surface eliminates the need for a scanning device. The simplest way to generate and project multiple lines is to use a laser, a grating, and a cylindrical lens. The number of lines, the spacing between lines, and the width and length of the individual lines can be precisely controlled. Because the entire surface can be mapped by processing only a single image, the mapping speed can be improved dramatically, although at the sacrifice of lateral resolution.

1.3 High Speed Structured Light Illumination

Early advancements in structured light generation came with laser technologies, but the methodologies had commercial limitations due to digital processing and interface bottlenecks. During the 1990s, both workstation and PC technology improved significantly in performance. High-speed interface standards became common, and low-cost storage media grew to gigabyte proportions. At the same time, digital signal processor (DSP) technology emerged; these special-purpose digital chips can perform digital filtering at billions of operations per second.

The emerging spatial light modulator (SLM) technology [22] and the continuous advancement in computer/DSP technology [26] allow further unification with optical and digital pattern recognition methodologies. The performance of SLMs is now comparable, in terms of information flow, to that of high-speed DSPs. It is this new SLM and computer/DSP technology base that allows structured light illumination, digital pattern recognition, and optical pattern recognition to be implemented on the same architecture, simultaneously and at very high speed [10].

Today's high-speed, high-resolution SLMs represent a compact, reliable, and improved means of projecting sophisticated patterns onto objects. Furthermore, SLMs provide a simple way of combining feedback between the reflected image and the projected pattern. This feedback capability lends itself to adaptive algorithms for intelligent, high-performance visual sensing.

1.4 Motivation

The 3D Areal Mapping system proposed by DCS Corporation is an areal profiler originally conceived as an improvement over the Numerical Stereo Camera (NSCS) technology [17, 18].
It represents the logical extension of non-contact triangulation-based laser profiling technology, from point scanning and line scanning to area profiling with no scanning at all. Its basic components include a light illumination module (LIM) for successively projecting variable laser patterns onto the object of interest, a CCD camera for optical image recording, and a computer to interface the LIM with the camera and to perform the data processing, as shown in Figure 1.2.

Figure 1.2 3-D Areal Mapper.

The 3D Areal Mapper was initially developed for industrial turbine blade rework and medical patient positioning [19]. Turbine blade rework involves mapping previously cracked blades after annealing weldments have been applied, followed by reshaping on a highly accurate computer-controlled milling machine. Since the process must be performed repeatedly on a high-speed assembly line, there are strict demands on the LIM in terms of allowed mapping time, accuracy, and output data format.

Patient positioning is a generic pre-treatment requirement that applies to many surgical applications, including neurosurgery; sinus, spinal, and knee surgery; and radiation surgery. Positioning the patient in these treatments amounts to aligning the relevant coordinate system(s) of the critical volume in the patient to coordinate systems in the operating room. Small camera and LIM components will enhance marketability, as they must be mounted on existing equipment in an already crowded operating room.


1.4.1 Miniaturized Light Illumination Module
Central to DCS's areal mapping technology is the spatial light modulator-based light illumination module, which generates and projects periodically structured light as an unfocused array of beamlets that are spatially encoded by the SLM. The LIM functions as a structured light source for the 3-D Areal Mapping system by generating an M × N beamlet pattern (M and N are usually powers of 2, such as 64, 128, or 256) and switching the individual lines or dots of the spatial light modulator (SLM) "on" or "off" to form a temporal series of patterns.

The first-generation LIM [19], based on NSCS technology, generates a 128 × 128 beamlet array using a lens to make a laser beam diverge, a four-facet prism to generate four virtual sources, and a microscope objective lens to produce a demagnified real image of the virtual sources. The beamlet array passes through a spatial light modulator (SLM) so that the individual beamlets can be identified. Because the laser beam is diverging, the prism introduces severe optical aberrations. The prism-based LIM also requires very stringent alignment, which in turn necessitates heavy and costly support structures. The first-generation LIM was about five feet long, weighed 25 kg, and had only 3% light efficiency.

To reduce the size and complexity of the LIM, initial improvements were achieved by replacing the glass optical components of the first-generation LIM with holographic optics. DCS Corporation developed the second-generation LIM prototype using its patented hologram techniques [20]. The prototype setup was about 18 inches high and occupied a 1' × 2' optical bench. This prototype, which generates a 64 × 64 beamlet pattern, includes a He-Ne laser as the light source, a hologram, an SLM, and an output-coupling lens. Compared to the first-generation LIM, the second-generation prototype reduced the size dramatically, and the light efficiency was increased to 30%.
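The temporal on/off encoding mentioned above can be illustrated with a generic binary (bit-plane) coding scheme. This is a textbook sketch of how log2(N) patterns can uniquely label N beamlet columns, not necessarily the exact code used in the DCS system:

```python
import numpy as np

def binary_code_patterns(n_columns: int) -> list:
    """Return ceil(log2(n)) binary stripe patterns that label each column.

    Displaying the patterns as a temporal sequence lets each beamlet
    column be identified from its observed on/off history.
    """
    n_bits = int(np.ceil(np.log2(n_columns)))
    cols = np.arange(n_columns)
    # Pattern k turns a column "on" iff bit k of its column index is set.
    return [((cols >> k) & 1).astype(np.uint8) for k in range(n_bits)]

def decode(history) -> int:
    """Recover a column index from its observed on/off history."""
    return sum(int(bit) << k for k, bit in enumerate(history))

patterns = binary_code_patterns(256)
print(len(patterns))  # 8 temporal patterns suffice for 256 columns
```

Eight frames thus identify 256 columns uniquely, which is why a fast SLM makes dense spot identification practical compared with mechanical scanning.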
Once the LIM prototype had been set up successfully, even stricter system requirements were presented by DCS Corporation. Driven by the target applications of the 3-D Areal Mapper product, industrial turbine blade rework and medical patient positioning, the LIM was required to offer much smaller weight and volume, higher light efficiency, higher resolution, and minimum alignment time. Jointly sponsored by DCS Corporation and the Virginia Center of Innovative Technologies, the Photonics Laboratory at Virginia Tech successfully developed a miniaturized third-generation LIM to upgrade the current DCS 3D areal mapper products for their potential applications.

The miniaturized LIM has a reduced size of 6" × 5" × 3.8", which provides better portability and allows it to be mounted on equipment that requires easy maneuverability, such as surgical instruments. The larger number of spots in the output pattern (256 × 256) results in a higher resolution of the surface profiling output and hence improved accuracy. The higher output power (1.5 mW integrated over the active area) results in a higher signal-to-noise ratio of the image captured by the camera. Moreover, the novel opto-mechanical design of the LIM allows a very simple alignment procedure, and the complete optical diffraction analysis of the LIM system provides general guidance for the optimal optical design of the system. The overall cost of implementing the LIM system has also been brought down to an acceptable level.

1.5 Contributions of the Research and Outline of the Thesis

This thesis presents the research work conducted at the Photonics Laboratory in the development of the miniaturized light illumination module to support DCS's efforts in upgrading its 3-D Areal Mapper. The reported work concentrates on the optical and opto-mechanical design of the miniaturized LIM. The main contributions of this research are listed below:

1) System configuration. The principle of the LIM system is described with emphasis on the optical functions of the components.

2) Optical design. The optical models for the key components of the LIM system are derived. Based on these optical models, a comprehensive diffraction analysis is presented. The results of the diffraction analysis provide a clear guideline for achieving the optimal performance of the LIM system.
3) Hologram design. In order to achieve an error-free 256 × 256 beamlet output pattern, the hologram must be fabricated to provide four virtual point sources with strict requirements on their positions. A detailed source error analysis based on interference theory is performed to provide the tolerances for the hologram fabrication.

4) Opto-mechanical design. According to the designed optical layout, the opto-mechanical components of the LIM are designed using AutoCAD or chosen from vendors.

5) System implementation. Based on the system analysis, the optical design and arrangement of the LIM system are given to ensure that the LIM satisfies the specifications. The functions and the alignment procedures of each part are described.

6) System preliminary tests and results.

Chapter 1 of the thesis provides an introduction to the background of structured light computer vision technology for 3D mapping, and presents the motivation for developing the LIM and the contributions of this research. Chapter 2 gives an overview of the LIM system in the 3D Areal Mapper. The diffraction analysis of the LIM system is given in Chapter 3. Chapter 4 describes the theory of 256 × 256 beamlet generation by an interference method, and analyzes the source separation requirements, spherical distortion, and the effects of source position errors; corresponding computer simulation results are also presented. Chapter 5 gives the final system optical design and arrangement based on the previous analysis; the detailed mechanical construction of the LIM and the associated alignment plan are also included. Chapter 6 presents the conclusions and suggestions for future work.


CHAPTER 2. LIM System Configuration
2.1 System Configuration

The schematic configuration of the LIM is shown in Figure 2.1. The system uses a He-Ne laser at a wavelength of 633 nm as the optical source. The output light from the laser is coupled into a polarization maintaining (PM) fiber with its polarization matched to that of the PM fiber. The PM fiber provides a polarized point-source input to the system and reduces the size of the system by separating the source from the main optical assembly.

The light from the PM fiber illuminates a specially made hologram as the reference beam to regenerate four virtual point sources located at the four corners of a square. Because the hologram records the phase information of those four coherent point sources, the reconstructed virtual sources have a fixed phase relation that is independent of environmental changes. The light waves from the four virtual point sources interfere with each other to generate a two-dimensional spot pattern.

In cooperation with a polarizing beam splitter, a spatial light modulator (SLM) is then aligned to the interference pattern in such a way that the pixels of the SLM are matched to the spots of the pattern. Therefore, by controlling each pixel of the SLM, we can switch a single spot on and off electrically, so that the output mesh pattern can be modulated spatially according to an arbitrary electronic input signal to the SLM. An output-coupling lens is also used in the system so that the size of the pattern can be controlled to cover the area of interest. As the results of the optical diffraction analysis will show, the lens also functions as the key component for correcting spherical aberrations and achieving illumination with an infinite depth of field.

2.2 Optical Elements of the LIM

As shown in Figure 2.1, the system mainly contains a laser source, a polarization maintaining optical fiber, a hologram, a polarizing beam splitter, a spatial light modulator, and an output-coupling lens.
The specifications and functions of those optical elements are listed below.


Figure 2.1 Schematic configuration of the light illumination module (LIM).

2.2.1 Laser Source

Lasers have revolutionized various fields of science and technology and are used in a wide range of applications in medicine, communications, and measurement, and as precise light sources in many scientific investigations. Commercially available lasers can be categorized by their characteristics, such as wavelength, power, and output beam [28]. Lasers span the light spectrum from infrared to ultraviolet. The output power of a laser ranges from a milliwatt to millions of watts. As to the output beam, a laser may emit a continuous wave, where the light is emitted in a continuous manner, or it may be pulsed, where the light is emitted in short bursts.

The light source for holography must have sufficient spatial and temporal coherence to allow the formation of an interference pattern over the desired volume of space (e.g., throughout the recording medium) and to keep this pattern stationary during the exposure time [27]. A laser source, being both monochromatic and coherent over a significant distance, is the ideal choice [25]. In general, the laser must have sufficient power at the required wavelength.

The laser source used in the system is a high-power He-Ne laser with an output power of 30 mW at the wavelength of 633 nm. The He-Ne laser also provides a very long coherence length, which is important for achieving long-distance operation and a wide mapping area.

2.2.2 Polarization Maintaining Optical Fiber

In order to reduce the size of the LIM system, the He-Ne laser is separated from the LIM via an optical fiber input. Because the system requires polarized input light, a polarization maintaining (PM) fiber is used. The PM optical fiber, purchased from the Newport Corporation, is a "Bow-Tie" fiber optimized for operation at 633 nm with a numerical aperture (NA) of 0.16. The beat length of the PM fiber is less than 2 mm, which provides very good preservation of the input polarization. The coupling from the He-Ne laser to the PM fiber is achieved by a specially designed external coupling module provided by OZ Corporation in Canada. This compact laser-to-fiber coupling module has a typical power coupling efficiency higher than 60%, achieved through a rotation tuning mechanism that matches the polarization of the laser light to that of the PM fiber.

2.2.3 Hologram

Figure 2.2 Regeneration of four virtual point sources by using a hologram.


The 2-dimensional array of output spots can be produced by the interference of four coherent point sources located at the corners of a square. Since the phases and positions of those four point sources must be kept unchanged to ensure good long-term stability of operation, a Holographic Structured Light Generator (HSLG) technology [20] is applied. A hologram is fabricated to record the phase relation of the four point sources; it can later be used to regenerate an image of four virtual point sources with a fixed phase relation under a single reference-beam illumination, as shown in Figure 2.2. The light from the four point sources interferes as it propagates, forming an interference pattern in space. In Chapter 4, we will give the key parameters and tolerances for fabricating the hologram via an error analysis.

2.2.4 Spatial Light Modulator (SLM) and Beam Splitter

Spatial light modulators (SLMs) play an important role in many technical areas where the control of light on a pixel-by-pixel basis is desirable, such as optical data processing, adaptive optics, optical correlation, machine vision, image processing and analysis, beam steering, holographic data storage, and displays [27]. Several technologies have contributed to the development of SLMs, including micro-electro-mechanical devices and pixelated electro-optical devices. Encompassed within these categories are amplitude-only, phase-only, and amplitude-phase modulators. SLMs of all varieties continue to have a significant impact on the photonics community. A spatial light modulator is a device that modulates coherent light based on its control input. A typical spatial light modulator is a two-dimensional array of pixels made of electro-optical materials. By applying a voltage signal to an individual pixel, the properties of the input light can be changed through the interaction of the light with the pixel material. An SLM meeting the specifications of the miniaturized LIM can be bought off the shelf. After comparison, Displaytech's Ferroelectric Liquid Crystal (FLC) SLM was chosen because of the specifications shown in Table 2.1. Displaytech's Spatial Light Modulator (SLM) [21]


allows a user to spatially encode information on a beam of coherent light. This reflective SLM modulates light with fast-switching Ferroelectric Liquid Crystal (FLC) material in direct contact with the upper surface of a conventional CMOS VLSI chip. The operation of FLC devices is based on the principle of birefringence, the phenomenon in which the phase velocity of an optical wave propagating in a crystal depends on the direction of its polarization [29]. Ferroelectric devices also perform nearly three orders of magnitude faster than liquid crystal displays and deliver a superior contrast ratio and a wider viewing angle than nematic-based liquid crystal displays.

Table 2.1 The specifications of the Displaytech SLM

Array size:                 256 × 256
Active area (mm):           3.84 × 3.84
Pixel pitch (µm):           15
Gap width (µm):             1.0
Efficiency:                 65%
SLM full frame rate (kHz):  3
System mount:               2″ optical mounting circuit board
Contrast ratio:             100:1 (zero order at 633 nm)

As shown in Table 2.1, the SLM used in the system, provided by Displaytech Inc., has 256 × 256 electronically addressable square pixels. When a control voltage is applied, a pixel of the reflective SLM reflects the input light and rotates its polarization by 90°. The SLM is usually used in combination with a polarizing beam splitter, as shown in Figure 2.3. The input light is linearly polarized along the x direction, which is aligned to the transmission polarization of the beam splitter so that all of the light is transmitted. The polarization of the light reflected from the SLM is modulated according to the voltage signal applied to each individual pixel.


Figure 2.3 Operating principle of the spatial light modulator.
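The pixel-wise switching principle of Figure 2.3 can be sketched with Jones calculus: the polarizing beam splitter passes x-polarized light to the SLM, an "on" pixel rotates the polarization by 90° on reflection, and only the resulting y component is folded out towards the observing plane. A minimal, idealized model (losses and the finite contrast ratio are ignored; the function names are illustrative only):

```python
import numpy as np

# Jones vector of the input field: [Ex, Ey]
x_pol = np.array([1.0, 0.0])

def slm_pixel(field, on):
    """Ideal FLC pixel: 'on' rotates polarization by 90 deg on reflection."""
    if on:
        rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
        return rot90 @ field
    return field  # 'off' pixel reflects with polarization unchanged

def output_intensity(on):
    """The PBS passes x in, then folds only the y component to the output."""
    reflected = slm_pixel(x_pol, on)
    return reflected[1] ** 2  # |Ey|^2 reaches the observing plane

print(output_intensity(True), output_intensity(False))  # 1.0 0.0
```

An "on" pixel delivers full intensity to the observing plane; an "off" pixel delivers none, which is exactly the binary amplitude modulation used to encode the output spot pattern.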

Figure 2.4 Structure of the spatial light modulator (SLM).

For those pixels that have been switched on, the corresponding reflected light has its polarization rotated by 90° (now along the y direction). Upon reaching the beam splitter, this portion of the light is reflected to the observing plane. On the other hand, pixels without voltage signals reflect the light with its polarization unchanged (along the x direction).


This light transmits through the polarizing beam splitter and does not reach the observing plane. The output pattern at the observing plane thus carries the information input to the SLM chip, and spatial modulation is achieved by changing the input signals to the SLM. As shown in Figure 2.4, the SLM consists of 256 × 256 pixels arranged in a two-dimensional mesh. Each pixel is a 14 µm × 14 µm square, and the wall separating adjacent pixels is 1 µm thick. In short, the SLM acts as a pattern controller in the LIM to encode output patterns for areal mapping. For the triangulation solution, the SLM is used to identify the beamlets with their origins, and thus to supply each beamlet's triangulation baseline. A polarization-selective beam splitter is integral to the SLM for operation at near-normal incidence. The SLM connects to the microcontroller via ribbon cables; the microcontroller takes commands from the controlling host computer and uploads display patterns to the SLM. Since the beamlet array must align with the SLM's square array of 256 × 256 apertures, whose pitch is 15 µm, it is necessary to analyze the optical system to obtain a solid basis for the later system design and fabrication. The diffraction analysis of the miniaturized LIM is presented in Chapter 3.

2.2.5 Output Lens

An output-coupling lens may be described as an optical wavefront-modifying device: an optical wavefront propagating through such a device is reshaped upon exit in a way unique to the lens. With the use of the output-coupling lens, moreover, the size of the pattern can be controlled to cover the area of interest. As pointed out later by the results of the optical diffraction analysis, the lens also functions as a key component in achieving an effectively infinite depth of illumination field. The optimal design parameters will be obtained through the diffraction analysis.


CHAPTER 3. System Diffraction Analysis
3.1 The Principle of Diffraction Analysis [24]

In almost all optical systems, some of the light energy spreads outside the region predicted by rectilinear propagation. This effect, known as diffraction, is a fundamental and inescapable physical phenomenon. It is therefore very important to conduct a diffraction analysis prior to the engineering design of a delicate optical system. The physical phenomenon of diffraction can be intuitively described by Huygens' principle, which states that if each point on the wavefront of a light disturbance is considered to be a new source of a "secondary" spherical disturbance, then the wavefront at any later instant can be found by constructing the "envelope" of the secondary wavelets. Huygens' principle describes diffraction phenomena nicely, but a rigorous explanation demands a detailed study of the wave theory, and the mathematics behind such an explanation is rather complicated. In order to obtain a relatively simple mathematical expression for the field distribution of the diffracted light, some assumptions are necessary. Among the various approximations, the Fresnel and Fraunhofer approximations are the ones traditionally used in dealing with typical optical systems. Both commonly assume that the distance z between the aperture and the observation plane is much larger than the maximum linear dimension of the aperture. In addition, it is assumed that only a finite region about the z-axis in the plane of observation is of interest, and that the distance z is much larger than the maximum linear dimension of this region. The Fresnel approximation assumes that the distance r in the phase term can be adequately approximated by the first two terms of its binomial expansion (second order). In essence, this approximation replaces the spherical Huygens wavelets by quadratic surfaces.
Thereafter, the following formula can be used to describe Fresnel diffraction:

$$U(x_o, y_o) = \frac{\exp(jkz)}{j\lambda z} \iint U(x_i, y_i) \exp\left\{\frac{jk}{2z}\left[(x_o - x_i)^2 + (y_o - y_i)^2\right]\right\} dx_i\, dy_i, \qquad (3\text{-}1)$$


where U(x_i, y_i) is the field before diffraction, U(x_o, y_o) is the field after diffraction, k = 2π/λ is the propagation constant, λ is the wavelength of the light, and z is the axial distance between the input plane and the output plane. In general, most optical systems satisfy the requirement of the Fresnel diffraction formula, so this formula has been commonly used in many system analyses. The diffraction analysis can be further simplified if restrictions more stringent than those of the Fresnel approximation are adopted. If we assume that the distance between the input plane and the observing plane is so large that it satisfies the criterion

$$z \gg \frac{k\,(x_i^2 + y_i^2)_{\max}}{2}, \qquad (3\text{-}2)$$

where (x_i, y_i) are the coordinates used to describe the input field, then the quadratic phase factor can be approximated as unity over the entire aperture. The diffracted field can thus be found as

$$U(x_o, y_o) = \frac{\exp(jkz)}{j\lambda z} \exp\left[\frac{jk}{2z}(x_o^2 + y_o^2)\right] \iint U(x_i, y_i) \exp\left[-j\frac{k}{z}(x_o x_i + y_o y_i)\right] dx_i\, dy_i. \qquad (3\text{-}3)$$

The above approximation is called the Fraunhofer approximation, and Equation (3-3) is the Fraunhofer diffraction formula.
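To see why the Fresnel form is the practical choice inside the LIM, one can evaluate the Fraunhofer criterion (3-2) for an input field the size of the SLM active area (3.84 mm square, from Table 2.1). A minimal sketch (the choice of the half-diagonal as the maximum linear dimension is an assumption for illustration):

```python
import math

wavelength = 633e-9           # He-Ne laser, m
k = 2 * math.pi / wavelength  # propagation constant

# Half-diagonal of the 3.84 mm x 3.84 mm SLM active area,
# taken as the maximum linear dimension of the input field.
r_max = math.hypot(3.84e-3 / 2, 3.84e-3 / 2)

# Distance scale from criterion (3-2): z >> k * r_max^2 / 2
z_fraunhofer = k * r_max**2 / 2
print(f"Fraunhofer scale: {z_fraunhofer:.1f} m")
```

The criterion demands tens of meters, which is far beyond the internal plane separations of the module; this is why the Fresnel formula is used between the internal planes and the Fraunhofer approximation is reserved for the distant observe plane in Section 3.7.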

3.2 Overview of LIM System Diffraction Analysis

In order to achieve a high-quality output spot pattern, a diffraction analysis is necessary to optimize the optical design of the laser projector. We will assume that the hologram is ideal and introduces no distortion into the system. We can start the analysis at the four virtual point sources and trace the Fresnel diffraction of the light transmitting through apertures such as the beam splitter, the SLM pixels and the lens. By examining the final output pattern, we can optimize the choice of the optical components and their geometrical positions.


The simplified optical configuration is shown in Figure 3.1. The light waves, starting from the four reconstructed point sources, pass the spatial light modulator and the lens, and finally interfere to generate the two-dimensional spot pattern. There are five planes of interest: 1) the source plane (α, β), where the reconstructed four point sources are located; 2) the SLM plane (ξ, η); 3) the lens plane (x′, y′); 4) the source image plane (u, v), where the sources are imaged by the lens; and 5) the observe plane (x, y). The optical analysis will be performed forward from one plane to the next directly based on the Fresnel diffraction formula. To simplify the analysis, we will trace the diffraction of a single point source and later coherently superpose the light waves from the four point sources to construct the final diffraction pattern.

Figure 3.1 System optical layout for diffraction analysis.


3.3 Point Source Diffraction

Assume that we have a point source located at the coordinates (α, β) of the source plane. According to the Fresnel diffraction formula, the diffraction field distribution U_BS(ξ, η) at the SLM plane is

$$U_{BS}(\xi, \eta) = \frac{A}{j\lambda z_0} \exp(jkz_0) \exp\left\{\frac{jk}{2z_0}\left[(\xi - \alpha)^2 + (\eta - \beta)^2\right]\right\}$$
$$= \frac{A}{j\lambda z_0} \exp(jkz_0) \exp\left[\frac{jk}{2z_0}(\alpha^2 + \beta^2)\right] \exp\left[\frac{jk}{2z_0}(\xi^2 + \eta^2)\right] \exp\left[-\frac{jk}{z_0}(\alpha\xi + \beta\eta)\right], \qquad (3\text{-}4)$$

where z_0 is the distance between the source plane and the SLM plane, A is the amplitude of the source, and k is the propagation constant. If we define

$$B = \frac{1}{j\lambda z_0} \exp(jkz_0) \exp\left[\frac{jk}{2z_0}(\alpha^2 + \beta^2)\right], \qquad (3\text{-}5)$$

then U_BS(ξ, η) becomes

$$U_{BS}(\xi, \eta) = AB \exp\left[\frac{jk}{2z_0}(\xi^2 + \eta^2)\right] \exp\left[-\frac{jk}{z_0}(\alpha\xi + \beta\eta)\right]. \qquad (3\text{-}6)$$

3.4 Pass the SLM and Diffract Towards the Lens

We can model the SLM in combination with the beam splitter as an optical spatial filter in the (ξ, η) plane with the filtering function given by [24]

$$U_f(\xi, \eta) = \rho\, \mathrm{Rect}\!\left(\frac{\xi}{W}\right) \mathrm{Rect}\!\left(\frac{\eta}{W}\right) \otimes \left[\sum_{m=-128}^{128} \sum_{n=-128}^{128} \delta(\xi + m\upsilon,\, \eta + n\upsilon)\right], \qquad (3\text{-}7)$$

where ρ is the efficiency of an SLM pixel, W = 14 µm is the width of the pixel, and υ = 15 µm is the pitch of the pixels. The field right after passing the SLM is U_AS(ξ, η), given by


$$U_{AS}(\xi, \eta) = U_{BS}(\xi, \eta)\, U_f(\xi, \eta) = AB \exp\left[\frac{jk}{2z_0}(\xi^2 + \eta^2)\right] \exp\left[-\frac{jk}{z_0}(\alpha\xi + \beta\eta)\right] U_f(\xi, \eta)$$
$$= AB \exp\left[\frac{jk}{2z_0}(\xi^2 + \eta^2)\right] U(\xi, \eta), \qquad (3\text{-}8)$$

where

$$U(\xi, \eta) = \exp\left[-\frac{jk}{z_0}(\alpha\xi + \beta\eta)\right] U_f(\xi, \eta). \qquad (3\text{-}9)$$
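The filtering function (3-7) — a Rect pixel aperture replicated by a comb of delta functions — is, geometrically, just a binary pixel mask. A small numeric sketch (grid resolution and number of periods shown are arbitrary choices) that also recovers the geometric fill factor of 14 µm pixels on a 15 µm pitch:

```python
import numpy as np

# Grid resolution: 1 um per sample over a few SLM periods.
pitch_um, width_um, n_periods = 15, 14, 4
n = pitch_um * n_periods
xi = np.arange(n) % pitch_um  # position within one pixel period, um

# Rect(xi/W) replicated by the comb: 1 inside the 14-um pixel
# aperture, 0 in the 1-um wall between pixels.
open_1d = (xi < width_um).astype(float)
mask = np.outer(open_1d, open_1d)  # separable 2-D aperture function

fill_factor = mask.mean()
print(f"geometric fill factor: {fill_factor:.3f}")  # (14/15)^2 ~ 0.871
```

About 87% of the SLM area is optically open; the efficiency ρ in Equation (3-7) then accounts for the additional per-pixel losses of the FLC material itself.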

From the SLM plane, the optical wave continues propagating towards the lens plane (x′, y′). Right before the wave passes the lens, we have the field distribution

$$U_{BL}(x', y') = \frac{1}{j\lambda d} \exp(jkd) \iint U_{AS}(\xi, \eta) \exp\left\{\frac{jk}{2d}\left[(x' - \xi)^2 + (y' - \eta)^2\right]\right\} d\xi\, d\eta, \qquad (3\text{-}10)$$

where d is the distance between the SLM plane and the lens plane. If we define

$$C = \frac{1}{j\lambda d} \exp(jkd), \qquad (3\text{-}11)$$

and substitute Equation (3-8) into Equation (3-10), we then have

$$U_{BL}(x', y') = ABC \iint U(\xi, \eta) \exp\left[\frac{jk}{2}\left(\frac{1}{z_0} + \frac{1}{d}\right)(\xi^2 + \eta^2)\right] \exp\left[\frac{jk}{2d}(x'^2 + y'^2)\right] \exp\left[-\frac{jk}{d}(\xi x' + \eta y')\right] d\xi\, d\eta. \qquad (3\text{-}12)$$

3.5 Phase Transformation of the Lens

To ease the optical analysis, we assume the lens is a thin lens and that the paraxial approximation holds. If we further assume that the lens aperture is much larger than the beam footprint set by the numerical aperture (NA) of the incident beam, the filtering effect of the lens aperture can be neglected in the analysis. The phase transforming property of the lens can therefore be modeled by the function [24]

$$L(x', y') = \exp\left[-\frac{jk}{2f}(x'^2 + y'^2)\right], \qquad (3\text{-}13)$$


where f is the focal length of the lens. Note that the constant phase delay of the phase transfer function is dropped because it does not affect the results in any significant way. The lens transforms the phase of the field according to

$$U_{AL}(x', y') = U_{BL}(x', y')\, L(x', y'), \qquad (3\text{-}14)$$

where L(x′, y′) is the phase transfer function given by Equation (3-13).

3.6 Diffraction to the Source Image Plane

From the lens, the wave continues traveling towards the source image plane, where we get the field U_i(u, v) described by

$$U_i(u, v) = \frac{1}{j\lambda z_2} \exp(jkz_2) \iint U_{AL}(x', y') \exp\left\{\frac{jk}{2z_2}\left[(u - x')^2 + (v - y')^2\right]\right\} dx'\, dy', \qquad (3\text{-}15)$$

where z_2 is the distance between the lens plane and the source image plane. If we define

$$D = \frac{1}{j\lambda z_2} \exp(jkz_2), \qquad (3\text{-}16)$$

and substitute Equations (3-12) and (3-13) into Equation (3-15), we get the field distribution in the source image plane:

$$U_i(u, v) = ABCD \iint\!\!\iint U(\xi, \eta) \exp\left[\frac{jk}{2}\left(\frac{1}{z_0} + \frac{1}{d}\right)(\xi^2 + \eta^2)\right] \exp\left[\frac{jk}{2}\left(\frac{1}{d} + \frac{1}{z_2} - \frac{1}{f}\right)(x'^2 + y'^2)\right]$$
$$\times \exp\left[\frac{jk}{2z_2}(u^2 + v^2)\right] \exp\left[-\frac{jk}{d}(\xi x' + \eta y')\right] \exp\left[-\frac{jk}{z_2}(u x' + v y')\right] d\xi\, d\eta\, dx'\, dy'$$
$$= ABCD \exp\left[\frac{jk}{2z_2}(u^2 + v^2)\right] \iint U(\xi, \eta) \exp\left[\frac{jk}{2}\left(\frac{1}{z_0} + \frac{1}{d}\right)(\xi^2 + \eta^2)\right] F\{u, v\}\, d\xi\, d\eta, \qquad (3\text{-}17)$$

where F{u, v} denotes the integral over (x′, y′) defined by


$$F\{u, v\} = \iint \exp\left[\frac{jk}{2}\left(\frac{1}{d} + \frac{1}{z_2} - \frac{1}{f}\right)(x'^2 + y'^2)\right] \exp\left\{-j2\pi\left[\left(\frac{\xi}{\lambda d} + \frac{u}{\lambda z_2}\right)x' + \left(\frac{\eta}{\lambda d} + \frac{v}{\lambda z_2}\right)y'\right]\right\} dx'\, dy'$$
$$= E \exp\left\{-j\pi\lambda\left(\frac{1}{d} + \frac{1}{z_2} - \frac{1}{f}\right)^{-1}\left[\left(\frac{\xi}{\lambda d} + \frac{u}{\lambda z_2}\right)^2 + \left(\frac{\eta}{\lambda d} + \frac{v}{\lambda z_2}\right)^2\right]\right\}, \qquad (3\text{-}18)$$

where

$$E = j\lambda\left(\frac{1}{d} + \frac{1}{z_2} - \frac{1}{f}\right)^{-1}. \qquad (3\text{-}19)$$

Define z_1 as the distance between the source plane and the lens plane, which can be calculated by

$$z_1 = z_0 + d. \qquad (3\text{-}20)$$

From the lens imaging equation, we also have

$$\frac{1}{z_1} + \frac{1}{z_2} = \frac{1}{f}. \qquad (3\text{-}21)$$

Substituting Equations (3-18) to (3-20) into Equation (3-17), we have

$$U_i(u, v) = ABCD \exp\left[\frac{jk}{2z_2}(u^2 + v^2)\right] \iint U(\xi, \eta) \exp\left[\frac{jk}{2}\left(\frac{z_1}{z_0 d}\right)(\xi^2 + \eta^2)\right] F\{u, v\}\, d\xi\, d\eta$$
$$= ABCDE \exp\left[\frac{jk}{2}(u^2 + v^2)\left(\frac{1}{z_2} - \frac{d z_1}{z_0 z_2^2}\right)\right] \iint U(\xi, \eta) \exp\left[-j2\pi \frac{z_1}{\lambda z_0 z_2}(u\xi + v\eta)\right] d\xi\, d\eta. \qquad (3\text{-}22)$$

3.7 Diffraction Towards the Observe Plane

In our application, the distance z_3 between the source image plane and the observe plane is usually very large. It satisfies the far-field diffraction condition

$$z_3 \gg \frac{k\,(u^2 + v^2)_{\max}}{2}, \qquad (3\text{-}23)$$

so we can use the Fraunhofer approximation to model the diffraction from the source image plane to the observe plane. The field distribution in the observe plane can then be written as

$$U_o(x, y) = \frac{1}{j\lambda z_3} \exp(jkz_3) \exp\left[\frac{jk}{2z_3}(x^2 + y^2)\right] \iint U_i(u, v) \exp\left[-j\frac{2\pi}{\lambda z_3}(xu + yv)\right] du\, dv$$
$$= ABCDEF \iint Q \times \left\{\iint U(\xi, \eta) \exp\left[-j2\pi \frac{z_1}{\lambda z_0 z_2}(u\xi + v\eta)\right] d\xi\, d\eta\right\} \exp\left[-j\frac{2\pi}{\lambda z_3}(xu + yv)\right] du\, dv, \qquad (3\text{-}24)$$


where

$$F = \frac{1}{j\lambda z_3} \exp(jkz_3) \exp\left[\frac{jk}{2z_3}(x^2 + y^2)\right], \qquad (3\text{-}25)$$

and

$$Q = \exp\left[\frac{jk}{2}(u^2 + v^2)\left(\frac{1}{z_2} - \frac{d z_1}{z_0 z_2^2}\right)\right] \qquad (3\text{-}26)$$

is the quadratic phase term. If we set Q = 1, the quadratic phase effects are eliminated from Equation (3-22). This requires

$$\frac{1}{z_2} - \frac{d z_1}{z_0 z_2^2} = 0. \qquad (3\text{-}27)$$

Solving Equation (3-27) with the help of Equations (3-20) and (3-21), we get d = f. This indicates that to achieve the best output pattern, we need to place the SLM at the focal plane of the lens. Thereby, the field in the observe plane becomes

$$U_o(x, y) = ABCDF \iint \left\{\iint U(\xi, \eta) \exp\left[-j2\pi \frac{z_1}{\lambda z_0 z_2}(u\xi + v\eta)\right] d\xi\, d\eta\right\} \exp\left[-j\frac{2\pi}{\lambda z_3}(xu + yv)\right] du\, dv. \qquad (3\text{-}28)$$

There are two nested Fourier transforms in the above equation. By applying the duality property of the Fourier transform,

$$\mathcal{F}\{\mathcal{F}\{f(x, y)\}\} = f(-x, -y), \qquad (3\text{-}29)$$

we can simplify Equation (3-28) to

$$U_o(x, y) = ABCDF \left(\frac{\lambda z_0 z_2}{z_1}\right)^2 U\!\left(-\frac{z_0 z_2}{z_1 z_3}x,\, -\frac{z_0 z_2}{z_1 z_3}y\right)$$
$$= ABCDF \left(\frac{\lambda z_0 z_2}{z_1}\right)^2 \exp\left[\frac{jk}{z_0 M}(\alpha x + \beta y)\right] U_f\!\left(-\frac{z_0 z_2}{z_1 z_3}x,\, -\frac{z_0 z_2}{z_1 z_3}y\right)$$
$$= ABCDF \left(\frac{\lambda z_0 z_2}{z_1}\right)^2 \exp\left[\frac{jk}{z_0 M}(\alpha x + \beta y)\right] \mathrm{Rect}\!\left(-\frac{x}{MW}\right) \mathrm{Rect}\!\left(-\frac{y}{MW}\right) \otimes \sum_{m=-128}^{128} \sum_{n=-128}^{128} \delta(x + Mm\upsilon,\, y + Mn\upsilon), \qquad (3\text{-}30)$$

where M is the magnification factor of the system, defined as

$$M = \frac{z_1 z_3}{z_0 z_2}. \qquad (3\text{-}31)$$
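As a quick numerical sanity check of the d = f result from Equation (3-27), one can pick an arbitrary imaging geometry satisfying Equations (3-20) and (3-21) and verify that the quadratic phase coefficient vanishes only when the SLM sits at the focal plane. The focal length and distances below are arbitrary example values, not the actual LIM dimensions:

```python
# Verify that the quadratic phase term Q (Eq. 3-26) vanishes when d = f.
f_len = 0.050  # lens focal length, m (arbitrary example)
z1 = 0.075     # source-to-lens distance, m (arbitrary example)
z2 = 1.0 / (1.0 / f_len - 1.0 / z1)  # image distance from 1/z1 + 1/z2 = 1/f

def q_coeff(d):
    """Coefficient 1/z2 - d*z1/(z0*z2^2) from Eq. (3-27), with z0 = z1 - d."""
    z0 = z1 - d
    return 1.0 / z2 - d * z1 / (z0 * z2**2)

print(q_coeff(f_len))   # ~0: SLM at the focal plane cancels the term
print(q_coeff(0.040))   # nonzero for any other SLM position
```

The coefficient is zero to machine precision at d = f and clearly nonzero elsewhere, confirming the design rule of mounting the SLM in the focal plane of the output lens.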


3.8 System Output Pattern

The four point sources reconstructed by the hologram are coherent. If they are located at (a/2, a/2), (−a/2, a/2), (−a/2, −a/2) and (a/2, −a/2) in the source plane, respectively, we can coherently superpose their diffraction fields as

$$U_\Sigma(x, y) = U_1(x, y) + U_2(x, y) + U_3(x, y) + U_4(x, y)$$
$$= ABCDF \left(\frac{\lambda z_0 z_2}{z_1}\right)^2 \left\{\exp\left[\frac{jk}{2z_0 M}(ax + ay)\right] + \exp\left[\frac{jk}{2z_0 M}(-ax + ay)\right] + \exp\left[\frac{jk}{2z_0 M}(-ax - ay)\right] + \exp\left[\frac{jk}{2z_0 M}(ax - ay)\right]\right\}$$
$$\times\, \mathrm{Rect}\!\left(-\frac{x}{MW}\right) \mathrm{Rect}\!\left(-\frac{y}{MW}\right) \otimes \sum_{m=-128}^{128} \sum_{n=-128}^{128} \delta(x + Mm\upsilon,\, y + Mn\upsilon). \qquad (3\text{-}32)$$

The optical intensity distribution in the observe plane can thus be written as

$$I = \left|U_\Sigma(x, y)\right|^2 = \left|ABCDF\left(\frac{\lambda z_0 z_2}{z_1}\right)^2\right|^2 16 \cos^2\!\left(\frac{ka}{2z_0 M}x\right) \cos^2\!\left(\frac{ka}{2z_0 M}y\right)$$
$$\times\, \mathrm{Rect}\!\left(-\frac{x}{MW}\right) \mathrm{Rect}\!\left(-\frac{y}{MW}\right) \otimes \sum_{m=-128}^{128} \sum_{n=-128}^{128} \delta(x + Mm\upsilon,\, y + Mn\upsilon). \qquad (3\text{-}33)$$

Equation (3-33) indicates that the output pattern includes two parts: one is the interference of the optical waves from the four point sources; the other is the modulation function of the SLM. If we align the SLM so that the distance between the SLM and the sources satisfies

$$\frac{2 z_0 \pi}{a k} = \upsilon, \qquad (3\text{-}34)$$

then Equation (3-33) becomes

$$I = \left|ABCDF\left(\frac{\lambda z_0 z_2}{z_1}\right)^2\right|^2 16 \cos^2\!\left(\frac{\pi x}{M\upsilon}\right) \cos^2\!\left(\frac{\pi y}{M\upsilon}\right) \times \mathrm{Rect}\!\left(-\frac{x}{MW}\right) \mathrm{Rect}\!\left(-\frac{y}{MW}\right) \otimes \sum_{m=-128}^{128} \sum_{n=-128}^{128} \delta(x + Mm\upsilon,\, y + Mn\upsilon). \qquad (3\text{-}35)$$


Each interference spot is now aligned to a pixel of the SLM; therefore, each individual spot can be switched on and off by changing the voltage applied to the corresponding pixel of the SLM. In addition, from Equation (3-34) we can get

$$z_0 = \frac{\upsilon a}{\lambda}. \qquad (3\text{-}36)$$
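Plugging the system values quoted elsewhere in this thesis (υ = 15 µm pixel pitch, a = 2 mm source separation, λ = 0.633 µm) into Equation (3-36) gives the required source-to-SLM distance; a quick check:

```python
wavelength = 0.633e-6  # He-Ne laser, m
pitch = 15e-6          # SLM pixel pitch (upsilon), m
a = 2e-3               # virtual point-source separation, m

z0 = pitch * a / wavelength  # Eq. (3-36): source-plane-to-SLM distance
print(f"z0 = {z0 * 1e3:.2f} mm")
```

This comes out near 47.4 mm, essentially the z_0 value used in the Chapter 4 simulations.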

3.9 Conclusion

The analysis of the system based on optical diffraction theory points out very important guidelines for the optimal design of the LIM system. As indicated in Equations (3-26) and (3-27), the quadratic phase term can be eliminated if we place the SLM at the focal plane of the lens. Therefore, the spherical aberration resulting from the non-planar incident wave out of the fiber endface can be eliminated from the final output pattern. Another design criterion derived from the optical analysis is embodied in Equation (3-35): by proper design of the separation of the four virtual point sources (a) and the distance between the source plane and the SLM (z_0), the interference spots will be aligned to the pixels of the SLM. Each individual spot of the output pattern can thus be switched on or off by applying a proper voltage signal to the corresponding pixel of the SLM, so the output can be modulated into any pattern according to the electronic input to the SLM. The output light from an optical fiber has a fixed angle of illumination determined by the numerical aperture (NA) of the fiber. The interference pattern can only be generated in the overlap region of the illumination from the four virtual fiber images recorded by the hologram. In order to allow full modulation by the SLM, the overlap region of these four virtual sources must cover the whole functional area of the SLM.


CHAPTER 4. Theoretical Analysis for Square Beam Array Generation
The 2-dimensional beamlet array can be produced by the interference of four coherent point sources located at the corners of a square. In order to ensure good operational stability, a Holographic Structured Light Generator (HSLG) technology [20] is applied to reproduce the four coherent point sources, so that the phases and positions of the four point sources are kept unchanged during the application. As pointed out by the diffraction analysis, the interference spots generated by the hologram must align precisely with the SLM aperture to obtain high-quality controllable beamlets. The fabrication process of the hologram for use in the HSLG can be described as follows. A laser beam is divided into five beams coupled into five fibers, respectively; the beams exiting four of the fiber ends collectively comprise the object beams. These four fiber ends are positioned in a common plane on the four corners of a square. The beam from the fifth fiber end is used as the reference illumination. The object and reference beams combine at the holographic medium and produce an interference pattern that is recorded in the hologram. The hologram will form four mutually coherent point-source images upon illumination by a reconstruction beam that coincides with the reference beam. Therefore, the hologram specification calls for source separations and positions that will impact hologram fabrication and the SLM interface. Once the HSLG master is fabricated, it can be replicated at low cost. In this chapter, we focus on the optimal design of the hologram parameters, including the optimization of the source separation and the analysis of spherical distortion, non-square error, and out-of-plane error. Other HSLG constraints include light efficiency, uniformity, phase accuracy and alignment requirements.

4.1 Spherical Distortion

4.1.1 Analysis

As shown in Figure 4.1, the four sources are located on the corners of a square in the source plane (ξ, η). The Cartesian coordinates of the four sources are (a/2, a/2), (−a/2, a/2), (−a/2, −a/2) and (a/2, −a/2). The coherent monochromatic waves from the four point sources located in the (ξ, η) plane superpose to generate a two-dimensional interference pattern as they overlap in the (x, y) plane; the separation of the planes is z. The interference pattern can be found through the following equation [23]:

$$I(x, y) = \left[Ae^{ikr_1} + Ae^{ikr_2} + Ae^{ikr_3} + Ae^{ikr_4}\right] \cdot \left[Ae^{-ikr_1} + Ae^{-ikr_2} + Ae^{-ikr_3} + Ae^{-ikr_4}\right]$$
$$= A^2\left[4 + 2\cos k(r_2 - r_1) + 2\cos k(r_3 - r_1) + 2\cos k(r_4 - r_1) + 2\cos k(r_3 - r_2) + 2\cos k(r_4 - r_2) + 2\cos k(r_4 - r_3)\right], \qquad (4\text{-}1)$$

where r_1, r_2, r_3 and r_4 are the distances from the four point sources to the point of observation, k = 2π/λ is the propagation constant, λ is the wavelength, A is the amplitude of the electric field, and a is the separation between two adjacent corners of the square.

Figure 4.1 The position relationship between the point sources and any interference point.

In the Cartesian coordinate system, the distances can be calculated as

$$r_i = \left[(x - \xi_i)^2 + (y - \eta_i)^2 + z^2\right]^{1/2} = r_0\left[1 + \frac{\xi_i^2 + \eta_i^2}{r_0^2} - \frac{2\xi_i x + 2\eta_i y}{r_0^2}\right]^{1/2}, \quad i = 1, 2, 3, 4, \qquad (4\text{-}2)$$

where

$$r_0 = \left(x^2 + y^2 + z^2\right)^{1/2}. \qquad (4\text{-}3)$$


For a relatively large distance (z >> (ξ, η, x, y)_max), Equation (4-2) may be approximated by the binomial expansion

$$r_i \approx r_0 + \frac{\xi_i^2 + \eta_i^2}{2r_0} - \frac{\xi_i x + \eta_i y}{r_0}, \quad i = 1, 2, 3, 4. \qquad (4\text{-}4)$$

Thereafter, Equation (4-1) can be further simplified to

$$I(x, y) = A^2\left[4 + 4\cos\frac{kax}{r_0} + 4\cos\frac{kay}{r_0} + 2\cos\frac{ka(x - y)}{r_0} + 2\cos\frac{ka(x + y)}{r_0}\right] = 16A^2\cos^2\!\left(\frac{ka}{2r_0}x\right)\cos^2\!\left(\frac{ka}{2r_0}y\right). \qquad (4\text{-}5)$$

Equation (4-5) expresses the interference pattern from four perfectly aligned and phased sources, including the spherical distortion. Using indices m and n to label the maxima, the locations of the intensity maxima satisfy

$$x_{mn} = m\,\frac{\lambda r_{0mn}}{a}, \quad y_{mn} = n\,\frac{\lambda r_{0mn}}{a}, \quad |m|, |n| = 0, 1, 2, \ldots \qquad (4\text{-}6)$$

where

$$r_{0mn} = \left(x_{mn}^2 + y_{mn}^2 + z^2\right)^{1/2}. \qquad (4\text{-}7)$$

The impact of the (x, y) dependence of r_{0mn} is to shift the intensity maxima from the desired square array to a spherical pattern. To determine the impact of the spherical distortion, the locations of the intensity maxima are calculated. Solving Equations (4-6) and (4-7), we obtain the intensity maximum positions

$$x_{mn} = m\,\frac{\lambda z}{a}\left[1 - \frac{(m^2 + n^2)\lambda^2}{a^2}\right]^{-1/2}, \quad y_{mn} = n\,\frac{\lambda z}{a}\left[1 - \frac{(m^2 + n^2)\lambda^2}{a^2}\right]^{-1/2}, \quad |m|, |n| = 0, 1, 2, \ldots \qquad (4\text{-}8)$$

As indicated by Equation (4-8), the separation between two adjacent beamlets is not constant across the entire observing plane. The spots will form a square pattern only when the second term inside each radical is sufficiently small, i.e., if

$$\frac{a^2}{\lambda^2} \gg m_{\max}^2 + n_{\max}^2. \qquad (4\text{-}9)$$
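A short numerical check of Equation (4-8), using the system values quoted later in this chapter (a = 2 mm, λ = 0.633 µm, z = z_0 from Equation (3-36)); the corner spot of the 256 × 256 array (m = n = 128) shows the largest departure from the ideal square grid:

```python
import math

lam = 0.633e-6       # wavelength, m
a = 2e-3             # source separation, m
z = 15e-6 * a / lam  # z0 from Eq. (3-36), ~47.39 mm

def spot_x(m, n):
    """Distorted spot position x_mn from Eq. (4-8)."""
    return m * (lam * z / a) / math.sqrt(1 - (m * m + n * n) * (lam / a)**2)

m = n = 128                  # corner of the 256 x 256 array
ideal = m * lam * z / a      # undistorted position m*lambda*z/a
shift = spot_x(m, n) - ideal
pitch = lam * z / a          # beamlet pitch at the SLM plane (15 um here)
print(f"corner shift: {shift / pitch:.3f} pitches")
```

The corner spot lands roughly a fifth of a pitch away from its ideal grid position, consistent with the 0.21-pixel maximum error reported in Table 4.1 for a = 2 mm.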


This result is independent of the distance of operation. Take the desired square positions of the beamlets to be

$$x_{0m} = m\,\frac{\lambda z}{a} = m\,\Delta x_0, \quad y_{0n} = n\,\frac{\lambda z}{a} = n\,\Delta y_0, \quad |m|, |n| = 0, 1, 2, \ldots \qquad (4\text{-}10)$$

Hence the desired separation between adjacent beamlets is

$$\Delta x_0 = \frac{\lambda z}{a}, \quad \Delta y_0 = \frac{\lambda z}{a}. \qquad (4\text{-}11)$$

Comparing Equation (4-8) with Equation (4-10), we find the spot position errors to be

$$\Delta x_{mn} = x_{mn} - x_{0m} \approx \frac{1}{2}\left(\frac{\lambda}{a}\right)^2 m(m^2 + n^2)\,\Delta x_0, \quad \Delta y_{mn} = y_{mn} - y_{0n} \approx \frac{1}{2}\left(\frac{\lambda}{a}\right)^2 n(m^2 + n^2)\,\Delta y_0, \quad |m|, |n| = 0, 1, 2, \ldots \qquad (4\text{-}12)$$

The beamlet position errors caused by the spherical distortion of the interference pattern can cause imperfect alignment between the interference spots and the SLM pixels. In real applications we let Δx_0 = Δy_0 = υ, which means that the central interference pattern is aligned to the SLM pixels. If we set the alignment tolerance to a quarter of the SLM pixel pitch, that is,

$$\frac{\Delta x_{mn}}{\Delta x_0} = \frac{\Delta x_{mn}}{\upsilon} \le \frac{1}{4} \qquad (4\text{-}13)$$

and

$$\frac{\Delta y_{mn}}{\Delta y_0} = \frac{\Delta y_{mn}}{\upsilon} \le \frac{1}{4}, \qquad (4\text{-}14)$$

where υ is the pixel pitch of the SLM, then the maximal radial relative error Er_max is likewise limited by

$$Er_{\max} = \frac{\sqrt{|\Delta x_{mn}|_{\max}^2 + |\Delta y_{mn}|_{\max}^2}}{\sqrt{\Delta x_0^2 + \Delta y_0^2}} \le \frac{1}{4}. \qquad (4\text{-}15)$$

At the corner of the N × N pattern, the misalignment error reaches its maximum, given by


$$\frac{|\Delta x_{mn}|_{\max}}{\Delta x_0} = \frac{|\Delta y_{mn}|_{\max}}{\Delta y_0} = \frac{1}{2}\left(\frac{\lambda}{a}\right)^2 \frac{N}{2}\left[\left(\frac{N}{2}\right)^2 + \left(\frac{N}{2}\right)^2\right] = \frac{N^3}{8}\left(\frac{\lambda}{a}\right)^2 \le \frac{1}{4},$$

or

$$Er_{\max} = \frac{\sqrt{|\Delta x_{mn}|_{\max}^2 + |\Delta y_{mn}|_{\max}^2}}{\sqrt{\Delta x_0^2 + \Delta y_0^2}} = \frac{N^3}{8}\left(\frac{\lambda}{a}\right)^2 \le \frac{1}{4}. \qquad (4\text{-}16)$$

Solving Equation (4-16), we get the requirement for the source separation:

$$a \ge \lambda\sqrt{\frac{N^3}{2}}. \qquad (4\text{-}17)$$

For 256 × 256 beamlets and λ = 0.633 µm, we get a ≥ 1.833 mm. Table 4.1 lists the maximum position errors generated by different source separations. We find that a source spacing of a = 2 mm is enough to ensure registration to 0.25 pixel over the entire SLM array.

Table 4.1 Maximum position errors for different source separations

Source separation a (mm)     1.0    1.8    2.0
Max. position error (pixel)  0.84   0.26   0.21
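The entries of Table 4.1 follow directly from Equations (4-16) and (4-17); a small script reproducing them (a sketch using only values given in the text):

```python
import math

N = 256          # beamlets per side
lam = 0.633e-6   # wavelength, m

# Minimum source separation from Eq. (4-17)
a_min = lam * math.sqrt(N**3 / 2)
print(f"a_min = {a_min * 1e3:.3f} mm")  # ~1.833 mm

# Maximum corner misalignment (in pixels) from Eq. (4-16)
for a_mm in (1.0, 1.8, 2.0):
    err = (N**3 / 8) * (lam / (a_mm * 1e-3))**2
    print(f"a = {a_mm} mm -> {err:.2f} pixel")
```

The printed errors (0.84, 0.26, 0.21 pixel) match the table, confirming the a = 2 mm design choice.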

4.1.2 Computer Simulation

Based on the above discussion, the Mathematica software tool is used to simulate the generation of the interference pattern. Assuming a source wavelength λ = 0.633 µm, a source separation a = 2 mm, and a distance between the source plane and the SLM plane z_0 = 47.373 mm calculated via Equation (3-36), the square interference pattern can be generated using Equation (4-5). On the SLM plane, the axes are in units of µm. Figure 4.2 shows a 3-D surface plot of the radial relative position error of the interference spots with respect to the centers of the SLM pixels. As shown in Figure 4.2, the inherent spherical distortion of the interference beamlets results in misalignments between the beamlets and the SLM pixels. The error reaches its maximum (almost a quarter of the pixel pitch) at the outer edges of the SLM array. Figure 4.3 shows the simulated central (a) and edge (b) portions of the output


beamlets. The central spots show almost no observable misalignment, while the misalignment at the edge spots is quite noticeable.

Figure 4.2 3-D plot of the spherical distortion of the interference pattern (a = 2 mm).



Figure 4.3 2-D contour plot of the output beamlets at the SLM plane (a = 2 mm). (a) Central nine spots; (b) Edge nine spots.


4.2 Overlap of the Four Point Sources' Illuminations

The output light from an optical fiber has a fixed angle of illumination (θ), which is determined by the numerical aperture (NA) of the fiber according to

$$\theta = \arcsin(NA). \qquad (4\text{-}18)$$

Figure 4.4 The relationship between the point source position and illumination area.

The interference pattern can only be generated in the overlap region of the illumination from the four virtual fiber images recorded by the hologram. In order to allow full modulation of the interference beamlets by the SLM, the overlap region of these four virtual sources must cover the whole functional area of the SLM. As shown in Figure 4.4, the illuminated area of each point source in the observing plane (x, y) can be calculated as

$$A_1 = \{(x, y) \mid (x - a/2)^2 + (y - a/2)^2 \le (z\tan\theta)^2\},$$
$$A_2 = \{(x, y) \mid (x + a/2)^2 + (y - a/2)^2 \le (z\tan\theta)^2\},$$
$$A_3 = \{(x, y) \mid (x + a/2)^2 + (y + a/2)^2 \le (z\tan\theta)^2\},$$
$$A_4 = \{(x, y) \mid (x - a/2)^2 + (y + a/2)^2 \le (z\tan\theta)^2\}.$$

Because the illumination angle θ is relatively small (for example, θ = 0.11 rad for typical single-mode fibers), we can use the approximation

$$\tan\theta \approx \theta. \qquad (4\text{-}19)$$

The overlap area A can thus be calculated as

$$A = A_1 \cap A_2 \cap A_3 \cap A_4 = \left\{(x, y) \,\middle|\, x^2 + y^2 \le z^2\tan^2\theta - \frac{a^2}{2}\right\} \approx \left\{(x, y) \,\middle|\, x^2 + y^2 \le z^2\theta^2 - \frac{a^2}{2}\right\}. \qquad (4\text{-}20)$$

Assuming z >> a, the overlap area can be further simplified as

$$A = \{(x, y) \mid x^2 + y^2 \le (z\theta)^2\}. \qquad (4\text{-}21)$$
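Under the z >> a simplification of Equation (4-21), requiring the overlap circle to contain the outermost spots of the N × N array reproduces the separation bound derived next in Equation (4-25); a numeric sketch with the system values from the text:

```python
import math

N = 256        # beamlets per side
lam = 0.633e-6 # wavelength, m
theta = 0.11   # fiber illumination half-angle, rad

# The outermost ideal spot sits at sqrt(2)*(N/2)*(lambda*z/a) from the
# axis (Eqs. 4-10, 4-23, 4-24); requiring this radius <= z*theta gives:
a_min = N * lam / (math.sqrt(2) * theta)  # Eq. (4-25); z cancels out
print(f"a_min = {a_min * 1e3:.2f} mm")    # ~1.04 mm
```

Note that the distance z drops out entirely, so the overlap requirement constrains only the source separation, and it is weaker than the spherical-distortion bound of 1.833 mm from Section 4.1.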

The overlap area of the interference among the four point sources must cover the outermost pixels of the SLM. Therefore, the following relation needs to be satisfied:

$$\sqrt{x_{0m}\big|_{\max}^2 + y_{0n}\big|_{\max}^2} \le z\theta, \qquad (4\text{-}22)$$

where x_{0m}|_max and y_{0n}|_max are the spot positions at the edge of the N × N interference pattern, which can be calculated from Equation (4-10) by setting m_max = n_max = N/2. Thereafter, we have

$$x_{0m}\big|_{\max} = m_{\max}\,\Delta x_0 = \frac{N}{2}\Delta x_0 \qquad (4\text{-}23)$$

and

$$y_{0n}\big|_{\max} = n_{\max}\,\Delta y_0 = \frac{N}{2}\Delta y_0. \qquad (4\text{-}24)$$

Substituting Equations (4-11), (4-23) and (4-24) into (4-22), we get

$$a \ge \frac{N\lambda}{\sqrt{2}\,\theta}. \qquad (4\text{-}25)$$

This gives the minimum source separation required by the illumination overlap. For N = 256, λ = 0.633 µm and θ = 0.11 rad, the source spacing must satisfy a ≥ 1.04 mm in order to provide a large enough overlap area of illumination to cover all of the SLM pixels. By this analysis, we find that a source spacing of a = 2.0 mm is adequate to ensure full coverage of the interference illumination on the SLM pixels.

4.3 Phase Error Analyses

Achieving the desired square array of spots in the interference pattern requires that (1) the four source images lie precisely on the corners of a square, (2) the normal to this square be parallel to the source beam axes, and (3) the four point sources be coplanar. For hologram fabrication, the requirement is that these conditions be established and maintained over the time period necessary to expose the emulsion. In this section, the phase control requirements for hologram fabrication are examined in detail. The phase errors include the non-square position error and the non-coplanar error of the four point sources.

4.3.1 Out-of-Plane Induced Phase Error

As shown in Figure 4.5, we consider the situation in which one point source (s3) is simply shifted by a depth position error Δz, which introduces an initial phase error to the light wave of this particular source, given by

$$\Delta\phi = k\,\Delta z. \qquad (4\text{-}26)$$
(4-26)

Figure 4.5 Out-of-plane induced phase error.

Letting r0 = z in Equation (4-5) for simplicity, the intensity produced by the shifted beamlets can be described as

I(x, y) = A²[4 + 2cos(kax/z) + 2cos(kay/z) + 2cos(kax/z + Δφ) + 2cos(kay/z + Δφ) + 2cos(k(ax − ay)/z) + 2cos(k(ax + ay)/z + Δφ)]
= 4A²[cos²(k(ax + ay)/2z + Δφ/2) + cos²(k(ax − ay)/2z) + 2cos(Δφ/2)cos(k(ax + ay)/2z + Δφ/2)cos(k(ax − ay)/2z)]. (4-27)

Equation (4-27) indicates that the initial phase error introduced by the out-of-plane fiber position can distort the interference pattern. Further examination of Equation (4-27) reveals that the phase error is periodic. If the depth position error Δz is a multiple of the wavelength, which means that the initial phase shift is a multiple of 2π, the phase error introduced by the depth offset of one point source will cause the entire interference pattern to shift, and there will be no distortion of the interference pattern. This can be compensated by re-aligning the SLM to the interference pattern. Nevertheless, to ensure that the interference pattern can still cover all of the SLM pixels, we adopt as our criterion for depth positioning that the center of the interference pattern (i.e., the normal to the source plane) fall on the center of the SLM to within some tolerance Δl. Simple trigonometry then yields the condition

Δz ≤ a · Δl / z. (4-28)

Substituting Equation (3-36) into (4-28), we get

Δz ≤ Δl · λ / υ. (4-29)

For Δl = 0.15 mm (corresponding to ten SLM pixels), υ = 15 μm, and λ = 0.633 μm, we find Δz ≤ 6.33 μm. So, proper depth positioning within a few micrometers is sufficient to center the interference pattern relative to the center of the SLM. However, if the phase shift is an odd integer multiple of π, the distortion reaches its maximum. To obtain the tolerance for source phase control, rewrite Equation (4-26) as


Δφ = 2π Δz/λ = i · 2π + 2πΔf, |i| = 0, 1, 2, …, (4-30)

where Δz/λ = i + Δf, i is the integer part of Δz/λ, and Δf is the fractional part of Δz/λ.

Let Δφ′ = 2πΔf. If Δφ′ = π, or Δf = 0.5, we know from Equation (4-27) that the interference pattern is at its worst. Phase control can be accomplished through source depth adjustment. Based on the computer simulation, we define the condition to control the source phase as

|Δφ′| ≤ π/5 ≈ 0.6 rad. (4-31)

With Equation (4-31) as our criterion, the source phase must be controlled to better than 0.6 radians. The resolution of the fiber position adjustment in the axial direction then needs to satisfy

|Δz| ≤ λ/10 ≈ 0.06 μm. (4-32)
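The decomposition of Equations (4-30)–(4-32) can be sketched numerically; the helper below (`residual_phase` is our own name, not from the thesis) folds the depth error into its residual phase:

```python
import math

lam = 0.633e-6  # wavelength (m)

def residual_phase(dz, lam=lam):
    """Residual phase error dphi' = 2*pi*frac(dz/lam) of Eq. (4-30),
    folded into (-pi, pi]."""
    dphi = 2 * math.pi * ((dz / lam) % 1.0)
    return dphi - 2 * math.pi if dphi > math.pi else dphi

# A whole number of wavelengths gives no distortion; half a wavelength is
# worst; the lam/10 tolerance of Eq. (4-32) keeps |dphi'| near the 0.6 rad
# criterion of Eq. (4-31).
for dz in (0.0, lam / 10, lam / 2, lam):
    print(f"dz = {dz * 1e6:.4f} um -> |dphi'| = {abs(residual_phase(dz)):.3f} rad")
```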

If the adjustment accuracy of the fiber axial position cannot satisfy the condition shown in Equation (4-32), the phase error will result in a beating pattern and the system performance will be unacceptable.

4.3.2 Computer Simulations of the Out-of-Plane Phase Error

Based on Equation (4-27), we can simulate the out-of-plane phase errors. Figure 4.6 shows the simulation results, where (a) is the simulated error for Δφ′ = π (the worst case) and (b) is for Δφ′ = 0.2π (the required case).
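The simulation behind Figure 4.6 can be sketched as follows (a minimal NumPy version; the thesis does not state its simulation tool, and the grid size and window here are illustrative choices):

```python
import numpy as np

# Geometry from Chapter 4
lam, a, z = 0.633e-6, 2.0e-3, 47.393e-3
k = 2 * np.pi / lam

def pattern(dphi, n=513, half=1.0e-3):
    """Intensity of Eq. (4-27) on an n x n grid spanning +/-half metres,
    with out-of-plane phase error dphi on source s3."""
    x = np.linspace(-half, half, n)
    X, Y = np.meshgrid(x, x)
    u, v = k * a * X / z, k * a * Y / z
    return (4 + 2 * np.cos(u) + 2 * np.cos(v)
              + 2 * np.cos(u + dphi) + 2 * np.cos(v + dphi)
              + 2 * np.cos(u - v) + 2 * np.cos(u + v + dphi))

I_ok = pattern(0.2 * np.pi)   # required case, as in Fig. 4.6(b)
I_bad = pattern(np.pi)        # worst case, as in Fig. 4.6(a)
# In the worst case the pattern degenerates to 4 + 4*sin(u)*sin(v):
# the peak intensity collapses from 16 toward 8.
print(I_ok.max(), I_bad.max())
```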


Figure 4.6 Computer simulation of the out-of-plane phase error. (a) Δφ′ = π; (b) Δφ′ = 0.2π.

4.3.3 Non-Square Source Positioning Induced Phase Error

The interference pattern suffers a distortion if the four point sources do not occupy the four corners of a square. Here, we assume that three point sources lie on the corners of a square and the fourth does not. For example, the coordinates of point source s4 in the source plane are offset by (Δx, Δy) with respect to its desired position (a/2, −a/2), as shown in Figure 4.7.

Figure 4.7 Non-square position geometry of the four point sources.


Again, to simplify the analysis, we let r0 = z in Equation (4-1). The distorted interference pattern is then

I(x, y) = A²[4 + 2cos(kax/z) + 2cos(k(ax + ay)/z) + 2cos(kay/z + Δφ) + 2cos(kay/z) + 2cos(kax/z − Δφ) + 2cos(k(ax − ay)/z − Δφ)]
= 4A²[cos²(k(ax + ay)/2z) + cos²(k(ax − ay)/2z − Δφ/2) + 2cos(Δφ/2)cos(k(ax + ay)/2z)cos(k(ax − ay)/2z − Δφ/2)], (4-33)

where the phase error Δφ in the above description of the interference beamlets can be written as

Δφ = −k (x · Δx + y · Δy) / z. (4-34)

This phase error can introduce a distortion of the interference pattern similar to that caused by the out-of-plane phase error described in Section 4.3.1. Both are periodic in nature and can be compensated by accurately adjusting the initial fiber positions. However, Equation (4-34) also indicates that the distortion induced by the non-square fiber position varies with location in the pattern; in general, the outer spots suffer larger distortions than the central spots.

4.3.4 Computer Simulations of the Non-Square Phase Error

The computer simulation results, shown in Figure 4.8, are obtained based on Equations (4-33) and (4-34), where (a) is for Δx = 3.0 μm and Δy = 3.0 μm and (b) is for Δx = 2.0 μm and Δy = 2.0 μm. Our calculations and computer simulations show that |Δx| and |Δy| must be less than 2.0 μm in order to get a fuzz-free pattern.
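The simulation behind Figure 4.8 can be sketched in the same way (again an illustrative NumPy version; grid size and window are our own choices):

```python
import numpy as np

lam, a, z = 0.633e-6, 2.0e-3, 47.393e-3
k = 2 * np.pi / lam

def pattern(dx, dy, n=513, half=1.5e-3):
    """Intensity of Eq. (4-33) with source s4 offset by (dx, dy); the phase
    error of Eq. (4-34) grows with (x, y), so outer spots distort more."""
    x = np.linspace(-half, half, n)
    X, Y = np.meshgrid(x, x)
    u, v = k * a * X / z, k * a * Y / z
    dphi = -k * (X * dx + Y * dy) / z    # Eq. (4-34)
    return (4 + 2 * np.cos(u) + 2 * np.cos(u + v) + 2 * np.cos(v + dphi)
              + 2 * np.cos(v) + 2 * np.cos(u - dphi)
              + 2 * np.cos(u - v - dphi))

I_bad = pattern(3.0e-6, 3.0e-6)  # Fig. 4.8(a): visibly fuzzy
I_ok = pattern(2.0e-6, 2.0e-6)   # Fig. 4.8(b): acceptable
# At the pattern centre the phase error vanishes and the peak stays at 16.
print(I_ok[I_ok.shape[0] // 2, I_ok.shape[1] // 2])
```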


Figure 4.8 Simulation results of the non-square phase error. (a) Δx = 3.0 μm and Δy = 3.0 μm; (b) Δx = 2.0 μm and Δy = 2.0 μm.

4.4 Requirements for Hologram Fabrication

According to the analysis and the computer simulation results, we can conclude as follows. (1) To obtain 256 x 256 beamlets, the source spacing must be larger than 1.0 mm. (2) To ensure that the mismatch between the beamlets and the SLM pixels is no more than one-quarter of the pixel pitch, we can choose the source separation to be 2 mm. (3) For the non-coplanar shift of a point source, proper depth positioning within a few micrometers is sufficient to center the interference pattern. With Equation (4-31) as our criterion, the source phase must be controlled to better than 0.6 radians. While phase control can be accomplished through source depth adjustment, the tolerance is one tenth of a wavelength, or 0.06 μm when λ = 0.633 μm. (4) The corner position tolerance of the point sources is 2.0 μm. Given that the SLM pixel pitch υ is 15 μm and a = 2.0 mm, the distance between the source plane and the SLM is calculated to be z0 = 47.393 mm using Equation (3-36).


CHAPTER 5. System Implementation and Test Results

5.1 Optical Implementation

The analysis of the system based on optical diffraction theory establishes guidelines for the optimal design of the LIM system. As indicated in Equations (3-26) and (3-27), the quadratic phase term can be eliminated if we place the SLM at the focal plane of the lens. Another design criterion derived from the optical analysis is embodied in Equation (3-33): by proper design of the separation of the four virtual point sources (a) and the distance between the source plane and the SLM (z0), the interference spots will be aligned to the pixels of the SLM. Each individual spot of the output pattern can thus be switched on or off by applying a proper voltage signal to the corresponding pixel of the SLM. Therefore, the output can be modulated into any pattern according to the input electronic signal to the SLM. The output light from an optical fiber has a fixed angle of illumination, which is determined by the numerical aperture (NA) of the fiber. The interference pattern can only be generated in the overlapping region of the illumination. In order to allow full modulation by the SLM, the overlapping region of the four virtual sources must cover the whole functional area of the SLM. In Chapter 4, the positions and tolerances of the four point sources for manufacturing the hologram were given. The remaining parameters in the design of the hologram are (1) the distance between the holographic medium and the point sources in the object beam, (2) the distance between the holographic medium and the reference beam, and (3) the orientation of the holographic medium with respect to the optical axis. The recording geometry depends on the system layout and the distance between the source plane and the SLM (z0), since the reconstruction geometry is the same as the recording geometry.


5.1.1 Hologram Specifications

Holograms may be classified in a number of ways depending on their thickness, method of recording, and method of reconstruction. Based on the method of recording, holograms fall into two basic categories [30]: transmission holograms and reflection holograms. If the reference beam and the beam bouncing off the object both hit the holographic plate from the same side, the result is a transmission hologram. If the reference beam hits the plate from one side while the beam from the object hits the plate from the other side, the result is a reflection hologram. The advantages and disadvantages of a reflection hologram are: (1) no shrinkage problem, and therefore less possibility of aberration when regenerating the image; (2) high efficiency (~90%); (3) difficulty of fabrication; and (4) the requirement for a high-quality recording medium. The major advantage of a transmission hologram is that it is relatively easy to make. However, transmission holograms generally have a low efficiency (around 40%). According to the prior calculations, the distance between the four point sources and the SLM (z0) is 47.393 mm if the source separation (a) is 2 mm. To allow enough room for placing the input fiber connector, the transmission hologram is chosen. As for the angle between the object and reference beams, 90° is preferable because of its optical simplicity. The size s of the intercepted area on the holographic medium is defined as

s = 2l tanθ / sinα, (5-1)

where l is the distance along the optical axis between the source plane and the holographic medium; the source plane is oriented perpendicular to the optical axis. θ is the half-angle of the diverging beam and α is the angle at which the holographic medium is placed relative to the optical axis.

Figure 5.1 Interception of the beams on the holographic medium.

In our case, θ is 0.11 rad, as determined by the NA of the optical fibers, and α is chosen to be 45°. If the distance between the holographic medium and the plane of the four point sources is set to 22 mm, each point source will intercept the holographic medium to produce an area of illumination 9.67 mm in size. Because the source separation is 2 mm, the total size of the intercepted area produced by all four sources on the holographic medium will be 11.67 mm, as shown in Figure 5.1. The illumination area of the reference beam must cover that entire region to allow full recovery of the four point sources. We choose the distance between the reference source and the holographic medium to be 40 mm. The intercepted area produced by the reference beam on the holographic medium is thus calculated to be 12.45 mm. The hologram specifications are shown in Table 5.1.
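Equation (5-1) can be checked directly for the reference beam; note that the thesis's 12.45 mm figure uses the small-angle approximation tanθ ≈ θ, so the exact-tangent value below differs slightly (a sketch, with variable names of our own choosing):

```python
import math

theta = 0.11                # half-angle of the diverging beam (rad)
alpha = math.radians(45.0)  # tilt of the holographic plate

def intercept_size(l_mm):
    """Footprint of one diverging beam on the tilted plate, Eq. (5-1):
    s = 2*l*tan(theta)/sin(alpha)."""
    return 2 * l_mm * math.tan(theta) / math.sin(alpha)

s_ref = intercept_size(40.0)  # reference beam placed 40 mm from the plate
print(f"reference-beam footprint ~ {s_ref:.2f} mm")
```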


Table 5.1 Hologram specifications

Parameter | Design Value
Hologram type | Transmission
Efficiency | 40%
Exposure time | Depends on the recording medium; generally 2–3 seconds
Size | ≥ 13 mm
Angle between object and reference beams | 90°
Distance between the reference source and the hologram | 40 mm
Distance between the object beam and the hologram | 22 mm
Recording wavelength | 633 nm
Source position requirement | Four point sources at the corners of a square with a side length of 2 mm
Source position tolerance | ≤ 2.0 μm
Source out-of-plane phase control | ≤ 0.6 rad
Source depth adjustment tolerance | ≤ 0.06 μm

Holograms may suffer from aberrations caused by a mismatch between the reference and reconstruction beams. Even a small deviation from the recording geometry can introduce distortions into the reconstructed pattern. The condition that eliminates all the aberrations simultaneously is to duplicate exactly the reference beam in the reconstruction process. Therefore, in the reconstruction process, the reconstruction wavelength and the angle and position of the reconstruction reference beam must be the same as those of the recording reference beam.


5.1.2 Lens Specifications

In general, aberration and diffraction are the two major issues that affect the performance of an optical system. If the aberrations of an optical system are well enough corrected that its performance is limited solely by diffraction, the system is called diffraction limited. To choose the lens for our optical system, two major effects need to be considered: lens aberrations and diffraction. Diffraction increases with increasing f-number, while aberrations decrease with increasing f-number. Determining optimum system performance often involves finding a point where the combination of these factors has a minimum effect. The lens parameters available for optimization are the focal length (f), the clear aperture (Φ), the f-number (f# = f/Φ), and the lens shape and construction.

Focal length

The SLM is set to be at the focal point of the lens. The size of the SLM holder limits the focal length of the lens. Referring to the mechanical drawing shown in Appendix A.4, the size of the SLM holder is about 2 inches (51 mm), and the size of the beamsplitter is 8 mm. If we choose the margin for lens adjustment to be 5 mm to 10 mm, the requirement for the lens focal length can be set as

f > (1/2) lSLM + (1/2) hBS + Margin = 34.5 mm ~ 39.5 mm. (5-2)
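The bound of Equation (5-2) is simple arithmetic; a minimal sketch of the check:

```python
# Mechanical envelope from the text: 51 mm SLM holder, 8 mm beamsplitter,
# and a 5-10 mm margin left for lens adjustment.
l_slm, h_bs = 51.0, 8.0  # mm

f_lo = 0.5 * l_slm + 0.5 * h_bs + 5.0   # smallest margin
f_hi = 0.5 * l_slm + 0.5 * h_bs + 10.0  # largest margin
print(f"f > {f_lo} mm ... {f_hi} mm")   # 34.5 mm ~ 39.5 mm, as in Eq. (5-2)
```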

Based on the above discussion, we choose a lens with a focal length of 40 mm.

Lens clear aperture

The lens clear aperture defines the area that controls the amount of light incident on an optical system. In general, the clear aperture of a lens gives the diameter over which its specifications are guaranteed. The SLM pixel is very small, which results in a large diffraction angle. This requires that the lens aperture be large enough to collect the higher order


diffraction light back to the source image plane. The diffracted field distribution across the lens surface can be expressed as

U_d^lens = A Σ_{m=−128}^{128} Σ_{n=−128}^{128} Sinc[ (x/z1 − (2mυ − a)/(2z0)) · W/λ ] · Sinc[ (y/z1 − (2nυ − a)/(2z0)) · W/λ ], (5-3)

where υ is the SLM pixel pitch (15 μm), W is the pixel size of the SLM (14 μm), a is the source separation (2 mm), and λ is the wavelength of the source (0.633 μm). The simulation result based on Equation (5-3) is plotted in Figure 5.2, from which we can see that if the lens aperture (Φ) is larger than 18 mm, the third order of diffraction can be collected by the lens.
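A rough back-of-envelope estimate (not the thesis's Equation (5-3) simulation) of where the SLM's pixel-grating diffraction orders land on the lens is consistent with the Φ > 18 mm requirement read off Figure 5.2:

```python
lam = 0.633e-6    # wavelength (m)
pitch = 15e-6     # SLM pixel pitch (m)
d = 40e-3         # SLM-to-lens distance (m)
half_slm = 256 * pitch / 2  # half-width of the active SLM area (m)

# The m-th order of the pixel grating leaves at angle ~ m*lam/pitch, so the
# beam edge at the lens sits near half_slm + d*m*lam/pitch from the axis.
dias = [2 * (half_slm + d * m * lam / pitch) * 1e3 for m in range(4)]
for m, dia in enumerate(dias):
    print(f"order {m}: required lens diameter ~ {dia:.1f} mm")
```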

Figure 5.2 The simulation result for the lens aperture.

Lens shape

In order to minimize the extra aberrations introduced by the lens into the output pattern, we choose a two-element achromatic lens, in which chromatic aberration has been corrected at a minimum of two wavelengths, because a singlet lens cannot satisfy the small-aberration requirement and the large aperture (> 18 mm) at the same time.


Taking all these factors into account, we designed the LIM system with the critical parameters listed in Table 5.2. The dimensions of the LIM system are also illustrated in Figure 5.3.

Table 5.2 Critical design parameters of the LIM

Parameter | Design Value
λ: Laser wavelength | 633 nm
a: Source separation | 2 mm
υ: Pitch of the SLM pixels | 15 μm
f: Focal length of the lens | 40 mm
Φ: Clear aperture of the lens | > 18 mm
z0: Distance between source plane and SLM | 47.393 mm
d: Distance between SLM and lens | 40 mm
z1: Distance between source plane and lens | d + z0


Figure 5.3 Optical layout of the LIM system.

5.2 Construction of the LIM

The light illumination module (LIM) is designed in such a way that precise optical alignment is achievable by properly adjusting each opto-mechanical part inside the LIM. The main purpose is to align the light source, the hologram, the spatial light modulator (SLM), and the out-coupling lens so that a clear 256 by 256 beamlets pattern can be


obtained. In this section, the opto-mechanical design of each part is described and the functions of each part are presented in detail. We also suggest a typical alignment procedure. We divide the whole LIM system into five modules according to their functions. As shown in Figure 5.3, the five modules are:
• Input source support and alignment
• Hologram support and alignment
• SLM support and alignment
• Out-coupling lens support and alignment
• LIM box

Each module performs its function to support different optical parts and to achieve precise alignment of the whole optical system. By manipulating these five modules, we can obtain a clean 256 by 256 beamlets pattern. The following subsections explain in full the functions and alignment procedures of each part.

5.2.1 Light Source Support and Alignment

The light source used in the current LIM is a He-Ne laser with an output power of 30 mW at the wavelength of 633 nm. The output beam of the laser is captured by a tiny lens pigtailed to a polarization-maintaining (PM) fiber cable. There is an adjusting mechanism built into the interface between the laser output and the PM fiber cable. By carefully adjusting the six screws, efficient coupling can be achieved between the laser and the PM fiber. The typical output coupling efficiency is around 50%, with a polarization extinction ratio larger than 20 dB. This part was purchased from OZ Inc. At the other end of the PM fiber cable, an FC-type fiber connector is attached, with the output end carefully polished. The output power from this end is measured to be about 17 mW. The beam from the input fiber is used as the reconstruction beam for the hologram to reproduce the four coherent point sources. In order to obtain a correctly scaled, highly efficient, and fuzz-free 256 by 256 light spot pattern in the predicted direction, the input beam must duplicate the reference beam with high accuracy. This requires that the input beam must


have the capability of fine adjustment with respect to the hologram. As shown in Appendix A.11, the PM fiber cable is mounted on a five-axis fiber optic positioner (FPR2-C1) through a fiber chuck (FPH-CA) and an FC fiber connector. The fiber optic positioner and chuck are manufactured by Newport Corp.

5.2.2 Hologram Support and Alignment

The hologram, which generates the images of four point sources located in a common plane, was fabricated by Dr. Gordon Little at the University of Dayton, Ohio. When a single laser beam is directed at the hologram, the light from the four point sources interferes as it propagates, forming a coherent array of beamlets. In order to obtain the desired square dot array, it is necessary to make sure that the position of the hologram matches the fabrication set-up. Two degrees of adjustment are needed to achieve this objective: one is rotating the hologram in its plane, and the other is swinging it in its vertical plane. The implementation is described as follows:
1. The hologram is mounted on a CL mount supported by a 20° rotary stage (Spindler & Hoyer Inc.). Their specifications are as follows:
20° rotary stage
• ±10° angular-adjustment range
• 10″ angular resolution
• φ25 mm clear aperture
• 40 mm (long) x 14 mm (wide) x 51 mm (high)
• 75 mm long if the screw drive is included
CL mount
• includes threaded ring
• for φ18 mm hologram
• φ25 mm x 10 mm (H)


2. The rotary support is then mounted on another 25 mm rotation stage (OptoSigma). The specifications of the rotation stage are as follows:
25 mm rotation stage
• Coarse adjustment: 360°
• Fine adjustment: ±5°
• Sensitivity: 5 arcmin
• Dimensions: 25 mm (L) x 41.5 mm (W) x 13 mm (H)

5.2.3 SLM Support and Alignment

The SLM used in the system is an off-the-shelf model produced by Displaytech Inc. It is a reflection-based ferroelectric liquid crystal device that operates by rotating the polarization of the incoming light. The interference pattern produced by the hologram passes through an 8 mm cubic polarizing beamsplitter (manufactured by Spindler & Hoyer Inc.) and is incident on the SLM pixels. The SLM pixels modulate the polarization of the light according to the input electronic signals, which results in an encoded pattern when the reflected light passes through the beamsplitter again. The distance between the hologram and the SLM is around 25 mm. In order to fit the beamsplitter into this small space, we cement the beamsplitter directly to the SLM holder, as shown in the SLM holder design in Appendix A.4. As known from the previous discussions, it is very important that the pixels of the SLM be adjusted to match the interference spots. To realize this, adjustment in six degrees of freedom has to be used. Meanwhile, in order to simplify the structure of the LIM, the following method is used to implement the alignment of the SLM:
1. The SLM holder unifies the SLM and the polarizing beamsplitter in a single housing.
2. The SLM holder can be attached to a temporary external adjustment stage. By adjusting the external adjustment stage, the SLM's 256 x 256 pixels can be aligned with the interference spots.
3. An accessory called block-2 (Appendix A.3) is designed and manufactured for positioning the SLM holder on the base (Appendix A.1). The block-2 can be adjusted


to tilt up and down, and is mounted on the base using six screws (three adjustment screws and three locking screws).
4. After the exact alignment is made through the external alignment stage, adjust block-2 to fit the bottom plane of the SLM holder and cement them together.
5. Remove the external adjustment stage from the SLM holder.

5.2.4 Out-Coupling Lens Support and Alignment

Depending on the size and distance of the object to be profiled, a suitable out-coupling lens can be selected for different applications. Here, we choose an achromatic lens with a focal length of 40 mm. The lens is mounted in a C-mount lens housing (Appendix A.5) with the capability of adjusting the distance between the lens and the SLM. The lens was bought off the shelf from Spindler & Hoyer Inc. With this design, the lens can be moved back and forth until the SLM is positioned at the focal plane of the lens, so that a clear pattern of beamlets is obtained as output.

5.2.5 LIM Box

The entire optical arrangement is placed inside a 6.0 (L) x 5.0 (W) x 3.8 (H) inch box, as shown in Appendix A.11. There is a slot in the rear cover (Appendix A.10) for inserting the input PM fiber, while the output is coupled through the coupling lens mounted on the front wall (Appendix A.8). The printed circuit board that functions as the driver of the SLM is attached to the right wall (Appendix A.6) with four screws. There is a slot in the right wall for the cable connecting the driver to the SLM. The LIM box can be fixed to a tripod using one of two ¼-20 threads, one at the center of the base (Appendix A.1) and the other at the center of the left wall (Appendix A.7). The box is painted black to prevent multiple reflections.

5.3 Preliminary Tests and Results

We implemented an actual LIM system according to the design parameters listed in Table 5.2. The optical components are mounted in opto-mechanical stages that allow precise


alignments to satisfy the design requirements. The photograph of the implemented LIM is shown in Figure 5.4. The overall size of the LIM is about 6″×5″×3.8″.

Figure 5.4 Photograph of the developed light illumination module (input fiber, hologram, beamsplitter and SLM, and lens).

We tested the performance of the LIM by inputting striped patterns to the SLM. Figure 5.5 shows the output patterns corresponding to input signals of 1, 2, 8, and 16 stripes, respectively. Higher numbers of input stripes (32, 64, and 128) were also tested, and the results indicated that the system could output a specific pattern with high resolution by modulating the SLM. However, due to the limited resolution of the image recording equipment available at the time the experiments were performed, the recorded images of the higher numbers of stripes did not reproduce well. We noticed that the edge of the output pattern had a lower contrast compared to the central portion of the image. This is believed to be caused by the phase error among the reproduced four virtual point sources. When the hologram was made, the four fibers were not aligned


exactly in the same plane, which introduced misalignment between the interference spots and the SLM pixels at the edge of the pattern. By employing more precise control of the fiber positions during the fabrication of the hologram, this phase error can be dramatically reduced.

Figure 5.5 Output stripe patterns from the LIM. (a) Single stripe; (b) 2 stripes; (c) 8 stripes; and (d) 16 stripes.


CHAPTER 6. Conclusions and Suggestions for Future Work

6.1 Conclusions

A miniaturized spatial light modulator-based light illumination module (LIM) has been developed to enable high-speed projection of arbitrary patterns formed by two-dimensional interference spots. The comprehensive optical diffraction analysis of the system indicates that the spherical aberrations caused by the non-plane-wave inputs can be eliminated by placing the SLM at the focal plane of the lens. The analysis also confirms that the interference spots of the output pattern can be switched on or off individually by precisely aligning the interference pattern to the SLM pixels. Through the theoretical analysis of square beam array generation and the corresponding computer simulation results, the important parameters for fabricating a hologram are given. Achieving a high-quality 256 × 256 beamlets interference pattern at the wavelength of 633 nm requires that: (1) the four source images lie precisely on the corners of a 2 × 2 mm square with a corner positioning accuracy better than 2.0 μm; (2) the normal to this square be parallel to the source beam axes; and (3) the four sources be coplanar; with one tenth of a wavelength as our criterion, the source phase must be controlled to better than 0.6 radians. Based on the theoretical analyses, the detailed optical and opto-mechanical designs of the LIM system were given. An actual LIM was successfully developed at the Photonics Lab of Virginia Tech. Stripe patterns of various resolutions were tested on the developed LIM. The preliminary tests confirmed the optical analysis results. The specifications of the LIM supplied by the Photonics Lab are listed in Table 6.1.


Table 6.1 LIM specifications

Feature | Specification
LIM size (not including laser) | 6.0″ (L) x 5.0″ (W) x 3.8″ (H)
Beamlet output format | 256 x 256
Laser power at exit aperture | 1.5 mW integrated over active area

The main advantages of the LIM are its compact size and light weight. It is small enough to be mounted on a moving mechanism and can be used to profile objects of different sizes and at different distances. Since all the parts in the LIM are fixed and do not need any further alignment, it saves time and can be used in applications requiring a short setup time. The structured-light output from the LIM has infinite depth of focus and can cover a wide area of the object with its 256 x 256 laser spot output.

6.2 Suggestions for Future Work

The key features for the commercial use of the LIM system are pattern quality, size, and output power. Good pattern quality depends on the quality of the four image point sources and on precise alignment of the interference spots generated by the four image point sources with respect to the SLM pixels. The critical factor that influences the quality of the four image point sources is the exact reconstruction of the reference light. If the fabrication of the hologram is carried out as part of the installation and alignment of the LIM, better patterns can be obtained with higher output power. Moreover, using a high-power laser source is one way to improve the power output of the LIM. During the design and development of the LIM system, we found that the spatial light modulator is the limiting factor in further reducing the size of the LIM. The reflective SLM is based on polarization modulation; to separate the input beam from the output beam, a polarizing beamsplitter is necessary. This inevitably increases the complexity and size of the system. It is expected that further advancement in SLM technology will bring integrated transmission-based SLMs with high efficiency to the market.


REFERENCES

[1] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, McGraw-Hill Inc., 1995.
[2] B. K. P. Horn, Robot Vision, McGraw-Hill Inc., 1986.
[3] M. Okutomi and T. Kanade, "A Multiple-Baseline Stereo", IEEE Comp. Soc. Conf. Computer Vision and Pattern Recognition, 1991.
[4] M. Subbarao, T. S. Choi, and A. Nikzad, "Focusing Techniques", Journal of Optical Engineering, Vol. 32, No. 11, pp. 2824-2836, November 1993.
[5] M. Subbarao and T. S. Choi, "Accurate Recovery of Three-Dimensional Shape from Image Focus", IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 266-274, March 1995.
[6] B. F. Alexander and K. C. Ng, "3-D Shape Measurement by Active Triangulation Using an Array of Coded Light Stripes", SPIE Proceedings Vol. 850, pp. 199, 1987.
[7] D. Pérard and J. Beyerer, "Three-Dimensional Measurement of Specular Free-Form Surfaces with a Structured-Lighting Reflection Technique", SPIE Proceedings Vol. 3204, pp. 74, 1997.
[8] P. C. Kalmanson et al., "Mach-Zehnder Interferometer Fringe Projector for Variable-Resolution Video Moiré", SPIE Proceedings Vol. 3520, pp. 21, 1998.
[9] T. Matsumoto et al., "Moiré Topography for Three-Dimensional Profile Measurement Using the Interference Fringes of a Laser", Optical Engineering, Vol. 31, No. 12, pp. 2668-2673, December 1992.
[10] L. G. Hassebrook et al., "Application of Communication Theory to High Speed Structured Light Illumination", SPIE Proceedings Vol. 3204, pp. 102, 1997.
[11] S. J. Gorden and W. P. Seering, "Locating Polyhedral Features from Sparse Light-Stripe Data", Proc. of 1987 IEEE Int. Conf. on Robotics and Automation, Vol. 2, pp. 801-806, 1987.
[12] M. Asada, H. Ichikawa, and S. Tsuji, "Determining Surface Property by Projecting a Stripe Pattern", Proc. Int. Conf. on Pattern Recognition (Paris, France, October), IEEE-CS, IAPR, pp. 1162-1164, 1986.
[13] J. A. Jalkio et al., "Three Dimensional Inspection Using Multistripe Structured Light", Optical Engineering, Vol. 24, No. 6, pp. 966-974, 1985.
[14] P. F. Jones and J. M. Aitken, "Comparison of Three-Dimensional Imaging Systems", J. Opt. Soc. Am. A, Vol. 11, No. 10, pp. 2613-2621, 1994.
[15] K. R. Pelowski, "Three Dimensional Measurement with Machine Vision", Vision '86, pp. 2-17 to 2-31, 1986.
[16] E. L. Hall et al., "Measuring Curved Surfaces for Robot Vision", Computer, Vol. 15, No. 12, pp. 42-54, 1982.
[17] M. D. Altschuler, J. L. Posdamer, and G. Frieder, "The Numerical Stereo Camera", SPIE 3-D Machine Perception, Vol. 283, pp. 15-24, 1981.
[18] M. D. Altschuler, B. R. Altschuler, and J. Taboada, "Laser Electro-Optic System for Rapid Three-Dimensional (3D) Topographic Mapping of Surfaces", Optical Engineering, Vol. 20, No. 6, pp. 953-961, 1981.
[19] W. P. Blase, E. S. Gaynor, and A. Isser, "A Laser-Based Structured Light System for Biometric Topographical Mapping", DCS Corporation, Alexandria, VA.
[20] E. S. Gaynor, M. S. Massimi, and W. P. Blase, Holographic Structured Light Generator, U.S. Patent No. 5,548,418, 1996.
[21] Displaytech Inc., "SLM Device Description: 256 x 256 Ferroelectric Liquid Crystal Spatial Light Modulator", Longmont, CO, 1998.
[22] Optical Society of America, Spatial Light Modulators and Applications, post-conference edition, Vol. 8, 1998.
[23] G. O. Reynolds et al., Physical Optics Notebook: Tutorials in Fourier Optics, SPIE and American Institute of Physics, 1989.
[24] J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill Book Company, 1968.
[25] P. K. Rastogi, Holographic Interferometry, Springer-Verlag, 1994.
[26] M. Graves, D. Riddoch, and B. Batchelor, "High Speed Image Processing Using the TMS320C40 Parallel DSP Chip", SPIE Proceedings Vol. 2597, pp. 70, 1995.
[27] G. T. Sincerbox, "Holographic Storage: Are We There Yet?", 1999.
[28] S. M. Goldwasser, "Sam's Laser FAQ", 1998.
[29] P. P. Banerjee and T.-C. Poon, Principles of Applied Optics, Aksen Associates / Richard D. Irwin, Inc., Homewood, IL, 1991.
[30] Spatial Imaging Limited, "Spatial Imaging's Guide to Dimensional Imaging Techniques", 1999.


APPENDIX

Opto-mechanical Drawings are included.


A.1 Base


A.2 Block-1


A.3 Block-2


A.4 SLM Holder


A.5 C–Mount Lens Housing


A.6 Right Wall


A.7 Left Wall


A.8 Front Wall


A.9 Top Cover

A.10 Rear Cover


A.11 LIM


VITA: Ming Luo

The author was born on April 8, 1969 in Beijing, China. She received her Bachelor's degree in 1991 and her M.S. degree in 1994 from the College of Precision Instrument and Opto-electronic Engineering, Tianjin University. She joined Virginia Tech in the summer of 1997 for her Master's degree in Electrical Engineering and completed it in the spring of 2000. Her primary interests are in the areas of 3D imaging and high-speed DSP technology. She will begin work at Lucent, Pennsylvania, in June 2000.


