
h1. YoGA AO features 

 The YoGA_AO extension contains routines to simulate the whole process of image formation through the atmosphere, a telescope and an adaptive optics (AO) system. 

There are two ways to use YoGA_Ao: either through the GUI or through the command line, possibly using predefined scripts. The following description is valid for both ways but is probably more relevant to command line users. The main difference between the GUI and the command line interface is the way the simulation parameters are imported. In the latter case, the simulation parameters are centralized in a parameter file (.par). Examples of .par files are given in the data/par directory and can be used as templates. The high-level API contains a dedicated routine to read the parameters from this file and import them into the simulation environment.

 [[YoGA Ao features#List-of-features|List of features]] 

 * [[YoGA Ao features#Simulation-geometry|Simulation geometry]] 
 * [[YoGA Ao features#Turbulence-generation|Turbulence generation]] 
 * [[YoGA Ao features#Wavefront-Sensing|Wavefront Sensing]] 
 * [[YoGA Ao features#Image-formation|Image formation]] 
 * [[YoGA Ao features#Modal-optimization|Modal optimization]] 

 [[YoGA Ao features#List-of-routines|List of routines]] 

 * [[YoGA Ao features#High-level-routines|High-level routines]] 
 * [[YoGA Ao features#Advanced-routines|Advanced routines]] 

 h2. List of features 

Each API comes with a set of structures gathering the configuration parameters for the simulation as well as various data used for computation and diagnostics. For the Yorick API, the list of structures can be found in the file yoga_ao_ystruct.i. Concerning the CUDA-C API, please refer to the file yoga_ao.cpp. Available features include:

 * Kolmogorov-type turbulence generation over an arbitrary number of layers with arbitrary properties. 
 * Shack-Hartmann wavefront sensing including Laser Guide Stars (LGS) 
* Short- and long-exposure imaging through the turbulence

 h3. Simulation geometry 

The main parameter that drives most of the choices for the simulation geometry is the Fried parameter r0. Typically, for adequate sampling, the equivalent size of the pixels we use to simulate the turbulent phase screens should be less than half of r0. To ensure a good sampling, in YoGA_Ao, r0 is simulated on about 6 pixels. This ratio defines the size of the "quantum" pixels and thus the size of the phase screens to simulate (as compared to the telescope size). From this screen size, the full image size is defined, taking into account the sampling required for imaging.

 As an example, in the case of an ELT, the linear size of the phase screen support (and thus of the pupil) is of the order of 1.5k to 2k pixels. This means that the linear size of the image will be at least 4k (for a minimum Shannon sampling). This is a very large number which will imply heavy computations. 
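As a rough, hedged illustration of this dimensioning (the telescope diameter, r0 value and rounding rule below are assumptions for the example, not YoGA_Ao defaults):

<pre>
<code class="c">
// hypothetical sketch: estimating the geometry sizes for an ELT-like case
D  = 40.;                                          // telescope diameter (m)
r0 = 0.16;                                         // Fried parameter (m)
pupdiam = long(ceil(6. * D / r0));                 // ~6 pixels per r0 -> ~1500 pixels across the pupil
ssize   = long(2^ceil(log(2. * pupdiam)/log(2.))); // >= 2x for Shannon sampling -> 4096
write, format="pupdiam = %d , ssize = %d\n", pupdiam, ssize;
</code>
</pre>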

To cope with these various requirements we can define 3 different pupils:

* the large pupil (called ipupil), defined on the largest support (4kx4k in our previous example), more than half of which is 0

* the small pupil (spupil), defined only on the pupil size (2kx2k in our previous example), most of which is 1

* the medium pupil (mpupil), defined on a slightly larger support: typically 4 additional pixels as a guard band on each side. This guard band is useful for manipulations on phase screens like raytracing. This is also the actual size of the ground layer phase screen.

The image below helps to understand the various pupil sizes. White is the pupil, green is the support of spupil, blue the support of mpupil and black the support of ipupil.

 !https://dev-lesia.obspm.fr/projets/attachments/download/570/pupil.png! 

All these pupils are contained in arrays accessible as internal keywords of the following geom structure, available from the Yorick API:

 <pre> 
 <code class="c"> 
 struct geom_struct 
 { 
    long    ssize;         // linear size of full image (in pixels) 
    float zenithangle; // observations zenith angle (in deg) 
    // internal keywords 
    long    pupdiam;       // linear size of total pupil (in pixels) 
    float cent;          // central point of the simulation 
    pointer _ipupil;     // total pupil (include full guard band) 
    pointer _mpupil;     // medium pupil (part of the guard band) 
    pointer _spupil;     // small pupil (without guard band) 
    ... 
 }; 

 </code> 
 </pre> 

Some keywords have not been reported here; please check yoga_ao_ystruct.i for more details.

In this structure, pupdiam (the diameter of the pupil in pixels) is considered as an internal keyword. Two other structures contain the rest of the configuration parameters:

 <pre> 
 <code class="c"> 
 struct tel_struct 
 { 
     float diam;          // telescope diameter (in meters) 
     float cobs;          // central obstruction ratio 
 }; 
 </code> 
 </pre> 

 <pre> 
 <code class="c"> 
 struct loop_struct 
 { 
     long    niter;        // number of iterations 
     float ittime;       // iteration time (in sec) 
 }; 
 </code> 
 </pre> 
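In a parameter file, these structures are simply instantiated and filled field by field before the initialization routines are called. The fragment below is only a hedged sketch with illustrative values (see the templates in data/par for real examples); the y_tel and y_loop variable names are assumptions mirroring the y_geom and y_atmos externals used later.

<pre>
<code class="c">
// hypothetical .par fragment (illustrative values only)
y_geom = geom_struct();
y_geom.zenithangle = 0.;        // observation at zenith

y_tel = tel_struct();
y_tel.diam = 8.0;               // 8 m telescope
y_tel.cobs = 0.12;              // 12% central obstruction

y_loop = loop_struct();
y_loop.niter  = 1000;           // number of loop iterations
y_loop.ittime = 1./500.;        // 500 Hz loop
</code>
</pre>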

There is one high-level routine to initialize the geometry, with only one parameter: the pupil diameter in pixels.

 <pre> 
 <code class="c"> 
 func geom_init(pupdiam) 
     /* DOCUMENT geom_init 
       geom_init,pupdiam 
       inits simulation geometry, depending on pupdiam 
       the linear number of pixels in the pupil 
     */ 

 </code> 
 </pre> 
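A possible call sequence is sketched below (assuming y_geom and the other parameter structures have been filled beforehand); the pupil arrays are then retrieved by dereferencing the pointer members of y_geom:

<pre>
<code class="c">
// initialize the simulation geometry for a 128-pixel pupil
geom_init, 128;

// the pupils are available through the pointer members of y_geom
ipup = *y_geom._ipupil;   // full support
mpup = *y_geom._mpupil;   // pupil + guard band
spup = *y_geom._spupil;   // pupil only
</code>
</pre>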





 h3. Turbulence generation 

The turbulence generation is done through the process of extruding infinite ribbons of Kolmogorov turbulence (see [[Model Description]]). An arbitrary number of turbulent layers can be defined, at various altitudes, with various fractions of r0, wind speeds and wind directions (in the range 0°-90°).

 <pre> 
 <code class="c"> 
 struct atmos_struct 
 { 
     long      nscreens;      // number of turbulent layers 
     float     r0;            // global r0 @ 0.5µm 
    float     pupixsize;     // pupil pixel size (in meters) 
     pointer dim_screens; // linear size of phase screens 
     pointer alt;           // altitudes of each layer 
     pointer winddir;       // wind directions of each layer 
     pointer windspeed;     // wind speeds of each layer 
     pointer frac;          // fraction of r0 for each layer 
     pointer deltax;        // x translation speed (in pix / iteration) for each layer 
     pointer deltay;        // y translation speed (in pix / iteration) for each layer 
 }; 


 </code> 
 </pre> 

The phase screen sizes are computed in agreement with the system components. The positions of the various targets (imaging targets or wavefront sensing guide stars) in the simulation define the required field of view and thus the size of the altitude phase screens.

To create dynamic turbulence, the phase screens are extruded in columns and rows. The number of rows and columns extruded per iteration is computed from the specified wind speed and direction. Because extrusion is an integer operation (we cannot extrude a fraction of a column), additional interpolation is required to provide an accurate model (with non-integer phase shifts). In YoGA_Ao, a combination of integer extrusion and linear interpolation (between four pixels) is used for each layer. The phase is integrated along the specified directions across the multiple layers, with the positions of the light rays being re-evaluated at each iteration and screen ribbons being extruded when appropriate. This explains the need for a guard band around the ground layer phase screen, as light rays can partly cross the pupil pixels depending on the iteration number.
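The following fragment is only a sketch of this idea (it is not the actual YoGA_Ao GPU kernel): whole-pixel shifts are absorbed by extruding columns or rows, and the remaining fractional shift is handled by a linear interpolation between the four neighbouring pixels.

<pre>
<code class="c">
/* sketch only: bilinear interpolation of a phase screen at a fractional position */
static float interp_phase(const float *screen, int nx, float x, float y)
{
    int   i  = (int)x,       j  = (int)y;        /* integer part: handled by extrusion */
    float fx = x - (float)i, fy = y - (float)j;  /* fractional part: interpolation     */
    return (1.f - fx) * (1.f - fy) * screen[j * nx + i]
         +        fx  * (1.f - fy) * screen[j * nx + i + 1]
         + (1.f - fx) *        fy  * screen[(j + 1) * nx + i]
         +        fx  *        fy  * screen[(j + 1) * nx + i + 1];
}
</code>
</pre>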

The overall turbulence generation is done on the GPU and relies on a C++ class:


 <pre> 
 <code class="c"> 
 class yoga_tscreen 
 </code> 
 </pre> 

 This object contains all the elements to generate an infinite length phase screen including the extrusion method. All the screens for a given atmospheric configuration are centralized in another class: 



 <pre> 
 <code class="c"> 
 class yoga_atmos 

 </code> 
 </pre> 

In this object, phase screens can be added dynamically thanks to the use of a map of yoga_tscreen objects. This has many advantages, the first of which is indexation: screens are indexed by altitude (float), and the use of iterators greatly simplifies the code.

 The corresponding Yorick opaque object is:  

 <pre> 
 <code class="c"> 
  static y_userobj_t yAtmos 
 </code> 
 </pre> 

 and there are several Yorick wrappers to manipulate this object: 

 <pre> 
 <code class="c"> 
 extern yoga_atmos; 
     /* DOCUMENT yoga_atmos 
        obj = yoga_atmos(nscreens,r0,size,size2,alt,wspeed,wdir,deltax,deltay,pupil[,ndevice]) 
        creates an yAtmos object on the gpu 
     */ 
 </code> 
 </pre> 


 <pre> 
 <code class="c"> 
 extern init_tscreen; 
     /* DOCUMENT init_tscreen 
        init_tscreen,yoga_atmos_obj,altitude,a,b,istencilx,istencily,seed 
       loads onto the gpu, for a given screen of a yAtmos object, the data needed for extrude 
     */ 
 </code> 
 </pre> 


 <pre> 
 <code class="c"> 
 extern get_tscreen; 
     /* DOCUMENT get_tscreen 
        screen = get_tscreen(yoga_atmos_obj,altitude) 
        returns the screen in an yAtmos object and for a given altitude 
     */ 
 </code> 
 </pre> 

 <pre> 
 <code class="c"> 
 extern get_tscreen_update; 
     /* DOCUMENT get_tscreen_update 
        vect = get_tscreen_update(yoga_atmos_obj,altitude) 
        returns only the update vector in an yAtmos object and for a given altitude 
     */ 
 </code> 
 </pre> 

 <pre> 
 <code class="c"> 
 extern extrude_tscreen; 
     /* DOCUMENT extrude_tscreen 
        extrude_tscreen,yoga_atmos_obj,altitude[,dir] 
        executes one col / row screen extrusion for a given altitude in an yAtmos object  
     */ 
 </code> 
 </pre> 


 Additionally there is a high-level routine to initialize the whole structure on the GPU from Yorick: 

 <pre> 
 <code class="c"> 
 func atmos_init(void) 
     /* DOCUMENT atmos_init 
        atmos_init 
        inits a yAtmos object on the gpu 
        no input parameters 
        requires 2 externals + 2 optional : y_atmos and y_geom + y_target and y_wfs 
        y_atmos    : a y_struct for the atmosphere 
        y_geom     : a y_struct for the geometry of the simulation 
        y_target : a y_struct for the targets 
        y_wfs      : a y_struct for the sensors 
        creates 1 external : 
        g_atmos : a yAtmos object on the gpu 
     */ 

 </code> 
 </pre> 
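A hedged usage sketch (assuming y_geom, y_atmos and y_loop have been filled from a parameter file, and using only the wrappers documented above):

<pre>
<code class="c">
// initialize the GPU atmosphere object (creates the external g_atmos)
atmos_init;

// evolve the turbulence: extrude each layer once per loop iteration
for (n = 1; n <= y_loop.niter; n++) {
  for (k = 1; k <= y_atmos.nscreens; k++) {
    extrude_tscreen, g_atmos, (*y_atmos.alt)(k);
  }
}
</code>
</pre>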




 h3. Wavefront Sensing 

Wavefront sensing is done in two steps: first, compute the Shack-Hartmann sub-images, including diffraction effects and noise; then, from these images, compute the centroids. The overall model is described in [[Model Description]].

The pixel size requested by the user for the sub-aperture images is approximated following a rather robust approach to cope with any kind of dimensioning. We use an empirical coefficient to set the simulated sub-aperture field of view (FoV) to 6 times the ratio of the observing wavelength over r_0 at this wavelength. This provides a sufficient FoV to include most of the turbulent speckles. The same empirical coefficient is used to define the number of phase points per sub-aperture as 6 times the ratio of the sub-aperture diameter over r_0. This ensures a proper sampling of r_0. From this number of phase points we compute the size of the support in the Fourier domain. The "quantum pixel size" is then deduced from the ratio of the wavelength over r_0 over the size of the Fourier support. Then the pixel size actually simulated is obtained as the product of this quantum pixel size by the integer that brings it as close as possible to the requested pixel size.
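The sketch below follows these dimensioning rules step by step; the rounding conventions (power-of-two Fourier support, nearest-integer multiple) and the numerical values are assumptions for illustration, not the exact YoGA_Ao code.

<pre>
<code class="c">
// hedged sketch of the sub-aperture pixel size computation (illustrative values)
RASC   = 180. * 3600. / pi;             // radians to arcsec
lambda = 0.5e-6;                        // observing wavelength (m)
r0     = 0.16;                          // r0 at this wavelength (m)
d      = 0.5;                           // sub-aperture diameter (m)
pixsize_req = 0.3;                      // requested pixel size (arcsec)

subap_fov = 6. * lambda / r0 * RASC;    // simulated subap field of view (arcsec)
nphase    = long(ceil(6. * d / r0));    // phase points per subap (~6 per r0)
nfft      = long(2^ceil(log(2. * nphase)/log(2.)));  // Fourier support size (assumed power of two)
qpixsize  = subap_fov / nfft;           // "quantum" pixel size (arcsec)
pixsize   = long(0.5 + pixsize_req / qpixsize) * qpixsize; // closest achievable value
</code>
</pre>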

 The wavefront sensor model description is stored in the following Yorick structure.  

 <pre> 
 <code class="c"> 
 struct wfs_struct 
 { 
   long    nxsub;            // linear number of subaps 
   long    npix;             // number of pixels per subap 
   float pixsize;          // pixel size (in arcsec) for a subap 
   float lambda;           // observation wavelength (in µm) for a subap 
   float optthroughput;    // wfs global throughput 
   float fracsub;          // minimal illumination fraction for valid subaps 
  
   //target kwrd 
   float xpos;        // guide star x position on sky (in arcsec)  
  float ypos;        // guide star y position on sky (in arcsec)  
   float gsalt;       // altitude of guide star (in m) 0 if ngs  
   float gsmag;       // magnitude of guide star 
   float zerop;       // detector zero point 
  
   // lgs only 
   float lgsreturnperwatt;    // return per watt factor (high season : 10 ph/cm2/s/W) 
   float laserpower;          // laser power in W 
   float lltx;                // x position (in meters) of llt 
   float llty;                // y position (in meters) of llt 
   string proftype;           // type of sodium profile "gauss", "exp", etc ... 
   float beamsize;            // laser beam fwhm on-sky (in arcsec) 
 ... 
 }; 

 </code> 
 </pre> 
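In a parameter file, a natural guide star Shack-Hartmann sensor could be described along the following lines. This is a hedged sketch with illustrative values; the y_wfs array layout simply mirrors the y_wfs external expected by atmos_init.

<pre>
<code class="c">
// hypothetical .par fragment for one NGS Shack-Hartmann WFS
y_wfs = array(wfs_struct, 1);      // one sensor
y_wfs(1).nxsub         = 8;        // 8x8 sub-apertures
y_wfs(1).npix          = 8;        // 8x8 pixels per subap
y_wfs(1).pixsize       = 0.3;      // requested pixel size (arcsec)
y_wfs(1).lambda        = 0.5;      // sensing wavelength (µm)
y_wfs(1).optthroughput = 0.5;      // global throughput
y_wfs(1).fracsub       = 0.8;      // minimum illumination for valid subaps
y_wfs(1).xpos          = 0.;       // on-axis guide star
y_wfs(1).ypos          = 0.;
y_wfs(1).gsalt         = 0.;       // 0 -> natural guide star
y_wfs(1).gsmag         = 5.;       // guide star magnitude
</code>
</pre>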



 h3. Image formation 

 <pre> 
 <code class="c"> 
 struct target_struct 
 { 
   long      ntargets;    // number of targets 
   pointer lambda;      // observation wavelength for each target 
   pointer xpos;        // x positions on sky (in arcsec) for each target 
   pointer ypos;        // y positions on sky (in arcsec) for each target 
   pointer mag;         // magnitude for each target 
 }; 
 </code> 
 </pre> 
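Since these fields are pointers, each of them holds an array with one value per target. A hedged sketch for two targets (illustrative values only):

<pre>
<code class="c">
// hypothetical .par fragment describing two imaging targets
y_target = target_struct();
y_target.ntargets = 2;
y_target.lambda   = &float([1.65, 2.2]);   // H and K bands (µm)
y_target.xpos     = &float([0., 10.]);     // arcsec
y_target.ypos     = &float([0., 0.]);      // arcsec
y_target.mag      = &float([8., 10.]);
</code>
</pre>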

 


 h3. Modal optimization 

Modal optimization is available for the Least Square controller. This feature computes modal gains to apply to the command matrix from a modal basis of the DM and a set of open-loop slopes (_Modal Control Optimization_, E. Gendron & P. Léna, Astron. Astrophys. 291, 337-347 (1994)). 
In COMPASS, a matrix M2V (Modes to Volts) is computed from a Karhunen-Loeve basis of the DM (computed during the simulation), and open-loop slopes are recorded before the beginning of the simulation and used to compute an S2M (Slopes to Modes) matrix. Then, we are able to find optimal gains G to apply to each mode to improve performance in a noisy AO system. Finally, the command matrix is computed as: M2V*G*S2M. 
To use this feature, you need to specify some additional parameters in the input parameter file:

 <pre> 
 <code class="c"> 

 struct controller_struct 
 { 
   [..] 
   int       modopti;    // Flag for modal optimization 
   int       nrec;       // Number of sample of open loop slopes for modal optimization computation 
   int       nmodes;     // Number of modes for M2V matrix (modal optimization) 
   float       gmin;       // Minimum gain for modal optimization 
   float       gmax;       // Maximum gain for modal optimization 
   int       ngain;      // Number of tested gains 
 }; 
 </code> 
 </pre> 

You have to set modopti=1 to activate the feature. Then you can specify the other parameters; if you do not, default values will be used. 
Be careful with the nmodes parameter: its maximum value is the number of actuators, but you may have to ignore some of them in order to make the inversion of the matrix IMAT*S2M possible. 
Finally, note that the modal gains are recomputed (i.e. refreshed) every nrec iterations.
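A hedged sketch of the corresponding fragment of a parameter file (the y_controllers array name and the values are illustrative assumptions):

<pre>
<code class="c">
// hypothetical .par fragment enabling modal optimization
y_controllers = array(controller_struct, 1);
y_controllers(1).modopti = 1;      // activate modal optimization
y_controllers(1).nrec    = 2048;   // open-loop slope samples recorded
y_controllers(1).nmodes  = 216;    // size of the modal basis (<= number of actuators)
y_controllers(1).gmin    = 0.;     // minimum tested gain
y_controllers(1).gmax    = 1.;     // maximum tested gain
y_controllers(1).ngain   = 15;     // number of tested gains
</code>
</pre>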

An example parameter file which runs a modal optimization simulation is available in data/par/1wfs8x8_1layer_rtc_modopti_dm.par

 

 h2. List of routines 

 h3. High-level routines 

 h3. Advanced routines 


 <pre> 
 <code class="c"> 
 extern _GetMaxGflopsDeviceId    //get the ID of the best CUDA-capable device on your system 
 </code> 
 </pre>