{{toc}}

h1. Install MAGMA

h2. Why MAGMA?

The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current "Multicore+GPU" systems.

Unlike CULA, MAGMA provides a dense linear algebra library that handles double precision for free.

However, MAGMA needs a LAPACK and a BLAS implementation. We describe two options: openBLAS (free, easy to install) and MKL (free, requires a registration, but more powerful).

h2. Dependencies: gfortran

Use your package manager to install the dependencies:
* on Scientific Linux: yum install gcc-gfortran libgfortran
* on Debian: apt-get install gfortran gfortran-multilib
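
To check that the compiler is available (a quick optional sanity check):
<pre>
gfortran --version
</pre>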

h2. Configure MAGMA with openBLAS

h3. Dependencies: openBLAS (http://www.openblas.net)

First, clone the Git repository:
<pre>
git clone https://github.com/xianyi/OpenBLAS.git
</pre>

Compile it:
<pre>
cd OpenBLAS/
make
</pre>

Install it:
<pre>
sudo make install PREFIX=/usr/local/openblas-haswellp-r0.2.14.a
</pre>

Add to your .bashrc:
<pre>
export OPENBLAS_ROOT=/usr/local/openblas-haswellp-r0.2.14.a
</pre>
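
You can then reload your environment and check that the variable is set (optional):
<pre>
source ~/.bashrc
echo $OPENBLAS_ROOT
</pre>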

h3. extraction

MAGMA is available here: http://icl.cs.utk.edu/magma/software/index.html

Extract the tgz file and go into the new directory:
> ~$ tar xf magma-1.7.0-b.tar.gz
> ~$ cd magma-1.7.0

h3. configuration

You have to create your own make.inc based on make.inc.openblas:
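
For example, assuming the make.inc.openblas template sits at the top of the extracted MAGMA 1.7.0 tree, you can start from a copy of it:
<pre>
cp make.inc.openblas make.inc
</pre>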
55 1 Damien Gratadour
56 28 Arnaud Sevin
example : *please verify GPU_TARGET, LAPACKDIR, ATLASDIR, CUDADIR*
57 1 Damien Gratadour
58
<pre><code class="Makefile">
#//////////////////////////////////////////////////////////////////////////////
#   -- MAGMA (version 1.7.0) --
#      Univ. of Tennessee, Knoxville
#      Univ. of California, Berkeley
#      Univ. of Colorado, Denver
#      @date September 2015
#//////////////////////////////////////////////////////////////////////////////

# GPU_TARGET contains one or more of Tesla, Fermi, or Kepler,
# to specify for which GPUs you want to compile MAGMA:
#     Tesla  - NVIDIA compute capability 1.x cards (no longer supported in CUDA 6.5)
#     Fermi  - NVIDIA compute capability 2.x cards
#     Kepler - NVIDIA compute capability 3.x cards
# The default is "Fermi Kepler".
# See http://developer.nvidia.com/cuda-gpus
#
GPU_TARGET ?= Kepler

# --------------------
# programs

CC        = gcc
CXX       = g++
NVCC      = nvcc
FORT      = gfortran

ARCH      = ar
ARCHFLAGS = cr
RANLIB    = ranlib


# --------------------
# flags

# Use -fPIC to make shared (.so) and static (.a) library;
# can be commented out if making only static library.
FPIC      = -fPIC

CFLAGS    = -O3 $(FPIC) -DADD_ -Wall -fopenmp
FFLAGS    = -O3 $(FPIC) -DADD_ -Wall -Wno-unused-dummy-argument
F90FLAGS  = -O3 $(FPIC) -DADD_ -Wall -Wno-unused-dummy-argument -x f95-cpp-input
NVCCFLAGS = -O3         -DADD_       -Xcompiler "$(FPIC)"
LDFLAGS   =     $(FPIC)              -fopenmp


# --------------------
# libraries

# gcc with OpenBLAS (includes LAPACK)
LIB       = -lopenblas

LIB      += -lcublas -lcudart


# --------------------
# directories

# define library directories preferably in your environment, or here.
OPENBLASDIR = /usr/local/openblas-haswellp-r0.2.14.a
CUDADIR = /usr/local/cuda
-include make.check-openblas
-include make.check-cuda

LIBDIR    = -L$(CUDADIR)/lib64 \
            -L$(OPENBLASDIR)/lib

INC       = -I$(CUDADIR)/include
</code></pre>

h2. Configure MAGMA with MKL

h3. extraction

To download MKL, you have to create an account here: https://registrationcenter.intel.com/RegCenter/NComForm.aspx?ProductID=1517

Extract l_ccompxe_2013_sp1.1.106.tgz and go into l_ccompxe_2013_sp1.1.106.

Install it with ./install_GUI.sh and add the IPP components to the default choices.
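
To get MKLROOT defined in your environment, you can source the MKL variables script shipped with the installation (the path below is the default quoted in MAGMA's make.inc; it may differ on your system):
<pre>
source /opt/intel/composerxe/mkl/bin/mklvars.sh intel64
</pre>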

h3. configuration

You have to create your own make.inc based on make.inc.mkl-gcc-ilp64:
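
Again, assuming the template sits at the top of the extracted MAGMA tree, you can start from a copy of it:
<pre>
cp make.inc.mkl-gcc-ilp64 make.inc
</pre>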

Then adapt it to your setup; an example follows, *please verify GPU_TARGET, MKLROOT, CUDADIR*:
<pre><code class="Makefile">
#//////////////////////////////////////////////////////////////////////////////
#   -- MAGMA (version 1.7.0) --
#      Univ. of Tennessee, Knoxville
#      Univ. of California, Berkeley
#      Univ. of Colorado, Denver
#      @date September 2015
#//////////////////////////////////////////////////////////////////////////////

# GPU_TARGET contains one or more of Tesla, Fermi, or Kepler,
# to specify for which GPUs you want to compile MAGMA:
#     Tesla  - NVIDIA compute capability 1.x cards (no longer supported in CUDA 6.5)
#     Fermi  - NVIDIA compute capability 2.x cards
#     Kepler - NVIDIA compute capability 3.x cards
# The default is "Fermi Kepler".
# See http://developer.nvidia.com/cuda-gpus
#
#GPU_TARGET ?= Fermi Kepler

# --------------------
# programs

CC        = gcc
CXX       = g++
NVCC      = nvcc
FORT      = gfortran

ARCH      = ar
ARCHFLAGS = cr
RANLIB    = ranlib


# --------------------
# flags

# Use -fPIC to make shared (.so) and static (.a) library;
# can be commented out if making only static library.
FPIC      = -fPIC

CFLAGS    = -O3 $(FPIC) -DADD_ -Wall -Wshadow -fopenmp -DMAGMA_WITH_MKL
FFLAGS    = -O3 $(FPIC) -DADD_ -Wall -Wno-unused-dummy-argument
F90FLAGS  = -O3 $(FPIC) -DADD_ -Wall -Wno-unused-dummy-argument -x f95-cpp-input
NVCCFLAGS = -O3         -DADD_       -Xcompiler "$(FPIC) -Wall -Wno-unused-function"
LDFLAGS   =     $(FPIC)              -fopenmp

# Defining MAGMA_ILP64 or MKL_ILP64 changes magma_int_t to int64_t in include/magma_types.h
CFLAGS    += -DMKL_ILP64
FFLAGS    += -fdefault-integer-8
F90FLAGS  += -fdefault-integer-8
NVCCFLAGS += -DMKL_ILP64

# Options to do extra checks for non-standard things like variable length arrays;
# it is safe to disable all these
CFLAGS   += -pedantic -Wno-long-long
#CFLAGS   += -Werror  # uncomment to ensure all warnings are dealt with
CXXFLAGS := $(CFLAGS) -std=c++98
CFLAGS   += -std=c99


# --------------------
# libraries

# IMPORTANT: this link line is for 64-bit int !!!!
# For regular 64-bit builds using 64-bit pointers and 32-bit int,
# use the lp64 library, not the ilp64 library. See make.inc.mkl-gcc or make.inc.mkl-icc.
# see MKL Link Advisor at http://software.intel.com/sites/products/mkl/
# gcc with MKL 10.3, Intel threads, 64-bit int
# note -DMAGMA_ILP64 or -DMKL_ILP64, and -fdefault-integer-8 in FFLAGS above
LIB       = -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -lpthread -lstdc++ -lm -liomp5 -lgfortran

LIB      += -lcublas -lcudart


# --------------------
# directories

# define library directories preferably in your environment, or here.
# for MKL run, e.g.: source /opt/intel/composerxe/mkl/bin/mklvars.sh intel64
#MKLROOT ?= /opt/intel/composerxe/mkl
#CUDADIR ?= /usr/local/cuda
-include make.check-mkl
-include make.check-cuda

LIBDIR    = -L$(CUDADIR)/lib64 \
            -L$(MKLROOT)/lib/intel64

INC       = -I$(CUDADIR)/include \
            -I$(MKLROOT)/include
</code></pre>

In this example, gcc is used, but with MKL you can use icc instead. In that case, you also have to compile Yorick with icc; to do so, change the CC flag in Make.cfg.

h2. compilation and installation

h3. compilation

Just compile the shared and sparse targets (and run the tests if you want):
> ~$ make -j 8 shared sparse

h3. installation

To install libraries and include files in a given prefix, run:
> ~$ make install prefix=/usr/local/magma

The default prefix is /usr/local/magma. You can also set prefix in make.inc.
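
As an optional check, MAGMA installs a pkg-config file under $prefix/lib/pkgconfig; assuming PKG_CONFIG_PATH points there (as configured for COMPASS below), you can query the build flags:
<pre>
pkg-config --cflags --libs magma
</pre>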

h3. tune (not tested)

For multi-GPU functions, set $MAGMA_NUM_GPUS to the number of GPUs to use.
For multi-core BLAS libraries, set $OMP_NUM_THREADS, $MKL_NUM_THREADS, or $VECLIB_MAXIMUM_THREADS to the number of CPU threads, depending on your BLAS library.
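
For example (the values below are arbitrary; adapt them to your machine and BLAS library):
<pre>
export MAGMA_NUM_GPUS=2
export OMP_NUM_THREADS=8
</pre>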

h1. Install the platform

The COMPASS platform is distributed as a single bundle of the CArMA and SuTrA libraries and their Python extensions, NAGA and SHESHA.

h2. Hardware requirements

The system must contain at least an x86 CPU and a CUDA-capable GPU. A list of compatible GPUs can be found here: http://www.nvidia.com/object/cuda_gpus.html. Specific requirements apply to clusters (to be updated).

h2. Environment requirements

The system must be running a 64-bit distribution of Linux with the latest NVIDIA drivers and "CUDA toolkit":https://developer.nvidia.com/cuda-downloads. The following installation instructions are valid if the default installation paths have been selected for these components.
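
A quick way to check that the driver and the toolkit are correctly installed (assuming nvcc is on your PATH):
<pre>
nvidia-smi
nvcc --version
</pre>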

Additionally, to benefit from the user-oriented features of the platform, Anaconda should be installed.
In the latest versions of COMPASS (r608+), Yorick is no longer supported.

For the widget, you also need pyqtgraph. You can install it like this:
<pre>
pip install pyqtgraph
</pre>

h2. Installation process

First, check out the latest version from the SVN repository:
<pre>
svn co https://version-lesia.obspm.fr/repos/compass compass
</pre>
Then go into the newly created directory and into trunk:
<pre>
cd compass/trunk
</pre>
Once there, you need to set some environment variables in your .bashrc:
<pre>
# CUDA default definitions
export CUDA_ROOT=$CUDA_ROOT #/usr/local/cuda
export CUDA_INC_PATH=$CUDA_ROOT/include
export CUDA_LIB_PATH=$CUDA_ROOT/lib
export CUDA_LIB_PATH_64=$CUDA_ROOT/lib64
export CPLUS_INCLUDE_PATH=$CUDA_INC_PATH
export PATH=$CUDA_ROOT/bin:$PATH
</pre>
In this file, you also have to indicate the architecture of your GPU so that the compiler generates the appropriate code:
<pre>
export GENCODE="arch=compute_52,code=sm_52"
</pre>
Change both occurrences of 52 to match your architecture: for instance, a Fermi-class Tesla card has compute capability 2.0, so change 52 to 20; a Kepler GPU has compute capability 3.0 or 3.5 (K20), so change 52 to 30 (or 35); a Maxwell GPU such as the M6000 has compute capability 5.2.
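
For example, on a K20 (compute capability 3.5) the line would become:
<pre>
export GENCODE="arch=compute_35,code=sm_35"
</pre>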

If you are using CULA, you have to specify it:
<pre>
# CULA default definitions
export CULA_ROOT=/usr/local/cula
export CULA_INC_PATH=$CULA_ROOT/include
export CULA_LIB_PATH=$CULA_ROOT/lib
export CULA_LIB_PATH_64=$CULA_ROOT/lib64
</pre>

If you are using MAGMA, you have to specify it:
<pre>
# MAGMA definitions (uncomment this line if MAGMA is installed)
export MAGMA_ROOT=/usr/local/magma
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MAGMA_ROOT/lib
export PKG_CONFIG_PATH=$MAGMA_ROOT/lib/pkgconfig
</pre>

Last variables to define:
<pre>
export COMPASS_ROOT=/path/to/compass/trunk
export NAGA_ROOT=$COMPASS_ROOT/naga
export SHESHA_ROOT=$COMPASS_ROOT/shesha
export LD_LIBRARY_PATH=$COMPASS_ROOT/libcarma:$COMPASS_ROOT/libsutra:$CUDA_LIB_PATH_64:$CUDA_LIB_PATH:$CULA_LIB_PATH_64:$CULA_LIB_PATH:$LD_LIBRARY_PATH
</pre>
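
Remember to reload your shell configuration before compiling, and check that the variables are visible, for instance:
<pre>
source ~/.bashrc
echo $COMPASS_ROOT
</pre>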

Once this is done, you're ready to compile the whole library:
<pre>
make clean install
</pre>

If you did not get any errors, CArMA, SuTrA, NAGA and SHESHA are now installed on your machine. You can check that everything is working by launching the GUI to run a test simulation:
<pre>
cd $SHESHA_ROOT/widgets && ipython -i widget_ao.py
</pre>