Hi,
I'm trying to start an interactive job with:
srun --nodes=1 --ntasks-per-node=48 --mem-per-cpu=3600MB --time=02:00:00 --pty /bin/zsh
I can get a node:
srun: [I] No output file given, set to: output_%j.txt
srun: job 2054322 queued and waiting for resources
srun: job 2054322 has been allocated resources
However, after a moment the job ends:
srun: First task exited 5s ago
srun: step:2054322.0 task 0: running
srun: step:2054322.0 tasks 1-47: exited
srun: Terminating job step 2054322.0
srun: Job step aborted: Waiting up to 62 seconds for job step to
finish.
srun: error: ncm0552: task 0: Killed
This used to work before!
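For reference, a variant that requests a single task with 48 CPUs instead of 48 tasks (assuming a single interactive shell is enough for my purposes) would be:
srun --nodes=1 --ntasks=1 --cpus-per-task=48 --mem-per-cpu=3600MB --time=02:00:00 --pty /bin/zsh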
Best regards
Pavel
Sorry,
my original script was:
#!/usr/local_rwth/bin/zsh
### Job name
#SBATCH -J LSDYNA_OMP
### File / path where output will be written, the %J is the job id
#SBATCH -o LSDYNA_OpenMPI.%J
### Request the time you need for execution in minutes
### The format is: [hour:]minute, for 80 minutes you can use: 1:20
#SBATCH -t 120:00:00
### Request memory you need for your job in MB
#SBATCH --mem-per-cpu=2000M
#SBATCH --nodes=1
ulimit -s 600000
### Request the number of compute slots you want to use
#SBATCH --ntasks=12
#SBATCH --mail-type=end
#SBATCH --mail-user=sim(a)isf.rwth-aachen.de
#SBATCH --account=rwth0398
### load modules
module load TECHNICS
module load intelmpi
module load lsdyna
cd $WORK/LSDYNA
# start non-interactive batch job
$MPIEXEC --propagate=STACK $FLAGS_MPI_BATCH ls-dyna_mpp_intel i=sFSWmodel.k
Without a $ before STACK, just as in the documentation!
Hello everybody,
I want to run LSDYNA with intelmpi and I'm trying the script (Distributed Memory (Multi-Node, MPI) Parallel Job) as documented here:
https://doc.itc.rwth-aachen.de/display/CC/lsdyna
However, I get this failure message:
(OK) Loading TECHNICS environment
(EE) intelmpi/2018.4.274 already loaded, try unloading it first.
(!!) Please notice: Using lsdyna requires payment.
(!!) If in doubt, please contact your institute's IT-administrator or servicedesk(a)itc.rwth-aachen.de.
(OK) Loading lsdyna R9.1.0
(!!) hybrid parallelised versions for intelmpi only
(!!) MPI parallelised versions for intelmpi or openmpi/1.8.4
/var/spool/slurm/job1878073/slurm_script:33: command not found: --propagate=STACK
Is there maybe something wrong with the script given in the documentation? The variable STACK seems to be undefined, or is it?
My job script looks like this:
---------------------------------------------------------------------------------------------------------------------------------------
#!/usr/local_rwth/bin/zsh
### Job name
#SBATCH -J LSDYNA_OMP
### File / path where output will be written, the %J is the job id
#SBATCH -o LSDYNA_OpenMPI.%J
### Request the time you need for execution in minutes
### The format is: [hour:]minute, for 80 minutes you can use: 1:20
#SBATCH -t 120:00:00
### Request memory you need for your job in MB
#SBATCH --mem-per-cpu=2000M
#SBATCH --nodes=1
ulimit -s 600000
### Request the number of compute slots you want to use
#SBATCH --ntasks=12
#SBATCH --mail-type=end
#SBATCH --mail-user=sim(a)isf.rwth-aachen.de
#SBATCH --account=rwth0398
### load modules
module load TECHNICS
module load intelmpi
module load lsdyna
cd $WORK/LSDYNA
# start non-interactive batch job
$MPIEXEC --propagate=$STACK $FLAGS_MPI_BATCH ls-dyna_mpp_intel i=sFSWmodel.k
--------------------------------------------------------------------------------------------------------------------------
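To narrow this down, I will add a quick check right before the launch line (just a sketch; the "command not found: --propagate=STACK" message looks like the shell executed the option itself, which would happen if $MPIEXEC expands to nothing):
echo "MPIEXEC='$MPIEXEC' FLAGS_MPI_BATCH='$FLAGS_MPI_BATCH'"   # should print the launcher set by the modules
: ${MPIEXEC:?MPIEXEC is empty, check the module load order}    # aborts the job with a clear message if it is not set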
Best Regards,
Marek
Hi,
I'm running a rather large job array on the integrated hosting part (in
the moves account). Our understanding is that all the hardware we
contributed to the IH should be shared among the jobs of this account;
however, far fewer (array) jobs are running than I would expect. Right now
there is only a single job array running for this account.
The job array has 6000 individual jobs; each needs a single core (I
don't set any arguments affecting core selection) and runs for up
to four minutes. Hence Slurm should have a rather easy job keeping every
core busy. Given that we should have 7 nodes with 48 cores each, I
expect the number of running jobs to be at least 200-300 or so
(depending on how many jobs terminate very quickly and how long Slurm
takes to start new ones).
However, I see from `squeue -A moves -t R` that the number of jobs is
usually around 20-30, sometimes below 10, and never seems to exceed 50.
Are there any limits on how many jobs are run concurrently?
If yes: What are these? Please increase them appropriately, at least for
IH accounts, so that we can actually use our hardware...
If no: What is going on here? I don't set any particular options in the
job, constraints are -C hpcwork -C skx8160. sinfo tells me that the
respective nodes are all available (mix or idle).
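For reference, this is how I would check whether an association or QOS limit is capping the number of running jobs (a sketch; I'm assuming sacctmgr is usable from the login nodes):
sacctmgr show assoc user=$USER format=Account,Partition,MaxJobs,MaxSubmit,GrpTRES   # per-association limits
sacctmgr show qos format=Name,GrpJobs,MaxJobsPU,MaxSubmitJobsPU,GrpTRES             # limits attached to the QOS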
Best,
Gereon
--
Gereon Kremer
Lehr- und Forschungsgebiet Theorie Hybrider Systeme
RWTH Aachen
Tel: +49 241 80 21243
Hi,
I've lately noticed some of my jobs failing (timing out) with:
srun: Job 1692770 step creation temporarily disabled, retrying
srun: error: Unable to create step for job 1692770: Unable to contact
slurm controller (connect failure)
Any ideas what could be going wrong? I've been running similar jobs for
a long time, and this type of failure seems quite recent...
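In case it is useful for debugging, these are the quick checks I can run from the frontend while this happens (just a sketch):
scontrol ping          # does the slurm controller answer at all?
squeue -j 1692770      # does the job still hold its allocation?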
Best regards
Pavel
Dear all,
from time to time I keep getting errors similar to this one when
submitting jobs:
RuntimeError: Execution of 'sbatch -t 3600 --mem-per-cpu=10G
--account=rwth0333 --job-name ChairliftRide_8192x4096_QP22_FTBE0to32 -o
log/ChairliftRide_8192x4096_QP22_FTBE0to32.queue_out.log -e
log/ChairliftRide_8192x4096_QP22_FTBE0to32.queue_out.log
rz_start_anysim.sh' exited with status != 0 (1): sbatch: error: Batch
job submission failed: Socket timed out on send/recv operation
Is anyone else having this problem? Doing the same submission again works
fine. It looks like the controller cannot handle the load of submissions?
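As a workaround I am considering wrapping the submission in a small retry loop, roughly like this (untested sketch, using the flags from above):
for attempt in 1 2 3 4 5; do
    sbatch -t 3600 --mem-per-cpu=10G --account=rwth0333 rz_start_anysim.sh && break
    echo "sbatch failed (attempt $attempt), retrying in 30 s" >&2
    sleep 30
done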
Best
Johannes
--
M.Sc. Johannes Sauer
Researcher
Institut fuer Nachrichtentechnik
RWTH Aachen University
Melatener Str. 23
52074 Aachen
Tel +49 241 80-27678
Fax +49 241 80-22196
sauer(a)ient.rwth-aachen.de
http://www.ient.rwth-aachen.de
Dear All,
I tried to run MATLAB using a script, but it failed with an error. Can
anyone tell me if MATLAB is already preinstalled? If it is preinstalled,
what are the commands to load the corresponding modules?
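For reference, this is roughly what I expected to work on a login node (the module name is just my guess; 'module avail' should show what is actually installed):
module avail 2>&1 | grep -i matlab          # list installed MATLAB modules, if any
module load MATLAB                          # exact name/version taken from the output above
matlab -nodisplay -r "disp(version); exit"  # quick sanity check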
Thanks for your help and best regards,
Georg
Dear Subrata, dear all,
The error message is from Slurm itself. For whatever reason, it doesn't
find the Slurm configuration file.
Which frontend (login) are you using?
For example, on the login nodes you should be able to locate the file,
with all read flags set:
> login18-1:~$ echo $HOSTNAME
> login18-1.hpc.itc.rwth-aachen.de
> login18-1:~$ locate slurm.conf
> /etc/clustershell/groups.conf.d/slurm.conf.example
> /etc/slurm/slurm.conf
> /usr/share/doc/slurm-18.08.5-2/html/slurm.conf.html
> /usr/share/man/man5/slurm.conf.5.gz
> login18-1:~$ ls -lah /etc/slurm/slurm.conf
> -r--r--r-- 1 root root 7.3K Apr 18 11:08 /etc/slurm/slurm.conf
On a different frontend, e.g. the copy server, you cannot:
> copy:~$ echo $HOSTNAME
> copy.hpc.itc.rwth-aachen.de
> copy:~$ locate slurm.conf
> /etc/clustershell/groups.conf.d/slurm.conf.example
> /usr/share/doc/slurm-18.08.5-2/html/slurm.conf.html
> /usr/share/man/man5/slurm.conf.5.gz
> copy:~$ ls -lah /etc/slurm/slurm.conf
> ls: cannot access /etc/slurm/slurm.conf: No such file or directory
Why sbatch is nevertheless available on that node is a different question:
> copy:~$ command -v sbatch
> /usr/bin/sbatch
On the old frontends the configuration file obviously doesn't exist
either, but sbatch still exists.
> cluster:~$ echo $HOSTNAME
> cluster.rz.RWTH-Aachen.DE
> cluster:~$ locate slurm.conf
> /etc/clustershell/groups.conf.d/slurm.conf.example
> /usr/share/doc/slurm-18.08.5-2/html/slurm.conf.html
> /usr/share/man/man5/slurm.conf.5.gz
> cluster:~$ command -v sbatch
> /usr/bin/sbatch
So make sure you login to one of the login nodes.
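A quick check for whether the current host has a usable Slurm client configuration (just a sketch, output omitted):
[ -r /etc/slurm/slurm.conf ] && sinfo --version || echo "no slurm.conf here, submit from a login18-* node"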
Best, Martin
Dr. Martin C. Schwarzer
RWTH Aachen University (Leitner Group, Hölscher Subgroup)
Worringerweg 2; 52074 Aachen; Germany
http://orcid.org/0000-0001-8435-9624
E-mails sent by me are always PGP/MIME signed.
If you are unable to open attachments because of this,
you might be using an outdated e-mail client.
In this case, please contact me and I will send you
the message with PGP/inline instead.
On 2019-04-25 10:04, Subrata Pramanik wrote:
> Dear All,
>
> I am using Gromacs with following script:
>
> #!/usr/bin/env zsh
>
> #SBATCH -p c18m
> #SBATCH --account=jara0187
> #SBATCH -J Subrata1
> #SBATCH -o GROMACSJOB.o%J
> #SBATCH -e GROMACSJOB.e%J
> #SBATCH --mail-user=s.pramanik(a)biotec.rwth-aachen.de
> #SBATCH --mail-type=BEGIN
> #SBATCH --mail-type=end
> #SBATCH -n 24
> #SBATCH --mem-per-cpu=5000
> #SBATCH -t 24:00:00
>
> ###### end of Slurm directives ######
>
> ###### start of shell commands ######
>
> ### load the necessary module files
> module load CHEMISTRY
> module load gromacs/5.1.2
>
> #path for job file
>
> cd $HOME/run1-test
>
>
> # Start the MPI-parallel mdrun directly ...
>
> $SLURM_NTASKS gmx_mpi mdrun -deffnm md_0_1 -cpi md_0_1.cpt -append
>
> I am getting this error:
> sbatch: error: s_p_parse_file: unable to status file
> /etc/slurm/slurm.conf: No such file or directory, retrying in 1sec up to
> 60sec
> sbatch: error: ClusterName needs to be specified
> sbatch: fatal: Unable to process configuration file
>
> If anybody has any suggestions, please let me know.
>
> Thanks in advance.
>
> Best regards,
> Subrata
>
> On Thu, Apr 25, 2019 at 9:57 AM Julien Guénolé <guenole(a)imm.rwth-aachen.de> wrote:
>
> Thanks for replying to the list: it can always be helpful to others...
>
> If you get an error with these,
>
> #SBATCH -p c18m
> #SBATCH --account=jara0187
>
> I don't know what more to suggest.
>
> Best wishes,
> Julien
>
> On 25/04/2019 09:20, Subrata Pramanik wrote:
>> Dear Julien,
>>
>> Thank you for your suggestion.
>>
>> In the LSF, I had this way:
>>
>> ##BSUB -P rwth0031
>> #BSUB -P jara0187
>>
>> In the SLURM, I changed in these ways:
>>
>> #SBATCH -p c18m
>> #SBATCH --account=jara0187
>>
>> OR
>>
>> #SBATCH -p rwth0031
>> #SBATCH --account=jara0187
>>
>> In both cases, I am getting the same error:
>> sbatch: error: s_p_parse_file: unable to status file
>> /etc/slurm/slurm.conf: No such file or directory, retrying in 1sec
>> up to 60sec
>> sbatch: error: ClusterName needs to be specified
>> sbatch: fatal: Unable to process configuration file
>>
>> I am herewith sending the script file. If possible, please have a
>> look and suggest me.
>>
>> Thanks in advance.
>>
>> Best regards,
>> Subrata.
>>
>>
>> On Wed, Apr 24, 2019 at 5:36 PM Julien Guénolé <guenole(a)imm.rwth-aachen.de> wrote:
>>
>> As indicated by the error message, did you try to specify the
>> ClusterName? For example:
>>
>> #SBATCH -p c18m
>>
>> if there is still an error, a copy of the batch file could be
>> useful!
>>
>> Best wishes,
>> Julien
>>
>>
>> On 24/04/2019 17:27, Subrata Pramanik wrote:
>>> Hi all,
>>>
>>>
>>> I am trying to submit a Gromacs job to the cluster.
>>>
>>> With LSF, it was working well.
>>>
>>> I am getting the following error:
>>>
>>> sbatch: error: s_p_parse_file: unable to status file
>>> /etc/slurm/slurm.conf: No such file or directory, retrying in
>>> 1sec up to 60sec
>>> sbatch: error: ClusterName needs to be specified
>>> sbatch: fatal: Unable to process configuration file
>>>
>>> Please suggest me to solve this problem.
>>>
>>> Thanks in advance.
>>>
>>> Best regards,
>>> Subrata.
>>>
>>>
>>> On Tue, Mar 12, 2019 at 8:20 PM Philipp Rüßmann <p.ruessmann(a)fz-juelich.de> wrote:
>>>
>>> Dear Marek,
>>>
>>> the command is #SBATCH --account=abcd1234.
>>>
>>> Cheers,
>>> Philipp
>>>
>>>
>>> On 12.03.19 at 20:01, simon(a)isf.rwth-aachen.de wrote:
>>> > Hi all,
>>> >
>>> > Is there an equivalent of the #BSUB -p command in SLURM
>>> as well?
>>> >
>>> > Best Wishes,
>>> > Marek
>>>
>>>
>>> --
>>> Dr. rer. nat. Philipp Rüßmann
>>> Peter Grünberg Institut (PGI-1) and Institute for
>>> Advanced Simulation (IAS-1)
>>> Forschungszentrum Jülich GmbH, D-52425 Jülich
>>> Tel.: +49 2461 61-5523
>>> E-Mail: p.ruessmann(a)fz-juelich.de
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Yours Sincerely,
>>>
>>> Subrata Pramanik
>>>
>>
>>
>>
>>
>> --
>> Yours Sincerely,
>>
>> Subrata Pramanik
>
>
>
>
> --
> Yours Sincerely,
>
> Subrata Pramanik
>
>
Thanks for replying to the list: it can always be helpful to others...
If you get an error with these,
#SBATCH -p c18m
#SBATCH --account=jara0187
I don't know what more to suggest.
Best wishes,
Julien
--
**************************************************
Dr. Julien GUÉNOLÉ
Research Group Head
----------------------
Institute of Physical Metallurgy and Metal Physics
RWTH Aachen University
Kopernikusstrasse 14
52074 Aachen, GERMANY
---------------------- Room 202
Phone [office] +49 241 80 26866
Email guenole(a)imm.rwth-aachen.de
Web http://www.julien-guenole.fr
Twitter @nanouayeur
**************************************************