
Viewing SASv17.0/84519

From: pk394@cam.ac.uk
Subject: OutOfMemory error in emproc::hkgtigen


Date: Fri, 11 Jan 2019 17:28:55 +0100
From: pk394@cam.ac.uk
To: xmmhelp@sciops.esa.int
Subject: OutOfMemory error in emproc::hkgtigen
Full_Name: Peter Kosec
Submission from: (NULL) (193.147.152.102)


Hi,

We are getting the following error when trying to run the emproc task, and also
the epproc and rgsproc tasks, while reducing XMM-Newton data:

** emproc::hkgtigen: fatal error (OutOfMemory), An out-of-memory condition has
occurred: A component of the task `hkgtigen' has tried to allocate more dynamic
memory than what is currently available in the system. A likely cause of this is
an attempt to read and/or write a very large data set with the DAL in its
high-memory-mode. In this case, please choose the low-memory-model (see the DAL
documentation for details) and try again. If this does not help it might be that
too many users are using the system or the swap space is insufficient.

This is happening on several datasets, including 0831790301, with both SAS
versions xmmsas_20180620_1732 and xmmsas_20170719_1539, on Fedora 28 Linux.

For emproc and epproc the reduction proceeds after the error and we were able to
obtain the processed data sets, but the error immediately crashes rgsproc.

We have monitored RAM usage during the reduction and noticed no spikes. The
error appears immediately after the hkgtigen task starts within emproc, epproc
or rgsproc.

Thanks,
Peter
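
The error text above suggests switching the DAL to its low-memory model. For
reference, a minimal sketch of how that mode is normally selected before running
the processing chains, assuming the SAS_MEMORY_MODEL environment variable
described in the DAL documentation (confirm the exact name and values there) and
an already initialised SAS session:

  # select the DAL low-memory model before re-running the processing chains
  export SAS_MEMORY_MODEL=low   # assumed variable name; values high/low per the DAL docs
  emproc
  epproc
  rgsproc

(As the followups below show, the memory model was not the issue in this case;
the trigger turned out to be a glibc update.)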


Reply 1

From: Ignacio de la Calle <xmmhelp@sciops.esa.int>
To: pk394@cam.ac.uk
Subject: Re: OutOfMemory error in emproc::hkgtigen (PR#84519)
Date: Fri Jan 11 18:09:23 2019
Dear Peter Kosec,

Could you send me the specifications of the PC where you are running your
analysis? In the meantime I will try to reproduce the error.

Regards,

Ignacio de la Calle
XMM-Newton SOC



Followup 1

Date: Fri, 11 Jan 2019 17:16:09 +0000
From: "P. Kosec" <pk394@cam.ac.uk>
To: Ignacio de la Calle <xmmhelp@sciops.esa.int>
Cc: "Dr R.M. Johnstone" <rmj@ast.cam.ac.uk>
Subject: Re: OutOfMemory error in emproc::hkgtigen (PR#84519)
Dear Ignacio,

Thank you for the reply. The PC is equipped with an Intel Core i7-4790K and
16 GB of RAM.

Regards,
Peter





Followup 2

Date: Fri, 11 Jan 2019 18:56:58 +0000
From: "P. Kosec" <pk394@cam.ac.uk>
To: Ignacio de la Calle <xmmhelp@sciops.esa.int>
Cc: "Dr R.M. Johnstone" <rmj@ast.cam.ac.uk>
Subject: Re: OutOfMemory error in emproc::hkgtigen (PR#84519)
Dear Ignacio,

We have managed to find the source of the problem: it goes away
when downgrading glibc and its dependent packages from:

glibc-2.27-37.fc28.x86_64
to
glibc-2.27-35.fc28.x86_64

glibc-2.27-37.fc28.x86_64 was released around 22 Dec 2018.


Best Wishes,
Peter
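
For anyone hitting the same problem, a minimal sketch of such a downgrade on
Fedora 28 with dnf, assuming the -35 build is still available in the repositories
(the exact set of glibc subpackages to list, e.g. glibc-common, may vary with the
installation):

  # roll glibc back to the last known-good build
  sudo dnf downgrade glibc-2.27-35.fc28 glibc-common-2.27-35.fc28
  # verify the installed build, then re-run the SAS chain
  rpm -q glibc
  emproc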





Reply 2

From: Ignacio de la Calle <xmmhelp@sciops.esa.int>
To: pk394@cam.ac.uk
Subject: Re: OutOfMemory error in emproc::hkgtigen (PR#84519)
Date: Mon Jan 14 11:25:59 2019
Dear Peter,

Thanks for letting us know. We were a bit surprised by the OutOfMemory
error when running this task. It would have been quite hard to track down.

Regards,

Ignacio de la Calle
XMM-Newton SOC




Followup 3

Date: Mon, 14 Jan 2019 10:32:21 +0000
From: "P. Kosec" <pk394@cam.ac.uk>
To: Ignacio de la Calle <xmmhelp@sciops.esa.int>
Subject: Re: OutOfMemory error in emproc::hkgtigen (PR#84519)
Dear Ignacio,

Just to clarify: at the moment we don't know whether this is a glibc bug or
an hkgtigen bug. This is the changelog between the working glibc -35 build
and the failing -37 build, which might be of use:

* Thu Dec 13 2018 Carlos O'Donell <carlos@redhat.com> - 2.27-37
- Auto-sync with upstream branch release/2.27/master, commit f6d0e8c36f02b387d33f2cc58c7cb204f201d92e.
- rdlock stalls indefinitely on an unlocked pthread rwlock (swbz#23861)

* Thu Dec 13 2018 Florian Weimer <fweimer@redhat.com> - 2.27-36
- Auto-sync with upstream branch release/2.27/master, commit 2794474c655a0f895862a6de9fb79a2fd2cdde28:
- powerpc: missing CFI register information in __mpn_* functions (swbz#23614)
- malloc: Implement tcache double free check (#1647395)
- inet/tst-if_index-long: New test case for CVE-2018-19591 (swbz#23927)
- elf: Fix _dl_profile_fixup data-dependency issue (swbz#23690)

Best,
Peter
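
For completeness, the package changelog quoted above can be reproduced on Fedora
with rpm; a minimal sketch, assuming the package of interest is the installed
glibc build:

  # show the packaging changelog of the installed glibc build
  rpm -q --changelog glibc | head -n 40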




Reply 3

From: Ignacio de la Calle <xmmhelp@sciops.esa.int>
To: pk394@cam.ac.uk
Subject: Re: OutOfMemory error in emproc::hkgtigen (PR#84519)
Date: Mon Jan 14 11:36:00 2019
Dear Peter,

We have never had a problem with hkgtigen in the past. I will pass this
information on to our software librarian and will get back to you if he has
any extra information.

Thanks,

Ignacio de la Calle
XMM-Newton SOC




Followup 4

Date: Tue, 30 Apr 2019 14:37:27 +0200 (CEST)
From: MAILER-DAEMON@scanmail.viaduc.fr (Mail Delivery System)
Subject: Undelivered Mail Returned to Sender
To: xmmhelp@sciops.esa.int

This is the mail system at host scanmail.viaduc.fr.

I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

                   The mail system

<dmokni@dynabuy-grandest.fr>: host 188.165.14.78[188.165.14.78] said: 550 5.1.1
    <dmokni@dynabuy-grandest.fr>: Recipient address rejected: User unknown in
    virtual mailbox table (in reply to RCPT TO command)


Reporting-MTA: dns; scanmail.viaduc.fr
X-Postfix-Queue-ID: B8C1916009F
X-Postfix-Sender: rfc822; xmmhelp@sciops.esa.int
Arrival-Date: Tue, 30 Apr 2019 14:37:26 +0200 (CEST)

Final-Recipient: rfc822; dmokni@dynabuy-grandest.fr
Original-Recipient: rfc822;dmokni@dynabuy-grandest.fr
Action: failed
Status: 5.1.1
Remote-MTA: dns; 188.165.14.78
Diagnostic-Code: smtp; 550 5.1.1 <dmokni@dynabuy-grandest.fr>: Recipient
    address rejected: User unknown in virtual mailbox table

Undelivered message (headers only; body truncated):

Date: Tue, 30 Apr 2019 12:34:41 GMT
From: xmmhelp@sciops.esa.int
To: dmokni@dynabuy-grandest.fr
Subject: Re: Notre Rendez-vous à Saint-Maur-Des-Fossés (PR#84519)
