Hi Piers,
You could try creating a new file in /tmp when the process starts,
then removing it when the script finishes. Subsequent scripts can
check for the presence of this file when they start. If it exists,
simply wait until it is gone, or exit the script.
Don't forget to log the starts and the exits as they occur.
At least then you will see the issues clearly.
Example

#!/bin/bash
# ScriptRun
#
# Wait until any previous instance has finished ...
while [ -e /tmp/ScriptOn ]; do
    sleep 5
done
# ... or, alternatively, exit immediately if one is running:
# [ -e /tmp/ScriptOn ] && exit

touch /tmp/ScriptOn

# Do the rest of your work after the check.
...

# On exit from the current instance
rm -f /tmp/ScriptOn
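One caveat with the check-then-create approach is that two instances starting at almost the same moment can both pass the check before either creates the file. If that becomes a problem, flock(1) from util-linux takes the lock atomically. A minimal sketch (the lock-file name is just an example):

```shell
#!/bin/bash
# Open (or create) the lock file on file descriptor 9.
exec 9>/tmp/ScriptOn.lock

# Try to take an exclusive lock without blocking; fail fast if
# another instance already holds it.
if ! flock -n 9; then
    echo "Another instance is running; exiting." >&2
    exit 1
fi

# ... do the work here; the kernel releases the lock automatically
# when the script exits, even if it crashes, so there is no stale
# lock file to clean up.
```

Because the lock is released on process exit, a crashed run can't leave a stale lock behind the way a leftover /tmp file can.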
Hope this helps.
best wishes
Tony White
On 13/2/19 12:45 pm, Piers via luv-main wrote:
> Hi there,
>
> I have a server that regularly runs jobs to import email via IMAP from remote servers.
>
> The cron runs a bash script that in turn iterates through the customers' install directories and then goes through each
> mailbox that users have configured and parses the email. This is done by going back to get 100 email headers, checking
> that each message ID hasn't already been imported and, if not, getting the full body.
>
> The problem is that these jobs seem to run over each other and then eventually (occasionally) it will hang the server.
>
> I've tried a few things like getting fewer headers, spacing the cron jobs out and using lock files, but no combination
> has resolved this issue. I suspect that some of the IMAP services take too long to respond or rate-limit connections (but
> since I don't control them I can't be sure).
>
> I am looking for suggestions as to:
>
> - How to fix the underlying problem
>
> - How to configure Nagios to monitor specific processes
>
> Many thanks
>
> Piers
>
>
_______________________________________________
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main