[olug] NEED to kill a process....

Jay Bendon jaybocc2 at gmail.com
Tue May 22 18:46:48 UTC 2012


In the future you may want to do a soft mount of the NFS share (as opposed
to what appears to be a hard mount in use right now).  With a hard mount,
NFS will keep retrying its operation indefinitely, which includes waiting
for the SAN to come back.  A soft mount will cause the operation to error
out if it fails instead.
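
For example, something like this (the server name, export path, and mount
point below are just placeholders, and the timeo/retrans values would need
tuning for your environment):

  # retry each request 3 times with a 10s timeout, then return an error
  # (EIO) to the application instead of hanging forever
  mount -t nfs -o soft,timeo=100,retrans=3 fileserver:/export /mnt/nfs

  # or the equivalent /etc/fstab entry
  fileserver:/export  /mnt/nfs  nfs  soft,timeo=100,retrans=3  0  0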
--Jay


On Tue, May 22, 2012 at 1:42 PM, Jay Bendon <jaybocc2 at gmail.com> wrote:

> When children are misbehaving, kill their parents.
>
> kill the parent process and you should be ok.
>
> You can also prompt the parent to wait() on the process, which is what it
> should have done in the first place.  This is usually how I handle a hung
> or zombie process.
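>
> For example (1234 here is just a stand-in for the hung process's PID):
>
>   # find the parent of the hung process
>   ps -o ppid= -p 1234
>
>   # TERM the parent first, and escalate to -9 only if it ignores that
>   kill $(ps -o ppid= -p 1234)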
>
> It could be an issue at the driver level.  You can try rmmod {nfs,nfsd}
> (maybe lockd and nfs_acl as well) and whatever other modules are used by
> the SAN/NFS, then reinsert the modules.
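>
> Roughly like this (the mount point is a placeholder, and lsmod will tell
> you which NFS-related modules are actually loaded on your kernel):
>
>   lsmod | grep -E 'nfs|lockd'   # see what's loaded
>   umount -f /mnt/nfs            # the mount has to go first (-l if -f hangs)
>   rmmod nfs lockd nfs_acl       # remove the client modules
>   modprobe nfs                  # reinsert; modprobe pulls deps back in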
> --Jay
>
>
>
> On Tue, May 22, 2012 at 12:38 PM, Christopher Cashell <topher-olug at zyp.org
> > wrote:
>
>> On Tue, May 22, 2012 at 12:29 PM, Noel Leistad <noel at metc.net> wrote:
>> > Oddly, it's clustered w/ another box that accesses SAN, LOAD on both
>> > nearing 100 98 94, although, performance hasn't "pooped the boat"
>> yet....
>>
>> The load value on the box with the hung process can be a little
>> misleading, too.  Processes that are in an uninterruptible IO state like
>> this one get counted along with runnable processes when the load value
>> is calculated.  This can paint a misleading picture: you could have 50
>> hung processes sitting in IO wait, using negligible CPU (say a CPU
>> average of 0.1%), but because of those hung processes you'll still end
>> up with a load average of 50.
>>
>> If there's just a single hung process, that may not skew the load
>> average quite as much, though.
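>>
>> If you want to see which processes are being counted that way, something
>> like this should list anything stuck in D (uninterruptible sleep) state:
>>
>>   ps -eo pid,stat,wchan:20,comm | awk '$2 ~ /^D/'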
>>
>> > Noel
>>
>> --
>> Christopher
>> _______________________________________________
>> OLUG mailing list
>> OLUG at olug.org
>> https://lists.olug.org/mailman/listinfo/olug
>>
>
>


