this outlines some issues with fileschanged,
and some other things that need doing.

------
look into how stat works on NFS files and directories.
 --I'm using the device id and inode from stat to get a unique key.
statfs doesn't give me a unique key of any kind, and statvfs isn't present.
if I can't get a unique key then I have to use something else.  ick.
even if stat reports an inode and device id for an NFS file, it still
wouldn't be unique.  I'd have to add in the hostname as part of the key.
hey, maybe that's why there's a hostname in the FAMEvent structure...
but I need to know it beforehand.  darn.
then again, maybe stat does return unique device ids and inodes for NFS files.
if fileschanged's keys aren't unique then some files might not be able
to be monitored, and it wouldn't give an error saying so.

hmmm..
why not just use the filenames themselves?
it's a heck of a long key, but it IS unique.
that seems to make it segfault even more, though.

-------

fileschanged is ignoring certain FAM codes...
I'm ignoring FAMExists and FAMEndExist.  should I be?
I'm doing my own accounting of the files I'm monitoring.

--------
can recursive loops happen when I dereference symlinks?
I don't think so.
recursive loops could happen with NFS, though.

------------
When recursing over large numbers of files it can hang.
fixed this problem the first time around with the "second chance" code
inside receive_fam_notification().
maybe I need a third chance?
right now fileschanged hangs when I try to monitor my whole filesystem,
but it works when I monitor all of /usr .
maybe it's the fam server that's causing it; I don't know.
this is a time-consuming problem to replicate.

----------
problem with monitoring a file that gets written to so repeatedly
that it never gets through the FAMPending loop.
eg) fileschanged --recursive --files-to-monitor /tmp/this > /tmp/this
need a way to detect very frequently changing files and then cancel
monitoring of them.  eg) files changing so frequently that the program
can't operate.

--------------------
don't use static arrays for the written-list code...  it's a total kludge.
use something that's more efficient and modularized.
current limitation: if more than 4096 files change within 2 seconds,
this software breaks.

use gdbm for the written list.
----------------
find out why monitoring executing files doesn't do anything.
---------
need a smarter way to get the first file on the command line.
filenames in a filelist can't contain spaces.
---
option to use slocate.db somehow.
maybe it can help speed things up.
----
calmly handle binary filelists.
keep track of the number of files we attempted to monitor
versus the number of files actually monitored.
when the failure rate is high enough, show a warning.
