I have a directory with about 100,000 small text files (each 1-3 lines). The directory as a whole isn't large (< 2 GB). The data lives on a professionally administered NFS server running Linux. I believe the filesystem is ext3, but I don't know for sure, and I don't have root access to the server.
These files are the output of a large-scale scientific experiment over which I have no control. However, I have to analyze the results.
Any I/O operation in this directory is very, very slow. Opening a file (`open()` in Python), reading from an open file, and closing a file are all very slow. In bash, `ls`, `du`, etc. don't complete in any reasonable time.
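For reference, my processing loop looks roughly like this (the directory path and file names here are illustrative; the snippet builds a tiny stand-in directory so it runs anywhere):

```python
import os
import tempfile

# Stand-in for the real directory, which holds ~100,000 files on NFS.
tmp = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(tmp, f"result_{i}.txt"), "w") as f:
        f.write("line1\nline2\n")

# The actual work: visit every file and read its few lines.
# os.scandir avoids the extra stat() per entry that
# os.listdir + os.path.isfile would trigger over NFS.
lines_seen = 0
with os.scandir(tmp) as entries:
    for entry in entries:
        if entry.name.endswith(".txt"):
            with open(entry.path) as f:
                lines_seen += sum(1 for _ in f)

print(lines_seen)  # 100 files x 2 lines each -> 200
```

Locally this finishes instantly; on the NFS directory the same pattern crawls.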
The question is:
Up to roughly how many files per directory does Linux stay practical for processing (open, read, etc.)? I understand that the answer depends on many things: filesystem type, kernel version, server version, hardware, and so on. I just want a rule of thumb, if possible.