Watching the verbose output of the tar command (the -v flag) by eye, it appears to run much slower inside a sub-directory containing about four hundred thousand (400,000) small files; that sub-directory in turn contains other sub-directories holding thousands more small files.
When tar starts packing these sub-directories, each file is reported one by one with roughly 1 to 2 seconds between them, which is incredibly slow considering the files are only a few bytes to a few dozen KBytes in size.
The filesystem is JFS2, hosted on an AIX 7.1 system. It is stored on a storage array that uses some SSD-based RAID redundancy mode ("solid-state disks"). The storage system reports no alerts or issues of any kind.
Many tests were run, directing the tar output either to a tape device or to a regular file, but the following test alone is enough to reproduce the unexpected slowness:
tar -cvf /dev/null .
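For reference, a rough way to narrow the problem down (just a sketch; the `find` timing was not part of the original tests) is to separate the directory/metadata walk from the per-file open and read that tar performs:

```shell
# Run from inside the slow sub-directory.

# Metadata-only pass: find just looks up and stat()s every entry,
# without opening or reading any file contents.
time find . -type f > /dev/null

# tar's full pass: open(), read(), and close() on every file,
# with the archive output discarded (no -v, to exclude terminal I/O).
time tar -cf /dev/null .
```

If the `find` pass is already slow, the bottleneck is in directory lookups and inode metadata rather than in reading file data.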
How does JFS2 handle many small files? Is there a JFS2 filesystem setting that can work around this? And how does tar deal with this kind of file structure?
EDIT: Concurrency info
Tests were also run both concurrently and non-concurrently with other services using this filesystem and the rest of the system, but that does not change the slowness: every time the tar command starts on the sub-directories with many files, it slows down in the same way.