Since "handle big data files" doesn't say much about what you actually want to do, I'm guessing some simple data processing.
Bash shell scripts are generally quite slow by themselves, but for text processing they may be just what you need, in combination with CLI tools such as grep, sed, awk, cut, sort, uniq, wc and others. In many cases this is the fastest route to a working solution, especially if it is a one-off utility. For more on this approach, see [1].
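As a sketch of what such a pipeline looks like, here is a classic frequency count over one column of a tab-separated file (the file name and data are made up for illustration):

```shell
# Create a small sample file to demonstrate on (hypothetical data)
printf 'a\tx\nb\ty\nc\tx\nd\tx\ne\ty\n' > data.tsv

# Rank the most frequent values in column 2:
# cut extracts the column, sort groups identical values,
# uniq -c counts each run, sort -rn ranks by count
cut -f2 data.tsv | sort | uniq -c | sort -rn | head -10
```

Each tool does one small job and streams its output to the next, so this works on files far larger than memory.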
If you need to handle big data files in an existing project, I would recommend against rewriting it in a different language [2].
If you need to write a new tool for processing these files, and have no constraints, then a C++ program can produce the fastest code. But "speed" also includes speed of development, which, as mentioned above, should be weighed against how often you will actually run the program.
1: http://www.commandlinefu.com/
2: http://onstartups.com/tabid/3339/bid/2596/Why-You-Should-Almost-Never-Rewrite-Your-Software.aspx