I have started to look into Hadoop. If my understanding is right, I could process a very big file and it would get split over different nodes. However, if the file is compressed, can it still be split and processed this way?
Yes, you could have one large compressed file or multiple compressed files (multiple files can be specified with -files or via the API).
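For example, a minimal driver sketch (assuming a Hadoop 2.x-style API; the class name, paths, and the omitted mapper/reducer are placeholders) that adds several compressed files as input via the API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressedInputJob {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "compressed input");
            job.setJarByClass(CompressedInputJob.class);
            // TextInputFormat detects compressed input (e.g. .gz) by file extension
            job.setInputFormatClass(TextInputFormat.class);
            // one large compressed file, or several of them
            FileInputFormat.addInputPath(job, new Path("/data/big-log.gz"));
            FileInputFormat.addInputPath(job, new Path("/data/more-logs.gz"));
            FileOutputFormat.setOutputPath(job, new Path("/out"));
            // job.setMapperClass(...) / job.setReducerClass(...) omitted for brevity
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }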
TextInputFormat and its descendants should automatically handle .gz-compressed files. You can also implement your own InputFormat (which splits the input file into chunks for processing) and RecordReader (which extracts one record at a time from a chunk); see the skeleton below.
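A minimal skeleton of a custom InputFormat, assuming the org.apache.hadoop.mapreduce API (the class name is illustrative); it just delegates record extraction to the built-in LineRecordReader, which you would replace with your own RecordReader for a custom record format:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

    public class MyTextInputFormat extends FileInputFormat<LongWritable, Text> {
        @Override
        public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
            // FileInputFormat handles splitting the input into chunks (InputSplits);
            // the RecordReader pulls one record at a time out of a chunk. Here we
            // reuse LineRecordReader, which already knows how to decompress .gz input.
            return new LineRecordReader();
        }
    }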
Another alternative for generic compression might be to use a compressed file system (such as ext3 with the compression patch, ZFS, compFUSEd, or FuseCompress...).