When you create an external table in Hive (on Hadoop) with an Amazon S3 source location, is the data transferred to the local Hadoop HDFS on table creation, or only when queries run against it?
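For concreteness, the kind of table definition in question looks something like this (the table name, columns, and bucket path are placeholders; the URI scheme is s3n://, s3://, or s3a:// depending on your Hadoop/EMR version):

    CREATE EXTERNAL TABLE logs (
        ts  STRING,
        msg STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION 's3n://my-bucket/logs/';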
No, the data stays in S3. Map tasks will read it directly from S3. Between the map and reduce phases, intermediate data is written to the local filesystem, and between MapReduce jobs (in queries that require multiple jobs) temporary data is written to HDFS.
If you are concerned about S3 read costs, it might make sense to create another table that is stored on HDFS, and do a one-time copy from the S3 table to the HDFS table.
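A minimal sketch of that one-time copy, assuming the placeholder schema from the example above:

    -- Managed table on HDFS, stored under Hive's default warehouse directory
    CREATE TABLE logs_hdfs (
        ts  STRING,
        msg STRING
    );

    -- One-time copy: a single job reads from S3 and writes to HDFS
    INSERT OVERWRITE TABLE logs_hdfs
    SELECT * FROM logs;

Subsequent queries can then run against logs_hdfs without touching S3.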