What is a container in YARN? Is it the same as the child JVM in which the tasks on the NodeManager run, or is it different?
Depending on the size of the input data, multiple input splits are created. The MR job needs to process all of this data, so multiple tasks (map and reduce tasks) are created, and each input split is processed by one task. Where each task runs is decided with the help of the ResourceManager. The ResourceManager knows which NodeManagers are free and which are busy; it is like the principal of a college, where the NodeManagers are the class teachers and the principal knows which teacher is free. So it arranges for a NodeManager to run that task (a small fraction of the entire job) in a container, i.e. an allocation of memory in which a JVM is launched. The ApplicationMaster that coordinates the job itself also runs inside a container.
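The flow above can be sketched as a toy simulation. All names here are illustrative, not the Hadoop API: the "principal" (ResourceManager) assigns each input split's map task to a "teacher" (NodeManager) that still has a free container slot.

```python
# Toy model of the split -> task -> container assignment described above.
# Function and variable names are hypothetical; this is NOT the Hadoop API.

def schedule_tasks(input_splits, node_capacity):
    """Assign one map task per input split, round-robin, to nodes that
    still have free container slots (node_capacity maps node -> slots)."""
    assignments = {node: [] for node in node_capacity}
    nodes = list(node_capacity)
    i = 0
    for split in input_splits:
        # look for the next node with a free container slot
        for _ in range(len(nodes)):
            node = nodes[i % len(nodes)]
            i += 1
            if len(assignments[node]) < node_capacity[node]:
                assignments[node].append(f"map-task({split})")
                break
    return assignments

# Example: 4 input splits, two NodeManagers with 2 and 3 free slots.
print(schedule_tasks(["s0", "s1", "s2", "s3"], {"nm1": 2, "nm2": 3}))
# prints {'nm1': ['map-task(s0)', 'map-task(s2)'], 'nm2': ['map-task(s1)', 'map-task(s3)']}
```

In real YARN the ApplicationMaster asks the ResourceManager for containers and the scheduler makes this placement decision; the round-robin choice here is only a stand-in for that logic.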
The word 'Container' is used in YARN in two contexts:
Container: signifies resources allocated to an ApplicationMaster. The ResourceManager is responsible for issuing resources/containers to an ApplicationMaster. Check the Container API.
Launching a container: based on the allocated resources (containers), the ApplicationMaster requests the NodeManager to start containers, resulting in tasks being executed on a node. Check the ContainerManager API.
In simple terms, a container is the place where a YARN application runs. Containers are available on each node. The ApplicationMaster negotiates containers with the Scheduler (a component of the ResourceManager), and containers are launched by the NodeManager.
It represents a resource (memory) on a single node in a given cluster.
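The amount of resources a node offers for containers is configured on each NodeManager in yarn-site.xml. These are real Hadoop property names, but the values below are only example settings:

```xml
<!-- yarn-site.xml: resources this NodeManager offers to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value> <!-- total memory (MB) available for containers on this node -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value> <!-- total virtual cores available for containers on this node -->
</property>
```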
A container is the execution environment for a task: one MR task runs in such a container.
A container is the place where an application runs its tasks. If you want to know the total number of running containers in a cluster, check your cluster's YARN ResourceManager UI.
YARN URL: http://Your-Active-ResourceManager-IP:45020/cluster/apps/RUNNING (the port depends on your cluster's configuration; 8088 is the default)
At the "Running containers" column, the total no. of running containers details is present.
Note: if you are using Spark on YARN, the Spark executors run inside containers; each executor is launched in its own container.
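For context, the size of the YARN container that hosts a Spark executor is roughly the executor memory plus a memory overhead, both of which are configurable. These are real Spark property names (for recent Spark versions), but the values below are only examples:

```properties
# spark-defaults.conf (example values)
# Each executor's YARN container is sized from these two settings:
spark.executor.memory          4g
# off-heap overhead added on top of executor memory
# (if unset, Spark defaults to max(384 MB, 10% of executor memory))
spark.executor.memoryOverhead  512m
```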
There can be multiple containers on a single node (or a single very big one).
Every node in the system is considered to be composed of multiple containers of a minimum memory size (say 512 MB or 1 GB). The ApplicationMaster can request any container as a multiple of this minimum memory size.
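The "multiple of the minimum size" behaviour can be illustrated with a short sketch: the scheduler normalizes each request by rounding it up to the next multiple of the minimum allocation (the value behind yarn.scheduler.minimum-allocation-mb). The function name is illustrative, not the Hadoop API:

```python
# Toy illustration of YARN's request normalization: a container request is
# rounded UP to the next multiple of the scheduler's minimum allocation.
# normalize_request is a hypothetical name, not a real Hadoop function.

def normalize_request(requested_mb, minimum_mb=1024):
    """Round a memory request up to a multiple of the minimum allocation."""
    multiples = -(-requested_mb // minimum_mb)  # ceiling division
    return multiples * minimum_mb

print(normalize_request(1500))  # a 1500 MB request becomes a 2048 MB container
```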
Source: see the ResourceManager/Resource Model section.