What is the difference between concurrency and parallelism?
Examples are appreciated.
In electronics, serial and parallel describe a static topology that determines the actual behaviour of the circuit. When there is no concurrency, parallelism is deterministic.
In order to describe dynamic, time-related phenomena, we use the terms sequential and concurrent. For example, a certain outcome may be obtained via a certain sequence of tasks (e.g. a recipe). When we are talking with someone, we are producing a sequence of words. In reality, however, many other processes occur at the same moment and thus contribute to the actual result of a certain action. If a lot of people are talking at the same time, concurrent talks may interfere with our sequence, but the outcome of this interference is not known in advance. Concurrency introduces indeterminacy.
The serial/parallel and sequential/concurrent characterizations are orthogonal. An example of this is digital communication. In a serial adapter, a digital message is temporally (i.e. sequentially) distributed along the same communication line (e.g. one wire). In a parallel adapter, the message is also divided across parallel communication lines (e.g. many wires), and then reconstructed on the receiving end.
Let us imagine a game with 9 children. If we arrange them in a chain, give a message to the first, and receive it at the end, we have serial communication. The message is composed of several words, forming a sequence of communication units.
I like ice-cream so much. > X > X > X > X > X > X > X > X > X > ....
This is a sequential process reproduced on a serial infrastructure.
Now, let us imagine dividing the children into groups of 3. We divide the phrase into three parts, give the first to the child at the head of the line on our left, the second to the child at the head of the center line, and so on.
I like ice-cream so much. > I like > X > X > X > .... > ....
> ice-cream > X > X > X > ....
> so much > X > X > X > ....
This is a sequential process reproduced on a parallel infrastructure (although still partially serialized).
In both cases, supposing there is perfect communication between the children, the result is determined in advance.
If other people talk to the first child at the same time as you, then we have concurrent processes. We do not know which process will be handled by the infrastructure, so the final outcome is not determined in advance.
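To make the analogy concrete, here is a rough Go sketch (my own illustration, not part of the original game): each child is a goroutine, each hand-off is an unbuffered channel, and the phrase travels word by word along one chain of 9 children (serial) or along three shorter chains (parallel), then gets reconstructed at the receiving end.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// relay models one child: it forwards every word it receives to the next child.
func relay(in <-chan string, out chan<- string) {
	for word := range in {
		out <- word
	}
	close(out)
}

// chain wires up n children and returns the input of the first and the output of the last.
func chain(n int) (chan<- string, <-chan string) {
	first := make(chan string)
	in := first
	for i := 0; i < n; i++ {
		out := make(chan string)
		go relay(in, out)
		in = out
	}
	return first, in
}

func main() {
	words := strings.Fields("I like ice-cream so much.")

	// Serial infrastructure: one chain of 9 children, words sent one after another.
	in, out := chain(9)
	go func() {
		for _, w := range words {
			in <- w
		}
		close(in)
	}()
	var serial []string
	for w := range out {
		serial = append(serial, w)
	}
	fmt.Println("serial:  ", strings.Join(serial, " "))

	// Parallel infrastructure: three chains of 3 children, each carrying part of the phrase.
	parts := [][]string{words[0:2], words[2:3], words[3:5]} // "I like" / "ice-cream" / "so much."
	results := make([]string, len(parts))
	var wg sync.WaitGroup
	for i, part := range parts {
		wg.Add(1)
		go func(i int, part []string) {
			defer wg.Done()
			in, out := chain(3)
			go func() {
				for _, w := range part {
					in <- w
				}
				close(in)
			}()
			var got []string
			for w := range out {
				got = append(got, w)
			}
			results[i] = strings.Join(got, " ")
		}(i, part)
	}
	wg.Wait()
	fmt.Println("parallel:", strings.Join(results, " ")) // reconstructed on the receiving end
}
```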
Parallelism is the simultaneous execution of processes on multiple cores of a CPU, or on multiple CPUs (on a single motherboard).
Concurrency is when the effect of parallelism is achieved on a single core/CPU by using scheduling algorithms that divide the CPU's time (time-slicing). Processes are interleaved.
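As a rough illustration (my own sketch, assuming a machine with at least 4 cores): restricting the Go runtime to one OS core forces CPU-bound goroutines to be time-sliced, whereas allowing all cores lets them really run at the same time.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

var sink int64 // keeps the computation observable so it is not optimized away

// work performs a fixed amount of CPU-bound computation.
func work() {
	sum := 0
	for i := 0; i < 200_000_000; i++ {
		sum += i
	}
	atomic.AddInt64(&sink, int64(sum))
}

// run launches n CPU-bound goroutines and reports the wall-clock time.
func run(label string, n int) {
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			work()
		}()
	}
	wg.Wait()
	fmt.Printf("%s: %v\n", label, time.Since(start).Round(time.Millisecond))
}

func main() {
	// Concurrency: restrict Go to one OS core, so the four goroutines are
	// time-sliced (interleaved) and the total time is roughly 4x one task.
	runtime.GOMAXPROCS(1)
	run("1 core   (concurrent)", 4)

	// Parallelism: use all cores, so the goroutines really run simultaneously
	// and (given at least 4 cores) finish in roughly the time of one task.
	runtime.GOMAXPROCS(runtime.NumCPU())
	run("all cores (parallel)  ", 4)
}
```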
Units:
- 1 or many cores in a single CPU (pretty much all modern day processors)
- 1 or many CPUs on a motherboard (think old school servers)
- 1 application is 1 program (think Chrome browser)
- 1 program can have 1 or many processes (think each Chrome browser tab is a process)
- 1 process can have 1 or many threads from 1 program (Chrome tab playing a YouTube video in 1 thread, another thread spawned for the comments section, another for user login info)
- Thus, 1 program can have 1 or many threads of execution
- 1 process is thread(s) + memory resources allocated by the OS (heap, registers, stack, class memory)
The simplest and most elegant way of understanding the two in my opinion is this. Concurrency allows interleaving of execution and so can give the illusion of parallelism. This means that a concurrent system can run your YouTube video while you write up a document in Word, for example. The underlying OS, being a concurrent system, enables those tasks to interleave their execution. Because computers execute instructions so quickly, this gives the appearance of doing two things at once.
Parallelism is when such things really are executing in parallel. In the example above, you might find the video-processing code being executed on one core and the Word application running on another. Note that this means a concurrent program can also run in parallel! Structuring your application with threads and processes enables your program to exploit the underlying hardware and potentially execute in parallel.
Why not have everything be parallel then? One reason is because concurrency is a way of structuring programs and is a design decision to facilitate separation of concerns, whereas parallelism is often used in the name of performance. Another is that some things fundamentally cannot fully be done in parallel. An example of this would be adding two things to the back of a queue - you cannot insert both at the same time. Something must go first and the other behind it, or else you mess up the queue. Although we can interleave such execution (and so we get a concurrent queue), you cannot have it parallel.
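Here is a minimal Go sketch of that queue example (my own illustration): a mutex-guarded slice lets two goroutines enqueue concurrently, but each actual insertion is still serialized, one must go first and the other behind it.

```go
package main

import (
	"fmt"
	"sync"
)

// Queue is a minimal concurrent FIFO queue: many goroutines may enqueue
// "concurrently", but the mutex serializes each actual insertion.
type Queue struct {
	mu    sync.Mutex
	items []string
}

func (q *Queue) Enqueue(item string) {
	q.mu.Lock() // only one goroutine may modify the tail at a time
	defer q.mu.Unlock()
	q.items = append(q.items, item)
}

func main() {
	var q Queue
	var wg sync.WaitGroup
	for _, item := range []string{"from goroutine A", "from goroutine B"} {
		wg.Add(1)
		go func(item string) {
			defer wg.Done()
			q.Enqueue(item)
		}(item)
	}
	wg.Wait()
	// Either order is possible (concurrency is non-deterministic),
	// but the queue itself is never corrupted.
	fmt.Println(q.items)
}
```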
Hope this helps!
Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is perhaps the most promising upcoming language for highly concurrent programming.
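A stripped-down sketch of that crawler idea in Go (my own illustration, with made-up starting URLs): requests are issued concurrently, responses arrive in a non-deterministic order from run to run, and a shared set accumulates the pages already visited.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Hypothetical starting pages; a real crawler would also follow
	// links discovered in the responses.
	urls := []string{
		"https://example.com/",
		"https://example.org/",
		"https://example.net/",
	}

	visited := make(map[string]bool) // set of pages already fetched
	var mu sync.Mutex                // guards visited
	var wg sync.WaitGroup

	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			resp, err := http.Get(url) // requests are issued concurrently
			if err != nil {
				fmt.Println(url, "error:", err)
				return
			}
			resp.Body.Close()
			mu.Lock()
			visited[url] = true
			mu.Unlock()
			// Responses arrive in a different order on each run:
			// this is the non-deterministic control flow described above.
			fmt.Println("fetched", url, resp.Status)
		}(url)
	}
	wg.Wait()
	fmt.Println("visited", len(visited), "pages")
}
```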
Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising language for high-performance parallel programming on shared-memory computers (including multicores).
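Strassen's algorithm is too long to show here, but the same fork-join pattern can be sketched with something simpler. Here is a rough Go illustration (mine, not from the answer above) of a divide-and-conquer parallel sum: spawn child tasks, wait for all of them, and the result is deterministic however the scheduler runs them.

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum illustrates the fork-join pattern: split the problem, attack the
// sub-problems in parallel, and let the parent continue only after every
// subtask has finished. The result is deterministic regardless of scheduling.
func parallelSum(xs []int) int {
	const cutoff = 1 << 10 // below this size, just compute sequentially
	if len(xs) <= cutoff {
		sum := 0
		for _, x := range xs {
			sum += x
		}
		return sum
	}
	mid := len(xs) / 2
	var left, right int
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); left = parallelSum(xs[:mid]) }()
	go func() { defer wg.Done(); right = parallelSum(xs[mid:]) }()
	wg.Wait() // the parent task blocks until both children are done
	return left + right
}

func main() {
	xs := make([]int, 1_000_000)
	for i := range xs {
		xs[i] = i
	}
	fmt.Println(parallelSum(xs)) // always 499999500000, however the work is scheduled
}
```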
Copied from my answer: https://stackoverflow.com/a/3982782
To add onto what others have said:
Concurrency is like having a juggler juggle many balls. Regardless of how it seems, the juggler is only catching/throwing one ball per hand at a time. Parallelism is having multiple jugglers juggle balls simultaneously.
Concurrency: If two or more problems are solved by a single processor.
Parallelism: If one problem is solved by multiple processors.