I wanted to reply to a comment about multithreading, from @hello_earth, 201510131124, but I don't have enough reputation points on Stack Overflow (I've mostly posted on Super User up until now):
Multithreading is typically not efficient for copying files from one storage device to another, because the fastest throughput is reached with sequential reads, and using multiple threads will make a HDD rattle and grind like crazy as it reads or writes several files at the same time. Since a HDD can only access one file at a time, it must read or write a chunk from one file, then move the heads to a chunk of another file located in a different area, which slows down the process considerably (I don't know how a SSD would behave in such a case). It is both inefficient and potentially harmful: the mechanical stress is considerably higher when the heads repeatedly sweep across the platters to reach several areas in short succession, rather than staying in the same spot to read one large contiguous file.
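To illustrate the contention effect, here is a rough Python sketch (not what I actually used; the folder path comes from the command line and the thread count of 8 is just an example) that times reading a set of files sequentially versus with several threads hitting the same disk at once. On a spinning HDD the threaded pass is typically slower; note that the OS file cache can skew the result, so run it on a data set larger than RAM or on a cold cache:

```python
# Rough sketch: sequential reads vs. several threads reading from the same disk.
# On a HDD the threaded run usually loses because the heads keep seeking
# between files. Results are only meaningful if the files are not already
# cached in RAM (use a large folder or a cold cache).
import sys
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

CHUNK = 1024 * 1024  # read in 1 MiB chunks

def read_whole(path: Path) -> int:
    """Read one file sequentially and return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total

def sequential(paths):
    return sum(read_whole(p) for p in paths)

def threaded(paths, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(read_whole, paths))

if __name__ == "__main__":
    folder = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    files = [p for p in folder.iterdir() if p.is_file()]

    t0 = time.perf_counter()
    sequential(files)
    t1 = time.perf_counter()
    threaded(files)
    t2 = time.perf_counter()

    print(f"sequential: {t1 - t0:.2f} s, 8 threads: {t2 - t1:.2f} s")
```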
I discovered this when batch-checking the MD5 checksums of a very large folder full of video files with md5deep: with the default options the analysis was multithreaded, so there were 8 threads on an i7-6700K CPU, and it was excruciatingly slow. Then I added the -j1 option, meaning 1 thread, and it proceeded much faster, since the files were now read sequentially.
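For reference, a single-threaded pass like the one -j1 gives you is easy to reproduce in plain Python (this is just a sketch, the folder path is a placeholder): walk the tree and hash each file one after the other, so the disk is always read sequentially.

```python
# Sketch of a single-threaded MD5 pass over a folder tree (what md5deep -j1
# amounts to): one file at a time, read in sequential chunks.
import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

for p in sorted(Path("D:/videos").rglob("*")):  # placeholder folder
    if p.is_file():
        print(md5_of(p), p)
```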
Another consideration that follows from this is that the transfer speed will be significantly higher if the files are not fragmented, and also, more marginally, if they are located at the beginning of a hard disk drive, which corresponds to the outermost parts of the platters, where the linear velocity is highest (that aspect is irrelevant with a solid state drive or other flash memory based device).
Also, the original poster wanted “the most safe, efficient and fast way to achieve such a time consuming process”. I'd say that one has to choose a compromise favoring either speed/efficiency or safety: if you want safety, you have to check that each file was copied flawlessly (by comparing MD5 checksums, or with something like WinMerge); if you don't do that, you can never be 100% sure that there weren't some SNAFUs in the process (hardware or software issues); if you do that, you have to spend roughly twice as much time on the task.
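The verification step itself can be as simple as this Python sketch (the source and destination paths are placeholders, and it only compares by checksum, it doesn't fix anything): hash every source file and its copy and report any mismatch, which is exactly why the “safe” route roughly doubles the total I/O time.

```python
# Sketch of a post-copy verification pass: hash each source file and its copy
# and report mismatched or missing destination files. Paths are placeholders.
import hashlib
from pathlib import Path

SRC = Path("D:/source")   # placeholder: original files
DST = Path("E:/backup")   # placeholder: copied files

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

bad = []
for src in SRC.rglob("*"):
    if not src.is_file():
        continue
    dst = DST / src.relative_to(SRC)
    if not dst.exists() or md5_of(src) != md5_of(dst):
        bad.append(src)

print("mismatched or missing:", len(bad))
for p in bad:
    print(" ", p)
```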
For instance: I relied on a little tool called SynchronizeIt! for my file copying purposes, because it has the huge advantage, compared to most similar tools, of preserving all timestamps (including directory timestamps, like Robocopy does with the /DCOPY:T switch), and it has a streamlined interface with just the options I need. But I discovered that some files were always corrupted after a copy, truncated after exactly 25000 bytes (so the copy of a 1 GB video, for instance, had 25000 good bytes followed by 1 GB of zeroes; the copy process was abnormally fast, taking only a split second, which is what triggered my suspicion in the first place).

I reported this issue to the author a first time in 2010, but he chalked it up to a hardware malfunction and didn't think twice about it. I kept using SynchronizeIt!, but started to check files thoroughly every time I made a copy (with WinMerge or Total Commander), and when files ended up corrupted I used Robocopy instead. Files which were corrupted with SynchronizeIt!, when they were copied with Robocopy and then copied again with SynchronizeIt!, were copied flawlessly, so there was something in the way they were recorded on the NTFS partition which confused that software, and which Robocopy somehow fixed.

Then in 2015 I reported it again, after having identified more patterns regarding which files got corrupted: they had all been downloaded with particular download managers. This time the author did some digging and found the explanation: his tool had trouble copying files with the little known “sparse” attribute, and some download managers set this attribute to save space when downloading files in multiple chunks. He provided me with an updated version which correctly copies sparse files, but hasn't released it on his website (the currently available version is 3.5 from 2009; the version I now use is a 3.6 beta from October 2015). So if you want to try that otherwise excellent software, be aware of that bug, and whenever you copy important files, thoroughly verify that each copied file is identical to the source (using a different tool) before deleting it from the source.
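If you want to know in advance which files carry that sparse attribute, you can check a single file with the built-in `fsutil sparse queryflag` command, or scan a whole folder with a small Python sketch like this one (Windows only; the folder path is a placeholder):

```python
# Sketch (Windows only): list files that carry the "sparse" attribute, i.e.
# the ones that tripped up SynchronizeIt! 3.5. Uses the
# FILE_ATTRIBUTE_SPARSE_FILE bit that os.stat() exposes on Windows.
import os
import stat
from pathlib import Path

def is_sparse(path: Path) -> bool:
    attrs = os.stat(path).st_file_attributes  # Windows-only stat field
    return bool(attrs & stat.FILE_ATTRIBUTE_SPARSE_FILE)

for p in Path("D:/downloads").rglob("*"):  # placeholder folder
    if p.is_file() and is_sparse(p):
        print("sparse:", p)
```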