This is an extension to my previous question:
How does blocking mode in unix/linux sockets work?
What I gather from the Internet so far is that all processes invoking blocking calls are put to sleep until the event they are waiting for occurs.
//Adapted ASPI original source code...
DWORD startStopUnit (HANDLE handle, BOOL bLoEj, BOOL bStart)
{
    DWORD dwStatus;
    HANDLE heventSRB;
    SRB_ExecSCSICmd s;

    //here: manual-reset event, initially non-signaled, signaled on SRB completion
    heventSRB = CreateEvent (NULL, TRUE, FALSE, NULL);

    memset (&s, 0, sizeof (s));
    s.SRB_Cmd      = SC_EXEC_SCSI_CMD;
    s.SRB_HaID     = 0;
    s.SRB_Target   = 0;
    s.SRB_Lun      = 0;
    s.SRB_Flags    = SRB_EVENT_NOTIFY;      //signal heventSRB when the command completes
    s.SRB_SenseLen = SENSE_LEN;
    s.SRB_CDBLen   = 6;
    s.SRB_PostProc = (LPVOID) heventSRB;

    s.CDBByte[0]  = 0x1B;                   //SCSI START STOP UNIT opcode
    s.CDBByte[4] |= bLoEj  ? 0x02 : 0x00;   //LOEJ bit: load/eject the medium
    s.CDBByte[4] |= bStart ? 0x01 : 0x00;   //START bit: spin the unit up/down

    ResetEvent (heventSRB);
    dwStatus = SPTISendASPI32Command (handle, (LPSRB) &s);
    if (dwStatus == SS_PENDING)
    {
        //and here, I don't know a better way to wait for something to finish without burning processor cycles
        WaitForSingleObject (heventSRB, DEFWAITLEN);
    }
    CloseHandle (heventSRB);

    if (s.SRB_Status != SS_COMP)
    {
        printf ("Error\n");
        return SS_ERR;
    }
    printf ("No error\n");
    return s.SRB_Status;
}
If, in your application's use case, a context switch would be more expensive than burning a few CPU cycles (because the condition is guaranteed to be satisfied within a short time), then busy waiting may be good for you.
Otherwise, you can forcefully relinquish the CPU by sleeping or by cond_wait()ing.
Another scenario I can think of for forcefully context-switching out is the following:

while (condition)
    sleep(0);
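To make the contrast concrete, here is a minimal sketch of relinquishing the CPU with a condition variable instead of busy waiting (assuming POSIX threads; the ready flag and the lock/cond names are made up for illustration):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static bool            ready = false;   /* hypothetical condition we wait on */

/* Waiter: sleeps inside pthread_cond_wait(); the kernel wakes it up. */
void wait_until_ready (void)
{
    pthread_mutex_lock (&lock);
    while (!ready)                          /* loop guards against spurious wakeups */
        pthread_cond_wait (&cond, &lock);   /* atomically releases the lock and sleeps */
    pthread_mutex_unlock (&lock);
}

/* Signaler: sets the condition and wakes the sleeping thread. */
void mark_ready (void)
{
    pthread_mutex_lock (&lock);
    ready = true;
    pthread_cond_signal (&cond);
    pthread_mutex_unlock (&lock);
}

Unlike the while (condition) sleep(0); loop above, the waiter consumes no CPU at all while blocked; it simply is not scheduled until the signal arrives.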
First of all, you have a misconception:
Blocking calls are not "busy waiting" or "spin locks". A blocking call sleeps -- the CPU works on other tasks in the meantime, and no CPU time is wasted.
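As an illustration, here is what a blocking call looks like in practice (a minimal sketch assuming a connected TCP socket fd in its default blocking mode; error handling omitted):

#include <sys/types.h>
#include <sys/socket.h>

/* In blocking mode the kernel puts this process to sleep inside recv()
 * until data arrives; it burns no CPU cycles while waiting. */
ssize_t read_request (int fd, char *buf, size_t len)
{
    return recv (fd, buf, len, 0);   /* sleeps here -- not a spin loop */
}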
On your question about blocking calls:
Blocking calls are easier -- easier to understand, easier to develop, easier to debug.
But they are a resource hog. If you don't use threads, a blocking call holds up your other clients; if you use threads, each thread takes up memory and other system resources. Even if you have plenty of memory, switching threads makes the caches cold and reduces performance.
This is a trade-off: faster development and maintainability, or scalability.
I will try to be to the point, since enough explanation has already been provided in the other answers; drawing on all of them, I think this completes the picture.
To me, the trade-off is between the responsiveness and the throughput of the system.
Responsiveness can be considered from two perspectives: the system as a whole, and the individual process.
For the responsiveness of the system as a whole, I think blocking calls are the best way to go, as they give the CPU to the other processes in the ready queue while the blocking call sits in the blocked state.
And of course, for the responsiveness of a particular process, we would consider a busy-wait/spin-lock model (a minimal sketch follows below).
Now, to increase overall system responsiveness we cannot simply decrease the scheduler's time slice (making it fine-grained), as that would waste too much CPU on context switches, and the throughput of the system would decrease drastically. The blocking model, by contrast, obviously increases the system's throughput, as blocked calls consume no CPU slices and hand them to the other/next process in the ready queue.
I think the best way to design for per-process responsiveness without hurting overall responsiveness and throughput is to implement a priority-based scheduler, with attention to priority-inversion issues, if the added complexity does not bother you :).
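For reference, here is a minimal spin-lock sketch using C11 atomics (the function names are illustrative, not from any particular library); the loop burns CPU for exactly as long as the lock is held, which is the cost being traded against responsiveness:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock (void)
{
    /* Busy-wait: keeps the CPU occupied until the flag is cleared. */
    while (atomic_flag_test_and_set_explicit (&lock, memory_order_acquire))
        ;   /* spin: no sleep, no context switch */
}

void spin_unlock (void)
{
    atomic_flag_clear_explicit (&lock, memory_order_release);
}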
Going to sleep until the scheduler wakes you is the normal/preferred thing to do.
Spinning (the alternative way to wait, without sleeping) is less usual and has the following effects:
Keeps the CPU busy, and prevents other threads from using the CPU (until/unless the spinning thread finishes its timeslice and is preempted)
Can stop spinning the very moment the thing you're waiting for happens (because you're continuously checking for that event, and you don't pay the wake-up latency, because you're already awake)
Doesn't invoke the CPU instructions required to go to sleep and to wake up again
Spinning can be more efficient (less total CPU) than going to sleep if the length of the delay is very short, e.g. if the delay is only as long as it takes to execute 100 CPU instructions.
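That observation suggests a hybrid: spin briefly in case the event arrives within a handful of instructions, then fall back to yielding the CPU. A rough sketch, assuming POSIX (the iteration bound and the sched_yield() fallback are arbitrary choices for illustration):

#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_LIMIT 100   /* arbitrary: spinning only pays off for very short waits */

void wait_for_flag (atomic_bool *flag)
{
    /* Phase 1: spin, hoping the flag flips within ~SPIN_LIMIT checks. */
    for (int i = 0; i < SPIN_LIMIT; i++)
        if (atomic_load_explicit (flag, memory_order_acquire))
            return;

    /* Phase 2: give up the CPU instead of burning it. */
    while (!atomic_load_explicit (flag, memory_order_acquire))
        sched_yield ();   /* let the scheduler run someone else */
}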
A spin lock burns CPU, and hammers the polled resource path, in a continued waste of resources for as long as the desired event has not occurred.
A blocking operation differs most importantly by leaving the CPU and the associated resource path alone, and installing a wait of some form on the resource from which the desired event is expected.
In a multitasking or multithreaded/multiprocessor environment (the usual case for a long time now), where other operations can proceed while the desired event has not yet arrived, burning CPU and resource-access paths leads to an awful waste of processing power and time.
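A concrete instance of "installing a wait on the resource" is poll() (a minimal sketch assuming a readable file descriptor fd; error handling omitted), where the kernel parks the caller until the descriptor becomes ready:

#include <poll.h>

/* Blocks inside poll() until fd has data to read; the kernel keeps
 * the process off the CPU for the entire wait. */
void wait_readable (int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };
    poll (&p, 1, -1);   /* -1 timeout: wait indefinitely */
}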
On a hyperthreaded system (which I think you are referring to in your question), it is important to note that CPU threads are sliced at a very fine granularity. I would stick my neck out and observe that any event you would tend to block on takes long enough to arrive that it more than compensates for the small extra slice of time the thread waits before being unblocked.
I think J-16's point is about the situation where a sleeping (blocked) thread leaves its code and data space unused while in the blocked state. This can make the system relinquish resources (like the data/code caches), which then need to be refilled when the block is released. Therefore, subject to conditions, blocking may result in more resource wastage.
This is also a valid note and should be checked in design and implementation.
But blocking is still better than spin locks in most conditions.