I have rewritten a part of the code in C. While testing it, I logged the resource usage using the getrusage(2) C API.
Before changing the code:
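For context, a minimal sketch (an assumption, not the poster's actual logging code) of reading the context switch counters that getrusage(2) exposes, ru_nvcsw (voluntary) and ru_nivcsw (involuntary), since those are the numbers the rest of this answer is about:

// Minimal sketch, assuming ru_nvcsw/ru_nivcsw are the counters being logged.
#include <cstdio>
#include <sys/resource.h>

static void log_context_switches(const char *label)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        std::printf("%s: voluntary=%ld involuntary=%ld\n",
                    label, ru.ru_nvcsw, ru.ru_nivcsw);
}

int main()
{
    log_context_switches("before work");
    // ... run the workload being measured ...
    log_context_switches("after work");
    return 0;
}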
This is not an answer to exactly your question. Anyway, @Klas points out that
An involuntary context switch occurs when a thread has been running too long.
So my idea is that you can check whether your threads run too long. Use perf to find the places in your code where the context switches happen most often, and possibly compare the measurements for the old version of your program with the new one.
Perf (https://perf.wiki.kernel.org/index.php/Tutorial) has the event context-switches. You can measure it and collect the stack traces where it happens. This is an example of measuring context switches:
perf record -e cs -g -p `pidof my_test` sleep 5
And then check where they happen. For example, here is a C++ program with an infinite loop and no syscalls at all. All of its context switches have a stack trace going through my function my_thread_func:
perf report --stdio -g --kallsyms=/boot/System.map-2.6.32-431.el6.x86_64
# Samples: 7 of event 'cs'
# Event count (approx.): 7
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .............................
#
100.00% my_test [kernel.kallsyms] [k] perf_event_task_sched_out
|
--- perf_event_task_sched_out
schedule
retint_careful
my_thread_func(void*)
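For reference, a rough reconstruction of what that test program might look like (an assumption on my part, not the exact source that produced the report above): my_thread_func only burns CPU and never enters the kernel, so every context switch it suffers is involuntary and perf attributes it to this frame.

// Sketch of a no-syscall busy loop in a thread (assumed shape of the test program).
#include <pthread.h>

static void *my_thread_func(void *)
{
    volatile unsigned long counter = 0;
    for (;;)            // infinite loop, never enters the kernel
        ++counter;
    return nullptr;     // unreachable
}

int main()
{
    pthread_t t;
    pthread_create(&t, nullptr, my_thread_func, nullptr);
    pthread_join(t, nullptr);   // never returns
    return 0;
}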
In contrast, this is a measurement for a C++ program that has an infinite loop with lots of syscalls:
# Samples: 6 of event 'cs'
# Event count (approx.): 6
#
# Overhead Command Shared Object Symbol
# ........ ............... ................. .............................
#
100.00% my_test_syscall [kernel.kallsyms] [k] perf_event_task_sched_out
|
--- perf_event_task_sched_out
schedule
|
|--83.33%-- sysret_careful
| syscall
|
--16.67%-- retint_careful
syscall
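And a matching sketch for the syscall-heavy case (again an assumption, not the original source): the same thread shape, but every iteration enters the kernel through the raw syscall(2) wrapper, which is why most of the switches above sit on the syscall return path (sysret_careful) with a syscall user frame.

// Sketch of a loop that makes a real syscall on every iteration (assumed shape).
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *my_thread_func(void *)
{
    for (;;)
        syscall(SYS_gettid);   // cheap syscall issued in a tight loop
    return nullptr;            // unreachable
}

int main()
{
    pthread_t t;
    pthread_create(&t, nullptr, my_thread_func, nullptr);
    pthread_join(t, nullptr);  // never returns
    return 0;
}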