What is “non-blocking” concurrency and how is it different than normal concurrency?

  1. What is "non-blocking" concurrency and how is it different than normal concurrency using threads? Why don't we use non-blocking concurrency in all the scenarios where concurrency is required? Is there overhead for using non-blocking concurrency?
  2. I have heard that non-blocking concurrency is available in Java. Are there any particular scenarios where we should use this feature?
  3. Is there a difference or advantage to using one of these methods with a collection? What are the trade-offs?
  • 2021-01-30 02:48

    1: What is "non-blocking" concurrency and how is it different than normal concurrency using threads? Why don't we use non-blocking concurrency in all the scenarios where concurrency is required? Is there overhead for using non-blocking concurrency?

    Non-blocking algorithms do not use specific object-locking schemes to control concurrent access to memory (synchronized blocks and standard object locks are examples that use object/function-level locks to reduce concurrent-access problems in Java). Instead they use some form of low-level instruction to perform an atomic compare-and-swap on a memory location; if it fails it simply returns false rather than erroring out, and if it succeeds the update took effect and you move on. Generally this is attempted in a loop until it works; since there should only be small windows of time (hopefully) when it fails, the loop just runs a few extra iterations until it can set the memory it needs to, as sketched below.
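
    As a minimal illustration of that retry loop (my own sketch, not tied to any particular system), here is a lock-free counter increment built directly on AtomicInteger.compareAndSet; in real code you would simply call AtomicInteger.incrementAndGet, which does exactly this internally:

    import java.util.concurrent.atomic.AtomicInteger;

    public class LockFreeCounter {
        private final AtomicInteger count = new AtomicInteger(0);

        public int increment() {
            int current;
            int next;
            do {
                current = count.get();  //read the current value
                next = current + 1;     //compute the new value locally
                //compareAndSet succeeds only if nobody changed the value since we
                //read it; otherwise it returns false and we loop and try again
            } while (!count.compareAndSet(current, next));
            return next;
        }
    }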

    This is not always used because, even for relatively trivial use cases, it is much more complex from a code perspective than standard Java synchronization. Moreover, for most uses the performance impact of the locking is trivial compared to other sources in the system. In most cases the performance requirements are not nearly high enough to warrant even looking at this.

    Finally, as the JDK/JRE evolves, the core designers keep improving the internal language implementations to incorporate the most efficient means of achieving these ends in the core constructs. The moment you move away from the core constructs, you lose the benefit of those automatic improvements when you bump your Java version, because you are relying on less standard implementations (for instance JAXB vs. JiBX: JAXB used to grossly underperform JiBX, but as of Java 7 it is equal if not faster in most cases I've tested).

    If you look at the code example below, you can see the 'overhead' locations. It's not really overhead per se, but the code must be extremely efficient in order to work without locking and actually perform better than a standard synchronized version, because of the looping. Even slight modifications can take code that performs several times better than the standard approach to code that performs several times worse (for instance, object instantiations that don't need to be there, or even a quick conditional check; you're talking about saving cycles here, so the margin between success and failure is very slim).

    2: I have heard that non-blocking concurrency is available in Java. Are there any particular scenarios where we should use this feature?

    In my opinion you should only use this if you A) have a proven performance problem in your running system, in production, on its production hardware; B) can prove that the only inefficiency left in the critical section is locking-related; C) have firm buy-in from your stakeholders that they are willing to accept non-standard, less maintainable code in return for the performance improvement; and D) can prove numerically, on your production hardware, that the change will actually help at all.

    3: Is there a difference or advantage to using one of these methods with a collection? What are the trade-offs?

    The advantage is performance; the trade-off is, first, that it's more specialized code (many developers won't know what to make of it, making it harder for a new team or a new hire to come up to speed; remember that the majority of the cost of software is labor, so you have to watch the total cost of ownership you impose through design decisions), and, second, that any modification must be tested again to ensure the construct is still actually faster. Generally, a system that requires this will also require performance or load-and-throughput testing for any change. If you aren't doing those tests, I would argue that you almost certainly don't need to even think about these approaches, and would almost certainly not see any value from the increased complexity (assuming you got it all to work correctly).

    Again, I just have to restate all the standard warnings against optimization in general, as many of those arguments are the same ones I would use against this as a design. Many of the drawbacks are the same as for any optimization; for instance, whenever you change the code you have to ensure that your 'fix' doesn't introduce inefficiency into some construct that was only placed there to improve performance, and deal with that (up to and including refactoring the entire section to remove the optimizations) if the fix is critical and it reduces performance.

    It is really, really easy to mess this up in ways that are very difficult to debug, so if you don't have to do it, don't (I have only found a few scenarios where you ever would, and even those were pretty questionable to me; I would have preferred not to do it). Use the standard stuff and everyone will be happier!

    Discussion/Code

    Non-blocking or lock-free concurrency avoids the use of specific object locks to control shared-memory access (such as synchronized blocks or explicit locks). There is a performance advantage when the code section is non-locking; however, the code in the CAS loop (if that is the route you take; there are other methods in Java) must be very, very efficient, or it will end up costing you more performance than you gain.

    Like all performance optimizations, the extra complexity is not worth the effect for most use cases. Cleanly written Java using standard constructs will work as well as, if not better than, most optimizations (and will actually allow your organization to maintain the software more easily once you're gone). To my mind this only makes sense in very high-performance sections with proven performance issues where the locking is the only source of inefficiency. If you do not have a known and quantified performance problem, avoid any technique like this until you have proven that the problem is actually the locking and not due to other inefficiencies in the code. Once you have a proven locking-based performance problem, make sure you have some kind of metric in place to verify that this type of setup will actually run faster for you than standard Java concurrency.

    The implementation I have done for this uses CAS operations and the Atomic family of classes. This basic code has never locked up or raised any errors for me in this use case (randomly sampling input and output for offline testing from a high-throughput translation system). It basically works like this:

    You have some object that is shared between threads, declared as either one of the AtomicXXX primitives (AtomicInteger, AtomicLong, etc.) or an AtomicReference (for most non-trivial use cases you'll end up with the AtomicReference version).

    When the given value/object is needed, you retrieve it from the Atomic wrapper. With the primitive Atomics this gives you a local copy of the value; with an AtomicReference you make your own copy of the referenced object and apply your modification to that copy. From there you use compareAndSet as the condition of a while loop to attempt to install your update from your thread; if another thread got there first it returns false instead of blocking. This iterates until it works (the code in this loop must be very efficient and simple).

    You can look up CAS operations to see how they work; the idea is that the hardware executes them as a single atomic instruction that compares the current value of the memory location with the value you expected and writes the new value only if they match.

    If the compareAndSet fails, you get the object again from the Atomic wrapper, make a fresh copy, apply your modifications again, and then retry the compare-and-set until it works. There is no specific lock; you're just trying to swap your updated object back into memory, and if that fails you simply try again whenever your thread gets control again.

    Code for a simple case with a List is below. Note that the loop copies the list before modifying it: compareAndSet only protects the reference swap, so the shared list itself must never be mutated in place.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;

    /* field declaration */
    //The reference is initialized with an empty list so it is never null; removing
    //null checks keeps the CAS loop tight enough to actually beat the synchronized
    //version in my use case
    private final AtomicReference<List<SampleRuleMessage>> specialSamplingRulesAtomic =
            new AtomicReference<List<SampleRuleMessage>>(new ArrayList<SampleRuleMessage>());


    /* start of interesting code section */

        //'message' is the SampleRuleMessage produced by the surrounding method
        List<SampleRuleMessage> current;
        List<SampleRuleMessage> updated;
        do {
            current = specialSamplingRulesAtomic.get();          //snapshot the shared reference
            updated = new ArrayList<SampleRuleMessage>(current); //copy it; never mutate the shared list in place
            updated.add(message);                                //apply the change to the copy only
            //retry if another thread swapped in a new list after our snapshot
        } while (!specialSamplingRulesAtomic.compareAndSet(current, updated));

    /* end of interesting code section */
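
    If you are on Java 8 or later, the same copy-then-swap retry can be written more compactly with AtomicReference.updateAndGet, which runs the CAS loop for you. This is just a sketch of the equivalent call (my addition, not part of the original answer), reusing the specialSamplingRulesAtomic field and message variable from above:

        //Java 8+: updateAndGet retries the CAS internally until it succeeds.
        //As above, the shared list is treated as immutable: copy, modify the copy, swap.
        specialSamplingRulesAtomic.updateAndGet(current -> {
            List<SampleRuleMessage> copy = new ArrayList<SampleRuleMessage>(current);
            copy.add(message);
            return copy;
        });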
    
  • 2021-01-30 02:51

    1) Non-blocking synchronization means that the risk of deadlock is removed: no thread will ever wait forever to acquire a lock held by another thread.

    2) More on non-blocking synchronization algorithms in Java.

  • 2021-01-30 02:51
    1. Wikipedia is a great resource for any Computer Science student. Here is an article on Non-blocking synchronization - http://en.wikipedia.org/wiki/Non-blocking_synchronization

    2. Non-blocking synchronization is available in any language; it is a matter of how the programmer designs their multi-threaded application.

    3. You should only use locking (e.g. synchronized (list)) when necessary, because of the time it takes to acquire the lock. In Java, Vector is a thread-safe data structure that is very similar to Java's ArrayList.
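
    A quick sketch of the difference (my own example, not part of the original answer): with a plain ArrayList you take the lock yourself, whereas Vector synchronizes each individual call for you; note that compound actions such as iteration still need an explicit lock in both cases:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Vector;

    public class LockingExample {
        public static void main(String[] args) {
            //plain ArrayList: you must lock around every access yourself
            List<String> list = new ArrayList<String>();
            synchronized (list) {
                list.add("hello");
            }

            //Vector: each individual method call is synchronized internally,
            //but compound actions (iteration, check-then-act) still need a lock
            Vector<String> vector = new Vector<String>();
            vector.add("hello");
            synchronized (vector) {
                for (String s : vector) {
                    System.out.println(s);
                }
            }
        }
    }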

  • 2021-01-30 02:55

    What is Non-blocking Concurrency and how is it different.

    Formal:

    In computer science, non-blocking synchronization ensures that threads competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress; wait-free if there is also guaranteed per-thread progress. (wikipedia)

    Informal: One of the biggest advantages of non-blocking over blocking is that threads do not have to be suspended and woken up by the OS. That overhead can amount to anywhere from about 1 ms to a few tens of ms, so removing it can be a big performance gain. In Java it also means you can choose to use non-fair locking, which can give much higher system throughput than fair locking.
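
    To make the fair vs. non-fair point concrete, here is a small sketch of my own (not from the original answer) using java.util.concurrent.locks.ReentrantLock, whose constructor takes a fairness flag; the non-fair default usually gives higher throughput at the cost of possible thread starvation:

    import java.util.concurrent.locks.ReentrantLock;

    public class FairnessExample {
        //default constructor = non-fair lock: usually higher throughput, but a
        //waiting thread can be overtaken ("barged") by a newly arriving thread
        private final ReentrantLock nonFair = new ReentrantLock();

        //fair lock: threads acquire roughly in FIFO order, at a throughput cost
        private final ReentrantLock fair = new ReentrantLock(true);

        public void doWork() {
            nonFair.lock();
            try {
                //critical section
            } finally {
                nonFair.unlock();
            }
        }
    }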

    I have heard that this is available in Java. Are there any particular scenarios we should use this feature

    Yes, since Java 5. In fact, in Java you should basically try to meet your needs with java.util.concurrent as much as possible (which happens to use non-blocking concurrency a lot, though you don't have to worry about that explicitly in most cases). Only if you have no other option should you fall back on synchronized wrappers (.synchronizedList() etc.) or the manual synchronized keyword. That way you end up, most of the time, with more maintainable, better-performing applications.

    Non-blocking concurrency is particularly advantageous when there is a lot of contention. You can't use it when you need blocking (fair-locking, event-driven stuff, queue with maximum length etc.), but if you don't need that, non-blocking concurrency tends to perform better in most conditions.
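
    As one concrete example (my own sketch, not from the original answer): ConcurrentLinkedQueue from java.util.concurrent is documented as using a non-blocking, CAS-based algorithm, so producers and consumers never hold a lock:

    import java.util.concurrent.ConcurrentLinkedQueue;

    public class QueueExample {
        //lock-free, unbounded queue from java.util.concurrent
        private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();

        public void produce(String item) {
            queue.offer(item);   //never blocks
        }

        public String consume() {
            return queue.poll(); //returns null when empty instead of blocking
        }
    }

    If you actually need the blocking behavior mentioned above (for instance a queue with a maximum length), you would reach for something like ArrayBlockingQueue instead.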

    Is there a difference/advantage of using one of these methods for a collection. What are the trade offs

    Both have the same behavior (the bytecode should be essentially the same). But I suggest using Collections.synchronizedList() because it's shorter, which leaves less room to screw up!

  • 2021-01-30 02:57

    1) What is Non-blocking Concurrency and how is it different.

    A task (thread) is non-blocking when it doesn't cause other tasks (threads) to wait until the task is finished.

    2) I have heard that this is available in Java. Are there any particular scenarios we should use this feature

    Java supports multithreading out of the box. You can take advantage of it to run multiple tasks concurrently; when well written and implemented, this can speed up a program running on a machine with multiple logical CPUs. To start, there are java.lang.Runnable and java.lang.Thread as low-level concurrency building blocks, and then there is high-level concurrency in the flavor of the java.util.concurrent API; both are sketched below.
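
    A brief sketch of those two levels (my own example, not from the original answer): the low-level Runnable/Thread API next to an ExecutorService from java.util.concurrent:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ConcurrencyLevels {
        public static void main(String[] args) {
            Runnable task = new Runnable() {
                @Override
                public void run() {
                    System.out.println("running in " + Thread.currentThread().getName());
                }
            };

            //low level: create and start a thread yourself
            new Thread(task).start();

            //high level: hand tasks to an executor and let it manage the threads
            ExecutorService pool = Executors.newFixedThreadPool(4);
            pool.submit(task);
            pool.shutdown();
        }
    }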

    3) Is there a difference/advantage of using one of these methods for a collection. What are the trade offs

    I would use Collections#synchronizedList(), not only because that's more robust and convenient, but also because a Map isn't a List. There's no need to attempt to homegrow one when the API already offers the facility.


    That said, there's a Sun tutorial about Concurrency. I recommend you work through it.

  • 2021-01-30 02:57

    Non-blocking synchronization is the same as blocking synchronization in the sense that both are kinds of synchronization; the only difference is that non-blocking synchronization is faster overall.

    For starters, you want to use synchronization only when multiple threads access the same resource in RAM. You can't use this kind of synchronization when accessing things on disk; or, better said, for disk you need to use file locks.

    That said, how can you synchronize if no thread ever blocks?

    The answer is optimistic locking. This idea has existed for at least 20 years. Maybe more.

    You may have heard of the Lisp language. As it turns out, functional languages never modify their parameters; they only return new values, so they never need to synchronize.

    In Lisp you can have shared state, but it gets tricky. So most programs can run in parallel and never worry about synchronization.

    The idea of optimistic locking is that all threads modify shared values freely, but each does its work in a local area and only applies the modification at the end, in one step, atomically, using CAS. CAS stands for Compare-And-Swap; it executes as a single atomic CPU instruction, and it has been implemented in CPUs for at least 20 years.
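
    A small sketch of that read-modify-CAS pattern (my own illustration, not from the original answer): each thread computes its new value locally and only publishes it atomically with compareAndSet, retrying if another thread won the race in the meantime. Here it is used to track a running maximum without any lock:

    import java.util.concurrent.atomic.AtomicLong;

    public class LockFreeMax {
        private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

        public void submit(long sample) {
            long current;
            do {
                current = max.get();     //read the shared state
                if (sample <= current) {
                    return;              //our local result changes nothing, no CAS needed
                }
                //publish atomically; retry only if another thread raced ahead of us
            } while (!max.compareAndSet(current, sample));
        }
    }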

    An excellent explanation of what CAS is: https://www.cs.umd.edu/class/fall2010/cmsc433/lectures/nonBlocking.pdf

    So if there is a conflict during the modification, it only affects the writer that lost the race (which simply retries); the rest are already done with their updates.

    Moreover, if there is no contention whatsoever, non-blocking algorithms perform a lot faster than their blocking counterparts.

    Tutorial on non-blocking algorithms in Java with code examples you can use in real life: http://tutorials.jenkov.com/java-concurrency/non-blocking-algorithms.html
