I was reading an interesting article about memory barriers and their role in JVM concurrency, and the example implementation of Dekker's algorithm caught my attention:
    volatile boolean intentFirst = false;
    volatile boolean intentSecond = false;
    volatile int turn = 0;

        // code run by first thread       // code run by second thread
     1  intentFirst = true;               intentSecond = true;
     2
     3  while (intentSecond) {            while (intentFirst) {            // volatile read
     4      if (turn != 0) {                  if (turn != 1) {             // volatile read
     5          intentFirst = false;              intentSecond = false;
     6          while (turn != 0) {}              while (turn != 1) {}
     7          intentFirst = true;               intentSecond = true;
     8      }                                 }
     9  }                                 }
    10  criticalSection();                criticalSection();
    11
    12  turn = 1;                         turn = 0;                        // volatile write
    13  intentFirst = false;              intentSecond = false;            // volatile write
The article mentions that since volatiles are sequentially consistent, the critical section is guaranteed to be executed by only one thread at a time, which checks out with the happens-before guarantee. However, does that still hold if the two threads execute the same logic repeatedly in a loop? My understanding is that in a subsequent iteration the OS scheduler may pause the second thread just before line 7 executes and let the first thread enter the critical section; if the scheduler then resumes the second thread at that moment, both threads would be in the critical section simultaneously. Is my understanding correct, and is this example given with the idea that the code is executed only once? If so, I'd assume the answer to my question is "no", since volatiles are used only for memory visibility guarantees.
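To make the looped scenario concrete, here is a minimal sketch of what I mean by "executing the same logic in a loop". The class name `DekkerLoop`, the `ITERATIONS` constant, and the shared `counter` are my own additions for illustration (not from the article); the counter is deliberately non-volatile, so it only ends up correct if mutual exclusion actually holds across iterations.

```java
class DekkerLoop {
    static volatile boolean intentFirst = false;
    static volatile boolean intentSecond = false;
    static volatile int turn = 0;

    // Deliberately non-volatile: protected only by Dekker's mutual exclusion.
    static int counter = 0;
    static final int ITERATIONS = 10_000;

    static void first() {
        for (int i = 0; i < ITERATIONS; i++) {
            intentFirst = true;
            while (intentSecond) {          // volatile read
                if (turn != 0) {
                    intentFirst = false;
                    while (turn != 0) {}    // busy-wait for our turn
                    intentFirst = true;
                }
            }
            counter++;                      // critical section
            turn = 1;                       // volatile write
            intentFirst = false;            // volatile write
        }
    }

    static void second() {
        for (int i = 0; i < ITERATIONS; i++) {
            intentSecond = true;
            while (intentFirst) {           // volatile read
                if (turn != 1) {
                    intentSecond = false;
                    while (turn != 1) {}    // busy-wait for our turn
                    intentSecond = true;
                }
            }
            counter++;                      // critical section
            turn = 0;                       // volatile write
            intentSecond = false;           // volatile write
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(DekkerLoop::first);
        Thread t2 = new Thread(DekkerLoop::second);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // 2 * ITERATIONS only if no two increments ever overlapped
        System.out.println(counter);
    }
}
```

My question is whether the printed value is guaranteed to be `2 * ITERATIONS` here, or whether the scheduler interleaving I describe above can break it.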