Ah, I see.
Digging around, I found this from the same author. Under the title "C++ and the Perils of Double-Checked Locking", he covers the same subject.
At the end, they give a bit of history, mentioning current uses:
So when dealing with some memory locations (e.g. memory mapped ports or
memory referenced by ISRs), some optimizations must be suspended. volatile exists for specifying special treatment for such locations, specifically: (1) the content of a volatile variable is “unstable” (can change by means unknown to the compiler), (2) all writes to volatile data are “observable” so they must be executed religiously, and (3) all operations on volatile data are executed in the sequence in which they appear in the source code. The first two rules ensure proper reading and writing. The last one allows implementation of I/O protocols that mix input and output.
This is informally what C and C++’s volatile guarantees.
Java took volatile a step further by guaranteeing the properties above
across multiple threads. This was a very important step, but it wasn’t enough
to make volatile usable for thread synchronization: the relative ordering of
volatile and non-volatile operations remained unspecified. This omission
forces many variables to be volatile to ensure proper ordering.
Java 1.5’s volatile [10] has the more restrictive, but simpler, acquire/release
semantics: any read of a volatile is guaranteed to occur prior to any memory reference (volatile or not) in the statements that follow, and any write to a volatile is guaranteed to occur after all memory references in the statements preceding it. .NET defines volatile to incorporate multithreaded semantics as well, which are very similar to the currently proposed Java semantics. We know of no similar work being done on C’s or C++’s volatile.
=> The JVM and the CLR seem to have extended the notion of "volatile" to perform a "memory barrier"... But then again, the code runs on a virtual machine, not on the real machines.
-W