core.atomic

The atomic module provides basic support for lock-free concurrent programming.

Note

Use the -preview=nosharedaccess compiler flag to detect unsafe individual read or write operations on shared data.

Types

enum MemoryOrder

Specifies the memory ordering semantics of an atomic operation.

raw = 0
    Not sequenced. Corresponds to https://llvm.org/docs/Atomics.html#monotonic and C++11/C11 `memory_order_relaxed`.
acq = 2
    Hoist-load + hoist-store barrier. Corresponds to https://llvm.org/docs/Atomics.html#acquire and C++11/C11 `memory_order_acquire`.
rel = 3
    Sink-load + sink-store barrier. Corresponds to https://llvm.org/docs/Atomics.html#release and C++11/C11 `memory_order_release`.
acq_rel = 4
    Acquire + release barrier. Corresponds to https://llvm.org/docs/Atomics.html#acquirerelease and C++11/C11 `memory_order_acq_rel`.
seq = 5
    Fully sequenced (acquire + release). Corresponds to https://llvm.org/docs/Atomics.html#sequentiallyconsistent and C++11/C11 `memory_order_seq_cst`.
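As a sketch of how these orders pair up in practice, the classic release/acquire publication pattern: a release store on a flag makes an earlier payload write visible to any thread that observes the flag with an acquire load. The helper names `produce` and `consume` are illustrative, not part of the module.

```d
import core.atomic;

shared int data;
shared bool ready;

// Producer: write the payload, then publish with a release store so the
// payload write cannot be reordered after the flag store.
void produce()
{
    atomicStore!(MemoryOrder.raw)(data, 42);
    atomicStore!(MemoryOrder.rel)(ready, true);
}

// Consumer: an acquire load on the flag pairs with the release store above,
// making the payload write visible once the flag is seen.
bool consume(out int result)
{
    if (atomicLoad!(MemoryOrder.acq)(ready))
    {
        result = atomicLoad!(MemoryOrder.raw)(data);
        return true;
    }
    return false;
}

void main()
{
    produce();
    int r;
    assert(consume(r) && r == 42);
}
```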

Functions

T atomicLoad(MemoryOrder ms = MemoryOrder.seq, T)(auto ref return scope const T val) if (!is(T == shared U, U) && !is(T == shared inout U, U) && !is(T == shared const U, U)) pure nothrow @nogc @trusted

Loads 'val' from memory and returns it. The memory barrier specified by 'ms' is applied to the operation, which is fully sequenced by default. Valid memory orders are MemoryOrder.raw, MemoryOrder...

T atomicLoad(MemoryOrder ms = MemoryOrder.seq, T)(auto ref return scope shared const T val) if (!hasUnsharedIndirections!T) pure nothrow @nogc @trusted

Ditto.

TailShared!T atomicLoad(MemoryOrder ms = MemoryOrder.seq, T)(auto ref shared const T val) if (hasUnsharedIndirections!T) pure nothrow @nogc @trusted

Ditto.
void atomicStore(MemoryOrder ms = MemoryOrder.seq, T, V)(ref T val, V newval) if (!is(T == shared) && !is(V == shared)) pure nothrow @nogc @trusted

Writes 'newval' into 'val'. The memory barrier specified by 'ms' is applied to the operation, which is fully sequenced by default. Valid memory orders are MemoryOrder.raw, MemoryOrder.rel, and Mem...

void atomicStore(MemoryOrder ms = MemoryOrder.seq, T, V)(ref shared T val, V newval) if (!is(T == class)) pure nothrow @nogc @trusted

Ditto.

void atomicStore(MemoryOrder ms = MemoryOrder.seq, T, V)(ref shared T val, auto ref shared V newval) if (is(T == class)) pure nothrow @nogc @trusted

Ditto.
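A minimal sketch of the basic load/store pair, which is fully sequenced (MemoryOrder.seq) when no order is given:

```d
import core.atomic;

void main()
{
    shared long counter;

    // Default order is MemoryOrder.seq (fully sequenced).
    atomicStore(counter, 100);
    assert(atomicLoad(counter) == 100);

    // A raw (relaxed) load suffices when no ordering with other
    // memory operations is required.
    assert(atomicLoad!(MemoryOrder.raw)(counter) == 100);
}
```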
T atomicFetchAdd(MemoryOrder ms = MemoryOrder.seq, T)(ref return scope T val, size_t mod) if ((__traits(isIntegral, T) || is(T == U *, U)) && !is(T == shared)) pure nothrow @nogc @trusted

Atomically adds `mod` to the value referenced by `val` and returns the value `val` held previously. This operation is both lock-free and atomic.

T atomicFetchAdd(MemoryOrder ms = MemoryOrder.seq, T)(ref return scope shared T val, size_t mod) if (__traits(isIntegral, T) || is(T == U *, U)) pure nothrow @nogc @trusted

Ditto.

T atomicFetchSub(MemoryOrder ms = MemoryOrder.seq, T)(ref return scope T val, size_t mod) if ((__traits(isIntegral, T) || is(T == U *, U)) && !is(T == shared)) pure nothrow @nogc @trusted

Atomically subtracts `mod` from the value referenced by `val` and returns the value `val` held previously. This operation is both lock-free and atomic.

T atomicFetchSub(MemoryOrder ms = MemoryOrder.seq, T)(ref return scope shared T val, size_t mod) if (__traits(isIntegral, T) || is(T == U *, U)) pure nothrow @nogc @trusted

Ditto.
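A short sketch illustrating that both fetch functions return the value held *before* the update, which is what makes them usable for ticket counters and reference counts:

```d
import core.atomic;

void main()
{
    shared int ticket = 10;

    // Returns the previous value, then the counter holds the sum.
    int prev = atomicFetchAdd(ticket, 5);
    assert(prev == 10);
    assert(atomicLoad(ticket) == 15);

    prev = atomicFetchSub(ticket, 3);
    assert(prev == 15);
    assert(atomicLoad(ticket) == 12);
}
```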
T atomicExchange(MemoryOrder ms = MemoryOrder.seq, T, V)(T * here, V exchangeWith) if (!is(T == shared) && !is(V == shared)) pure nothrow @nogc @trusted

Exchanges `exchangeWith` with the memory referenced by `here`. This operation is both lock-free and atomic.

TailShared!T atomicExchange(MemoryOrder ms = MemoryOrder.seq, T, V)(shared(T) * here, V exchangeWith) if (!is(T == class) && !is(T == interface)) pure nothrow @nogc @trusted

Ditto.

shared(T) atomicExchange(MemoryOrder ms = MemoryOrder.seq, T, V)(shared(T) * here, shared(V) exchangeWith) if (is(T == class) || is(T == interface)) pure nothrow @nogc @trusted

Ditto.
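Because `atomicExchange` stores the new value and returns the old one in a single atomic step, it can serve as a simple test-and-set. A minimal sketch:

```d
import core.atomic;

void main()
{
    shared bool flag = false;

    // First exchange returns the old value (false) and sets the flag.
    bool wasSet = atomicExchange(&flag, true);
    assert(!wasSet);

    // Any later exchange observes that the flag is already set.
    assert(atomicExchange(&flag, true));
}
```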
bool casWeak(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(T * here, V1 ifThis, V2 writeThis) if (!is(T == shared) && is(T : V1)) pure nothrow @nogc @trusted

Stores 'writeThis' to the memory referenced by 'here' if the value referenced by 'here' is equal to 'ifThis'. The 'weak' version of cas may spuriously fail. It is recommended to use `casWeak` only ...

bool casWeak(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(shared(T) * here, V1 ifThis, V2 writeThis) if (!is(T == class) && (is(T : V1) || is(shared T : V1))) pure nothrow @nogc @trusted

Ditto.

bool casWeak(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(shared(T) * here, shared(V1) ifThis, shared(V2) writeThis) if (is(T == class)) pure nothrow @nogc @trusted

Ditto.

bool casWeak(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V)(T * here, T * ifThis, V writeThis) if (!is(T == shared S, S) && !is(V == shared U, U)) pure nothrow @nogc @trusted

Stores 'writeThis' to the memory referenced by 'here' if the value referenced by 'here' is equal to the value referenced by 'ifThis'. The prior value referenced by 'here' is written to `ifThis` and...

bool casWeak(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(shared(T) * here, V1 * ifThis, V2 writeThis) if (!is(T == class) && (is(T : V1) || is(shared T : V1))) pure nothrow @nogc @trusted

Ditto.

bool casWeak(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V)(shared(T) * here, shared(T) * ifThis, shared(V) writeThis) if (is(T == class)) pure nothrow @nogc @trusted

Ditto.
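Since `casWeak` may fail spuriously, it is normally placed inside a retry loop, where a spurious failure only costs one extra iteration. A sketch of an atomic-maximum helper (the name `atomicMax` is illustrative):

```d
import core.atomic;

// Raise `target` to `candidate` if the candidate is larger, retrying on
// contention or spurious casWeak failure.
void atomicMax(ref shared int target, int candidate)
{
    int current = atomicLoad(target);
    while (candidate > current)
    {
        if (casWeak(&target, current, candidate))
            break;
        current = atomicLoad(target); // lost the race; re-read and retry
    }
}

void main()
{
    shared int highWater = 5;
    atomicMax(highWater, 9);
    assert(atomicLoad(highWater) == 9);
    atomicMax(highWater, 3);           // smaller candidate leaves it unchanged
    assert(atomicLoad(highWater) == 9);
}
```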
void atomicFence(MemoryOrder order = MemoryOrder.seq)() pure nothrow @nogc @safe

Inserts a full load/store memory fence (on platforms that need it). This ensures that all loads and stores before a call to this function are executed before any loads and stores after the call.
void pause() pure nothrow @nogc @safe

Gives a hint to the processor that the calling thread is in a 'spin-wait' loop, allowing it to allocate resources more efficiently.
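A typical place for `pause` is the busy-wait loop of a test-and-set spin lock, as in this minimal sketch (the helper names `lockIt`/`unlockIt` are illustrative):

```d
import core.atomic;

shared bool locked;

// Acquire: spin until the exchange observes the lock free. `pause` hints
// to the CPU that this is a spin-wait, reducing pipeline and power cost.
void lockIt()
{
    while (atomicExchange(&locked, true))
        pause();
}

// Release: a release store is sufficient to publish the critical section.
void unlockIt()
{
    atomicStore!(MemoryOrder.rel)(locked, false);
}

void main()
{
    lockIt();
    assert(atomicLoad(locked));
    unlockIt();
    assert(!atomicLoad(locked));
}
```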
TailShared!T atomicOp(string op, T, V1)(ref shared T val, V1 mod) if (__traits(compiles, mixin("*cast(T*)&val" ~ op ~ "mod"))) pure nothrow @nogc @safe

Performs the binary operation 'op' on val using 'mod' as the modifier.
bool atomicValueIsProperlyAligned(T)(ref T val) pure nothrow @nogc @trusted

bool atomicPtrIsProperlyAligned(T)(T * ptr) pure nothrow @nogc @safe

bool casWeakByRef(T, V1, V2)(ref T value, ref V1 ifThis, V2 writeThis) pure nothrow @nogc @trusted
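`atomicOp` takes the operator as a string template argument and returns the resulting value, which makes compound read-modify-write updates on shared data concise. A minimal sketch:

```d
import core.atomic;

void main()
{
    shared int total;

    // Each call applies the operator atomically and returns the new value.
    assert(atomicOp!"+="(total, 4) == 4);
    assert(atomicOp!"+="(total, 6) == 10);
    assert(atomicOp!"-="(total, 3) == 7);
    assert(atomicLoad(total) == 7);
}
```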

Templates

template cas(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq)

Performs either compare-and-set or compare-and-swap (or exchange).

There are two categories of overloads in this template. The first category does a simple compare-and-set: the comparison value (ifThis) is treated as an rvalue.

The second category does a compare-and-swap (a.k.a. compare-and-exchange), and expects ifThis to be a pointer type, where the previous value of here will be written.

This operation is both lock-free and atomic.

Parameters

here         The address of the destination variable.
writeThis    The value to store.
ifThis       The comparison value.

Returns

true if the store occurred, false if not.

Functions
bool cas(T, V1, V2)(T * here, V1 ifThis, V2 writeThis) if (!is(T == shared) && is(T : V1))

Compare-and-set for non-shared values

bool cas(T, V1, V2)(shared(T) * here, V1 ifThis, V2 writeThis) if (!is(T == class) && (is(T : V1) || is(shared T : V1)))

Compare-and-set for shared value type

bool cas(T, V1, V2)(shared(T) * here, shared(V1) ifThis, shared(V2) writeThis) if (is(T == class))

Compare-and-set for shared reference type (class)

bool cas(T, V)(T * here, T * ifThis, V writeThis) if (!is(T == shared) && !is(V == shared))

Compare-and-exchange for non-shared types

bool cas(T, V1, V2)(shared(T) * here, V1 * ifThis, V2 writeThis) if (!is(T == class) && (is(T : V1) || is(shared T : V1)))

Compare-and-exchange for mixed-sharedness types

bool cas(T, V)(shared(T) * here, shared(T) * ifThis, shared(V) writeThis) if (is(T == class))

Compare-and-exchange for class
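A sketch contrasting the two categories: the compare-and-set overloads take ifThis by value, while the compare-and-exchange overloads take a pointer and write back the value actually observed on failure, avoiding a separate reload in retry loops.

```d
import core.atomic;

void main()
{
    shared int value = 1;

    // Compare-and-set: ifThis is an rvalue; returns whether the store happened.
    assert(cas(&value, 1, 2));   // 1 == 1, so 2 is stored
    assert(!cas(&value, 1, 3));  // value is now 2, so nothing is stored
    assert(atomicLoad(value) == 2);

    // Compare-and-exchange: on failure, the current value is written
    // back through ifThis.
    int expected = 0;
    assert(!cas(&value, &expected, 5));
    assert(expected == 2);       // updated to the value actually seen
    assert(cas(&value, &expected, 5));
    assert(atomicLoad(value) == 5);
}
```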

template IntForFloat(F) if (__traits(isFloating, F))

template IntForStruct(S) if (is(S == struct))

template ValidateStruct(S) if (is(S == struct))

template TailShared(U) if (!is(U == shared))

template TailShared(S) if (is(S == shared))