Generic algorithms have been available in C++ for decades, but the last two versions of the language have really ramped up the functionality. C++17 added support for parallel execution of generic algorithms to easily take advantage of multi-core CPUs. Then C++20 added support for ranges, a composable version of generic algorithms that’s even closer to LINQ in C#. Today we’ll explore both of these!
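As a rough sketch of what those two additions look like side by side (my own minimal example, not code from the article, and assuming a toolchain whose standard library ships <execution> and <ranges>; on GCC the parallel policies additionally need TBB installed):

```cpp
// C++17 parallel algorithm plus a C++20 ranges pipeline over the same vector
#include <algorithm>
#include <cstdio>
#include <execution>
#include <ranges>
#include <vector>

int main()
{
    std::vector<int> values{ 5, 3, 1, 4, 2 };

    // C++17: ask the implementation to sort using multiple threads
    std::sort(std::execution::par, values.begin(), values.end());

    // C++20: compose a lazy pipeline, similar in spirit to LINQ's Where and Select
    auto evensDoubled = values
        | std::views::filter([](int x) { return x % 2 == 0; })
        | std::views::transform([](int x) { return x * 2; });

    for (int x : evensDoubled)
    {
        std::printf("%d\n", x); // prints 4 then 8
    }
}
```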
Just as C# includes classes like Thread and Mutex, the C++ Standard Library provides support for multi-threading. Classes like std::thread and std::mutex are very similar, but there are larger differences when it comes to C#'s lock, async, and await keywords. Read on to learn how to write multi-threaded C++!
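For a flavor of how the class-level pieces map over (a minimal sketch of my own with made-up names, not code from the article): a std::lock_guard on a std::mutex plays roughly the role of C#'s lock statement around a shared counter.

```cpp
// Two std::threads incrementing a counter guarded by a std::mutex
#include <cstdio>
#include <mutex>
#include <thread>

int main()
{
    int counter = 0;
    std::mutex counterMutex;

    auto work = [&]()
    {
        for (int i = 0; i < 100000; ++i)
        {
            // lock_guard locks in its constructor and unlocks in its destructor,
            // similar to what C#'s lock (obj) { ... } block does
            std::lock_guard<std::mutex> guard{ counterMutex };
            ++counter;
        }
    };

    std::thread threadA{ work };
    std::thread threadB{ work };
    threadA.join();
    threadB.join();

    std::printf("%d\n", counter); // always 200000 thanks to the mutex
}
```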
There is language-level support in C# for per-thread storage of variables. The same goes for the volatile keyword. C++ also supports per-thread variables, but with per-thread initialization and de-initialization. It has a volatile keyword too, but its meaning is quite different from C#'s. Read on to learn how to properly use these features in each language.
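As a small illustration of that per-thread initialization and de-initialization (again a sketch of my own, not code from the article), a thread_local object's constructor and destructor run once per thread that uses it:

```cpp
#include <cstdio>
#include <thread>

struct Tracker
{
    Tracker() { std::puts("initialized on this thread"); }
    ~Tracker() { std::puts("destroyed on this thread"); }
};

// Each thread that uses this variable gets its own instance, constructed
// and destructed on that thread
thread_local Tracker tracker;

void touch()
{
    (void)&tracker; // using the variable guarantees it has been initialized on this thread
}

int main()
{
    std::thread worker{ touch };
    worker.join(); // the worker's Tracker is destroyed as that thread exits

    touch(); // the main thread gets its own, separate Tracker
}
```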
Multi-threading is essential to performance on all modern processors. Using multiple threads brings along with it the challenge of synchronizing data access across those threads. Unity’s job system can do some of this for us, but it certainly doesn’t handle every case. For times when it doesn’t, C# provides us with a bunch of synchronization options. Which are fastest? Today we’ll find out!
Last time we saw that jobs apparently have their own Temp allocator. Still, it was unclear how many of these allocators there are. One per job? One per thread? Just one? Today we'll run an experiment to find the answer!
Temp memory is backed by a fixed-size block that's cleared by Unity every frame. Allocations on subsequent frames return pointers to this same block. The allocated memory therefore isn't unique. How much of a problem is this? Today we'll do some experiments to find out!
Last week's article came to the conclusion that allocating Temp memory from within a job was safe. This week we'll look into that a little deeper to find out that it might not be as safe as it looks!
What do you do when a job you’re writing needs to allocate memory? You could allocate it outside of the job and pass it in, but that presents several problems. You can also allocate memory from within a job. Today we’ll look into how that works and some limitations that come along with it.
Unity 2019.1 was released last week and the Burst compiler is now out of Preview. It promises superior performance by generating better-optimized code than IL2CPP does. Let's try it out and see if the performance lives up to the hype!
Last week's article introduced two new native collection types: NativeIntPtr and NativeLongPtr. These were useful for both IJob and IJobParallelFor jobs, but performance was degraded in IJobParallelFor. Today we'll remedy that, explore some more aspects of Unity's native collection and job systems, and learn more about CPU caches along the way.
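A CPU-cache effect that commonly causes this kind of per-thread slowdown is false sharing: two threads writing to values that happen to share a cache line keep invalidating each other's cached copy. As a language-neutral illustration (sketched in plain C++ rather than the article's Unity C#, with made-up names), padding each per-thread value out to a cache line removes that contention:

```cpp
// Per-thread counters padded to separate cache lines to avoid false sharing.
// 64 bytes is a common cache line size, but it varies by CPU.
#include <cstdio>
#include <thread>
#include <vector>

struct alignas(64) PaddedCounter
{
    long value = 0;
};

int main()
{
    constexpr int threadCount = 4;
    std::vector<PaddedCounter> counters(threadCount);
    std::vector<std::thread> threads;

    for (int t = 0; t < threadCount; ++t)
    {
        threads.emplace_back([&counters, t]()
        {
            for (int i = 0; i < 1000000; ++i)
            {
                ++counters[t].value; // each thread touches only its own cache line
            }
        });
    }

    for (std::thread& thread : threads)
    {
        thread.join();
    }

    long total = 0;
    for (const PaddedCounter& counter : counters)
    {
        total += counter.value;
    }
    std::printf("%ld\n", total); // 4000000
}
```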