
The Invisible Work Behind a Single "Download" Button

You’ve reached that magical moment in development. The features are bug-free, the UI is clean, you have zero warnings, and your C++ engine is humming along beautifully. Your local machine sings a song of victory. It’s done!

At least, you think it is.

But then, the cold reality sets in. This perfect program only runs on your computer. Now comes the final boss of any serious software project: compiling for every target OS and architecture your users might have. Do you find people willing to lend you their laptops? Do you take the dark dive into cross-compilation? Virtual machines?

I've had to answer all of these questions as we move toward the release of Pivot 1.0, and it's a journey worth sharing.

The Challenge: The "Compilation Matrix"

For a C++-powered addon to work for everyone, it’s not enough to compile it once. We have to build a unique, native binary for every combination of Operating System and CPU Architecture. This creates a "compilation matrix" that looks something like this:

  • Windows on x86-64 (Most Windows PCs)
  • macOS on x86-64 (Intel Macs)
  • macOS on arm64 (Apple Silicon Macs)
  • Linux on x86-64 (Most Linux PCs)
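Written out as data, the way a CI job matrix or a build script might enumerate it, the matrix looks something like this (the BUILD_MATRIX name below is purely illustrative, not actual Pivot code):

# Illustrative only: the compilation matrix as a build script might enumerate it.
BUILD_MATRIX = [
    ("windows", "x86-64"),   # Most Windows PCs
    ("macos",   "x86-64"),   # Intel Macs
    ("macos",   "arm64"),    # Apple Silicon Macs
    ("linux",   "x86-64"),   # Most Linux PCs
]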

Suddenly, our one perfect program becomes four separate, complex compilation targets. And before we can even tackle that, we have to make a fundamental decision: static vs. dynamic linking.

A Quick Detour: Packing Your Suitcase (Static vs. Dynamic Linking)

When you compile a C++ program, it relies on other pieces of code called libraries. The question is, how do you package those libraries?

  • Dynamic Linking is like assuming your hotel will have a toothbrush. Your program is small and assumes the necessary libraries will be present on the user's computer at runtime. This is great for development, but it can create chaos for distribution. If the user has a different version of a library (or doesn't have it at all), your addon will fail to run.

  • Static Linking is like packing the toothbrush, the soap, and the towels in your own suitcase. It packages every single dependency directly into your final binary file. This makes the file much larger, but it guarantees your program will run, no matter what the user has on their system.

For Pivot, the choice was clear: we must use static linking for maximum reliability. The trade-off is significant—my core engine binary jumped from around 10KB to over 1MB—but the peace of mind is worth it.
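What does that choice look like in practice? On Linux, it largely comes down to a couple of linker flags. The sketch below is illustrative only: -static-libstdc++ and -static-libgcc are real GCC options, but the file names, the Boost archive path, and the idea of driving the compile from a tiny Python helper are assumptions for this example, not Pivot's actual build script.

# Illustrative sketch of a static-linked build step. The flags are real GCC options;
# the source file, output name, and Boost archive path are placeholders.
import subprocess

STATIC_FLAGS = [
    "-static-libstdc++",   # bake the C++ standard library into the binary
    "-static-libgcc",      # bake the GCC runtime support code in as well
]

def build_engine_linux() -> None:
    subprocess.run(
        ["g++", "-std=c++17", "-O2", "engine.cpp", "-o", "pivot_engine",
         *STATIC_FLAGS,
         "third_party/boost/lib/libboost_system.a"],  # link Boost as a static archive
        check=True,
    )

Drop those flags and the same command produces a smaller binary that quietly depends on whatever libstdc++ happens to exist on the user's machine, which is exactly the "hotel toothbrush" problem described above.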

The Solution: An Army of Virtual Robots (GitHub Actions)

As a solo developer on a Linux machine, I can't keep a stable of Macs and Windows PCs in my office. The solution is automation. I used GitHub Actions, a powerful CI/CD tool, to create a workflow that spins up a virtual machine for each target in our matrix, compiles the code natively on that machine, and saves the result.

The workflow looks like this for each target:

  1. Spin up a fresh virtual machine (e.g., macos-latest).
  2. Install all the necessary dependencies (the C++ compiler, Python, Boost libraries).
  3. Check out the source code.
  4. Run the compile command.
  5. If it succeeds, upload the final compiled binary as a "build artifact."

The Unspoken Hell of "Just Use CI/CD"

Now, anyone who's set up a CI/CD pipeline knows that a tidy list of steps like that is deceptively simple on paper. The reality was a multi-day battle against a mountain of cryptic error messages.

On macOS, the default Clang compiler had a different opinion on C++ standards than my Linux GCC. On Windows, the MSVC compiler required a completely different set of flags and library paths that had to be wrangled from the ether. And let's not even talk about getting Boost to compile correctly on all three platforms from a command line!
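To give a flavor of those differences, here is roughly their shape. The flag names below are real GCC, Clang, and MSVC options, but the file names and commands are illustrative placeholders rather than Pivot's actual build configuration:

# Illustrative only: the "same" compile expressed per platform, with real flag names
# but placeholder file names. The real commands carry many more options.
PLATFORM_BUILD_COMMANDS = {
    "linux":   ["g++", "-std=c++17", "-O2", "engine.cpp", "-o", "pivot_engine",
                "-lpthread", "-lrt"],            # POSIX shared memory needs librt on Linux
    "macos":   ["clang++", "-std=c++17", "-O2", "engine.cpp", "-o", "pivot_engine",
                "-lpthread"],                    # Apple's Clang, no librt, its own SDK paths
    "windows": ["cl", "/std:c++17", "/O2", "/EHsc", "engine.cpp",
                "/Fe:pivot_engine.exe"],         # MSVC: /-style flags, a different runtime model
}

Multiply that by the Boost build and the static-linking setup, and each cell of the matrix needs its own carefully tuned variation.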

Each "successful" green checkmark on my GitHub Actions workflow was the result of hours of searching obscure Stack Overflow threads and tweaking YAML files. It's a rite of passage, but it's a part of the 'invisible work' that's required to make a cross-platform tool feel effortless for the end-user.

The Boss Battle: The Chaos of Shared Memory

Here's where it got really tricky. A core feature of the Elbo Studio ecosystem is a high-speed IPC (Inter-Process Communication) system using shared memory. But it turns out, every operating system has its own strong opinions on how this should work, centered around a standard called POSIX.

  • macOS (Darwin): Complies strictly. It requires shared memory segments to be named with a leading slash, like /my_segment_name.
  • Linux: Complies loosely. The leading slash is not required.
  • Windows: Does not comply at all. It has a completely different system for managing shared memory.

To solve this, I used the fantastic Boost.Interprocess library, which creates a cross-platform abstraction layer over these native systems. However, the differences are so fundamental that I still needed to use conditional compilation to write separate code branches just for Windows. In practice, that looks something like this:

#include <stdexcept>
#include <string>
#include <boost/interprocess/shared_memory_object.hpp>
#ifdef _WIN32
#include <boost/interprocess/windows_shared_memory.hpp>
#endif

using namespace boost::interprocess;

// Platform-conditional handle alias (shown for completeness): windows_shared_memory on Windows, shared_memory_object elsewhere.
#ifdef _WIN32
using SharedMemoryHandle = windows_shared_memory;
#else
using SharedMemoryHandle = shared_memory_object;
#endif

SharedMemoryHandle open_shared_memory_with_fallback(const std::string &shm_name)
{
#ifdef _WIN32
    // Windows: Try different namespace prefixes for shared memory
    std::string candidates[] = {shm_name, "Local\\" + shm_name, "Global\\" + shm_name};
    for (const auto &candidate : candidates) {
        try {
            return windows_shared_memory(open_only, candidate.c_str(), read_write);
        } catch (const interprocess_exception &) {}
    }
    throw std::runtime_error("Unable to open shared memory: " + shm_name);
#else
    // POSIX: Direct shared memory access by name
    return shared_memory_object(open_only, shm_name.c_str(), read_write);
#endif
}

This #ifdef block is a small but perfect example of the hidden complexity. It’s a tiny fork in the road that has to be maintained and tested separately, all to create one seamless experience for the user, regardless of their OS.

The Final Treasure: A Seamless User Experience

So after all that work, what's the point? The goal is a single, magical download for the user.

We don’t want you to have to guess which build you need. An addon should be simple. So, the final Pivot_Pro.zip file contains all the compiled binaries for every platform, neatly tucked away. When you install the addon, a small script runs that acts as a bouncer, checking your OS and architecture at the door and handing you the right key.

import platform


def get_platform_id() -> str:
    """Get platform identifier for module loading (e.g., 'linux-x86-64', 'macos-arm64').

    Returns:
        str: Platform identifier string
    """
    system = platform.system().lower()
    machine = platform.machine().lower()

    # Normalize OS names (platform.system() reports macOS as 'Darwin')
    if system == 'darwin':
        system = 'macos'

    # Map architecture names
    if machine in ('x86_64', 'amd64'):
        arch = 'x86-64'
    elif machine in ('aarch64', 'arm64'):
        arch = 'arm64'
    else:
        arch = machine

    return f'{system}-{arch}'

This simple function is the final handshake. It ensures that all the complex compilation work we did results in a single, simple "it just works" experience for the Blender artist.
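As a purely hypothetical sketch of how that identifier gets used, the loader can look for a bundled binary whose file name embeds the platform ID. The bin/ layout, the file-naming scheme, and the resolve_engine_path helper below are assumptions made for this example, not Pivot's actual internals:

# Hypothetical loader sketch: directory layout and naming scheme are assumptions.
# Uses get_platform_id() from the function above.
import os
import platform

ADDON_DIR = os.path.dirname(os.path.abspath(__file__))

def resolve_engine_path() -> str:
    """Return the path of the bundled native binary that matches this machine."""
    # Python extension modules are .pyd on Windows and .so on Linux/macOS;
    # a plain shared library would be .dll/.so/.dylib instead.
    suffix = '.pyd' if platform.system() == 'Windows' else '.so'
    path = os.path.join(ADDON_DIR, 'bin', f'pivot_engine-{get_platform_id()}{suffix}')
    if not os.path.isfile(path):
        raise RuntimeError(f'No prebuilt Pivot engine for this platform: {path}')
    return path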

This "invisible" work is the foundation of a professional tool. It’s a lot of effort, but it’s what’s necessary to turn a program that "works on my machine" into a reliable product that works for everyone.

With the compilation pipeline now stable, we are on the final approach for the release of Pivot 1.0. Keep an eye out!

Thanks for reading! Have questions or feedback? Find me on X @NickWierzbowski.


