Stealth Chromium Development from Scratch Part 3: Building the Source

In this series, I demystify Chromium development by explaining everything step by step.

Camille Louédoc-Eyries | February 20, 2026 | 5 min read

In this blog series, I will explain how to develop your own Chromium fork with fingerprint mimicry.

Our goal is to be able to edit basic fingerprints, like the user-agent and the CPU core count, but also more complicated entropy sources such as canvas fingerprinting.


If you have followed this series so far, you have already cloned the source code.

In this post, you will learn how to do your first build.

Prerequisites

Make sure that you installed depot_tools and ran hooks as previously explained.

Make sure that you checked out the right version of Chromium that you want to work on.
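Both prerequisites are easy to sanity-check from the shell before spending time on a build. A small sketch (the `check_prereqs` helper name and the `src/chromium/src` path are assumptions based on the layout used in this series):

```shell
# check_prereqs DIR TOOL...: verify that the Chromium checkout directory
# exists and that each required depot_tools command is on PATH.
check_prereqs() {
  local dir=$1; shift
  [ -d "$dir" ] || { echo "missing checkout: $dir" >&2; return 1; }
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing tool: $tool" >&2; return 1; }
  done
  echo "ok"
}

# e.g.: check_prereqs "$PWD/src/chromium/src" gclient gn autoninja
```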

Building in Docker

In my case, I was getting a ton of weird errors when building on Arch Linux. I could have kept fixing them one by one, but instead, I decided to use Docker to do the actual building.

Basically, I just launch bash in the container in a separate window, and use it to run the build command.

It sounds complicated, but it’s actually quite simple. The way to think of it is “a hacky Docker image with the right dependencies that somehow makes my code build without crashing”, rather than “the big complicated Docker container onto which I will have to put all my development tools, publish to a registry, and maintain”.

Here is the Dockerfile that I am personally using (I save it as Containerfile.build, which the Justfile further down references):

Dockerfile
FROM ubuntu:24.04
# Install build dependencies as per: https://chromium.googlesource.com/chromium/src/+/main/docs/linux/build_instructions.md#Docker
RUN apt-get update
RUN apt-get install -y curl git python3 sudo file lsb-release fzf
WORKDIR /src
ENV PATH="/src/depot_tools:${PATH}"
# Install Chromium build deps.
# This requires having cloned Chromium in the `src` directory.
COPY ./src/chromium/src/build/install-build-deps.py ./src/chromium/src/build/install-build-deps.sh /src/install-build-deps/
RUN /src/install-build-deps/install-build-deps.sh
RUN rm -rf /src/install-build-deps
# Set up ccache
RUN apt-get install -y ccache
ADD ccache.conf /root/.config/ccache/ccache.conf

You can build the image and enter the container with these commands:

Terminal window
podman build -t chromium-builder .
podman run \
-v $PWD/src/chromium:/src/chromium \
-v $PWD/src/depot_tools:/src/depot_tools \
-v $HOME/.config/ccache:/root/.config/ccache \
-v $HOME/.cache/ccache:/root/.cache/ccache \
-it chromium-builder /bin/bash

As you can see, we are binding the following volumes:

  1. chromium’s source code
  2. depot_tools source
  3. ccache’s config
  4. ccache’s cache

The goal is to have the Docker container acting as a slim layer on top of our existing machine, to help reduce build errors due to environment mismatches, rather than doing all our development in it.

We are going to keep clangd (the C++ LSP server), our editor, the actual ccache cache directory, etc. on the host machine, and use this container with volume bindings only for builds.
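Before running the container, it is worth verifying that the host-side directories you are about to bind-mount actually exist; depending on your container runtime, a missing source directory either errors out or silently becomes an empty mount. A small preflight sketch (`check_mounts` is a helper name I made up):

```shell
# check_mounts DIR...: fail fast if any host directory that we are
# about to bind-mount into the container does not exist yet.
check_mounts() {
  local missing=0
  for d in "$@"; do
    [ -d "$d" ] || { echo "missing: $d" >&2; missing=1; }
  done
  return "$missing"
}

# e.g.: check_mounts "$PWD/src/chromium" "$PWD/src/depot_tools" && podman run ...
```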

Commands to issue in the container to start the build

Here’s what you can run in the container:

Terminal window
cd /src/chromium/src
# Make sure that depot_tools is initialized and happy
ensure_bootstrap
# Finalize the environment for building
gclient runhooks
# Generate build files for the Ninja (/Siso) build system.
gn gen out/Default
# Open args.gn for editing -- you can do that on your host machine as well.
vim out/Default/args.gn # the arguments to write inside are explained below
autoninja -C out/Default -j 12 # adjust 12 to be the number of cores you want to use for building
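If you would rather compute the `-j` value than hard-code it, a small helper can derive it from the core count while leaving headroom for the rest of the machine (a sketch; `pick_jobs` is a name I made up):

```shell
# pick_jobs CORES [RESERVE]: number of build jobs to run, leaving
# RESERVE cores (default 2) free for the rest of the system, minimum 1.
pick_jobs() {
  local cores=$1 reserve=${2:-2}
  local jobs=$(( cores - reserve ))
  [ "$jobs" -lt 1 ] && jobs=1
  echo "$jobs"
}

# e.g.: autoninja -C out/Default -j "$(pick_jobs "$(nproc)")"
pick_jobs 12   # prints 10
```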

For the args.gn, here are some clean defaults you can use and tweak later on:

args.gn
# Set build arguments here. See `gn help buildargs`.
is_debug = false
is_component_build = false
# Possible values are 0 | 1 | 2
symbol_level = 0
blink_symbol_level = 0
v8_symbol_level = 0
# Possible values are "x64" | "x86" | "arm"
target_cpu = "x64"
# Limit the number of memory-heavy link steps running in parallel
concurrent_links = 1
enable_vulkan = true
enable_swiftshader_vulkan = true
# Enable ccache
cc_wrapper = "ccache"

These are inspired by this blog post.

The goal with these initial args is not to showcase the most optimal configuration, but rather to show safe defaults that get a first build working.
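If you later want usable stack traces, a common pattern is a second output directory with a debug-oriented variant. This is a sketch, not part of the defaults above; expect a slower and much larger build:

```gn
# out/Debug/args.gn -- debug-oriented variant
is_debug = true
is_component_build = true   # faster incremental links
symbol_level = 1            # enough for stack traces without full debug info
target_cpu = "x64"
cc_wrapper = "ccache"
```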

ccache configuration

If you want to further speed up builds, you can use my ccache configuration:

ccache.conf
# ~/.config/ccache/ccache.conf
max_size = 300G
sloppiness = time_macros,modules
compression = true
compression_level = 1

Wrapping things up with a nice just script

just is just a command runner.

It’s the 2026 version of Makefiles: written in Rust, with shell auto-completion. What more could you ask for?

I defined this simple entry in my Justfile to automatically build my builder image and start the Chromium build:

justfile
builder_image_name := "chromium-builder"
# Make sure to adjust this depending on how many cores you
# want to dedicate to Chromium building!
build_threads := "12"
# Build the Chromium builder image
build-chromium-builder:
    podman build -f Containerfile.build -t {{builder_image_name}} .

# Build Chromium inside the builder image
build: build-chromium-builder
    podman run \
        -v $PWD/src/chromium:/src/chromium \
        -v $PWD/src/depot_tools:/src/depot_tools \
        -v $HOME/.config/ccache:/root/.config/ccache \
        -v $HOME/.cache/ccache:/root/.cache/ccache \
        -it {{builder_image_name}} \
        /bin/bash \
        -c "cd /src/chromium/src/ && autoninja -C out/Default -j {{build_threads}}"

I can invoke it like so:

Terminal window
just build
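While you are in the Justfile, a small recipe for launching the result keeps everything behind `just`. This is a sketch assuming the same directory layout (the `run` recipe name is mine):

```just
# Launch the freshly built Chromium binary
run:
    ./src/chromium/src/out/Default/chrome
```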

Handling build errors

Mystical build errors are part of the process of learning Chromium.

The key is to keep it as vanilla as possible for a first build, and slowly introduce changes: using another linker, patching a file, etc.

If your very first build fails, it usually means that your arguments or configuration were not vanilla enough.

My advice:

  1. Use a Docker image for building
  2. Keep super vanilla args.gn
  3. Try building a known-good Chromium version (checking out one of those is explained in the previous post of this series)
  4. Use an x86_64 machine if you aren’t on one already. I am repeating myself here, but I’ll say it again: building Chromium on ARM will lead you into a universe of pain and sorrow. Bonus points if your CPU uses an unconventional page size (distant noises of Asahi Linux users reluctantly agreeing).

Launching Chromium after it has been built

After building with the autoninja command, you’ll get a binary in src/chromium/src/out/Default/ named chrome.

Just run it like so:

Terminal window
./src/chromium/src/out/Default/chrome

… and you should see your first Chromium instance popping up!

If you don’t, read the errors.

In my case, I had to delete existing Chromium files:

Terminal window
rm -rf ~/.config/chromium/; rm -rf ~/.cache/chromium/
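Alternatively, instead of deleting your regular profile, you can point the dev build at a throwaway profile with Chromium’s `--user-data-dir` flag (the `/tmp` path below is just an example):

```shell
# Use a scratch profile so the dev build never touches
# ~/.config/chromium from your day-to-day browser.
./src/chromium/src/out/Default/chrome --user-data-dir=/tmp/chromium-dev-profile
```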

Wow, that’s a lot of things to learn about

The truth with Chromium patching is that it’s a bit messy.

The codebase is huge, so copying it into a Docker image is impractical; hence the volume mounts.

Sometimes, something gets corrupted, and you need to delete out/Default or your ccache cache.

You will encounter mystical build errors that you’ll need to fix by yourself.

With time, you’ll learn strategies to deal with this madness, but there is no silver bullet: Chromium patching is messy and unpredictable!

So, don’t take these posts as gospel: you need to think for yourself and try different things out, as opposed to following everything to the letter.

What’s next

So, you’ve got your first chrome binary. Congratulations!

Now, the goal is to patch what gets you detected by antibots; in the next post, we’ll do a first tweak to spoof the user agent, and set up helper commands in your justfile to generate and apply patches.
