Conversation

@Barto22 Barto22 commented Jan 13, 2026

This commit updates the Linux CI Docker image to the latest GNU and Clang toolchains. This PR is related to: #17826

Summary

This change updates the GNU (gcc/g++) and Clang (clang/clang++) toolchains used by the Linux CI Docker image to their latest stable versions.
The motivation is to ensure CI builds track supported compilers, improve warning coverage, and detect issues earlier during development.
The update includes minor adjustments to package names and installation scripts to align with distribution changes.

Impact

  • CI builds will use newer compilers with expanded warnings and better C23/C++23 support.
  • No impact on downstream users unless they explicitly rely on CI-provided compiler versions.
  • No functional changes to runtime behavior or board configurations.
  • Improves compatibility with modern host toolchains and prevents bit-rot for newer distributions.
  • No documentation changes needed.

Testing

CI pipeline was executed across standard Linux configurations. The following was verified:

  • Full NuttX build matrix completed without regressions.
  • GCC and Clang builds both succeeded for representative board targets.
  • ostest and common example applications executed successfully within CI environment (logs available in CI artifacts).

Host: GitHub Actions CI (Ubuntu Linux)
Boards tested in CI: sim, nsh, plus standard matrix defaults.

No build regressions observed.

This commit updates the Linux CI Docker image to the latest GNU and Clang toolchains.

Signed-off-by: Bartosz <bartol2205@gmail.com>
@github-actions github-actions bot added Area: Tooling Area: CI Size: S The size of the change in this PR is small labels Jan 13, 2026
simbit18 commented Jan 13, 2026

Question for @Barto22 and everyone:

Does this bump resolve the known issues?

#16896

Before proceeding with the merge, I would recommend testing the new NuttX Docker image by compiling all <board name>:<board configuration>.

We also need to check the toolchain update for the other operating systems (macOS and Windows - MSYS2/Native) present in the CI, so that the update does not leave anyone behind.

Otherwise, there is a risk that all attention will be focused on one OS at the expense of others.

For example macOS issues on NuttX Mirror
#17818

I remind myself and everyone else:

The Inviolable Principles of NuttX
https://nuttx.apache.org/docs/latest/introduction/inviolables.html#all-users-matter

@lupyuen lupyuen left a comment

@Barto22 Do you have Build Logs of the updated Docker Image, compiling the various NuttX Targets? Thanks!

lupyuen commented Jan 13, 2026

@Barto22: Here are the steps for building and testing an updated Dockerfile for NuttX

https://lupyuen.org/articles/pr#appendix-downloading-the-docker-image-for-nuttx-ci

Barto22 commented Jan 13, 2026

@Barto22 Do you have Build Logs of the updated Docker Image, compiling the various NuttX Targets? Thanks!

I will test it and provide logs.

Barto22 commented Jan 13, 2026


Currently I'm testing builds; the Docker image builds successfully, and I will test all configurations. I see that with GCC 15 there is a problem with libcxx. My PR #17826 fixes it because it switches to the latest LLVM libcxx and libcxxabi, but before we merge it I will try to repair it here.

cederom commented Jan 13, 2026

Thank you @Barto22 :-) You may want to replicate our CI build process for Linux too; that would be the best benchmark for comparison :-)

If that works fine locally and here on github then it should be fine :-)

More Linux users welcome for testing on different distros, please call for action on our dev@ mailing list :-)

Barto22 commented Jan 14, 2026

Thank you @Barto22 :-) You may want to replicate our CI build process for Linux too; that would be the best benchmark for comparison :-)

If that works fine locally and here on github then it should be fine :-)

More Linux users welcome for testing on different distros, please call for action on our dev@ mailing list :-)

Of course. I'm testing all CI builds and I will fix the errors; if anyone wants to join the testing, I can share the already-built Docker image.

Barto22 commented Jan 14, 2026

Question for @Barto22 and everyone:

Does this bump resolve the known issues?

#16896

Before proceeding with the merge, I would recommend testing the new NuttX Docker image by compiling all <board name>:<board configuration>.

We also need to check the toolchain update for the other operating systems (macOS and Windows - MSYS2/Native) present in the CI, so that the update does not leave anyone behind.

Otherwise, there is a risk that all attention will be focused on one OS at the expense of others.

For example macOS issues on NuttX Mirror

#17818

I remind myself and everyone else:

The Inviolable Principles of NuttX

https://nuttx.apache.org/docs/latest/introduction/inviolables.html#all-users-matter

I will update all CI scripts according to the results from the Docker image tests. But I would really appreciate tests on other platforms, since I'm mainly working on Linux.

All tools are updated to most recent stable versions.

Signed-off-by: Bartosz <bartol2205@gmail.com>
@github-actions github-actions bot added the Size: M The size of the change in this PR is medium label Jan 14, 2026
Barto22 commented Jan 14, 2026

I have updated all tools in the Dockerfile to the most recent versions, compilers included, and bumped Ubuntu from 22.04 to 24.04 LTS. GCC and G++ are at version 15.2, so after merging my PR #17826 there will be no need to use Clang for SIM board tests. Clang is also at its latest stable version, 21, so I will close PR #17850, since there will be no need to use Clang with libcxx for SIM board tests. Now I'm working on fixing the compile errors and I will get back with the results.

cederom commented Jan 19, 2026

Thank you @Barto22 :-) Yes, small reasonable steps, measurable results :-)

We are quite conservative here, not necessarily following bleeding-edge fashions; quite the opposite, we prefer "unix old-school" and what works best. For instance, we prefer stable GCC 14.2, as most users use it and work with C. There are not that many C++ users here, so updating compilers / libstdc++ should be well justified and must not break anything for anyone else. No problem waiting for a new LTS release of the build host OS. There is no need to switch on the day of release unless we are sure everything works as expected and there are no known growing pains. We prefer long-term self-compatibility over short-lived modernity :-)

As you can see, we have many limitations, including the GH Runners quotas, which were strongly exceeded by just a few GCC 15 builds, plus the end of year in CN, which usually results in increased CI usage.


leave_critical_section(flags);

return OK;
Why are there code changes when this PR is about a Docker configuration update? This is out of scope and a hidden change that impacts existing code. If this fixes some other issue, it should be provided as a separate PR.

Please give a careful read to: https://github.com/apache/nuttx/blob/master/CONTRIBUTING.md

RUN curl -s -L https://bitbucket.org/nuttx/tools/get/9ad3e1ee75c7.tar.gz \
| tar -C nuttx-tools --strip-components=1 -xz

# Bloaty
Is a package unavailable? Using a package would save us precious runner time :-)

&& cd /tools && rm -rf bloaty-src

# Kconfig Frontends
# Note: kconfig-frontends is sensitive to gperf versions, but generally works on 24.04
How about using a package? I created a FreeBSD port for kconfig-frontends using the same source location as Debian, thus I know a package is available..? :-)

&& ./configure --enable-mconf --disable-gconf --disable-qconf --enable-static --prefix=/tools/kconfig-frontends \
&& make install && cd /tools && rm -rf nuttx-tools

# GN
package?

RUN npm install -g n && n 20.10.0 node && hash -r
# Install Zap
# 24.04 has a recent node, but we ensure specific version management
RUN npm install -g n && n 24.13.0 node && hash -r
@lupyuen do we always need node? maybe a conditional?
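A conditional install could be sketched with a Docker build argument; the `INSTALL_NODE` argument below is hypothetical and not part of the current Dockerfile:

```dockerfile
# Hypothetical build argument to skip Node.js when Zap is not needed.
ARG INSTALL_NODE=true
# Install the pinned Node.js version only when requested; skipping it
# saves image size and build time for jobs that never run Zap.
RUN if [ "$INSTALL_NODE" = "true" ]; then \
        npm install -g n && n 24.13.0 node && hash -r; \
    fi
```

Building with `--build-arg INSTALL_NODE=false` would then skip the step.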

###############################################################################
FROM nuttx-toolchain-base AS nuttx-toolchain-arm
# Download the latest ARM clang toolchain prebuilt by ARM
# ARM Clang 19.1.5
👍🏻

# ARM GCC 14.3
RUN mkdir -p gcc-arm-none-eabi && \
curl -s -L "https://developer.arm.com/-/media/Files/downloads/gnu/13.2.Rel1/binrel/arm-gnu-toolchain-13.2.Rel1-x86_64-arm-none-eabi.tar.xz" \
curl -s -L "https://developer.arm.com/-/media/Files/downloads/gnu/14.3.Rel1/binrel/arm-gnu-toolchain-14.3.Rel1-x86_64-arm-none-eabi.tar.xz" \
👍🏻

BTW, are the project source code changes caused by the switch to GCC 14.3? Do they also occur with 14.2?

# ARM64 GCC 14.3
RUN mkdir gcc-aarch64-none-elf && \
curl -s -L "https://developer.arm.com/-/media/Files/downloads/gnu/13.2.Rel1/binrel/arm-gnu-toolchain-13.2.Rel1-x86_64-aarch64-none-elf.tar.xz" \
curl -s -L "https://developer.arm.com/-/media/Files/downloads/gnu/14.3.Rel1/binrel/arm-gnu-toolchain-14.3.Rel1-x86_64-aarch64-none-elf.tar.xz" \
👍🏻

# Build image for tool required by Pinguino builds
###############################################################################
FROM nuttx-toolchain-base AS nuttx-toolchain-pinguino
# Download the pinguino compilers. Note this includes both 8bit and 32bit
I would leave these comments; they provide some insight.


###############################################################################
# Build image for tool required by Renesas builds
# CRITICAL: We use ubuntu:22.04 here because compiling GCC 8.3 sources
@cederom cederom Jan 19, 2026

ugh, no packages provided?

can we ask renesas / ubuntu for packages?

that would save us lots of gh runners time!

###############################################################################
FROM nuttx-toolchain-base AS nuttx-toolchain-esp32
# Download the latest ESP32, ESP32-S2 and ESP32-S3 GCC toolchain prebuilt by Espressif
# ESP 14.2.0
They already seem to have 15.2.. but let's stick to 14.2 for now :-)

# This is used for the final images so make sure to not store apt cache
# Note: xtensa-esp32-elf-gdb is linked to libpython2.7

# Install dependencies
👍🏻

clang-tidy-21 \
&& rm -rf /var/lib/apt/lists/*

# Set Clang-21 as Default clang compiler
no need to --set cc and --set c++ as before ?
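If the defaults still need pinning, here is a sketch using `update-alternatives`; the priority value and paths are illustrative, not taken from the Dockerfile:

```dockerfile
# Register clang-21 as the default clang/clang++ and, as before, as the
# generic cc/c++ commands; the priority value 100 is arbitrary.
RUN update-alternatives --install /usr/bin/clang clang /usr/bin/clang-21 100 \
 && update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-21 100 \
 && update-alternatives --install /usr/bin/cc cc /usr/bin/clang-21 100 \
 && update-alternatives --install /usr/bin/c++ c++ /usr/bin/clang++-21 100
```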

# Configure Python Environment
# PEP 668 in Ubuntu 24.04 prevents global pip install.
# We explicitly allow breaking system packages for this CI container.
ENV PIP_BREAK_SYSTEM_PACKAGES=1
not using venv?

I know we save doubled disk space.. but Python became self-incompatible.. just worried whether that won't break the system?

what do you think @lupyuen ? :-)
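For comparison, a minimal venv-based alternative to `PIP_BREAK_SYSTEM_PACKAGES` could look like the sketch below; the `/tools/venv` path is an assumption, and it requires the `python3-venv` package to be installed:

```dockerfile
# Create an isolated virtual environment instead of modifying the system
# Python; PEP 668 restrictions do not apply inside a venv.
RUN python3 -m venv /tools/venv
# Put the venv first on PATH so its pip/python become the defaults for
# all later RUN steps and for the final image.
ENV PATH="/tools/venv/bin:$PATH"
```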

# We explicitly allow breaking system packages for this CI container.
ENV PIP_BREAK_SYSTEM_PACKAGES=1
ENV PIP_DISABLE_PIP_VERSION_CHECK=true
# This disables the cache with value 0. We do not want caching as it
Please keep the comments, they gave some insight into what and why :-)

# Install pytest
RUN pip3 install cxxfilt
RUN pip3 install construct
RUN pip3 install esptool==4.8.dev4
We may want the latest esptool (i.e. 4.1.0) as this is used to build images for the Espressif family of MCUs.. but we do NOT want version 5 (yet?) as it changed the basic command syntax ;-)

correct @tmedicci @fdcavalcanti ? :-)
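A hedged way to express "latest 4.x but not 5" is a pip version constraint; this is a sketch only, not validated against the CI:

```dockerfile
# Track the newest esptool 4.x release while excluding the 5.x series,
# whose basic command syntax changed.
RUN pip3 install "esptool>=4.8,<5"
```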

cederom commented Jan 19, 2026

Thank you @Barto22 amazing work! I left my remarks in the code :-)

  • Am I correct that this Docker image is created each time CI is launched, so we should keep it as light (in build time) as possible, or is it created once and then reused by all following CI invocations @lupyuen @xiaoxiang781216 @simbit18 @acassis @raiden00pl @linguini1 @michallenc @jerpelea ?

  • Regarding the build tools compilation, I am wondering how to offload the rest of the source code builds and just use pre-built packages?

    • The simplest way would be to have build host packages ready (Ubuntu 24 LTS), but that may not always be possible.
    • How about adding the missing tools to bigger projects like https://github.com/xpack-dev-tools ? :-)

lupyuen commented Jan 21, 2026

@cederom Might be good to keep our Docker Image light, in case it incurs more Compile Time (which implies more GitHub Runners). I think it's good to measure the Compile Times, before and after updating the Docker Image, so we understand the potential impact on GitHub Runners.

In any case: We shouldn't change anything in the Docker Image within the next 4 weeks. (Because of this)

@simbit18 I recall we're currently running out of Disk Space while running NuttX Builds in GitHub CI? How can we be sure that this PR won't cause problems for Disk Space? How do we check the Available Disk Space in the Docker Image for this PR? Thanks!

lupyuen commented Jan 21, 2026

@cederom This is a massive PR with changes across Code + Ubuntu OS + Python + GCC + Zig. Wonder if we should break into smaller PRs, then merge one PR per week, so we will know if something is messing up the NuttX Build or increasing the utilisation of GitHub Runners? Maybe we could upgrade Ubuntu OS as the first PR?

This PR needs plenty of monitoring, I will have to standby and watch closely for build errors + GitHub utilisation.

@simbit18 Anything else we can do to reduce the risk of this PR?

Barto22 commented Jan 21, 2026

Ok, what do you propose as the next steps?
I’m finishing all tests locally and everything seems to be working. There will still be some fixes needed in NuttX and NuttX apps — for example, functions returning a value even though they’re declared void, and an issue with include guard naming in NuttX apps.

How do you want to handle merging? Should I separate all build-related fixes into a dedicated PR's, and then submit another PR later with the updated Dockerfile?

Also, in my other PR where I’m updating libcxx/libcxxabi, I’m reverting from version 21 to 19 so maybe it can build with the older toolchain. I’ll update that PR shortly and test it with the current NuttX Docker image.

Let me know what you suggest, because updating the Dockerfile toolchain requires some code adjustments as well.

@xiaoxiang781216 xiaoxiang781216 linked an issue Jan 21, 2026 that may be closed by this pull request
@xiaoxiang781216

Ok, what do you propose as the next steps? I’m finishing all tests locally and everything seems to be working. There will still be some fixes needed in NuttX and NuttX apps — for example, functions returning a value even though they’re declared void, and an issue with include guard naming in NuttX apps.

How do you want to handle merging? Should I separate all build-related fixes into a dedicated PR's, and then submit another PR later with the updated Dockerfile?

Yes, it's better to merge the nuttx/apps change first.

lupyuen commented Jan 21, 2026

Yep I agree, we should first merge the code changes to NuttX Kernel and NuttX Apps. The Dockerfile will be merged later as one or more PRs.

Barto22 commented Jan 21, 2026

Ok, so I will close this PR then and put all the fixes in a separate PR.

cederom commented Jan 21, 2026

You can keep this PR with changes related to Docker update, we will preserve discussion and history that way.

Code related changes and fixes should go to a separate PR that should be merged before this one dedicated to Docker update :-)

Barto22 commented Jan 22, 2026

So here are two PRs with some smaller fixes: #18092 and #18094. I will provide fixes for nuttx-apps as well. I will close this one to avoid making a mess.

@Barto22 Barto22 closed this Jan 22, 2026
@simbit18

Hi everyone, here https://github.com/NuttX/nuttx-docker-testing/actions/runs/21285480349
you can check the impact on all jobs (Linux) with the NuttX Docker image created from @Barto22's Dockerfile.



Development

Successfully merging this pull request may close these issues.

[BUG] Support for GCC15
