boards/am67/t3-gem-o1: Add support for T3 Gemstone O1 board
Summary
This PR introduces basic support for the T3 Gemstone O1 (t3-gem-o1) development board, including the board configuration, linker scripts, and the driver support needed for NSH. Currently, only the UART console is supported. All files and configuration needed to build and run NuttX on this TI AM67-based board are included.
For more information about the board:
- Website: https://www.t3gemstone.org/en
- Board Specs: https://docs.t3gemstone.org/en/boards/o1/introduction
- Documentation: https://docs.t3gemstone.org/en/projects/nuttx
Impact
- Adds support for the TI AM67 chip.
- Adds a new TI AM67-based board named t3-gem-o1.
- Provides defconfigs:
  - nsh
Testing
Tested on host: Ubuntu 24.04 (Noble), with the arm-none-eabi-gcc (15:13.2.rel1-2) 13.2.1 20231009 toolchain.
Board: t3-gem-o1
nsh
Configure and build:
```
❯ ./tools/configure.sh -E -l t3-gem-o1:nsh
❯ make -s -j$(nproc)
```
Open NuttShell on UART-MAIN1:
```
❯ picocom -b 115200 /dev/ttyACM0

NuttShell (NSH) NuttX-12.11.0
nsh> cat proc/version
NuttX version 12.11.0 8bdbb8c7d5-dirty Oct 22 2025 14:15:42 t3-gem-o1:nsh
nsh> help
help usage:  help [-v] [<cmd>]

    .         cmp       fdinfo    mount     rptun     unset
    [         dirname   free      mv        set       uptime
    ?         df        help      pidof     sleep     usleep
    alias     dmesg     hexdump   printf    source    watch
    unalias   echo      kill      ps        test      xd
    basename  env       pkill     pwd       time
    break     exec      ln        readlink  true
    cat       exit      ls        rm        truncate
    cd        expr      mkdir     rmdir     uname
    cp        false     mkrd      rpmsg     umount

Builtin Apps:
    dd        nsh       ostest    sh
nsh>
```
@linguini1
I see this has ARM64 cores and R5F cores, and this is just support for the R5F cores. I suppose that's why it lives under arch/arm instead of arch/arm64. Have you considered getting NuttX to run on the A53 cores (that's already supported on other boards)? I wonder what the approach would be for that scenario, two different NuttX images on a per-core-type basis? Just curious, nothing to do with this PR.
Real-time Linux is deployed on the four Cortex-A53 cores. Those cores provide the high-performance compute power needed for demanding workloads such as image-processing pipelines, vision algorithms, and other application-level tasks.
For safety-critical functions, such as the autopilot control loop, we plan to run NuttX on the dedicated Cortex-R5F cores. The R5F cores are purpose-built for deterministic, low-latency execution, which makes them ideal for hard real-time control code.
Communication between the two domains will go through the RPMsg framework. RPMsg provides a lightweight, message-based inter-processor communication (IPC) channel that lets the Linux side exchange data with the NuttX side while preserving isolation and real-time guarantees.
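To make the planned IPC path concrete, here is a minimal sketch of the Linux side of such a link, assuming the kernel's rpmsg char driver exposes the NuttX endpoint as a character device. The device path /dev/rpmsg0 and the "setpoint" message are illustrative assumptions, not part of this PR:

```c
/* Hypothetical Linux user-space peer for a NuttX RPMsg endpoint.
 * Assumes the rpmsg char driver has bound the endpoint to /dev/rpmsg0;
 * the device path and payload layout are illustrative only.
 */

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  char buf[256];
  ssize_t n;

  /* Each open rpmsg char device is one point-to-point endpoint. */
  int fd = open("/dev/rpmsg0", O_RDWR);
  if (fd < 0)
    {
      perror("open /dev/rpmsg0");
      return 1;
    }

  /* write() sends one RPMsg message to the remote (NuttX) side. */
  const char cmd[] = "setpoint 42";
  if (write(fd, cmd, sizeof(cmd)) < 0)
    {
      perror("write");
      close(fd);
      return 1;
    }

  /* read() blocks until the NuttX side answers with a message of its own. */
  n = read(fd, buf, sizeof(buf) - 1);
  if (n > 0)
    {
      buf[n] = '\0';
      printf("reply from R5F: %s\n", buf);
    }

  close(fd);
  return 0;
}
```

On the NuttX side, the same exchange would be handled by an RPMsg endpoint; the exact setup depends on which RPMsg transport this port eventually enables.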
@erkan-vatan please squash patch 4 into patch 3 and patch 5 into patch 2.
@erkan-vatan the commits are missing a signature (`git commit -s`):
```
ff7c556d60 Merge 0ecaf44ac8a5086d025b9007aa07f31fbf96a772 into b2e823d0b981b10540acc8d1902b5adba4750357
0ecaf44ac8 docs/boards: Add documentation for t3-gem-o1 board.
c29a526de3 boards/arm/am67: Add support for t3-gem-o1 board.
0164001944 arch/arm/am67: Add support for TI AM67 chips.
```
```
../nuttx/tools/checkpatch.sh -c -u -m -g b2e823d0b981b10540acc8d1902b5adba4750357..HEAD
Missing Signed-off-by
Missing Signed-off-by
Missing Signed-off-by
```
https://github.com/apache/nuttx/actions/runs/18779303663/job/53582075265?pr=17229#logs
@linguini1
I ran into an issue while adding CMake build-system support. In my .vectors section, _vector_start resides at address 0x0, whereas __start is placed at 0x40. My hardware requires execution to begin at _vector_start, but the linker keeps using __start as the default entry point regardless of my attempts to change it. With a Makefile-based build I was able to fix the problem by defining:

```make
LDFLAGS += --entry=_vector_start
```

inside the arch/arm/src/am67/Make.defs file. I'm not sure this is the proper solution, but I couldn't find an alternative. Declaring ENTRY(_vector_start) in the linker script has no effect. How can I replicate this behavior in CMakeLists.txt?
@simbit18 any ideas?
@linguini1
I added CMake build-system support. To resolve the problem, I defined the link option directly in arch/arm/src/am67/CMakeLists.txt:

```cmake
target_link_options(nuttx PRIVATE -Wl,--entry=_vector_start)
```
@erkan-vatan The chip has an MPU, so why isn't it enabled in the configuration (defconfig)?
In this case, it might be better to proceed as follows:
- enable CONFIG_ARM_MPU=y in
  https://github.com/t3gemstone/nuttx/blob/c0eedfef556bc709229b6624d3612f62c6ccee65/boards/arm/am67/t3-gem-o1/configs/nsh/defconfig

- add in arch/arm/src/armv7-r/CMakeLists.txt
  (https://github.com/t3gemstone/nuttx/blob/c0eedfef556bc709229b6624d3612f62c6ccee65/arch/arm/src/armv7-r/CMakeLists.txt#L58):

  ```cmake
  if(CONFIG_ARM_MPU OR CONFIG_BUILD_PROTECTED)
    list(APPEND SRCS arm_mpu.c)
  endif()
  ```

- add in arch/arm/src/armv7-r/Make.defs
  (https://github.com/t3gemstone/nuttx/blob/c0eedfef556bc709229b6624d3612f62c6ccee65/arch/arm/src/armv7-r/Make.defs#L49C1-L51C6):

  ```make
  ifneq ($(filter y,$(CONFIG_ARM_MPU) $(CONFIG_BUILD_PROTECTED)),)
    CMN_CSRCS += arm_mpu.c
  endif
  ```

- remove this from arch/arm/src/am67/CMakeLists.txt
  (https://github.com/t3gemstone/nuttx/blob/c0eedfef556bc709229b6624d3612f62c6ccee65/arch/arm/src/am67/CMakeLists.txt#L30):

  ```cmake
  ../${ARCH_SUBDIR}/arm_mpu.c
  ```

- remove this from arch/arm/src/am67/Make.defs
  (https://github.com/t3gemstone/nuttx/blob/c0eedfef556bc709229b6624d3612f62c6ccee65/arch/arm/src/am67/Make.defs#L33C1-L33C24):

  ```make
  CHIP_CSRCS += arm_mpu.c
  ```
@simbit18 When the MPU was enabled in the configuration, the build inserted the MPU enable routine at the very beginning of the program, so execution never reached this line: https://github.com/t3gemstone/nuttx/blob/c0eedfef556bc709229b6624d3612f62c6ccee65/arch/arm/src/am67/am67_mpuinit.c#L67. Since we couldn't disable the MPU once it was already enabled, we had to disable it in the configuration first, adjust the settings, and then re-enable the MPU later in the flow. I know this is a bit of a shortcut. Is that okay, or do you have a suggestion?
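For reference, on a Cortex-R5F the MPU is controlled by the M bit of the CP15 SCTLR register, so a disable/configure/re-enable sequence can also be done at runtime. The sketch below is a minimal illustration of that sequence, assuming privileged execution early in boot; configure_mpu_regions() is a hypothetical placeholder, not the code in this PR:

```c
/* Minimal sketch: toggle the ARMv7-R MPU via the SCTLR.M bit (CP15).
 * Assumes privileged execution; configure_mpu_regions() is a
 * hypothetical placeholder for the actual region programming.
 */

#include <stdint.h>

#define SCTLR_M (1u << 0) /* MPU enable bit in SCTLR */

static inline uint32_t read_sctlr(void)
{
  uint32_t v;
  __asm__ volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r"(v));
  return v;
}

static inline void write_sctlr(uint32_t v)
{
  __asm__ volatile ("mcr p15, 0, %0, c1, c0, 0" :: "r"(v) : "memory");
  __asm__ volatile ("dsb\n\tisb" ::: "memory");
}

extern void configure_mpu_regions(void); /* hypothetical placeholder */

void mpu_reconfigure(void)
{
  /* 1. Disable the MPU so region registers can be changed safely. */
  write_sctlr(read_sctlr() & ~SCTLR_M);

  /* 2. Program the MPU regions (base, size, access permissions). */
  configure_mpu_regions();

  /* 3. Re-enable the MPU; the dsb/isb barriers make it take effect. */
  write_sctlr(read_sctlr() | SCTLR_M);
}
```

If the common armv7-r arm_mpu.c already exposes equivalent enable/disable helpers, reusing those would be cleaner than open-coded CP15 accesses.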
@AbduNaber OK, but the change I described above only affects the Make and CMake builds. Have you tried it? I ran a test locally and the build is fine; of course, I can't test it on a board.