Commit 3421816: Merge #323
323: link.x.in: put most __[se] symbols back into sections r=adamgreig a=jordens

This puts most start/end address symbols back into the sections.

Only `__ebss` and `__edata` are kept outside their sections so that
user code and external libraries can inject sections using
`INSERT AFTER .bss`/`.data` and benefit from the .bss zeroing and .data
loading mechanisms. Moving the other symbols back inside also gives
`__sbss` and `__veneer_base` the correct section type (bss is B, not D,
in `nm` output).
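
For example (a sketch only, not part of this change; the `.network_buffers` section and its contents are made up for illustration), a user `memory.x` could add an extra zero-initialised RAM section and let it ride on the `.bss` zeroing mechanism:

```
/* Hypothetical user fragment in memory.x. INSERT AFTER .bss places this
   section between .bss and the top-level __ebss assignment, so __ebss is
   pushed past it and the startup code zeroes it together with .bss.
   Keep the output region (RAM) unchanged, as the linker script comments
   require. */
SECTIONS
{
  .network_buffers (NOLOAD) : ALIGN(4)
  {
    *(.network_buffers .network_buffers.*);
    . = ALIGN(4);
  } > RAM
} INSERT AFTER .bss;
```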

The TrustZone veneer start and end addresses (`__veneer_base`,
`__veneer_limit`) are also aligned to 32 bytes, as the Security
Attribution Unit requires. Due to that alignment the section costs up to
28 bytes of padding at the end of FLASH even when empty (everything
before it in FLASH is kept 4-byte aligned, so the worst case is
32 - 4 = 28 bytes). But since FLASH sizes are typically a multiple of
32 bytes and the padding sits at the end, there is no practical
downside.

The start of .rodata is left for the linker to allocate after .text
instead of being pinned to `__etext`. This lets users inject sections
between .text and .rodata and removes the chance of overlapping-address
errors. By default the linker still places .rodata directly after
.text, as before.
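
As a sketch (the `.fw_metadata` name is illustrative, not part of this change), a user fragment can now slot a section in right after .text and the linker simply starts .rodata after it:

```
/* Hypothetical user fragment. Because .rodata no longer pins its start
   address to __etext, this section fits between .text and .rodata
   without triggering overlapping-address errors. */
SECTIONS
{
  .fw_metadata : ALIGN(4)
  {
    KEEP(*(.fw_metadata));
    . = ALIGN(4);
  } > FLASH
} INSERT AFTER .text;
```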

This commit also adds and exposes a few more stable start/end address
symbols (`__[se]uninit`, `__stext`, `__srodata`) that are useful for
debugging and for hooking into.
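
For instance (a sketch only; the 128K budget and placing the check in a user `memory.x` are assumptions, not part of this change), the stable symbols allow link-time checks such as:

```
/* Hypothetical link-time budget check using the stable symbols;
   the 128K figure is made up for illustration. */
ASSERT((__etext - __stext) + (__erodata - __srodata) <= 128K,
       "ERROR: .text plus .rodata exceed the assumed 128K budget");
```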

See
rust-embedded/cortex-m-rt#287 (comment)
for discussion of the issues and description of this compromise solution.

Tested:

* [x] [stm32h7 ITCM](quartiq/stabilizer#322)
* [x] [sgstubs](rust-embedded/cortex-m-rt#323 (comment))
* [x] `INSERT AFTER` for bss/data still works (rust-embedded/cortex-m-rt#323 (comment))

Topics:

* [x] `sgstubs` moved to be the last section in FLASH to minimize the impact of the 32-byte alignment. (Padding FLASH to 32 bytes is considered benign.)
* [ ] `INSERT AFTER` does not work with binutils ld, but that is independent of these changes. This affects the `sgstubs`-in-`memory.x` use case; for now the `sgstubs` section is kept in `link.x`.

Co-authored-by: Robert Jördens <[email protected]>
bors[bot] and jordens authored Apr 15, 2021
2 parents bbdab5a + 2b0baa6 commit 3421816
Showing 1 changed file with 33 additions and 24 deletions.
cortex-m-rt/link.x.in
@@ -86,6 +86,7 @@ SECTIONS
   /* ### .text */
   .text _stext :
   {
+    __stext = .;
     *(.Reset);
 
     *(.text .text.*);
@@ -96,34 +97,22 @@
     *(.HardFault.*);
 
     . = ALIGN(4); /* Pad .text to the alignment to workaround overlapping load section bug in old lld */
+    __etext = .;
   } > FLASH
-  . = ALIGN(4); /* Ensure __etext is aligned if something unaligned is inserted after .text */
-  __etext = .; /* Define outside of .text to allow using INSERT AFTER .text */
 
   /* ### .rodata */
-  .rodata __etext : ALIGN(4)
+  .rodata : ALIGN(4)
   {
+    . = ALIGN(4);
+    __srodata = .;
     *(.rodata .rodata.*);
 
     /* 4-byte align the end (VMA) of this section.
        This is required by LLD to ensure the LMA of the following .data
        section will have the correct alignment. */
     . = ALIGN(4);
+    __erodata = .;
   } > FLASH
-  . = ALIGN(4); /* Ensure __erodata is aligned if something unaligned is inserted after .rodata */
-  __erodata = .;
 
-  /* ### .gnu.sgstubs
-     This section contains the TrustZone-M veneers put there by the Arm GNU linker. */
-  . = ALIGN(32); /* Security Attribution Unit blocks must be 32 bytes aligned. */
-  __veneer_base = ALIGN(4);
-  .gnu.sgstubs : ALIGN(4)
-  {
-    *(.gnu.sgstubs*)
-    . = ALIGN(4); /* 4-byte align the end (VMA) of this section */
-  } > FLASH
-  . = ALIGN(4); /* Ensure __veneer_limit is aligned if something unaligned is inserted after .gnu.sgstubs */
-  __veneer_limit = .;
-
   /* ## Sections in RAM */
   /* ### .data */
@@ -134,35 +123,55 @@
     *(.data .data.*);
     . = ALIGN(4); /* 4-byte align the end (VMA) of this section */
   } > RAM AT>FLASH
-  . = ALIGN(4); /* Ensure __edata is aligned if something unaligned is inserted after .data */
+  /* Allow sections from user `memory.x` injected using `INSERT AFTER .data` to
+   * use the .data loading mechanism by pushing __edata. Note: do not change
+   * output region or load region in those user sections! */
+  . = ALIGN(4);
   __edata = .;
 
   /* LMA of .data */
   __sidata = LOADADDR(.data);
 
+  /* ### .gnu.sgstubs
+     This section contains the TrustZone-M veneers put there by the Arm GNU linker. */
+  /* Security Attribution Unit blocks must be 32 bytes aligned. */
+  /* Note that this pads the FLASH usage to 32 byte alignment. */
+  .gnu.sgstubs : ALIGN(32)
+  {
+    . = ALIGN(32);
+    __veneer_base = .;
+    *(.gnu.sgstubs*)
+    . = ALIGN(32);
+    __veneer_limit = .;
+  } > FLASH
+
   /* ### .bss */
-  . = ALIGN(4);
-  __sbss = .; /* Define outside of section to include INSERT BEFORE/AFTER symbols */
   .bss (NOLOAD) : ALIGN(4)
   {
+    . = ALIGN(4);
+    __sbss = .;
     *(.bss .bss.*);
     *(COMMON); /* Uninitialized C statics */
     . = ALIGN(4); /* 4-byte align the end (VMA) of this section */
   } > RAM
-  . = ALIGN(4); /* Ensure __ebss is aligned if something unaligned is inserted after .bss */
+  /* Allow sections from user `memory.x` injected using `INSERT AFTER .bss` to
+   * use the .bss zeroing mechanism by pushing __ebss. Note: do not change
+   * output region or load region in those user sections! */
+  . = ALIGN(4);
   __ebss = .;
 
   /* ### .uninit */
   .uninit (NOLOAD) : ALIGN(4)
   {
     . = ALIGN(4);
+    __suninit = .;
     *(.uninit .uninit.*);
     . = ALIGN(4);
+    __euninit = .;
   } > RAM
 
-  /* Place the heap right after `.uninit` */
-  . = ALIGN(4);
-  __sheap = .;
+  /* Place the heap right after `.uninit` in RAM */
+  PROVIDE(__sheap = __euninit);
 
   /* ## .got */
   /* Dynamic relocations are unsupported. This section is only used to detect relocatable code in
