
Boot mem handling #7039

Merged
merged 20 commits into from
Dec 12, 2024

Conversation

jenswi-linaro (Contributor)

This pull request handles previously unused memory during boot. Currently that covers mostly temporary memory allocations, but it will be extended in the next pull requests.

The commit "core: mm: allocate temporary memory map array" demonstrates this by using dynamic allocation instead of hard-coded allocation of static_mmap_regions[].
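
As background for readers, the "stack-like memory allocations during boot" that CFG_BOOT_MEM provides can be modeled as a bump allocator with a release mark. This is an illustrative sketch only, with hypothetical names and much simplified semantics (OP-TEE's actual boot_mem code also handles temporary allocations from the opposite end of the region, as the "Tempalloc"/"Releasing" lines in the logs further down show):

```c
#include <stdint.h>
#include <stddef.h>

/* Fixed region the boot allocator hands out memory from (stand-in pool). */
static char boot_pool[4096];
static uintptr_t mem_cur, mem_end;

static void boot_mem_init(uintptr_t start, uintptr_t end)
{
	mem_cur = start;
	mem_end = end;
}

/* Carve an aligned allocation from the bottom of the region. */
static void *boot_mem_alloc(size_t len, size_t align)
{
	uintptr_t va = (mem_cur + align - 1) & ~(uintptr_t)(align - 1);

	if (va + len > mem_end)
		return NULL;
	mem_cur = va + len;
	return (void *)va;
}

/* Stack-like discipline: save a mark, then release everything
 * allocated after it in one step when boot no longer needs it. */
static uintptr_t boot_mem_mark(void) { return mem_cur; }
static void boot_mem_release(uintptr_t mark) { mem_cur = mark; }
```

The stack discipline is what makes the scheme cheap: there is no per-allocation metadata, and releasing is a single pointer assignment.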

@jforissier (Contributor) left a comment

For "mk/clang.mk: -Wno-gnu-alignof-expression":

s/dereferensing/dereferencing/
s/CLANG/Clang/

Reviewed-by: Jerome Forissier <[email protected]>

@jforissier (Contributor) left a comment

  • For "core: arm64: increase thread stack size for debug":
    Reviewed-by: Jerome Forissier <[email protected]>


@jenswi-linaro (Contributor Author)

OK to squash in the fixes and move up the last commits a bit?

@jforissier (Contributor)

OK to squash in the fixes and move up the last commits a bit?

Fine with me

@jenswi-linaro (Contributor Author)

Tags applied, squashed in fixes, and moved the last added commits a bit earlier to fix problems before they occur.

@jforissier (Contributor) left a comment

  • For "core: add VCORE_FREE_{PA,SZ,END_PA}":
    s/CVORE/VCORE/
    s/paractise/practice/
    Acked-by: Jerome Forissier <[email protected]>

@jforissier (Contributor) left a comment

  • For "core: mm: allow unmapping VCORE_FREE":
    Reviewed-by: Jerome Forissier <[email protected]>

@jforissier (Contributor) left a comment

Comment on "core: arm,pager: make __vcore_init_ro_start follow __vcore_init_rx_end".

Comment on lines 504 to 505
ASSERT(__vcore_init_ro_start == __vcore_init_ro_end,
"__vcore_init_ro_start should follow __vcore_init_ro_end")
Contributor

s/__vcore_init_ro_end/__vcore_init_rx_end/? And in the commit description "after __vcore_init_rx_end"?

Contributor Author

You're right, I'll update the commit.

@jenswi-linaro (Contributor Author)

Force-pushed to update "core: arm,pager: make __vcore_init_ro_start follow __vcore_init_rx_end" and applied the tags.

@etienne-lms (Contributor) left a comment

Very minor review comments.

I was looking at the CFG_BOOT_MEM part and tried the patches on my Armv7 boards. Commit "core: arm: enable CFG_BOOT_MEM unconditionally" makes my platforms segfault at primary core init (only when the pager is disabled). I dumped the traces below, in case it helps:

I/TC: Early console on UART#4
D/TC:0   add_phys_mem:750 VCORE_UNPG_RX_PA type TEE_RAM_RX 0xde000000 size 0x00076000
D/TC:0   add_phys_mem:750 VCORE_UNPG_RW_PA type TEE_RAM_RW 0xde076000 size 0x0005a000
D/TC:0   add_phys_mem:750 VCORE_UNPG_RW_PA type SEC_RAM_OVERALL 0xde076000 size 0x0005a000
D/TC:0   add_phys_mem:750 VCORE_FREE_PA type TEE_RAM_RW 0xde0d0000 size 0x00130000
D/TC:0   merge_mmaps:703 Merging 0xde076000..0xde0cffff and 0xde0d0000..0xde1fffff
D/TC:0   add_phys_mem:750 VCORE_FREE_PA type SEC_RAM_OVERALL 0xde0d0000 size 0x00130000
D/TC:0   merge_mmaps:703 Merging 0xde076000..0xde0cffff and 0xde0d0000..0xde1fffff
D/TC:0   add_phys_mem:750 next_pa type SEC_RAM_OVERALL 0xde200000 size 0x01e00000
D/TC:0   merge_mmaps:703 Merging 0xde076000..0xde1fffff and 0xde200000..0xdfffffff
D/TC:0   add_phys_mem:750 GIC_BASE type IO_SEC 0xa0000000 size 0x00200000
...
D/TC:0   add_phys_mem:750 APB1_BASE type IO_NSEC 0x40000000 size 0x00200000
D/TC:0   add_va_space:794 type RES_VASPACE size 0x00a00000
D/TC:0   add_va_space:794 type SHM_VASPACE size 0x02000000
D/TC:0   dump_mmap_table:920 type SEC_RAM_OVERALL va 0xd76ec000..0xd9675fff pa 0xde076000..0xdfffffff size 0x01f8a000 (pgdir)
D/TC:0   dump_mmap_table:920 type IO_SEC       va 0xd9800000..0xd99fffff pa 0xa0000000..0xa01fffff size 0x00200000 (pgdir)
...
D/TC:0   dump_mmap_table:920 type IO_NSEC      va 0xdb000000..0xdb1fffff pa 0x40000000..0x401fffff size 0x00200000 (pgdir)
D/TC:0   dump_mmap_table:920 type RES_VASPACE  va 0xdb200000..0xdbbfffff pa 0x00000000..0x009fffff size 0x00a00000 (pgdir)
D/TC:0   dump_mmap_table:920 type SHM_VASPACE  va 0xdbe00000..0xdddfffff pa 0x00000000..0x01ffffff size 0x02000000 (pgdir)
D/TC:0   dump_mmap_table:920 type TEE_RAM_RX   va 0xde000000..0xde075fff pa 0xde000000..0xde075fff size 0x00076000 (smallpg)
D/TC:0   dump_mmap_table:920 type TEE_RAM_RW   va 0xde076000..0xde1fffff pa 0xde076000..0xde1fffff size 0x0018a000 (smallpg)
D/TC:0   core_mmu_xlat_table_alloc:527 xlat tables used 1 / 4
D/TC:0   core_mmu_xlat_table_alloc:527 xlat tables used 2 / 4
D/TC:0   core_mmu_xlat_table_alloc:527 xlat tables used 3 / 4
D/TC:0   carve_out_core_mem:2654 0xde000000 .. 0xde200000
D/TC:0   boot_mem_release_unused:152 Allocated 0 bytes at va 0xde0d0000 pa 0xde0d0000
D/TC:0   boot_mem_release_unused:156 Tempalloc 4380 bytes at va 0xde1feee4
D/TC:0   boot_mem_release_unused:178 Carving out 0xde000000..0xde0cffff
D/TC:0   boot_mem_release_unused:187 Releasing 1236992 bytes from va 0xde0d0000
I/TC: 
I/TC: Embedded DTB found
I/TC: OP-TEE version: 4.3.0-141-g0e3fb1edc (gcc version 11.3.1 20220712 (Arm GNU Toolchain 11.3.Rel1)) #36 Thu Sep 19 07:43:17 UTC 2024 arm
...
I/TC: Primary CPU initializing
...
D/TC:0 0 boot_mem_release_tmp_alloc:227 Releasing 8192 bytes from va 0xde1fe000
...
I/TC: Primary CPU switching to normal world boot
E/TC:0 0 
E/TC:0 0 Core data-abort at address 0xde0d0000 (translation fault)
E/TC:0 0  fsr 0x00002a07  ttbr0 0xde0ba000  ttbr1 0x00000000  cidr 0x0
E/TC:0 0  cpu #0          cpsr 0x800001d3
E/TC:0 0  r0 0xde0d0000      r4 0x2fff2d84    r8 0xde078828   r12 0x00000000
E/TC:0 0  r1 0xde1ff018      r5 0x00000000    r9 0xde0cf5c0    sp 0xde0cf5c0
E/TC:0 0  r2 0x00000040      r6 0xc0400000   r10 0xafaeadac    lr 0xde00021c
E/TC:0 0  r3 0x0000003f      r7 0x00000000   r11 0x00000000    pc 0xde000d24
E/TC:0 0 TEE load address @ 0xde000000
E/TC:0 0 Call stack:
E/TC:0 0  0xde000d24 dcache_cleaninv_range at core/arch/arm/kernel/cache_helpers_a32.S:53
E/TC:0 0  0xde00021c clear_bss at core/arch/arm/kernel/entry_a32.S:588

The data-abort occurs in reset_primary before the switch to the non-secure world, while flushing virtual memory that is not mapped:
core/arch/arm/kernel/entry_a32.S:

	/*
	 * In case we've touched memory that secondary CPUs will use before
	 * they have turned on their D-cache, clean and invalidate the
	 * D-cache before exiting to normal world.
	 */
588:	flush_cache_vrange(cached_mem_start, cached_mem_end)

Obviously not reproduced on vexpress-qemu_virt.
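
The failure reduces to range bookkeeping: the boot-exit flush covers [cached_mem_start, cached_mem_end), but part of that VA range has been unmapped by boot_mem_release_unused(), so cache maintenance by VA faults inside the released hole. A minimal sketch of the kind of fix needed (hypothetical helper name; addresses taken from the log above, where the release starts at 0xde0d0000 and TEE RAM ends at 0xde200000):

```c
#include <stdint.h>

typedef uintptr_t vaddr_t;

/* End of the VA range flushed when exiting to the normal world.
 * Initial value is the end of TEE RAM from the log above. */
static vaddr_t boot_cached_mem_end = 0xde200000UL;

/* Call when boot_mem_release_unused() unmaps memory from released_va
 * upward: pull the flush end back so dcache_cleaninv_range() never
 * touches unmapped VA space. */
static void shrink_boot_cached_mem_end(vaddr_t released_va)
{
	if (released_va < boot_cached_mem_end)
		boot_cached_mem_end = released_va;
}
```

This mirrors the `boot_cached_mem_end = va;` assignment visible in a later diff in this thread; the sketch only illustrates the invariant (flush range must stay within mapped memory), not OP-TEE's actual code.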

*/
if (IS_ENABLED(CFG_NS_VIRTUALIZATION)) {
paddr_t b1 = 0;
paddr_size_t s1 = 0;

Contributor

Commit "core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()":

  • Intentionally removed the info level trace IMSG("Initializing virtualization support"); ?
    (previously printed from core/arch/arm/kernel/boot.c)
  • inline comment above ... TA are loaded/executedNsec ... is pasted from the previous implementation but it is quite buggy. I think it's time to fix it.

Contributor Author

Commit "core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()":

  • Intentionally removed the info level trace IMSG("Initializing virtualization support"); ?
    (previously printed from core/arch/arm/kernel/boot.c)

Yes, I saw no point in keeping it.

  • inline comment above ... TA are loaded/executedNsec ... is pasted from the previous implementation but it is quite buggy. I think it's time to fix it.

Right, I'll remove it.

Contributor

Commit "core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()":

  • Intentionally removed the info level trace IMSG("Initializing virtualization support"); ?
    (previously printed from core/arch/arm/kernel/boot.c)

Yes, I saw no point in keeping it.

Do we have another way of knowing that the OP-TEE binary was compiled with virtualization support? Linux driver traces perhaps?

Contributor Author

I've added "core: arm: add CFG_NS_VIRTUALIZATION boot log" to take care of that.

@@ -15,6 +15,10 @@ include mk/cc-option.mk
# Only applicable when paging is enabled.
CFG_CORE_TZSRAM_EMUL_SIZE ?= 458752

# Mandatory for arch/arm to avoid ifdefs in the arch specific code
CFG_BOOT_MEM=y
Contributor

remove this line

Contributor Author

This one is needed or we'll get an error:

core/arch/arm/arm.mk:19: *** CFG_BOOT_MEM is set to 'n' (from file) but its value must be 'y'.  Stop.

This is due to the default value assigned in mk/config.mk

Contributor

I think we should rather have in mk/config.mk something like:

ifeq ($(ARCH),arm)
$(call force,CFG_BOOT_MEM,y)
else
CFG_BOOT_MEM ?= n
endif

Contributor Author

OK

#ifdef TEE_SDP_TEST_MEM_BASE
if (TEE_SDP_TEST_MEM_SIZE) {
pa = vaddr_to_phys(TEE_SDP_TEST_MEM_BASE);
carve_out_core_mem(pa, pa + TEE_SDP_TEST_MEM_BASE);
Contributor

s/TEE_SDP_TEST_MEM_BASE/TEE_SDP_TEST_MEM_SIZE/

{
tee_mm_entry_t *mm __maybe_unused = NULL;

DMSG("%#"PRIxPA" .. %#"PRIxPASZ, pa, end_pa);
Contributor

s/PRIxPASZ/PRIxPA/

@etienne-lms (Contributor)

To sum up, the cache flush operation at entry_a32.S:588 above flushes a VA range (TEE RAM) that includes unmapped VA areas (released boot-mem areas IIUC), which trigger a data-abort exception in the CPU. Testing a bit, it seems QEMU (at least the tool/config we use) does not trap such accesses as illegal.

@jenswi-linaro (Contributor Author)

To sum up, the cache flush operation at entry_a32.S:588 above flushes a VA range (TEE RAM) that includes unmapped VA areas (released boot-mem areas IIUC), which trigger a data-abort exception in the CPU. Testing a bit, it seems QEMU (at least the tool/config we use) does not trap such accesses as illegal.

Thanks for the detailed analysis, I hope the update fixes the problem.

@jenswi-linaro (Contributor Author)

Update, comments addressed.

@jforissier (Contributor)

For "core: arm: add CFG_NS_VIRTUALIZATION boot log":

Reviewed-by: Jerome Forissier <[email protected]>

@@ -934,7 +936,15 @@ static void init_primary(unsigned long pageable_part, unsigned long nsec_entry)

core_mmu_save_mem_map();
core_mmu_init_phys_mem();
boot_mem_release_unused();
va = boot_mem_release_unused();
if (IS_ENABLED(CFG_ARM32_core) && !IS_ENABLED(CFG_WITH_PAGER)) {
Contributor

Sorry, my comment was maybe ambiguous. Don't you expect the same kind of issues on 64-bit platforms?
I've tested on my 64-bit platform and indeed the same issue arises: a data abort on the data cache flush at core/arch/arm/kernel/entry_a64.S:432 on the VA range unmapped from boot_mem.c.

Contributor Author

You're right, I've pushed an update for that.

@@ -540,17 +536,17 @@ static void init_runtime(unsigned long pageable_part)
asan_memcpy_unchecked(hashes, tmp_hashes, hash_size);

/*
* Need physical memory pool initialized to be able to allocate
* secure physical memory below.
* The pager is about the be enabled below, evenual temporary boot
Contributor

s/evenual/eventual/ ?

Contributor Author

Thanks, I'll fix.

__vcore_free_start))
#define VCORE_FREE_END_PA ((unsigned long)__vcore_free_end)
#else
#define VCORE_FREE_PA PADDR_MAX
Contributor

Add an explicit inline comment like /* Pager consumes all core physical memory: no free area */?

Contributor Author

Sure, I'll add a comment.
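
The VCORE_FREE_{PA,SZ,END_PA} defines under discussion, including the pager fallback the suggested comment would document, can be sketched as follows. The values are hypothetical stand-ins for the linker symbols (the 0xde0d0000..0xde200000 free range comes from the boot log earlier in this thread); the real defines live in OP-TEE's headers:

```c
/* Sketch of the VCORE_FREE_* layout described in this thread.
 * Stand-in constants replace the __vcore_free_start/__vcore_free_end
 * linker symbols. */
#define PADDR_MAX (~0UL)

#ifndef CFG_WITH_PAGER
#define VCORE_FREE_PA     0xde0d0000UL
#define VCORE_FREE_END_PA 0xde200000UL
#define VCORE_FREE_SZ     (VCORE_FREE_END_PA - VCORE_FREE_PA)
#else
/* Pager consumes all core physical memory: no free area. */
#define VCORE_FREE_PA     PADDR_MAX
#define VCORE_FREE_END_PA PADDR_MAX
#define VCORE_FREE_SZ     0UL
#endif
```

With the sample values, VCORE_FREE_SZ works out to 0x130000, matching the "VCORE_FREE_PA ... size 0x00130000" line in the log.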

@jenswi-linaro (Contributor Author)

Update. Let me know if I should squash in the updates.

@etienne-lms (Contributor)

I'm fine with you squashing the fixup commits.

@jenswi-linaro (Contributor Author)

Squashed in the updates.

@etienne-lms (Contributor) left a comment

Some comments on commit "core: mm: replace MEM_AREA_TA_RAM".

case MEM_AREA_NEX_RAM_RW:
case MEM_AREA_TEE_ASAN:
case MEM_AREA_TA_RAM:
case MEM_AREA_SEC_RAM_OVERALL:
Contributor

Asan memory is still referenced with the MEM_AREA_TEE_ASAN memory area ID when CFG_WITH_PAGER is enabled. Won't this switch case need to be supported?

MEM_AREA_TEE_RAM is still used when CFG_CORE_RWDATA_NOEXEC is disabled. Is that an issue? Maybe it's a bit inconsistent to enable CFG_MEMTAG but not CFG_CORE_RWDATA_NOEXEC.

Is it OK here to no longer clear memtags for MEM_AREA_TEE_RAM_RW and MEM_AREA_NEX_RAM_RW areas?

Contributor Author

Asan memory is still referenced with the MEM_AREA_TEE_ASAN memory area ID when CFG_WITH_PAGER is enabled. Won't this switch case need to be supported?

mk/config.mk:999: *** CFG_CORE_SANITIZE_KADDRESS and CFG_MEMTAG are not compatible.  Stop.

I don't think we need to care about MEM_AREA_TEE_ASAN since it's not used with memory tagging enabled.

MEM_AREA_TEE_RAM is still used when CFG_CORE_RWDATA_NOEXEC is disabled. Is that an issue? Maybe it's a bit inconsistent to enable CFG_MEMTAG but not CFG_CORE_RWDATA_NOEXEC.

That shouldn't be a problem, everything should be covered by MEM_AREA_SEC_RAM_OVERALL nonetheless.

Is it OK here to no longer clear memtags for MEM_AREA_TEE_RAM_RW and MEM_AREA_NEX_RAM_RW areas?

The same here, it is covered by MEM_AREA_SEC_RAM_OVERALL.

However, while looking at logs etc. to double-check, I noticed:

D/TC:0   dump_mmap_table:923 type SEC_RAM_OVERALL va 0x12100000..0x122fffff pa 0x0e100000..0x0e2fffff size 0x00200000 (pgdir)
...
D/TC:0   mmap_clear_memtag:392 Clearing tags for VA 0x12000000..0x121fffff

We may have this error in more than one place, see commit f01690c ("core: fix mapping init debug trace") for the dump_mmap_table() print. I'll investigate further.

@@ -66,7 +66,7 @@ static struct tee_mmap_region static_mmap_regions[CFG_MMAP_REGIONS
#if defined(CFG_CORE_ASLR) || defined(CFG_CORE_PHYS_RELOCATABLE)
+ 1
#endif
+ 2] __nex_bss;
+ 3] __nex_bss;
Contributor

(commit "core: mm,pager: map remaining physical memory")
Since this is needed only when pager is enabled, I suggest an explicit case like:

#if defined(CFG_CORE_ASLR) || defined(CFG_CORE_PHYS_RELOCATABLE)
						+ 1
#endif
#ifdef CFG_WITH_PAGER
						/* Entry to cover the whole physical RAM */
						+ 1
#endif
						+ 2] __nex_bss;

*/
ADD_PHYS_MEM(MEM_AREA_TEE_RAM, ram_start,
VCORE_UNPG_RX_PA - ram_start);
ADD_PHYS_MEM(MEM_AREA_SEC_RAM_OVERALL, ram_start,
Contributor

This changes the mapping of the physical memory range from TEE_RAM_START to TEE_LOAD_ADDR, from read-only/tagged to read-write/tagged.
I see that some platforms (at least plat-ti with pager enabled) have a 4k page between TEE_RAM_START and TEE_LOAD_ADDR. Most platforms' default config also allows defining a CFG_TEE_LOAD_ADDR value different from TEE_RAM_START.

Maybe to discuss with @glneo.

Contributor Author

Would it make more sense only to map the gap between TEE_RAM_START and TEE_LOAD_ADDR in MEM_AREA_SEC_RAM_OVERALL?

Contributor

I think it would, unless some platform loads a TEE core image made from the core binary image prepended with some data expected to be mapped read-only. That would be strange, but the current implementation allows it.
Maybe this old scheme could be deprecated; in that case, at least the commit message should explicitly state it IMHO.

@@ -66,7 +66,7 @@ static struct tee_mmap_region static_mmap_regions[CFG_MMAP_REGIONS
#if defined(CFG_CORE_ASLR) || defined(CFG_CORE_PHYS_RELOCATABLE)
+ 1
#endif
+ 1] __nex_bss;
+ 2] __nex_bss;
Contributor

This change adds 1, 3 or 4 cells to static_mmap_regions (depending on CFG_NS_VIRTUALIZATION and CFG_CORE_RWDATA_NOEXEC).
Should this be reflected here? Or maybe add 4 cells unconditionally, to cover all cases? Or use explicit config cases (below; not very helpful, I guess)?

 static struct tee_mmap_region static_mmap_regions[CFG_MMAP_REGIONS
 #if defined(CFG_CORE_ASLR) || defined(CFG_CORE_PHYS_RELOCATABLE)
 						+ 1
 #endif
+#ifdef CFG_CORE_RWDATA_NOEXEC
+#ifdef CFG_NS_VIRTUALIZATION
+ 						+ 4
+#else
+						+ 3
+#endif
+#else
+						+ 1
+#endif /*CFG_CORE_RWDATA_NOEXEC*/
 						+ 1] __nex_bss;

Contributor Author

Yeah, let's do that to keep things simple.

@etienne-lms (Contributor) left a comment

A few review tags: Reviewed-by: Etienne Carriere <[email protected]> for commits:
"mk/clang.mk: -Wno-gnu-alignof-expression",
"core: arm64: increase thread stack size for debug",
"core: mm: add vaddr_to_phys()",
"core: remove CORE_MEM_TA_RAM",
"core: add VCORE_FREE_{PA,SZ,END_PA}",
"core: mm: allow unmapping VCORE_FREE",
"core: mm: unify secure core and TA memory",
"core: virt: phys_mem_core_alloc() use both pools",
"core: arm: core_mmu_v7.c: increase MAX_XLAT_TABLES by 2" and
"core: arm,pager: make __vcore_init_ro_start follow __vcore_init_rx_end"

A concern with commit "core: mm: map memory using requested block size".

panic("Impossible memory alignment");

if (map_is_tee_ram(mem_map->map + n))
mem_map->map[n].region_size = SMALL_PAGE_SIZE;
else
mem_map->map[n].region_size = CORE_MMU_PGDIR_SIZE;
Contributor

The previous implementation allowed page-mapping cacheable memory outside TEE RAM (MEM_AREA_RAM_NSEC, MEM_AREA_RAM_SEC, MEM_AREA_ROM_SEC). I fear memory firewall permission violations on speculative accesses to such memories, and likely a few other corner issues on some platform-specific memories.

Contributor Author

This is the requested granularity. It will not mean mapping more memory than specified, so some ranges will be mapped by a mix of small-page and pgdir-sized mappings (level 3 and level 2 with LPAE).

Take QEMU with hafnium as an example:

TEE_RAM_RWX  va 0x0e304000..0x0e503fff pa 0x0e304000..0x0e503fff size 0x00200000 (smallpg)
RES_VASPACE  va 0x0e600000..0x0effffff pa 0x00000000..0x009fffff size 0x00a00000 (pgdir)
SHM_VASPACE  va 0x0f000000..0x10ffffff pa 0x00000000..0x01ffffff size 0x02000000 (pgdir)
IO_SEC       va 0x11000000..0x1a080fff pa 0x09000000..0x12080fff size 0x09081000 (pgdir)
SEC_RAM_OVERALL va 0x1a300000..0x1affffff pa 0x0e300000..0x0effffff size 0x00d00000 (pgdir)

IO_SEC is then mapped as:

D/TC:0        [LVL2] VA:0x0011000000 PA:0x0009000000 DEV-RW-XN-S
D/TC:0        [LVL2] VA:0x0011200000 PA:0x0009200000 DEV-RW-XN-S
...
D/TC:0        [LVL2] VA:0x001a000000 TBL:0x000e3fa000 
D/TC:0          [LVL3] VA:0x001a000000 PA:0x0012000000 DEV-RW-XN-S
D/TC:0          [LVL3] VA:0x001a001000 PA:0x0012001000 DEV-RW-XN-S
...
D/TC:0          [LVL3] VA:0x001a080000 PA:0x0012080000 DEV-RW-XN-S
D/TC:0          [LVL3] VA:0x001a081000 INVALID

SEC_RAM_OVERALL as:

D/TC:0          [LVL3] VA:0x001a300000 PA:0x000e300000 MEM-RW-XN-S
D/TC:0          [LVL3] VA:0x001a301000 PA:0x000e301000 MEM-RW-XN-S
...
D/TC:0          [LVL3] VA:0x001a3ff000 PA:0x000e3ff000 MEM-RW-XN-S
D/TC:0        [LVL2] VA:0x001a400000 PA:0x000e400000 MEM-RW-XN-S
...
D/TC:0        [LVL2] VA:0x001ae00000 PA:0x000ee00000 MEM-RW-XN-S
D/TC:0        [LVL2] VA:0x001b000000 INVALID

So pgdir-sized region_size uses as few translation tables as possible, but it will not result in mapping beyond the requested range.
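
The mixed-granularity result above can be modeled in a few lines: walk the range and emit a pgdir-sized block wherever alignment and remaining length allow, a small page otherwise. The SEC_RAM_OVERALL range from the dump (va 0x1a300000, size 0xd00000) then splits into 256 small pages followed by six 2 MiB blocks, matching the LVL3/LVL2 entries shown. This is an illustrative model only, not OP-TEE's mapping code:

```c
#include <stdint.h>

#define SMALL_PAGE_SIZE	0x1000UL
#define PGDIR_SIZE	0x200000UL	/* 2 MiB level-2 block with LPAE */

struct map_count {
	unsigned long smallpg;
	unsigned long pgdir;
};

/* Count the translation entries for a page-aligned range [va, va + len):
 * use a pgdir-sized block when va is block-aligned and at least one
 * whole block remains, fall back to small pages elsewhere. */
static struct map_count split_range(uintptr_t va, unsigned long len)
{
	struct map_count c = { 0, 0 };
	uintptr_t end = va + len;

	while (va < end) {
		if (!(va & (PGDIR_SIZE - 1)) && end - va >= PGDIR_SIZE) {
			c.pgdir++;
			va += PGDIR_SIZE;
		} else {
			c.smallpg++;
			va += SMALL_PAGE_SIZE;
		}
	}
	return c;
}
```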

@jenswi-linaro (Contributor Author)

Can I squash in the updates?

@etienne-lms (Contributor)

Can I squash in the updates?

Fine with me.

@jenswi-linaro (Contributor Author)

Tags applied

@jenswi-linaro (Contributor Author)

Squashed and tag applied.

@etienne-lms (Contributor) left a comment

Acked-by: Etienne Carriere <[email protected]> for commit
"core: arm: enable CFG_BOOT_MEM unconditionally" with a minor comment.

Comments on commit "core: mm: allocate temporary memory map array",
otherwise LGTM: Reviewed-by: Etienne Carriere <[email protected]> with the alloc_count issue addressed (in case my fix proposal is valid).

Reviewed-by: Etienne Carriere <[email protected]> for commits
"core: initialize guest physical memory early",
"core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()" (with minor comments addressed) and
"core: arm: add CFG_NS_VIRTUALIZATION boot log".

boot_cached_mem_end = va;
}

IMSG_RAW("\n");
Contributor

Maybe move it up to line 939 so that the empty trace message remains the first OP-TEE trace when CFG_NS_VIRTUALIZATION=y (i.e. before IMSG("Initializing virtualization support")).

Contributor Author

OK

@@ -1583,7 +1620,16 @@ void __weak core_init_mmu_map(unsigned long seed, struct core_mmu_config *cfg)

check_sec_nsec_mem_config();

mem_map = static_memory_map;
if (IS_ENABLED(CFG_BOOT_MEM)) {
mem_map.alloc_count = MEM_AREA_MAXTYPE;
Contributor

s/MEM_AREA_MAXTYPE/CFG_MMAP_REGIONS/

Contributor Author

Ack

@@ -295,6 +295,7 @@ extern const unsigned long core_mmu_tee_load_pa;

void core_init_mmu_map(unsigned long seed, struct core_mmu_config *cfg);
void core_init_mmu_regs(struct core_mmu_config *cfg);
void core_mmu_save_mem_map(void);
Contributor

Maybe add a short description: copy the static memory map from boot_mem to the heap when CFG_BOOT_MEM is enabled.

Contributor Author

Ack


if (!IS_ENABLED(CFG_NS_VIRTUALIZATION)) {
static_assert(ARRAY_SIZE(secure_only) <= 2);
if (ARRAY_SIZE(secure_only) == 2) {
Contributor

Add an empty line above.

  • I think it would be worth restoring the info level trace message
    IMSG("Initializing virtualization support");

Contributor Author

Sure, I'll add an empty line.

We don't want it in this file. We're adding a new debug level message in "core: arm: add CFG_NS_VIRTUALIZATION boot log". Do you mean this print is important enough that the level should be info rather than debug? Debug level is the right level IMO.

Contributor

No, I'm fine if the old trace message is removed.

@jenswi-linaro (Contributor Author)

Addressed comments and added tags, except for "core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()". Please let me know if anything more is needed for that commit.

@etienne-lms (Contributor) left a comment

Reviewed-by: Etienne Carriere <[email protected]> for commit
"core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()".



Add -Wno-gnu-alignof-expression to the warnings flag for Clang in order to
avoid warnings like:
'_Alignof' applied to an expression is a GNU extension [-Werror,-Wgnu-alignof-expression]
when alignof() is applied to an expression, such as dereferencing a pointer
to get the alignment of the pointed-to type.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Increase STACK_THREAD_SIZE when CFG_CORE_DEBUG_CHECK_STACKS=y.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Add a wrapper function for virt_to_phys() using vaddr_t instead of a
void pointer.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
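
As background, the wrapper this commit describes can be sketched like this. The virt_to_phys() stub below just pretends an identity mapping for illustration (OP-TEE's real lookup consults the memory map); only the wrapper's shape is the point:

```c
#include <stdint.h>

typedef uintptr_t vaddr_t;
typedef uintptr_t paddr_t;

/* Stub standing in for OP-TEE's virt_to_phys(), which takes a void *.
 * Identity mapping for this sketch only. */
static paddr_t virt_to_phys(void *va)
{
	return (paddr_t)(uintptr_t)va;
}

/* The wrapper the commit describes: same lookup, but callers holding a
 * vaddr_t avoid a cast at every call site. */
static paddr_t vaddr_to_phys(vaddr_t va)
{
	return virt_to_phys((void *)va);
}
```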
The buffer attribute CORE_MEM_TA_RAM isn't used to query the status of a
buffer anywhere. So remove the attribute to allow future
simplifications.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Add VCORE_FREE_{PA,SZ,END_PA} defines to identify the unused and free
memory range at the end of TEE_RAM_START..(TEE_RAM_START +
TEE_RAM_VA_SIZE).

VCORE_FREE_SZ is 0 in a pager configuration since all the memory is
used by the pager.

The VCORE_FREE range is excluded from the TEE_RAM_RW area for
CFG_NS_VIRTUALIZATION=y and instead put in a separate NEX_RAM_RW area.
This makes each partition use a bit less memory and leaves the
VCORE_FREE range available for the Nexus.

The VCORE_FREE range is added to the TEE_RAM_RW area for the normal
configuration with CFG_NS_VIRTUALIZATION=n and CFG_WITH_PAGER=n. It's in
practice unchanged behaviour in this configuration.

Signed-off-by: Jens Wiklander <[email protected]>
Acked-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Allow unmapping core memory in the VCORE_FREE range when the original
boot mapping isn't needed any more.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Replace MEM_AREA_TA_RAM with MEM_AREA_SEC_RAM_OVERALL.

All read/write secure memory is covered by MEM_AREA_SEC_RAM_OVERALL,
sometimes using an aliased map. But secure read-only or execute core
memory is not covered as that would defeat the purpose of
CFG_CORE_RWDATA_NOEXEC.

Since the partition TA memory isn't accessed via MEM_AREA_TA_RAM any
longer, don't map it using the partition specific map.

This is needed by later patches, where unification of OP-TEE core and
physical TA memory becomes possible.

Signed-off-by: Jens Wiklander <[email protected]>
Acked-by: Etienne Carriere <[email protected]>
In configurations where secure core and TA memory is allocated from the
same contiguous physical memory block, carve out the memory needed by
OP-TEE core and make the rest available as TA memory.

This is needed by later patches where more core memory is allocated as
needed from the pool of TA memory.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
With CFG_NS_VIRTUALIZATION=y, let phys_mem_core_alloc() allocate from
both the core_pool and the ta_pool since both pools hold equally secure
memory. This is needed in later patches where some translation tables
are dynamically allocated from spare physical core memory.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
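The fallback behaviour can be sketched as trying the core pool first and falling back to the TA pool. The pool type and helper below are simplified assumptions, not the actual tee_mm API:

```c
/* Sketch (not the real OP-TEE tee_mm pools): a core allocation that
 * falls back to the TA pool when the core pool is exhausted, which is
 * fine with CFG_NS_VIRTUALIZATION=y since both pools are equally
 * secure. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct pool {
	uintptr_t next;
	uintptr_t end;
};

static uintptr_t pool_alloc(struct pool *p, size_t sz)
{
	uintptr_t pa;

	if (!p || p->end - p->next < sz)
		return 0;
	pa = p->next;
	p->next += sz;
	return pa;
}

static uintptr_t phys_mem_core_alloc_sketch(struct pool *core_pool,
					    struct pool *ta_pool, size_t sz)
{
	uintptr_t pa = pool_alloc(core_pool, sz);

	if (!pa)	/* core pool exhausted, fall back to TA memory */
		pa = pool_alloc(ta_pool, sz);
	return pa;
}

/* Empty core pool: the allocation is served from the TA pool instead */
static uintptr_t demo_fallback(void)
{
	struct pool core_pool = { .next = 0x1000, .end = 0x1000 };
	struct pool ta_pool = { .next = 0x2000, .end = 0x3000 };

	return phys_mem_core_alloc_sketch(&core_pool, &ta_pool, 0x100);
}
```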
Increase MAX_XLAT_TABLES by 2 to be able to map all TEE memory with 4k
pages.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
TEE memory is always supposed to be mapped with 4k pages for maximum
flexibility, but can_map_at_level() doesn't check the block size
requested for a region, so fix that. assign_mem_granularity() also
assigns smaller block sizes than necessary for page-aligned regions;
fix that by requesting 4k granularity only for TEE memory and PGDIR
granularity for the rest.

This is needed in later patches where some TEE memory is unmapped.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
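The rule being fixed can be stated compactly: TEE memory gets 4k granularity, everything else PGDIR granularity, and a translation level may only map a region with blocks no larger than that granularity. The helper names below mirror the commit message but the code itself is an illustrative guess:

```c
/* Sketch of the granularity rule: not the actual core_mmu code. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SMALL_PAGE_SIZE	0x1000UL	/* 4 KiB */
#define PGDIR_SIZE	0x200000UL	/* 2 MiB with 4 KiB granule */

/* TEE memory must stay 4k-mappable; other regions can use big blocks */
static size_t assign_granularity_sketch(bool is_tee_mem)
{
	return is_tee_mem ? SMALL_PAGE_SIZE : PGDIR_SIZE;
}

/* A level's block size may not exceed the region's granularity */
static bool can_map_at_level_sketch(size_t block_size, size_t granularity)
{
	return block_size <= granularity;
}
```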
This concerns configurations with CFG_WITH_PAGER=y. Until this patch,
__vcore_init_ro_start could hold a value smaller than
__vcore_init_rx_end even though __vcore_init_ro_size (VCORE_INIT_RO_SZ)
is 0 with CFG_CORE_RODATA_NOEXEC=n. To simplify code that needs to find
the end of the VCORE_INIT_RX and VCORE_INIT_RO parts of the binary,
make sure that __vcore_init_ro_start follows right after
__vcore_init_rx_end.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
For CFG_WITH_PAGER=y map the remaining memory following the
VCORE_INIT_RO memory to make sure that all physical TEE memory is mapped
even if VCORE_INIT_RO doesn't cover it entirely.

Later patches will build on this to make use of the temporarily unused
memory during boot.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Add CFG_BOOT_MEM to support stack-like memory allocations during boot,
before a heap has been configured.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
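A stack-like boot allocator can be sketched as two bump pointers into one free range: permanent allocations grow from the bottom and temporary ones from the top, so the temporary side can be discarded wholesale once boot is done. The function names echo the series, but this implementation is an assumption, not the actual boot_mem code:

```c
/* Minimal sketch of stack-like boot-time allocation. The names mirror
 * the boot_mem_*() API added by the series; the implementation is a
 * guess for illustration. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uintptr_t mem_start;	/* [mem_start, mem_end) is free */
static uintptr_t mem_end;

static void boot_mem_init_sketch(uintptr_t start, uintptr_t end)
{
	mem_start = start;
	mem_end = end;
}

/* Permanent allocation, grows upward from the bottom */
static void *boot_mem_alloc_sketch(size_t len, size_t align)
{
	uintptr_t va = (mem_start + align - 1) & ~(uintptr_t)(align - 1);

	if (va + len > mem_end)
		return NULL;
	mem_start = va + len;
	return (void *)va;
}

/* Temporary allocation, grows downward from the top */
static void *boot_mem_alloc_tmp_sketch(size_t len, size_t align)
{
	uintptr_t va = (mem_end - len) & ~(uintptr_t)(align - 1);

	if (va < mem_start)
		return NULL;
	mem_end = va;
	return (void *)va;
}

/* One permanent and one temporary allocation from a 4 KiB buffer;
 * returns the free bytes left in between, 0 on failure */
static size_t demo_free_bytes(void)
{
	static unsigned char buf[4096];
	void *perm;
	void *tmp;

	boot_mem_init_sketch((uintptr_t)buf, (uintptr_t)buf + sizeof(buf));
	perm = boot_mem_alloc_sketch(100, 8);
	tmp = boot_mem_alloc_tmp_sketch(256, 8);
	if (!perm || !tmp || tmp <= perm)
		return 0;
	return mem_end - mem_start;
}
```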
Add boot_cached_mem_end in C code, replacing the previously read-only
mapped cached_mem_end. This allows boot_cached_mem_end to be updated
after the MMU has been enabled.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Enable CFG_BOOT_MEM unconditionally and call the boot_mem_*() functions
as needed from entry_*.S and boot.c.

The pager will reuse all boot_mem memory internally when configured.
The non-pager configuration will unmap the memory and make it available
for TAs if needed.

__FLATMAP_PAGER_TRAILING_SPACE is removed from the link script,
collect_mem_ranges() in core/mm/core_mmu.c maps the memory following
VCORE_INIT_RO automatically.

Signed-off-by: Jens Wiklander <[email protected]>
Acked-by: Etienne Carriere <[email protected]>
With CFG_BOOT_MEM enabled, allocate a temporary memory map array using
boot_mem_alloc_tmp() instead of using the global static_mmap_regions[].
core_mmu_save_mem_map() is added and called from
boot_init_primary_late() before the temporary memory is reused.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
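The idea behind saving the map can be sketched as a copy from a generously sized temporary array into a permanent allocation of exactly the needed size, after which the temporary boot memory is free to be reused. The struct layout and function below are simplified stand-ins, not the actual core_mmu types:

```c
/* Sketch of the core_mmu_save_mem_map() idea: copy the memory map out
 * of temporary boot memory before that memory is reused. Types and
 * fields are simplified stand-ins. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct tee_mmap_region_sketch {
	unsigned long pa;
	size_t size;
	unsigned int attr;
};

static struct tee_mmap_region_sketch *
save_mem_map_sketch(const struct tee_mmap_region_sketch *tmp, size_t count)
{
	struct tee_mmap_region_sketch *map = calloc(count, sizeof(*map));

	if (map)
		memcpy(map, tmp, count * sizeof(*map));
	return map;	/* @tmp can now be released */
}

/* Copy a two-entry map and check the entries survived intact */
static int demo_save(void)
{
	struct tee_mmap_region_sketch tmp[2] = {
		{ .pa = 0x1000, .size = 0x2000, .attr = 1 },
		{ .pa = 0x4000, .size = 0x1000, .attr = 2 },
	};
	struct tee_mmap_region_sketch *map = save_mem_map_sketch(tmp, 2);
	int ok;

	if (!map)
		return 0;
	ok = map[0].attr == 1 && map[1].pa == 0x4000;
	free(map);
	return ok;
}
```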
Initialize guest physical memory in virt_guest_created() before the
first entry into the guest from normal world. This replaces the call to
core_mmu_init_phys_mem() in init_tee_runtime().

Remove unused code in core_mmu_init_phys_mem() and the now unused
functions core_mmu_get_ta_range() and virt_get_ta_ram().

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Move the implementation of core_mmu_init_virtualization() into
core_mmu_init_phys_mem().

This simplifies init_primary() in core/arch/arm/kernel/boot.c.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Add a log entry when CFG_NS_VIRTUALIZATION is enabled, for example:
D/TC:0 0   boot_init_primary_late:1028 NS-Virtualization enabled, supporting 2 guests

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
@jenswi-linaro
Contributor Author

Tag applied. Rebased on master to resolve the conflict. All commits are reviewed. Thanks a lot for the help.

@jforissier jforissier merged commit 3ab39d2 into OP-TEE:master Dec 12, 2024
9 of 10 checks passed
@jenswi-linaro jenswi-linaro deleted the boot-mem branch December 12, 2024 15:12