
MMIO regions are artificially constrained to 2^24 bytes #182

Open
sleffler opened this issue Mar 11, 2024 · 5 comments

@sleffler
Contributor

sleffler commented Mar 11, 2024

Specify a large mmio region, e.g. in the board spec:

        "plic": {
            "start"  : 0x60000000,
            "length" :  0x8000000
        },

This generates an import table entry for the built-in PLIC support:

        {
          "kind": "MMIO",
          "length": 134217728,
          "permits_load": true,
          "permits_load_mutable": false,
          "permits_load_store_capabilities": false,
          "permits_store": true,
          "start": 1610612736
        },

But the loader reports:

loader: Import table: 0x80001e70, 0x88 bytes
loader: Building mmio capability 0x10000000 + 0x100 (permissions: G RW---- -- ---)
loader: Building mmio capability 0x2000000 + 0x10000 (permissions: G RW---- -- ---)
loader: Skipping sealing type import
loader: Building mmio capability 0x60000000 + 0x0 (permissions: G RW---- -- ---)

which results in an invalid cap/ptr being constructed for plicPrios & co in StandardPlic::StandardPlic.

Looks like this is due to:

      /**
       * The size and permissions.  The size is currently used for static
       * sealed objects and MMIO imports.  The permissions part is used
       * only for MMIO regions.
       *
       * The size is stored in the low 24 bits.  We cannot provide
       * fine-grained capabilities beyond that size, leaving the top 8
       * bits free.  These encode a subset of permissions.  MMIO regions
       * cannot be local or executable.
       */
      size_t sizeAndPermissions;

but I don't see where these data are written, so I can't tell why the length/size reads back as zero (I assume they are read directly at startup).
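
As an illustration of why the length reads back as zero, here is a minimal, self-contained sketch of the packing the quoted comment describes: the size in the low 24 bits and a permission subset in the top 8. The names, permission values, and pack/size helpers are assumptions made for this sketch, not the actual loader code.

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    // Illustrative model of a 32-bit sizeAndPermissions field: size in the
    // low 24 bits, a permission subset in the top 8, per the quoted comment.
    // Helper names and permission values are invented for this sketch.
    namespace sketch {
    constexpr uint32_t SizeMask  = 0x00ff'ffff; // low 24 bits hold the size
    constexpr uint32_t PermShift = 24;          // top 8 bits hold permissions

    constexpr uint32_t pack(uint32_t length, uint8_t perms)
    {
        // Any length bits above bit 23 are silently dropped here.
        return (length & SizeMask) | (uint32_t{perms} << PermShift);
    }

    constexpr uint32_t size(uint32_t sizeAndPermissions)
    {
        return sizeAndPermissions & SizeMask;
    }
    } // namespace sketch

    int main()
    {
        // The PLIC region from the board spec is 0x8000000 bytes.  Its only
        // set bit is bit 27, outside the 24-bit size field, so the stored
        // size is zero and the loader builds a 0-length capability.
        uint32_t field = sketch::pack(0x8000000, /*perms=*/0x3);
        printf("stored size = 0x%x\n", sketch::size(field)); // prints 0x0
        assert(sketch::size(field) == 0);

        // A region that fits in 24 bits round-trips unchanged.
        assert(sketch::size(sketch::pack(0x400000, 0x3)) == 0x400000);
        return 0;
    }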

@davidchisnall
Collaborator

There's a slight disagreement between the linker and the loader here. The loader reserves the top 8 bits for future expansion, while the linker treats bits 24-27 as part of the length, so assigning those bits back to the length would not require a toolchain revision (assigning them to permissions would, but that will require more fields in the JSON report anyway).

There's no reason that we couldn't support larger sizes in the loader, but I wanted to reserve space if we wanted to add more fine-grained permissions.

For the PLIC specifically, it appears that you have a PLIC with over 32K HART contexts. This is almost certainly a lot more than you need. The PLIC that we have has a lot fewer contexts and so is much smaller.

The masking happens here. This is called here to build the capability. Masking to the low 24 bits gives a size() of 0, so we construct a 0-length capability.

Do you have a use case for MMIO regions over 2^24 bytes? On systems with a lot of memory, we may want the heap to be in that space. We could potentially reclaim some of the unused permission bits. I'd like to keep at least 1-2 bits in reserve, but I don't think we will need all of the 4 that I've currently reserved. If we introduced a new relocation, we should keep the size + an exponent in a fixed number of bits, but that's a bit more work.
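
To make the 'size plus an exponent' idea concrete, here is a hedged sketch of one possible shape for such an encoding within the existing 24 bits: an 18-bit mantissa and a 6-bit exponent. The split and the helper names are arbitrary assumptions for illustration, not a proposal for the actual relocation format.

    #include <cassert>
    #include <cstdint>

    // Illustrative mantissa-plus-exponent size encoding packed into 24 bits
    // (18-bit mantissa, 6-bit exponent).  Field widths are assumptions made
    // for this sketch only.
    namespace sketch {
    constexpr uint32_t MantissaBits = 18;
    constexpr uint32_t MantissaMask = (1u << MantissaBits) - 1;

    // Encode a size, rounding up to the next representable value so that
    // the encoded region is never smaller than the one requested.
    constexpr uint32_t encode(uint64_t length)
    {
        uint32_t exponent = 0;
        while (length > MantissaMask)
        {
            length = (length + 1) >> 1; // halve, rounding up
            ++exponent;
        }
        return static_cast<uint32_t>(length) | (exponent << MantissaBits);
    }

    constexpr uint64_t decode(uint32_t encoded)
    {
        uint64_t mantissa = encoded & MantissaMask;
        uint32_t exponent = encoded >> MantissaBits;
        return mantissa << exponent;
    }
    } // namespace sketch

    int main()
    {
        // Powers of two such as the 0x8000000-byte PLIC region round-trip
        // exactly, and regions far larger than 2^24 remain representable.
        assert(sketch::decode(sketch::encode(0x8000000)) == 0x8000000);
        assert(sketch::decode(sketch::encode(0x400000)) == 0x400000);
        // Sizes that are not exactly representable round up, never down.
        assert(sketch::decode(sketch::encode(sketch::MantissaMask + 3)) >=
               sketch::MantissaMask + 3);
        return 0;
    }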

@sleffler
Contributor Author

sleffler commented Mar 18, 2024 via email

@davidchisnall
Collaborator

> Not supporting a size is fine but the behaviour (silently trapping) is not.

Completely agreed on the second part. We should at least have an Invariant in the size() that checks that none of the reserved bits are set.

I expect that anyone doing bring-up on a new board will run with loader debug turned on (as you are doing), at least until first successful boot, and the loader should complain about invalid things, rather than just giving 0-length / untagged capabilities.
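
A minimal sketch of the kind of check suggested above, assuming a size() accessor over the sizeAndPermissions field; the struct name and the diagnostic are stand-ins for the loader's own types and debug/Invariant machinery, not the real code.

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Sketch of a size() accessor that refuses to silently truncate.  The
    // reserved-bits check and the error reporting are illustrative stand-ins
    // for the loader's actual Invariant/debug machinery.
    struct ImportEntrySketch
    {
        uint32_t sizeAndPermissions;

        static constexpr uint32_t SizeMask     = 0x00ff'ffff;
        static constexpr uint32_t ReservedMask = 0x0f00'0000; // bits 24-27

        uint32_t size() const
        {
            if (sizeAndPermissions & ReservedMask)
            {
                // Loudly report a length that cannot be represented instead
                // of quietly returning 0 and building a 0-length capability.
                fprintf(stderr,
                        "MMIO import length 0x%x exceeds the 2^24-byte limit\n",
                        sizeAndPermissions & (SizeMask | ReservedMask));
                abort();
            }
            return sizeAndPermissions & SizeMask;
        }
    };

    int main()
    {
        ImportEntrySketch ok{0x400000};   // a 0x400000-byte import: fine
        printf("size = 0x%x\n", ok.size());
        ImportEntrySketch bad{0x8000000}; // 0x8000000 bytes: trips the check
        bad.size();                       // reports and aborts
        return 0;
    }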

> This is real hw. I will double-check the size.

The PLIC has different regions for different HART contexts. My expectation was that a multi-core scheduler (note: ours is not currently multi-HART-aware) would import the different contexts as different MMIO capabilities. Our existing targets that use a RISC-V standard PLIC import it with a 0x400000-byte length.

> At the moment I'm just trying to understand the issues of porting code from an existing platform to one w/ just cheri(ot) added in the simplest way. A real system with cheri support will likely be rather different, so it's hard to say whether or not 2^24 is sufficient.

Please also file issues against the book if there are areas where the documentation is lacking (I am aware that there are quite a few, but help prioritising the ones that people hit first would be very useful). This is still quite work-in-progress, but it's a priority for me to kick it into a usable state in the next few months.

davidchisnall added a commit that referenced this issue Mar 19, 2024
This addresses the 'shouldn't silently fail' part of #182.  A subsequent
change may allow it to also work, depending on use cases.
@davidchisnall
Collaborator

I've merged a fix that stops this being a silent failure (if you compile with loader-debug enabled), but I'm leaving this open to track whether we should relax these requirements.

davidchisnall changed the title from "large mmio region is mis-handled" to "MMIO regions are artificially constrained to 2^24 bytes" on Mar 21, 2024
@davidchisnall
Collaborator

Updating the title to reflect the unaddressed part of this.
