diff --git a/CHANGELOG.md b/CHANGELOG.md index 00186d6..33c6ba9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,23 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [1.24.2] - +- Documentation consistency cleanup, changes are mostly aesthetic, the content is not changed: + - Expanding use of fixed width fonts for file, function, argument, variable, macro, ... names. + - Fixed missing capitalization and punctuation, mostly for lists. + - Capitalization of acronyms and proper nouns: + - python => Python, + - riscof => RISCOF, + - isa => ISA, + - yaml => YAML, + - verilator => Verilator, + - cli => CLI, + - makefile => Makefile (depending on context), + - ... + - Few spelling and grammar fixes. + - Few instances where names in documentation did not match names in the source code. + These were either copy/paste errors or code changes not propagated to documentation. + ## [1.24.1] - 2022-07-19 - Account for the same test to be included with both XLEN variants in the isa generation. - Add markdown report for coverage statistics. diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index ea99f49..929a02b 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -68,7 +68,7 @@ be followed while assigning a new version number : Note: You can have either a patch or minor or major update. Note: In case of a conflict, the maintainers will decide the final version to be assigned. -To update the version of the python package for deployment you can use the following:: +To update the version of the Python package for deployment you can use the following:: $ bumpversion --no-tag --config-file setup.cfg patch # possible: major / minor / patch diff --git a/PLUGINS.rst b/PLUGINS.rst index a7682f8..03d6f87 100644 --- a/PLUGINS.rst +++ b/PLUGINS.rst @@ -3,9 +3,9 @@ List of Reference RISCOF Plugins ================================ -This section provides a list of pre-built riscof-plugins which users can refer -to, to build plugins for their own DUT +This section provides a list of pre-built RISCOF plugins which users can refer +to, to build plugins for their own DUT: -- Spike: https://gitlab.com/incoresemi/riscof-plugins/-/tree/master/spike_parallel -- SAIL_cSim: https://gitlab.com/incoresemi/riscof-plugins/-/blob/master/sail_cSim/README.md -- InCore Plugins: https://gitlab.com/incoresemi/riscof-plugins (This is a collection of riscof based plugins for various targets hosted purely for reference.) +- `Spike <https://gitlab.com/incoresemi/riscof-plugins/-/tree/master/spike_parallel>`_, +- `SAIL_cSim <https://gitlab.com/incoresemi/riscof-plugins/-/blob/master/sail_cSim/README.md>`_, +- `InCore Plugins <https://gitlab.com/incoresemi/riscof-plugins>`_ (This is a collection of RISCOF based plugins for various targets hosted purely for reference.) diff --git a/docs/source/TestFormatSpec.adoc b/docs/source/TestFormatSpec.adoc index 901f339..0073c97 100644 --- a/docs/source/TestFormatSpec.adoc +++ b/docs/source/TestFormatSpec.adoc @@ -96,7 +96,7 @@ significant amount of the framework shall depend on the existence of these macro `RVTEST_ISA(isa_str)`:: - defines the Test Virtual Machine (TVM, the ISA being tested) + - - empty macro to specify the isa required for compilation of the test. + + - empty macro to specify the ISA required for compilation of the test. + - this is mandated to be present at the start of the test. diff --git a/docs/source/arch-tests.rst b/docs/source/arch-tests.rst index d0ad856..9d4c8bf 100644 --- a/docs/source/arch-tests.rst +++ b/docs/source/arch-tests.rst @@ -7,7 +7,7 @@ Running RISCV-ARCH-TESTS The following guide provides a walkthrough on how to run the tests available at the `riscv-arch-test `_ repository.
-The following assumes you have installed riscof as a cli on your system. If not, then please refer +The following assumes you have installed RISCOF as a CLI on your system. If not, then please refer to the :ref:`install_riscof` section for the same. @@ -17,7 +17,7 @@ Setup all the DUT and Ref Plugins 1. You will first need to install the SAIL C-emulator on your system. You can refer to the :ref:`plugin_models` section for steps on installing the SAIL C-emulator. - 2. You will then need to download/clone the ``sail_cSim`` riscof plugin. You can do this with the + 2. You will then need to download/clone the ``sail_cSim`` RISCOF plugin. You can do this with the following command: .. code-block:: console @@ -28,13 +28,13 @@ Setup all the DUT and Ref Plugins You will need the path of the `riscof-plugins` directory from the above repo for the next steps. - 3. You will also need to create a riscof-plugin for you own DUT. If you haven't already done so, + 3. You will also need to create a RISCOF plugin for your own DUT. If you haven't already done so, please refer to the :ref:`plugins` section for details on building one. -Create a config.ini file ------------------------- +Create a ``config.ini`` file +---------------------------- - 1. You will need to create a `config.ini` file with the following contents. + 1. You will need to create a ``config.ini`` file with the following contents. .. code-block:: ini @@ -77,7 +77,7 @@ Running Tests with RISCOF The above step will first create a database of the all tests from the ``rv32i_m`` directory (recursively). This database can be found in the `riscof_work/database.yaml` file that is - generated. From this database, RISCOF selects the applicable test depending on the ISA yaml + generated. From this database, RISCOF selects the applicable test depending on the ISA YAML provided and then runs them first on the DUT and then on the REFERENCE plugins. The end, it compares the signatures and provides an html report of the result. diff --git a/docs/source/commands.rst b/docs/source/commands.rst index 8585ab1..f482e1c 100644 --- a/docs/source/commands.rst +++ b/docs/source/commands.rst @@ -8,81 +8,81 @@ RISCOF Commands This section provides an overview and working of the various sub commands available in RISCOF. The current list of subcommands includes: -- arch-tests -- coverage -- gendb -- setup -- validateyaml -- testlist -- run - -arch-tests ----------- +- ``arch-tests`` +- ``coverage`` +- ``gendb`` +- ``setup`` +- ``validateyaml`` +- ``testlist`` +- ``run`` + +``arch-tests`` +-------------- + This command is used to clone and update the tests from the official `riscv-arch-test `_ repository. -This command requires one of the following flags to be specified from the cli. +This command requires one of the following flags to be specified from the CLI. -- show-version: Display the current version of the official suite present at the specified directory path. -- clone: Clone the suite from github. -- update: Update the suite to reflect latest changes from github. +- ``--show-version``: Display the current version of the official suite present at the specified directory path. +- ``--clone``: Clone the suite from GitHub. +- ``--update``: Update the suite to reflect the latest changes from GitHub. -Optional arguments from the cli: +Optional arguments from the CLI: -- get-version: The specific version of the tests to be fetched. Can be used with both the clone and +- ``--get-version``: The specific version of the tests to be fetched.
Can be used with both the clone and update flags. The latest release is fetched if skipped. -- dir: The path to the directory where the suite is to be cloned to. Defaults to +- ``--dir``: The path to the directory where the suite is to be cloned to. Defaults to ``./riscv-arch-test`` if skipped. -coverage --------- +``coverage`` +------------ This command is used to collect the ISA coverage metrics of a given test-suite and generate a coverage report in html. -This command will require the following inputs from the cli: +This command will require the following inputs from the CLI: -- suite: The test suite path on which coverage needs to be run -- env: The path to the environment directory containing the suite-specific header files. -- cgf: list of covergroup-format files specifying the coverpoints that need to be covered by the - the suite +- ``--suite``: The test suite path on which coverage needs to be run. +- ``--env``: The path to the environment directory containing the suite-specific header files. +- ``--cgf``: The list of covergroup-format files specifying the coverpoints that need to be covered by the suite. -Optional arguments from the cli: +Optional arguments from the CLI: -- config: path to the ``config.ini`` file. Defaults to ``./config.ini`` if skipped. -- work-dir: path to the working directory where all artifacts need to be dumped. Defaults to - ``./riscof_work`` -- no-browser: when used, RISCOF skips automatically opening the html report in the default web +- ``--config``: The path to the ``config.ini`` file. Defaults to ``./config.ini`` if skipped. +- ``--work-dir``: The path to the working directory where all artifacts need to be dumped. Defaults to + ``./riscof_work``. +- ``--no-browser``: When used, RISCOF skips automatically opening the html report in the default web browser. The coverage command simply passes the cgf files to the reference plugin's runTests function. The -Reference plugin is responsible to generating a yaml based coverage report for each test using ``riscv-isac``. -The yaml file should be named ``coverage.rpt``. The ``riscv-isac`` run will also generate a data-propagation +Reference plugin is responsible for generating a YAML based coverage report for each test using ``riscv-isac``. +The YAML file should be named ``coverage.rpt``. The ``riscv-isac`` run will also generate a data-propagation report which should be named as ``ref.md``. Once the coverage files for each test has been generated, RISCOF will parse through the working -directories and merge all the ``coverage.rpt`` files to create a single yaml coverage report: +directories and merge all the ``coverage.rpt`` files to create a single YAML coverage report: ``suite_coverage.rpt``. RISCOF then also converts this to an HTML based reports and open it on the default web-browser. For a example on using this feature please refer to the :ref:`coverage` section. -gendb ------ +``gendb`` +--------- -This command is used to generate a database yaml file for all tests available in the test-suite. The -commands requires the following inputs from the cli: +This command is used to generate a database YAML file for all tests available in the test-suite. The +command requires the following inputs from the CLI: -- suite: The test suite path for which database needs to be generated.
+- ``--work-dir``: The path to the working directory where all artifacts need to be dumped. Defaults to + ``./riscof_work``. This utility parses the ``suite`` directory and collects all the .S files. For each .S file, the utility will parse the test and collect informations from various macros such as RVTEST_ISA, RVTEST_CASE, etc. For each test the utility will create a new entry in a dictionary which captures the different parts of the tests, the enabling conditions of each part, the coverage contributions -of each part, any compile macros required for each part and muc more. +of each part, any compile macros required for each part and much more. -The generated database yaml will follow the syntax described in section :ref:`database`. +The generated database YAML will follow the syntax described in section :ref:`database`. The output of this utility is a ``database.yaml`` located in the ``work_dir`` directory. This file is used by RISCOF to select and filter tests based on input DUT configuration. @@ -91,95 +91,94 @@ used by RISCOF to select and filter tests based on input DUT configuration. `_ set forth by the riscv-arch-test SIG. -setup ------ +``setup`` +--------- The setup command is used to generate a series of Template files that are required by RISCOF. These files are meant to provide ease to users integrating their DUT to RISCOF for the first time and should be modified by the users. -The setup utility takes in the following optional inputs from the cli: +The setup utility takes in the following optional inputs from the CLI: -- dutname: name of the dut for running the tests on. The utility will use this name to create a +- ``--dutname``: The name of the DUT on which the tests are to be run. The utility will use this name to create a template plugin directory with all the relevant files. These files will have to be modified by - the user. Defaults to "spike" when skipped. -- refname: name of the reference plugin to be used in RISCOF. The utility will use this name to + the user. Defaults to ``spike`` when skipped. +- ``--refname``: The name of the reference plugin to be used in RISCOF. The utility will use this name to create a reference plugin directory with all the relevant files. The setup utility will also create a sample config.ini file using the above inputs. -validateyaml ------------- +``validateyaml`` +---------------- -This command simply performs a validation of the isa spec and the platform pspec yamls of the DUT -as mentioned in the ``config.ini`` using riscv-config. The outputs are checked version of the yamls in -the directory pointed by ``work_dir`` +This command simply performs a validation of the ISA ``ispec`` and the platform ``pspec`` YAMLs of the DUT +as mentioned in the ``config.ini`` using riscv-config. The outputs are the checked versions of the YAMLs, placed in +the directory pointed to by ``work_dir``. -testlist --------- +``testlist`` +------------ -This command is used to filter tests from the database.yaml based on the configuration of DUT -present in the isa and platform yamls as mentioned in the ``config.ini``. This command will require -the following inputs from the cli: +This command is used to filter tests from the ``database.yaml`` based on the configuration of the DUT +present in the ISA and platform YAMLs as mentioned in the ``config.ini``. This command will require +the following inputs from the CLI: -- suite: The test suite from which the test need to filtered. +- ``--suite``: The test suite from which the tests need to be filtered.
-This command takes the following optional inputs from cli +This command takes the following optional inputs from CLI: -- config: path to the ``config.ini`` file. Defaults to ``./config.ini`` if skipped. -- work-dir: path to the working directory where all artifacts need to be dumped. Defaults to - ``./riscof_work`` +- ``--config``: The path to the ``config.ini`` file. Defaults to ``./config.ini`` if skipped. +- ``--work-dir``: The path to the working directory where all artifacts need to be dumped. Defaults to + ``./riscof_work``. -The utility first creates a ``database.yaml`` for the input suite. For each test in the database yaml, -this utility will check if the conditions of any parts of a test are enabled based on the isa and -platform yaml specs of the DUT. If any part is enabled, then the corresponding test is entered into +The utility first creates a ``database.yaml`` for the input suite. For each test in the database YAML, +this utility will check if the conditions of any parts of a test are enabled based on the ISA and +platform YAML specs of the DUT. If any part is enabled, then the corresponding test is entered into the teslist along with the respective coverage labels and compile macros. The utility will dump the test list in the ``testlist.yaml`` file in the ``work_dir`` directory. This -yaml will follow the same syntax as defined in the :ref:`testlist` section. +YAML will follow the same syntax as defined in the :ref:`testlist` section. -run ---- +``run`` +------- This is probably the primary command of RISCOF which is going to be widely used. This command is -currently responsible for first validating the inputs yamls, +currently responsible for first validating the inputs YAMLs, creating a database of the tests in the ``suite`` directory, generate a filtered test-list, run the tests on the DUT and then the Reference Plugins, and finally compare the generated signatures and present an html report of the findings. -The following inputs are required on the cli by this command: +The following inputs are required on the CLI by this command: -- suite: The test suite path on which coverage needs to be run -- env: The path to the environment directory containing the suite-specific header files. +- ``--suite``: The test suite path on which coverage needs to be run +- ``--env``: The path to the environment directory containing the suite-specific header files. -Optional arguments from the cli: +Optional arguments from the CLI: -- config: path to the ``config.ini`` file. Defaults to ``./config.ini`` if skipped. -- work-dir: path to the working directory where all artifacts need to be dumped. Defaults to - ``./riscof_work`` -- no-browser: when used, RISCOF skips automatically opening the html report in the default web - browser. -- dbfile: The path to the database file, from which testlist will be generated -- testfile: The path to the testlist file on which tests will be run -- no-ref-run: when used, RISCOF will not run tests on Reference and will quit before signatures comparison -- no-dut-run: when used, RISCOF will not run tests on DUT and will quit before signatures comparison -- no-clean: when used, RISCOF will not remove the ``work_dir``, if it exists. +- ``--config``: The path to the ``config.ini`` file. Defaults to ``./config.ini`` if skipped. +- ``--work-dir``: The path to the working directory where all artifacts need to be dumped. Defaults to + ``./riscof_work``. +- ``--no-browser``: When used, RISCOF skips automatically opening the html report in the default web browser. 
+- ``--dbfile``: The path to the database file, from which testlist will be generated. +- ``--testfile``: The path to the testlist file on which tests will be run. +- ``--no-ref-run``: When used, RISCOF will not run tests on Reference and will quit before signatures comparison. +- ``--no-dut-run``: When used, RISCOF will not run tests on DUT and will quit before signatures comparison. +- ``--no-clean``: When used, RISCOF will not remove the ``work_dir``, if it exists. The ``work_dir`` is cleaned by default. However, if one of ``no-clean``, ``testfile`` or ``dbfile`` are specified, it is preserved as is. -All artifacts of this command are generated in the ``work_dir`` directory. Typicall artifacts will +All artifacts of this command are generated in the ``work_dir`` directory. Typically artifacts will include: -==================== ============================================================= -Artifact Description -==================== ============================================================= -database.yaml This is the database of all the tests in the suite directory -Makefile.DUT* This is the Makefile generated by the DUT Plugin. -Makefile.Reference* This is the Makefile generated by the Reference Plugin. -report.html The final report generated at the end of the run after signature comparison -yaml files verified and checked yaml versions of the input isa and platform yamls -test_list.yaml This list of filtered tests from the database.yaml -src directory this will include a directory for each test in the test_list.yaml. Each test-directory will include the test, compiled-binaries, signatures from both the DUT and the Reference Plugin. -==================== ============================================================= +======================== ============================================================= +Artifact Description +======================== ============================================================= +``database.yaml`` This is the database of all the tests in the suite directory. +``Makefile.DUT*`` This is the Makefile generated by the DUT Plugin. +``Makefile.Reference*`` This is the Makefile generated by the Reference Plugin. +``report.html`` The final report generated at the end of the run after signature comparison. + YAML files Verified and checked YAML versions of the input ISA and Platform YAMLs. +``test_list.yaml`` This list of filtered tests from the ``database.yaml``. + src directory This will include a directory for each test in the ``test_list.yaml``. Each test-directory will include the test, compiled-binaries, signatures from both the DUT and the Reference Plugin. +======================== ============================================================= diff --git a/docs/source/database.rst b/docs/source/database.rst index e3e8998..8826c88 100644 --- a/docs/source/database.rst +++ b/docs/source/database.rst @@ -13,18 +13,18 @@ in them and constructs a dictionary of sorts, for the framework. The tests in the directory are identified by their relative path from the repository home. Each test in the database is defined as follows: -* **file path**: the absolute path of the test on the said system +* **file path**: the absolute path of the test on the said system: - * **commit_id**: Contains the recent commit id of the commit in which the test was modified. + * ``commit_id``: Contains the recent commit id of the commit in which the test was modified. - * **isa**: Contains the isa required for the compilation of the test. 
This field is extracted from the *RVTEST_ISA* macro. + * ``isa``: Contains the ISA required for the compilation of the test. This field is extracted from the ``RVTEST_ISA`` macro. - * **parts**: Contains the individual parts present in the test and the conditions and macros required by each of them. The parts are identified by unique names as specified in the test. A test must contain at-least one part for it to be included in the database. + * ``parts``: Contains the individual parts present in the test and the conditions and macros required by each of them. The parts are identified by unique names as specified in the test. A test must contain at least one part for it to be included in the database. - * **part name**: This node is extracted from the *RVTEST_CASE_START* macro in the test. + * **part name**: This node is extracted from the ``RVTEST_CASE_START`` macro in the test. - * **check**: A list of the check statements for the part as specified in the test. These translate to the conditions which need to be satisfied for this part to be included. - * **define**: A list of define statements for the part as specified in the test. These translate to the macros required by this part to run. + * ``check``: A list of the check statements for the part as specified in the test. These translate to the conditions which need to be satisfied for this part to be included. + * ``define``: A list of define statements for the part as specified in the test. These translate to the macros required by this part to run. Example: @@ -52,18 +52,18 @@ Usage Reasons of Failure ^^^^^^^^^^^^^^^^^^ -Possible scenarios where database is not generated +Possible scenarios where the database is not generated: * There does not exist at-least one part in the test. - * Any part which has started does not end before another part starts or the code ends i.e. *RVTEST_CASE_START* exists for that part but *RVTEST_CASE_END* doesn't. - * The part names given in a *RVTEST_CASE_START*-*RVTEST_CASE_END* pair doesn't match. - * *RVTEST_ISA* macro isn't present in the test. + * Any part which has started does not end before another part starts or the code ends, i.e. ``RVTEST_CASE_START`` exists for that part but ``RVTEST_CASE_END`` doesn't. + * The part names given in a ``RVTEST_CASE_START``-``RVTEST_CASE_END`` pair don't match. + * The ``RVTEST_ISA`` macro isn't present in the test. Notes ^^^^^ -1. The database is always alphabetically ordered +1. The database is always alphabetically ordered. 2. The database checks for macro sanity - i.e. certain macros exists and in the correct order. -3. Each time a new test is added to the ``suite`` directory, the database utility has to be run manually and the database.yaml +3. Each time a new test is added to the ``suite`` directory, the database utility has to be run manually and the ``database.yaml`` has to be up-streamed manually to the repository. diff --git a/docs/source/inputs.rst b/docs/source/inputs.rst index c8355f8..46cd481 100644 --- a/docs/source/inputs.rst +++ b/docs/source/inputs.rst @@ -4,26 +4,26 @@ Understanding RISCOF Inputs ########################### -There are three major inputs that are required by most of the subcommand of riscof listed in the +There are three major inputs that are required by most of the subcommands of ``riscof`` listed in the :ref:`commands` section: -1. The ``config.ini`` file -2. The ``DUT plugin directory`` -3. The ``Reference plugin directory`` +1. The ``config.ini`` file, +2. The **DUT plugin directory**, +3. The **Reference plugin directory**.
-This section will discuss each of the above requirements in detail +This section will discuss each of the above requirements in detail. .. _config_syntax: -Config.ini Syntax -================= +``config.ini`` syntax +===================== The ``config.ini`` file follows the `ini `_ syntax and is used to specify the name of the dut and reference plugins, path of the model plugins, plugin -specific parameters and paths to the DUT's riscv-config based isa and platform yamls. +specific parameters and paths to the DUT's riscv-config based ISA and platform YAMLs. -A generic format of the ``config.ini`` file required by riscof is presented below. A similar +A generic format of the ``config.ini`` file required by ``riscof`` is presented below. A similar template file can be generated using the ``--setup`` command of RISCOF. .. code-block:: ini @@ -42,15 +42,15 @@ template file can be generated using the ``--setup`` command of RISCOF. PATH= #OPTIONAL [ref-name] - pluginpath= + pluginpath= jobs= #OPTIONAL PATH= #OPTIONAL The config file also allows you to define specific nodes/fields which can be used by the respective model plugins. For e.g., in the above template the -`pluginpath` variable under the `[dut-name]` header is available to the DUT python plugin file -via RISCOF. The plugin may use this pluginpath to detect the ``env`` files, scripts and other +``pluginpath`` variable under the ``[dut-name]`` header is available to the DUT Python plugin file +via RISCOF. The plugin may use this ``pluginpath`` to detect the ``env`` files, scripts and other collaterals that may be required during execution. Similarly one can define more variables and prefixes here which can directly be @@ -85,21 +85,21 @@ successful execution. A typical DUT plugin directory has the following structure:: - ├──dut-name/ # DUT plugin templates - ├── env + ├── dut-name/ # DUT plugin templates + ├── env/ │   ├── link.ld # DUT linker script │   └── model_test.h # DUT specific header file - ├── riscof_dut-name.py # DUT python plugin - ├── dut-name_isa.yaml # DUT ISA yaml based on riscv-config - └── dut-name_platform.yaml # DUT Platform yaml based on riscv-config + ├── riscof_dut-name.py # DUT Python plugin + ├── dut-name_isa.yaml # DUT ISA YAML based on riscv-config + └── dut-name_platform.yaml # DUT platform YAML based on riscv-config A typical Reference directory has the following structure:: - ├──ref-name/ # Reference plugin templates - ├── env + ├── ref-name/ # Reference plugin templates + ├── env/ │   ├── link.ld # Reference linker script │   └── model_test.h # Reference specific header file - ├── riscof_ref-name.py # Reference python plugin + ├── riscof_ref-name.py # Reference Python plugin env directory @@ -118,7 +118,7 @@ logs, signatures, elfs, etc. YAML specs ---------- -The yaml specs in the DUT plugin directory are the most important inputs to the RISCOF framework. +The YAML specs in the DUT plugin directory are the most important inputs to the RISCOF framework. All decisions of filtering tests depend on the these YAML files. The files must follow the syntax/format specified by `riscv-config `_. These YAMLs are validated in RISCOF using riscv-config. @@ -133,11 +133,11 @@ for its configuration and execution. Python Plugin ------------- -The python files prefixed with ``riscof_`` are the most important component of the model plugins.
-These python files define how the particular model compiles a test, runs it on the DUT and extracts the +The Python files prefixed with ``riscof_`` are the most important component of the model plugins. +These Python files define how the particular model compiles a test, runs it on the DUT and extracts the signature. -To provide a standardized interface for all models, the python plugins must define all actions of +To provide a standardized interface for all models, the Python plugins must define all actions of the model under specific functions defined by the :ref:`abstract_class` specified by RISCOF. A more detailed explanation on how to build this file for you model can be found in the :ref:`plugin_def` section. diff --git a/docs/source/installation.rst b/docs/source/installation.rst index 2d1cf72..887c66a 100644 --- a/docs/source/installation.rst +++ b/docs/source/installation.rst @@ -10,7 +10,7 @@ Quickstart This section is meant to serve as a quick-guide to setup RISCOF and perform a sample validation check between ``spike`` (DUT in this case) and ``SAIL-RISCV`` (Reference model in this case). This guide -will help you setup all the required tooling for running riscof on your system. +will help you set up all the required tooling for running RISCOF on your system. If you would like to know how to build a plugin for your DUT please refer to the :ref:`plugins` section for more details. @@ -26,18 +26,18 @@ Install Python .. tab:: Ubuntu - Ubuntu 17.10 and 18.04 by default come with python-3.6.9 which is sufficient for using riscv-config. + Ubuntu 17.10 and 18.04 by default come with Python 3.6.9 which is sufficient for using riscv-config. - If you are are Ubuntu 16.10 and 17.04 you can directly install python3.6 using the Universe - repository + If you are on Ubuntu 16.10 or 17.04 you can directly install ``python3.6`` using the Universe + repository: .. code-block:: shell-session $ sudo apt-get install python3.6 $ pip3 install --upgrade pip - If you are using Ubuntu 14.04 or 16.04 you need to get python3.6 from a Personal Package Archive - (PPA) + If you are using Ubuntu 14.04 or 16.04 you need to get ``python3.6`` from a Personal Package Archive + (PPA): .. code-block:: shell-session @@ -46,8 +46,8 @@ Install Python $ sudo apt-get install python3.6 -y $ pip3 install --upgrade pip - You should now have 2 binaries: ``python3`` and ``pip3`` available in your $PATH. - You can check the versions as below + You should now have 2 binaries: ``python3`` and ``pip3`` available in your ``$PATH``. + You can check the versions as below: .. code-block:: shell-session @@ -59,7 +59,7 @@ Install Python .. tab:: CentOS7 The CentOS 7 Linux distribution includes Python 2 by default. However, as of CentOS 7.7, Python 3 - is available in the base package repository which can be installed using the following commands + is available in the base package repository which can be installed using the following commands: .. code-block:: shell-session @@ -67,7 +67,7 @@ Install Python $ sudo yum install -y python3 $ pip3 install --upgrade pip - For versions prior to 7.7 you can install python3.6 using third-party repositories, such as the + For versions prior to 7.7 you can install ``python3.6`` using third-party repositories, such as the IUS repository .. code-block:: shell-session @@ -90,16 +90,16 @@ Install Python Using Virtualenv for Python --------------------------- -Many a times users face issues in installing and managing multiple python versions.
This is actually -a major issue as many gui elements in Linux use the default python versions, in which case installing -python3.6 using the above methods might break other software. We thus advise the use of **pyenv** to -install python3.6. +Many times users face issues in installing and managing multiple Python versions. This is actually +a major issue as many GUI elements in Linux use the default Python versions, in which case installing +``python3.6`` using the above methods might break other software. We thus advise the use of ``pyenv`` to +install ``python3.6``. For Ubuntu and CentosOS, please follow the steps here: https://github.com/pyenv/pyenv#basic-github-checkout RHEL users can find more detailed guides for virtual-env here: https://developers.redhat.com/blog/2018/08/13/install-python3-rhel/#create-env -Once you have pyenv installed do the following to install python 3.6.0:: +Once you have ``pyenv`` installed do the following to install Python 3.6.0:: $ pyenv install 3.6.0 $ pip3 install --upgrade pip @@ -260,7 +260,7 @@ With this you should now have all the following available as command line argume Install Plugin Models ===================== -This section will walk your throguh installing 2 important RISC-V reference models: Spike and SAIL. +This section will walk you through installing 2 important RISC-V reference models: Spike and SAIL. These are often used as reference models in RISCOF. .. tabs:: @@ -345,27 +345,27 @@ These are often used as reference models in RISCOF. -Create Neccesary Env Files +Create Necessary Env Files ========================== In order to run tests via RISCOF you will need to provide the following items : - - **config.ini**: This file is a basic configuration file following the `ini` syntax. This file - will capture information like: name of the dut/reference plugins, path to the plugins, path to - the riscv-config based yamls, etc. For more information on the contents and syntax please refer - to the :ref:`config_syntax` section - - **dut-plugin directory**: RISCOF requires that the DUT model for testing is presented in the - form of a python plugin. The python plugin is nothing more than a python file which includes + - ``config.ini``: This file is a basic configuration file following the INI syntax. This file + will capture information like: name of the DUT/reference plugins, path to the plugins, path to + the riscv-config based YAMLs, etc. For more information on the contents and syntax please refer + to the :ref:`config_syntax` section. + - **DUT plugin directory**: RISCOF requires that the DUT model for testing is presented in the + form of a Python plugin. The Python plugin is nothing more than a Python file which includes certain standard and defined functions to carry out the activities of test-compilation, - execution and signature extraction. This python file name needs to be prefixed with ``riscof_`` - and must be present in the dut-plugin directory. One can refer to the :ref:`plugin_def` section - for more details on how to write this python file. + execution and signature extraction. This Python file name needs to be prefixed with ``riscof_`` + and must be present in the DUT plugin directory. One can refer to the :ref:`plugin_def` section + for more details on how to write this Python file. - The directory will also need to contain the `riscv-config` based isa and platform yamls which provide - a definition of the DUT.
These yamls will be used to filter tests that need to be run on the + The directory will also need to contain the `riscv-config` based ISA and platform YAMLs which provide + a definition of the DUT. These YAMLs will be used to filter tests that need to be run on the DUT. - Finally, an env directory will also need to be present in the dut-plugin directory, which + Finally, an env directory will also need to be present in the DUT plugin directory, which contains the environment files like the ``model_test.h`` that is required to compile and run the tests on the DUT. Refer to the `TestFormat spec `_ for definition of macros that can be used in the @@ -374,7 +374,7 @@ In order to run tests via RISCOF you will need to provide the following items : - **reference-plugin directory**: Similar to the DUT plugin, RISCOF also expects a reference model plugin. The structure of the directory and files is the same as that of the DUT. However, the - isa and platform yamls are not required since RISCOF will always pick the yamls from the DUT + ISA and platform YAMLs are not required since RISCOF will always pick the YAMLs from the DUT plugin for all purposes. .. For sample templates of pre-built plugins please refer to : `riscof-plugins `_. @@ -388,20 +388,20 @@ Models for the user via the ``setup`` command as shown below:: The above command will generate the following files and directories in the current directory:: - ├──config.ini # configuration file for riscof - ├──spike/ # DUT plugin templates - ├── env + ├── config.ini # configuration file for RISCOF + ├── spike/ # DUT plugin templates + ├── env/ │   ├── link.ld # DUT linker script │   └── model_test.h # DUT specific header file - ├── riscof_spike.py # DUT python plugin - ├── spike_isa.yaml # DUT ISA yaml based on riscv-config - └── spike_platform.yaml # DUT Platform yaml based on riscv-config - ├──sail_cSim/ # reference plugin templates - ├── env + ├── riscof_spike.py # DUT Python plugin + ├── spike_isa.yaml # DUT ISA YAML based on riscv-config + └── spike_platform.yaml # DUT Platform YAML based on riscv-config + ├── sail_cSim/ # reference plugin templates + ├── env/ │   ├── link.ld # Reference linker script │   └── model_test.h # Reference model specific header file ├── __init__.py - └── riscof_sail_cSim.py # Reference model python plugin. + └── riscof_sail_cSim.py # Reference model Python plugin. The generate template ``config.ini`` will look something like this by default:: @@ -441,9 +441,9 @@ By default the ``model_test.h`` files and the ``link.ld`` file will work out of ``spike`` and ``sail`` models. .. note:: Custom DUTs can go through the various ``#TODO`` comments to figure out what changes need to be - made in the respective python file. + made in the respective Python file. -The configuration of spike we will be using is available in the ``spike/spike_isa.yaml``. Modifying +The configuration of Spike we will be using is available in the ``spike/spike_isa.yaml``. Modifying this will change the tests applicable for the DUT. For now let's leave it as is. For more information on creating and modifying your plugins can be found in :ref:`plugins` @@ -454,8 +454,8 @@ We are now ready to run the architectural tests on the DUT via RISCOF. .. tip:: By default RISCOF resorts to using RISC-V's SAIL C Emulator as a reference model. To generate templates for a reference model add the argument '--refname myref' to the setup command above. This - will generate a *myref* directory containing template files for defining a reference model plugin. 
- Lookout for the #TODO in the python file for places where changes will be required. + will generate a ``myref`` directory containing template files for defining a reference model plugin. + Lookout for the ``#TODO`` in the Python file for places where changes will be required. .. tip:: For details on the various configuration options supported by the *sail_cSim* plugin refer `here `_. @@ -464,22 +464,21 @@ We are now ready to run the architectural tests on the DUT via RISCOF. Cloning the Architectural Tests =============================== -We will be running the tests from the official riscv-arch-test repository on the DUT and Reference -models. To create a copy of the latest tests from the riscv-arch-test repository do the following: +We will be running the tests from the official ``riscv-arch-test`` repository on the DUT and Reference +models. To create a copy of the latest tests from the ``riscv-arch-test`` repository do the following: .. code-block:: console - $ riscof --verbose info arch-tests --clone + $ riscof --verbose info arch-test --clone -This will create a riscv-arch-test in the current working directory. - +This will create a `riscv-arch-test` folder in the current working directory. Running RISCOF ============== The RISCOF run is divided into three steps as shown in the overview Figure. -The first step is to check if the input yaml files are configured correctly. This step internally calls -the ``riscv-config`` on both the isa and platform yaml files indicated in the ``config.ini`` file. +The first step is to check if the input YAML files are configured correctly. This step internally calls +the ``riscv-config`` on both the ISA and platform YAML files indicated in the ``config.ini`` file. .. code-block:: bash @@ -614,5 +613,3 @@ The run will also open an HTML page with all the information. [INFO] : suite/rv32i_m/I/I-SW-01.S : d50921ef64708678832770fd842355aa2b0684af : Passed [INFO] : suite/rv32i_m/I/I-XOR-01.S : d50921ef64708678832770fd842355aa2b0684af : Passed [INFO] : suite/rv32i_m/I/I-XORI-01.S : d50921ef64708678832770fd842355aa2b0684af : Passed - - diff --git a/docs/source/intro.rst b/docs/source/intro.rst index 17b3d5f..f595b46 100644 --- a/docs/source/intro.rst +++ b/docs/source/intro.rst @@ -2,7 +2,7 @@ Introduction ############ -**RISCOF** - The RISC-V Compatibility Framework is a python based framework which enables testing of a RISC-V +**RISCOF** - The RISC-V Compatibility Framework is a Python based framework which enables testing of a RISC-V target (hard or soft implementations) against a standard RISC-V golden reference model using a suite of RISC-V architectural assembly tests. diff --git a/docs/source/overview.rst b/docs/source/overview.rst index a71cf9b..f12e31d 100644 --- a/docs/source/overview.rst +++ b/docs/source/overview.rst @@ -19,7 +19,7 @@ As can be seen in the image above, the framework requires 2 specific inputs from 1. A RISCV-CONFIG based YAML specification of the ISA choices made by the user. Details on writing the specific YAML spec can be found here : `Spec Documentation `_ 2. A Python plugin which can be used by the framework to compile the test, simulate the test and - extract the signature of each test. Steps to define the python plugin is available in the + extract the signature of each test. Steps to define the Python plugin is available in the :ref:`plugins` section. External Dependencies @@ -60,7 +60,7 @@ internal utilities under consideration. 
This list is presented as a YAML file and more information on this format is available in :ref:`testlist`. - This utility is currently internal to RISCOF and is not available as a separate cli (command line + This utility is currently internal to RISCOF and is not available as a separate CLI (command line interface). Neither users or contributors should need to deal with this utility as a separate module. @@ -77,7 +77,7 @@ validating a RISC-V target against a golden reference model. required tooling please refer to :ref:`quickstart` The flow starts with the user providing a YAML specification which captures the choices made in the -implementation and also providing a python plugin (a python code written with certain defined +implementation and also providing a Python plugin (a Python code written with certain defined constraints) which can enable compilation and simulation of a test on the implementation. The input YAML is first validated using the RISCV-CONFIG tool to confirm the implementation choices @@ -88,15 +88,15 @@ The normalized YAML is then fed to the *Test Selector* utility to filter and sel test-pool which are applicable to the implementation of the user. These selected tests are written out in a YAML file and represent the *test-list*. -The normalized YAML is also fed into the reference model's python plugin to configure the model to +The normalized YAML is also fed into the reference model's Python plugin to configure the model to mimic the implementation as close as possible. -The *test-list* is next forwarded to both, the user and reference defined python plugins, to +The *test-list* is next forwarded to both, the user and reference defined Python plugins, to initiate compilation and execution of the tests on the respective platforms. One should note the each test in the architectural test suite adheres to the :ref:`test_format_spec` and thus produces a signature in the memory region of the test which captures the essence that -particular test. Thus, it is also the job of the python plugins to extract this signature to a file +particular test. Thus, it is also the job of the Python plugins to extract this signature to a file on the host system. RISCOF, thus declares a test to have passed on the implementation only when the its signature diff --git a/docs/source/plugins.rst b/docs/source/plugins.rst index 7d5fa8c..1118125 100644 --- a/docs/source/plugins.rst +++ b/docs/source/plugins.rst @@ -6,7 +6,6 @@ Building your Model Plugin ########################## - As mentioned in the :ref:`inputs` section, the DUT and Reference plugin directories (and their items) are the most crucial components required by the RISCOF framework for successful execution. This section will walk you through in detail on how to build the various items of the DUT plugin @@ -14,15 +13,15 @@ directories. 
A typical DUT plugin directory has the following structure:: - ├──dut-name/ # DUT plugin templates - ├── env + ├── dut-name/ # DUT plugin templates + ├── env/ │   ├── link.ld # DUT linker script │   └── model_test.h # DUT specific header file - ├── riscof_dut-name.py # DUT python plugin - ├── dut-name_isa.yaml # DUT ISA yaml based on riscv-config - └── dut-name_platform.yaml # DUT Platform yaml based on riscv-config + ├── riscof_dut-name.py # DUT Python plugin + ├── dut-name_isa.yaml # DUT ISA YAML based on riscv-config + └── dut-name_platform.yaml # DUT platform YAML based on riscv-config -The ``env`` directory in must contain: +The ``env`` directory must contain: - ``model_test.h`` header file which provides the model specific macros as described in the `TestFormat Spec @@ -35,16 +34,16 @@ The ``env`` directory in must contain: The ``env`` folder can also contain other necessary plugin specific files for pre/post processing of logs, signatures, elfs, etc. -The yaml specs in the DUT plugin directory are the most important inputs to the RISCOF framework. +The YAML specs in the DUT plugin directory are the most important inputs to the RISCOF framework. All decisions of filtering tests depend on the these YAML files. The files must follow the syntax/format specified by `riscv-config `_. These YAMLs are validated in RISCOF using riscv-config. -The python plugin files capture the behavior of model for compiling tests, executing them on the DUT +The Python plugin files capture the behavior of the model for compiling tests, executing them on the DUT and finally extracting the signature for each test. The following sections provide a detailed -explanation on how to build the python files for your model. +explanation on how to build the Python files for your model. -.. hint:: All paths provided by riscof are absolute and it is advised to always use absolute paths while executing/generating commands to avoid errors. +.. hint:: All paths provided by RISCOF are absolute and it is advised to always use absolute paths while executing/generating commands to avoid errors. Start with Templates @@ -55,15 +54,15 @@ using the following command:: $ riscof setup --refname=sail_cSim --dutname=spike -.. note:: You can change the name from spike to the name of your target +.. note:: You can change the name from ``spike`` to the name of your target -This above command should generate a spike folder with the following contents: +This above command should generate a ``spike`` folder with the following contents: .. code-block:: bash :linenos: env # contains sample header file and linker file - riscof_spike.py # sample spike plugin for RISCOF + riscof_spike.py # sample Spike plugin for RISCOF spike_isa.yaml # sample ISA YAML configuration file spike_platform.yaml # sample PLATFORM YAML configuration file @@ -89,14 +88,14 @@ The command will also generate a sample ``config.ini`` file with the following c The following changes need to be made: -1. Fix the paths in the ``config.ini`` to point to the folder containing the respective riscof_*.py files. +1. Fix the paths in the ``config.ini`` to point to the folder containing the respective ``riscof_*.py`` files. 2. The macros in the ``spike/env/model_test.h`` can be updated/replaced based on the model. Definitions of the macros and their use is available in the :ref:`test_format_spec`. 3. Update the ``riscof_.py`` with respective functions as described in the following paragraphs. 
-The plugin file in the ``spike`` folder: riscof_spike.py is the one that needs to be -changed and updated for each model as described in the following sections +The plugin file in the ``spike`` folder: ``riscof_spike.py`` is the one that needs to be +changed and updated for each model as described in the following sections. Please note the user is free to add more custom functions in this file which are called within the @@ -105,23 +104,23 @@ three base functions (as mentioned above). Why Python Based Plugins ? ========================== -- Since the entire RISCOF framework is in python it did not make sense to have the +- Since the entire RISCOF framework is in Python it did not make sense to have the user-DUT in a separate environment. It would then cause issues in transferring data across these environments/domains. - While many prefer the conventional *Makefile/autoconf* approach, transferring the *test-list* in YAML to be used by another Makefile-environment seemed like a bad and an unscalable idea. -- Expecting initial hesitation, we have tried to ensure that the python plugins can be made extremely +- Expecting initial hesitation, we have tried to ensure that the Python plugins can be made extremely simple (as crude as writing out bash instructions using shellCommand libraries). - Considering there would be a few backlashes in these choices, we have given enough pit-stops in the - flow: ``validation, test-list, coverage, etc`` so one can stop at any point in the flow and move + flow: ``validation``, ``test-list``, ``coverage``, etc so one can stop at any point in the flow and move to their custom domain. -- Having a python plugin **does not change your test-bench** in anyway. The plugins only act as a common +- Having a Python plugin **does not change your test-bench** in any way. The plugins only act as a common interface between your environment and RISCOF. All you need to do is call the respective sim - commands from within the python plugin. + commands from within the Python plugin. If you do feel the flow can be further improved or changed please do drop in an issue on the official repository. @@ -132,7 +131,7 @@ official repository. Python Plugin file ================== -As can be seen from the above generated template python file, it creates a Metaclass for the plugins +As can be seen from the above generated template Python file, it creates a Metaclass for the plugins supported by the :ref:`abstract_class`. This class basically offers the users three basic functions: ``initialize`` , ``build`` and ``runTests``. For each model RISCOF calls these functions in the following order: @@ -148,76 +147,76 @@ We now define the various arguments and possible functionality of each of the ab mentioned functions. Please note, this is not a strict guide and the users can choose to perform different actions in different functions as long as they comply with the order of the functions being called and the signatures are generated in their -respective directories at the end of the `runTests` function. +respective directories at the end of the ``runTests`` function. .. note:: The contents of the signature file must conform to specification mentioned in the - TestFormat Spec `here `_ + TestFormat Spec `here `_. -__init__ (self, *args, **kwargs) --------------------------------- +``__init__ (self, *args, **kwargs)`` +------------------------------------ -.. hint:: **PYTHON-HINT**: The self variable is used to represent the instance of the class which +.. 
hint:: **PYTHON-HINT**: The ``self`` variable is used to represent the instance of the class which is often used in object-oriented programming. It works as a reference to the object. Python uses the self parameter to refer to instance attributes and methods of the class. In this guide we use the self parameter to create and access methods declared across the functions within the same class. -This is the constructor function for the pluginTemplate class. The configuration dictionary of the -dut plugin, as specified in the ``config.ini``, is passed to the plugin via the ``**kwargs`` argument. +This is the constructor function for the ``pluginTemplate`` class. The configuration dictionary of the +DUT plugin, as specified in the ``config.ini``, is passed to the plugin via the ``**kwargs`` argument. The typical action in this function would be to capture as much information about the DUT from the -`config.ini` as possible, since the config will not be available as arguments to the remaining +``config.ini`` as possible, since the config will not be available as arguments to the remaining functions. -.. hint:: **PYTHON-HINT**: In Python we use *args and **kwargs as an argument when we are unsure about the number - of arguments to pass in the functions. *args allow us to pass the variable number of non +.. hint:: **PYTHON-HINT**: In Python we use ``*args`` and ``**kwargs`` as an argument when we are unsure about the number + of arguments to pass in the functions. ``*args`` allow us to pass the variable number of non keyword arguments to a function. The arguments are passed as a tuple and these passed arguments - make tuple inside the function with same name as the parameter excluding asterisk ``*``. + make a tuple inside the function with same name as the parameter excluding the asterisk ``*``. - **kwargs allows us to pass the variable length of keyword + ``**kwargs`` allows us to pass the variable length of keyword arguments to the function. The double asterisk is used to indicate a variable length keyword - argument. The arguments are passed as a dictionary and these arguments make a dictionary inside - function with name same as the parameter excluding double asterisk ``**``. + argument. The arguments are passed as a dictionary and these arguments make a dictionary inside + the function with the same name as the parameter excluding the double asterisk ``**``. - As is seen below, we access the config node as ``kwargs.get('config')`` + As seen below, we access the config node as ``kwargs.get('config')``. Refer to this `blog - `_ for more information + `_ for more information. -As mentioned, in the :ref:`config_syntax` section, the config.ini file can be used to pass some -common or specific parameters to the python plugin. This makes it easy for users to modify the -parameters in the config.ini file itself, instead of having to change it in the python file. +As mentioned, in the :ref:`config_syntax` section, the ``config.ini`` file can be used to pass some +common or specific parameters to the Python plugin. This makes it easy for users to modify the +parameters in the ``config.ini`` file itself, instead of having to change it in the Python file. -At minimum, the DUT node of the ``config.ini`` must contain paths to the ISA and Platform yaml specs. +At minimum, the DUT node of the ``config.ini`` must contain paths to the ISA and platform YAML specs. If the DUT node is missing or is empty in the ``config.ini`` this function should throw an error and exit. 
This is done in lines 8-10 in the snippet below. One of the parameters we should capture here would be the path to the simulation executable of the DUT. In case of an RTL based DUT, this would be point to the final binary executable of your -test-bench produced by a simulator (like verilator, vcs, incisive, etc). In case of an ISS or +test-bench produced by a simulator (like Verilator, ModelSim, VCS, Xcelium, etc). In case of an ISS or Emulator, this variable could point to where the ISS binary is located. This is shown in line-16 in the below snippet. Another variable of interest would be the number of parallel jobs that can be spawned off by RISCOF for various actions performed in later functions, specifically to run the tests in parallel on the -DUT executable. This variable is captured in as the variable ``num_jobs`` in line-21 below. If the -`config.ini` does not have the ``jobs`` variable specified then we default to the value of 1. +DUT executable. This variable is captured as the variable ``num_jobs`` in line-21 below. If the +``config.ini`` does not have the ``jobs`` variable specified then we default to the value of 1. The ``target_run`` parameter is used to control if the user would like to stop after compilation of the tests or continue running the tests on the target and -go on to signature comparison. When set to '0' the plugin must only compile the -tests and exit (using ``raise SystemExit`` in python). When set to ``1`` the +go on to signature comparison. When set to ``0`` the plugin must only compile the +tests and exit (using ``raise SystemExit`` in Python). When set to ``1`` the plugin will compile and run the tests on the target. This parameter is captured in lines 34-37. Finally, the mandatory parameters that must be present in the ``config.ini`` for the DUT are the -paths to the riscv-config based ISA and Platform YAML files. These paths are collected in lines -28-29. Remember these are paths to the unchecked version of the yaml and are only captured here to +paths to the riscv-config based ISA and platform YAML files. These paths are collected in lines +28-29. Remember these are paths to the unchecked version of the YAML and are only captured here to send them across to the RISCOF framework, where RISCOF will validate them with riscv-config , send -it to the reference model for configuration and also use it filter the tests. -The verified/checked versions of the YAMLs will be provided to the build function. +it to the reference model for configuration and also use it to filter the tests. +The validated/checked versions of the YAMLs will be provided to the build function. -The above yaml file paths and other arguments are captured in the class methods and returned back to +The above YAML file paths and other arguments are captured in the class methods and returned back to the RISCOF framework in line 40. .. code-block:: python @@ -253,9 +252,9 @@ the RISCOF framework in line 40. self.isa_spec = os.path.abspath(config['ispec']) self.platform_spec = os.path.abspath(config['pspec']) - #We capture if the user would like the run the tests on the target or - #not. If you are interested in just compiling the tests and not running - #them on the target, then following variable should be set to False + # We capture if the user would like the run the tests on the target or + # not. 
If you are interested in just compiling the tests and not running + # them on the target, then following variable should be set to False if 'target_run' in config and config['target_run']=='0': self.target_run = False else: @@ -264,10 +263,10 @@ the RISCOF framework in line 40. # Return the parameters set above back to RISCOF for further processing. return sclass -.. warning:: if the config is empty or if the isa and platform yamls are not available in the +.. warning:: If the config is empty or if the ISA and platform YAMLs are not available in the specified paths, the above function shall generate an error and exit. -.. note:: It is not necessary for your config.ini to pass any of these parameters. And one could +.. note:: It is not necessary for your ``config.ini`` to pass any of these parameters. And one could instead hardwire the paths in this function itself. For eg. .. code-block:: python @@ -276,7 +275,7 @@ the RISCOF framework in line 40. self.num_jobs = 7 Between lines 38-40 one can still add and capture many more DUT specific parameters which could be -useful later. For example, +useful later. For example: .. code-block:: python @@ -288,11 +287,11 @@ useful later. For example, self.build_path = '/scratch/mybuild/' Compared to a conventional Makefile flow, this phase would be similar to capturing and setting some -of the DUT specific parameters in a Makefile.include. Many of those variables can be set here and +of the DUT specific parameters in a ``Makefile.include``. Many of those variables can be set here and used later in different contexts. -initialize (self, suite, workdir, archtest_env) ------------------------------------------------ +``initialize (self, suite, workdir, archtest_env)`` +--------------------------------------------------- The primary action here would be to create the templates for the compile and any other pre/post processing commands that will be required later here. This function provides the following @@ -300,7 +299,7 @@ arguments which can be used in this function: 1. `suite`: This argument holds the absolute path of the directory where the architectural test suite exists. -2. `workdir`: This argument holds the absolute path of the work directory where all the execution +2. `work_dir`: This argument holds the absolute path of the work directory where all the execution and meta files/states should be dumped as part of running RISCOF. 3. `archtest_env`: This argument holds the absolute path of the directory where all the architectural test header files (``arch_test.h``) are located. This should be used to initialize @@ -308,7 +307,7 @@ arguments which can be used in this function: Since we have access to the test environment directory here, it would make sense to build a generic template of the command that we will be using to compile the tests. For example consider the -following python code which sets the compile command to use the riscv-gcc compiler. +following Python code which sets the compile command to use the riscv-gcc compiler. .. code-block:: python @@ -323,19 +322,19 @@ following python code which sets the compile command to use the riscv-gcc compil to be peformed and then use the ``.format(var)`` syntax to assign those values. Curly braces with integers in them indicate the argument number which should be used for replacement. - For example, + For example: .. 
code-block:: python 'My name is {0} and age is {1}'.format('John','20') - In python one can also use the ``+`` symbol to concatenate strings as is shown in the above - snippet code, where the include directories are appended at the end + In Python one can also use the ``+`` symbol to concatenate strings as is shown in the above + snippet code, where the include directories are appended at the end. -Some folks might build a `riscv32-` toolchain or a `riscv64-` toolchain depending on +Some folks might build a ``riscv32-`` toolchain or a ``riscv64-`` toolchain depending on their DUT. To be agnostic of this choice, in the above snippet we have left the integer following -`riscv` string to be a variable (defined by ``{1}``. see below hint for python syntax details) +``riscv`` string to be a variable (defined by ``{1}``, see below hint for Python syntax details) which will be fixed in the later functions. Based on the DUT one can even hard-code it here and remove the variable dependence. @@ -345,12 +344,12 @@ changes from test to test. Hence, we leave it as a variable in the above snippet The variable ``{2}`` indicates the assembly file of the test that needs to be compiled. The variables ``{3}`` and ``{4}`` are used to indicate the output elf name and any compile macros -that need to be assigned respectively. Both of which will be set in the runTests function later. -Remember here, we are assigning this string template to a method in the `self` instance of the class +that need to be assigned respectively. Both of which will be set in the ``runTests`` function later. +Remember here, we are assigning this string template to a method in the ``self`` instance of the class which can be accessed in other functions as well. -Similar to the compile command above, one can choose to build template for many other commands that +Similar to the compile command above, one can choose to build templates for many other commands that may be required to be executed for each test. For example, some common utilities would be: .. code-block:: python @@ -392,7 +391,7 @@ add the above utility snippets after line 20 below. # add more utility snippets here -This phase is much similar to the setting up command variables in a Makefile. These commands are +This phase is very similar to the setting up of command variables in a Makefile. These commands are generic and parameterized and can be applied to any test. An example of a more complex compile command is provided below, @@ -408,10 +407,10 @@ An example of a more complex compile command is provided below, riscv32-unknown-elf-objdump {4} --source > {4}.debug;\ riscv32-unknown-elf-readelf -a {4} > {4}.readelf;' -In the above snippet the compile command has 6 variables ( indicated by ``{0}`` to ``{5}``). To +In the above snippet the compile command has 6 variables (indicated by ``{0}`` to ``{5}``). To assign values to these variables in the later stages, one can use the following syntax. Remember the order of the arguments in the ``format()`` function below must match the order of variables used -above. Here the arguments of the format function are strings or variable holding the specified +above. Here the arguments of the format function are strings or variables holding the specified information. .. code-block:: python @@ -419,7 +418,7 @@ information. 
self.compile_cmd.format(march_str, testsuite_env, dut_env, dut_link.ld, output_elf, input_asm) -If the integer numbering feels uncomfortable, python also allows name-based substitution which would +If the integer numbering feels uncomfortable, Python also allows name-based substitution which would like the following: .. code-block:: python @@ -436,33 +435,33 @@ like the following: self.compile_cmd.format(testmarch=march_str, testenv=testsuite_env, dutenv=dut_env, dutlink=dut_link.ld, outputelf=output_elf, inputasm=input_asm) -build(self, isa_yaml, platform_yaml) ------------------------------------- +``build(self, isa_yaml, platform_yaml)`` +---------------------------------------- This function is primarily meant for building or configuring the DUT (or its runtime arguments) if required. This is particularly useful when working with core-generators. This stage can be used to generate a specific configuration of the DUT leveraging the specs available in the checked -ISA and Platform yamls. For example in the case of spike, we can use the ISA yaml to create the -appropriate value of the ``--isa`` argument used by spike. +ISA and platform YAMLs. For example in the case of Spike, we can use the ISA YAML to create the +appropriate value of the ``--isa`` argument used by Spike. Apart, from configuring the DUT this stage can also be used to check if all the commands required by the DUT for successful execution are available or not. For example checking if the compiler is -installed, the dut_exe executable is available, etc. +installed, the ``dut_exe`` executable is available, etc. -To enable the above actions the `build` function provides the following arguments to the user: +To enable the above actions the ``build`` function provides the following arguments to the user: -1. `isa_spec`: This argument holds the absolute path to the validated ISA config YAML. This can be used to extract +1. ``isa_yaml``: This argument holds the absolute path to the validated ISA config YAML. This can be used to extract various fields from the YAML (e.g. ISA) and configure the DUT accordingly. -2. `platform_spec`: This argument holds the absolute path to the validated PLATFORM config YAML and can be used +2. ``platform_yaml``: This argument holds the absolute path to the validated platform config YAML and can be used similarly as above. -Some of the parameters of interest that can be captured in this stage using the isa yaml are: +Some of the parameters of interest that can be captured in this stage using the ISA YAML are: -- the xlen value: this can be obtained from the max value in the ``supported_xlen`` field of the - yaml. This is particularly useful in setting the compiler integer number we discussed before and - also for setting other DUT specific parameters (like the ``--isa`` argument of spike). Shown in +- the ``xlen`` value: this can be obtained from the max value in the ``supported_xlen`` field of the + YAML. This is particularly useful in setting the compiler integer number we discussed before and + also for setting other DUT specific parameters (like the ``--isa`` argument of Spike). Shown in line 9 below. -- the isa string: for simulators like spike, we can parse this to generate the string for the +- the ``isa`` string: for simulators like Spike, we can parse this to generate the string for the ``--isa`` argument. Shown in lines 13-19 below. .. hint:: **PYTHON-HINT**: one can access dictionary elements using the square braces ``[]``. 
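As a compact illustration of these dictionary accesses, the sketch below pulls the ``xlen`` value and a Spike-style ``--isa`` string out of the validated ISA YAML. It assumes the YAML is loaded with ``utils.load_yaml`` from ``riscof.utils`` and that the checked spec is keyed under a single ``hart0`` node, in line with the fuller example that follows.

.. code-block:: python

    import riscof.utils as utils

    def build(self, isa_yaml, platform_yaml):
        # Load the validated ISA YAML; the checked spec sits under the 'hart0' node.
        ispec = utils.load_yaml(isa_yaml)['hart0']

        # Pick the widest supported XLEN (e.g. 64 for an RV64 target). This also
        # fixes the integer suffix of the riscv32-/riscv64- toolchain prefix
        # discussed earlier.
        self.xlen = '64' if 64 in ispec['supported_xlen'] else '32'

        # Derive the --isa string from the ISA field of the YAML.
        self.isa = 'rv' + self.xlen
        for ext in ['I', 'M', 'A', 'F', 'D', 'C']:
            if ext in ispec['ISA']:
                self.isa += ext.lower()

The same ``ispec`` dictionary can be used to choose the ``-mabi`` value for the compile command, as the example below also does.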
@@ -470,9 +469,9 @@ Some of the parameters of interest that can be captured in this stage using the .. note:: For pre-compiled/configured RTL targets this phase is typically empty and no actions are required. Though, one could choose to compile the RTL in this phase if required using simulators - like verilator, vcs, etc. + like Verilator, VCS, etc. -An example of this function for an ISS like spike is show below: +An example of this function for an ISS like Spike is show below: .. code-block:: python :linenos: @@ -501,12 +500,12 @@ An example of this function for an ISS like spike is show below: # not please change appropriately self.compile_cmd = self.compile_cmd+' -mabi='+('lp64 ' if 64 in ispec['supported_xlen'] else 'ilp32 ') -runTests(self, testlist) ------------------------- +``runTests(self, testList)`` +---------------------------- -This function is responsible for compiling and executing each test on the DUT and produce individual +This function is responsible for compiling and executing each test on the DUT and producing individual signature files, which can later be used for comparison. The function provides a single argument -which is the ``testList``. This argument is available as a python based dictionary and follows the +which is the ``testList``. This argument is available as a Python based dictionary and follows the syntax presented in the :ref:`testlist` section. The only outcome of this function should be a signature file generated for each test. These @@ -517,9 +516,9 @@ and then appending the string ``.signature`` to it. Also note, the contents of the signature file must conform to specification mentioned in the TestFormat Spec `here -`_ +`_. -There are multiple ways of defining this function. We will start with the most simplest version and +There are multiple ways of defining this function. We will start with the simplest version and move on to more involved variants. Using Shell Commands @@ -530,14 +529,14 @@ compile the test, run the test and collect/post-process the signature of each te this script is provided below. .. hint:: **PYTHON-HINT**: To display progress on the terminal it is often good to have some print - statements in the code. In this plugin we use the logger library from python to achieve this. + statements in the code. In this plugin we use the logger library from Python to achieve this. Syntax for usage is:: logger.debug('My Progress here') - The keyword 'debug' above indicates that the above statement will be displayed on the terminal - only when the ``--verbose`` cli argument is set to "debug". Similarly one can create warning and - error statements (which will be printed in different colors and enabled via the cli):: + The keyword ``debug`` above indicates that the above statement will be displayed on the terminal + only when the ``--verbose`` CLI argument is set to ``debug``. Similarly one can create ``warning`` and + ``error`` statements (which will be printed in different colors and enabled via the CLI):: logger.warning('This is enabled when verbose is debug or warning') logger.error('This is enabled when verbose is debug, warning or error') @@ -610,42 +609,42 @@ this script is provided below. if not self.target_run: raise SystemExit -As mentioned earlier, the `-march` string is test-specific and needs to be collected from the -testList fields. Line-30 above, shows that ``testentry['isa']`` provides this information. +As mentioned earlier, the ``-march`` string is test-specific and needs to be collected from the +``testList`` fields. 
Line-30 above, shows that ``testentry['isa']`` provides this information. -.. hint:: **PYTHON-HINT**: the lower() function in line-30 above is used to reduce all the - characters of a string to lowercase +.. hint:: **PYTHON-HINT**: the ``lower()`` function in line-30 above is used to reduce all the + characters of a string to lowercase. -Note, that as the toolchain and tests evolves, one might need to manipulate this string -before assigning it to the march argument of the compiler. +Note, that as the toolchain and tests evolve, one might need to manipulate this string +before assigning it to the ``-march`` argument of the compiler. At times, for debug purposes or initial bring up purposes one might want to just compile the tests and not run them on the DUT. In order to achieve this, one can set the -``target_run`` parameter in the ``config.ini`` file to 0. This will cause lines -47-55 to be skipped and thereby skip from running tests on the target. +``target_run`` parameter in the ``config.ini`` file to ``0``. This will cause lines +47-55 to be skipped and thereby skip running tests on the target. -.. hint:: **PYTHON-HINT**: Note in python we use ``#`` for comments. Also note, that python uses +.. hint:: **PYTHON-HINT**: Note in Python we use ``#`` for comments. Also note, that Python uses indentation to indicate a block of code (hence the indentation of lines 7 through 58). Makefile Flow (Recommended) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ -While the previous solution is small and precise, it offers very less debug artifacts. In this +While the previous solution is small and precise, it offers far fewer debug artifacts. In this variant we will be generating a single Makefile which can be used outside RISCOF as well to run a particular or a collection of tests. -The Makefile generated here will have as many targets as there are tests, and each make-target will -correspond to having commands which will compile the test, run on the dut and collect the signature. -To provide ease in creating such a Makefile, RISCOF provides a makeUtility which can be used in this +The ``Makefile`` generated here will have as many targets as there are tests, and each target will +correspond to having commands which will compile the test, run on the DUT and collect the signature. +To provide ease in creating such a ``Makefile``, RISCOF provides a ``makeUtil`` class which can be used in this function. -.. tip:: if one is more well-versed with python, you can choose to create the Makefile differently +.. tip:: If one is more well-versed with Python, you can choose to create the ``Makefile`` differently with more custom targets. However, note that the make utility provided from RISCOF might not work for custom Makefiles. -An example of the runTests function which uses the ``makeUtil`` utility is shown below. -Here a Makefile is first generated where every test is a make target. The utility +An example of the ``runTests`` function which uses the ``makeUtil`` utility is shown below. +Here a ``Makefile`` is first generated where every test is a make target. The utility automatically creates the relevant targets and only requires the user to define what should occur under each target. @@ -727,28 +726,29 @@ the ``make.makeCommand``. More details of this utility are available at: :ref:`u .. 
include:: ../../PLUGINS.rst -Using the Target files from existing framework with riscof +Using the Target files from existing framework with RISCOF ========================================================== + To ease transition from the old framework, the ``makeplugin`` is provided in the IncorePlugins repository. Setup ----- -1. Clone the repository using the following command. +1. Clone the repository using the following command: .. code-block:: shell git clone https://gitlab.com/incoresemi/riscof-plugins.git -2. Modify the following values in the ``config.ini`` +2. Modify the following values in the ``config.ini``: .. code-block:: ini DUTPlugin=makeplugin DUTPluginPath=/makeplugin -3. Add the following node to the ``config.ini``. +3. Add the following node to the ``config.ini``: .. code-block:: ini @@ -758,43 +758,45 @@ Setup ispec= pspec= -Modifying the makefile +Modifying the Makefile ---------------------- -The commands in the makefile need to be modified such that the variables from the following tables + +The commands in the Makefile need to be modified such that the variables from the following tables are used in the commands. These variables shall be replaced with the appropriate values in the ``RUN_TARGET`` and ``COMPILE_TARGET`` commands. .. list-table:: :header-rows: 1 + :widths: 20 80 * - Variable Name - Description * - ``${target_dir}`` - - The directory where the plugin file resides. (riscof_makeplugin.py)) + - The directory where the plugin file resides (``riscof_makeplugin.py``). * - ``${asm}`` - - Absolute path to the assemble test file i.e the .S file for the test. + - Absolute path to the assemble test file i.e. the ``.S`` file for the test. * - ``${work_dir}`` - The absolute path to the work directory for the test. * - ``${test_name}`` - - The name of the test, for example add-01 etc. Can be used for naming any intermediate files generated. + - The name of the test, for example ``add-01`` etc. Can be used for naming any intermediate files generated. * - ``${include}`` - The path to the directory which containts the test header files. This needs to be specified as an include path in the compile command. * - ``${march}`` - - The ISA to be used for compiling the test. This is in the format expected by march argument of gcc. + - The ISA to be used for compiling the test. This is in the format expected by ``-march`` argument of gcc. * - ``${mabi}`` - - The abi to be used for compiling the test. This is in the format expected by mabi argument of gcc. + - The abi to be used for compiling the test. This is in the format expected by ``-mabi`` argument of gcc. * - ${target_isa} - - This is the ISA specified in the input ISA yaml. The idea is that it can be used to configure the model at run time via cli arguments if necessary. + - This is the ISA specified in the input ISA YAML. The idea is that it can be used to configure the model at run time via CLI arguments if necessary. * - ``${test_bin}`` - The name of the binary file to be created after compilation. Can be ignored. Custom names can be used as long as the ``RUN_TARGET`` command picks up the correct binary to execute on the target. * - ``${signature_file}`` - - The absolute path to the signature file. This path cannot be changed and the signature file should be present at this path for riscof to verify at the end of testing. + - The absolute path to the signature file. This path cannot be changed and the signature file should be present at this path for RISCOF to verify at the end of testing. 
* - ``${macros}`` - - The macros to be defined while compilation. Currently they are in the format expected by gcc i.e. ``-D =`` + - The macros to be defined for compilation. Currently they are in the format expected by gcc i.e. ``-D =``. **Example**: -The Makefile.include for the SAIL C Simulator from +The ``Makefile.include`` for the SAIL C Simulator from `here `_ is used as a reference for this example. @@ -845,7 +847,7 @@ is used as a reference for this example. The first order of business is to move the ``COMPILE_CMD`` and ``RUN_CMD`` and define the contents in the ``COMPILE_TARGET`` and ``RUN_TARGET`` respectively as these are the only commands where the -values will be substituted by the python function. Hence the respective variables look like this: +values will be substituted by the Python function. Hence the respective variables look like this: .. code-block:: Makefile :linenos: @@ -861,17 +863,17 @@ values will be substituted by the python function. Hence the respective variable --test-signature=$(*).signature.output \ $(<) -Then these commands are rewritten to work with the python substitution variables. Hence variables +Then these commands are rewritten to work with the Python substitution variables. Hence variables such as ``$$(<)`` are replaced with ``${asm}`` in compile and ``$test_bin`` in the run commands. The ``$$@`` in compile is replaced with ``${test_bin}``. This ensures that the binary file is -appropriately created. The values for ``march`` and ``mabi`` was defied in the old framework in the -makefiles for the suite. These values are provided per target in riscof. Hence the ``$(1)`` is +appropriately created. The values for ``march`` and ``mabi`` were defied in the old framework in the +makefiles for the suite. These values are provided per target in RISCOF. Hence the ``$(1)`` is replaced with ``-march=${march} -mabi=${mabi}``. -The directory with the header files for the tests is also provided by riscof. Hence line 2 is +The directory with the header files for the tests is also provided by RISCOF. Hence line 2 is replaced with ``-I${include} \``. The paths in lines 3 and 4 are fixed to the appropriate ones by -using the directory where the plugin file is present as an anchor. Riscof also provides macro -definitions for the tests too and the plugin generates these macros in the format required by gcc. +using the directory where the plugin file is present as an anchor. RISCOF also provides macro +definitions for the tests and the plugin generates these macros in the format required by gcc. Hence ``${macro}`` is added to the end of the compile command. Similarly the path to the signature file in line 9 is also replaced with ``${signature_file}`` to @@ -919,13 +921,13 @@ to the following: .. note:: To ensure that a ``$`` is printed in the output Makefile (like ``$(RISCV_GCC)``) ensure that a - ``$$`` is present in the input makefile. + ``$$`` is present in the input ``Makefile``. Plugin Function Explanation --------------------------- -\_\_init\_\_(self, *args, **kwargs) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +``\_\_init\_\_(self, *args, **kwargs)`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: python :linenos: @@ -959,18 +961,18 @@ Plugin Function Explanation return sclass This function extracts the necessary fields from the node for the plugin in the config file given to -riscof. The plugin supports the following arguments. - - **makefiles** (*required*)- Comma separated paths to the makefiles. 
If multiple are specified, all will be - merged in the final output makefile. Note that only the varaibles in the makefiles are written +``riscof``. The plugin supports the following arguments: + - ``makefiles`` (*required*)- Comma separated paths to the makefiles. If multiple are specified, all will be + merged in the final output ``Makefile``. Note that only the varaibles in the makefiles are written out into the final makefiles. Any targets or includes will be left out. Such cases can be handled by editing the plugin to output the relevant lines as a part of the ``build`` function. - - **ispec** (*required*)- The path to the input ISA yaml specification of the target. - - **pspec** (*required*)- The path to the input platform yaml specification of the target. - - **make** - The make utility to use like make,bmake,pmake etc. (Default is ``make``) - - **jobs** - The number of threads to launch parallely. (Default is ``1``) + - ``ispec`` (*required*)- The path to the input ISA YAML specification of the target. + - ``pspec`` (*required*)- The path to the input platform YAML specification of the target. + - ``make`` - The make utility to use like ``make``, ``bmake``, ``pmake`` etc. (Default is ``make``) + - ``jobs`` - The number of threads to launch parallely. (Default is ``1``) -initialise(self, suite, work_dir, archtest_env) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +``initialise(self, suite, work_dir, archtest_env)`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: python :linenos: @@ -986,8 +988,8 @@ initialise(self, suite, work_dir, archtest_env) This function stores the necessary values as variables local to the instance. -build(self, isa_yaml, platform_yaml) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +``build(self, isa_yaml, platform_yaml)`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: python :linenos: @@ -1019,7 +1021,7 @@ build(self, isa_yaml, platform_yaml) self.var_dict[entry] = Template(self.var_dict[entry]) This function extracts and resolves the values of different fields needed while generating compile -commands. Line 8, the ISA of the model is extracted from the input ISA yaml. Lines 11 and 12 +commands. Line 8, the ISA of the model is extracted from the input ISA YAML. Lines 11 and 12 extract all variables from the input makefiles. Line 14 generates the absolute path for the makefile. The rest of the lines write out all the variables except the ones named ``*_TARGET`` to the output makefile. Line 19 writes out an extra variable ``TARGET_DIR`` which points to the @@ -1027,13 +1029,13 @@ directory where the plugin files exist. This variable can be used as an anchor t other necessary files (like linker scripts) in the commands. -runTests(self, testList,cgf_file=None) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +``runTests(self, testList, cgf_file=None)`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: python :linenos: - def runTests(self, testList,cgf_file=None): + def runTests(self, testList, cgf_file=None): # Initialise the Make Utility from riscof with the output path for the Makefile make = utils.makeUtil(makefilePath=self.makefilepath) # Modify the make command based on the input values in the config file. @@ -1100,13 +1102,13 @@ runTests(self, testList,cgf_file=None) # Execute all targets. 
make.execute_all(self.work_dir) -This function uses the ``makeUtil`` provided by ``riscof.utils`` to write out a Makefile with the +This function uses the ``makeUtil`` provided by ``riscof.utils`` to write out a makefile with the commands for each entry in the testlist. The format of the command for each target is ``cd ;substitute(COMPILE_TARGET);substitute(RUN_TARGET);``. Lines 9 to 49 extract and setup the values of the necessary variables for substitution. This function uses the `template substitution `_ provided by the -``string`` class of python. The values of the variables in the template strings are defined in a -dictionary(``substitute``) and the substitution is performed for the ``COMPILE_TARGET`` on line 58. +``string`` class of Python. The values of the variables in the template strings are defined in a +dictionary (``substitute``) and the substitution is performed for the ``COMPILE_TARGET`` on line 58. Similarly if ``RUN_TARGET`` is defined in the input makefile, the substitution for the same is done on line 61. Finally the target is added to the makefile and all targets are executed. @@ -1115,7 +1117,7 @@ Tips ==== 1. Avoid writing out multiple ``;`` simultaneously in the Makefiles. -2. Use the template substitution provided by the ``string`` class in python instead of string +2. Use the template substitution provided by the ``string`` class in Python instead of string operations to ease command generation and avoid formatting errors. `This `_ article provides a good overview on the same. 3. It is advisable to use the ``logger`` provided by ``riscof.utils`` for logging/printing @@ -1132,10 +1134,10 @@ Tips Other Utilities available ========================= -RISCOF also provides various standard and quick utilities that can be used by the plugins +RISCOF also provides various standard and quick utilities that can be used by the plugins. -logger ------- +``logger`` +---------- This utility is used for colored and prioritized printing on the terminal. It provides the following levels (in increasing order) @@ -1153,5 +1155,5 @@ Usage: Other utilities --------------- -More utilities like makeUtil and shellcommand execution are available to the users. Details can be +More utilities like ``makeUtil`` and shellcommand execution are available to the users. Details can be found here: :ref:`utils` diff --git a/docs/source/testformat.rst b/docs/source/testformat.rst index 928e595..8221d85 100644 --- a/docs/source/testformat.rst +++ b/docs/source/testformat.rst @@ -151,7 +151,7 @@ significant amount of the framework shall depend on the existence of these macro ``RVTEST_ISA(isa_str)`` : - defines the Test Virtual Machine (TVM, the ISA being tested) - - empty macro to specify the isa required for compilation of the test. + - empty macro to specify the ISA required for compilation of the test. - this is mandated to be present at the start of the test. ``RVTEST_CODE_BEGIN`` : @@ -362,7 +362,7 @@ There are two types of valid statements allowed. * keylist:=value The *keylist* specifies the path to the field in the ISA YAML dictionary whose value needs to be checked. - The *value* is the value against which the entry in the input yaml is checked. + The *value* is the value against which the entry in the input YAML is checked. The *value* can be a regular expression as well, in which case it should be specified as *regex("expression")* Example: @@ -386,7 +386,7 @@ There are two types of valid statements allowed. 
* function_call=Rval - The *function_call* specifies the function to be called along with the arguments to be specified to the function. The node from the yaml which has to be passed to the function can be specified using the *keylist*. + The *function_call* specifies the function to be called along with the arguments to be specified to the function. The node from the YAML which has to be passed to the function can be specified using the *keylist*. *Rval* is the value against which the return value of the function is checked. The list of different functions,arguments and their return values is listed below. **Function Signatures** diff --git a/riscof/Templates/coverage.html b/riscof/Templates/coverage.html index 130240e..83a5634 100644 --- a/riscof/Templates/coverage.html +++ b/riscof/Templates/coverage.html @@ -257,7 +257,7 @@

 {{ name }}
 Environment
-Riscof Version
+RISCOF Version
 {{ riscof_version }}
 Riscv-arch-test Version/Commit Id
 Privilege Spec Version {{ psv }}
-Yaml
+YAML

diff --git a/riscof/Templates/report.html b/riscof/Templates/report.html
index 32710e8..50e2096 100644
--- a/riscof/Templates/report.html
+++ b/riscof/Templates/report.html
@@ -269,7 +269,7 @@

 {{ name }}
 Environment
-Riscof Version
+RISCOF Version
 {{ riscof_version }}
 Riscv-arch-test Version/Commit Id
diff --git a/riscof/Templates/setup/model/riscof_model.py b/riscof/Templates/setup/model/riscof_model.py
index 72c3ff7..f4075e4 100644
--- a/riscof/Templates/setup/model/riscof_model.py
+++ b/riscof/Templates/setup/model/riscof_model.py
@@ -51,9 +51,9 @@ def __init__(self, *args, **kwargs):
         self.isa_spec = os.path.abspath(config['ispec'])
         self.platform_spec = os.path.abspath(config['pspec'])
 
-        #We capture if the user would like the run the tests on the target or
-        #not. If you are interested in just compiling the tests and not running
-        #them on the target, then following variable should be set to False
+        # We capture if the user would like to run the tests on the target or
+        # not. If you are interested in just compiling the tests and not running
+        # them on the target, then the following variable should be set to False.
         if 'target_run' in config and config['target_run']=='0':
             self.target_run = False
         else: