Contents

  • Operation index
  • Introduction
  • BLIS types
  • Basic vs expert interfaces
  • Context type
  • Runtime type
  • BLIS header file
  • Initialization and Cleanup
  • Computational function reference
  • Level-1v operations
  • Level-1d operations
  • Level-1m operations
  • Level-1f operations
  • Level-2 operations
  • Level-3 operations
  • Utility operations
  • Level-3 microkernels
  • Query function reference
  • Clock functions
  • Example code

Operation index

This index provides a quick way to jump directly to the description for each operation discussed later in the Computational function reference section:

  • Level-1v: addv, amaxv, axpyv, axpbyv, copyv, dotv, dotxv, invertv, invscalv, scalv, scal2v, setv, subv, swapv, xpbyv
  • Level-1d: addd, axpyd, copyd, invertd, invscald, scald, scal2d, setd, setid, shiftd, subd, xpbyd
  • Level-1m: addm, axpym, copym, invscalm, scalm, scal2m, setm, subm
  • Level-1f: axpy2v, dotaxpyv, axpyf, dotxf, dotxaxpyf
  • Level-2: gemv, ger, hemv, her, her2, symv, syr, syr2, trmv, trsv
  • Level-3: gemm, gemmt, hemm, herk, her2k, symm, syrk, syr2k, trmm, trmm3, trsm
  • Utility: asumv, norm1m/normfm/normim, norm1v/normfv/normiv, mkherm, mksymm, mktrim, fprintv, fprintm, printv, printm, randv, randm, sumsqv, getsc, getijv, getijm, setsc, setijv, setijm, eqsc, eqv, eqm

Introduction

This document summarizes one of the primary native APIs in BLIS--the "typed" API. Here, we also discuss BLIS-specific type definitions, header files, and prototypes to auxiliary functions. This document also includes APIs to key kernels which are used to accelerate and optimize various level-2 and level-3 operations, though the Kernels Guide goes into more detail, especially for level-3 microkernels.

BLIS implements many functions that are not listed here, either because they are lower-level functions or because they are intended primarily for use by developers and experts.

For curious readers, the typed API was given its name (a) because it exposes the floating-point types in the names of its functions, and (b) to contrast it with the other native API in BLIS, the object API, which is documented separately in the object API reference. (The third API supported by BLIS is the BLAS compatibility layer, which mimics conventional Fortran-77 BLAS.)

In general, this document should be treated more as a reference than a place to learn how to use BLIS in your application. Thus, we highly encourage all readers to first study the example code provided within the BLIS source distribution.

BLIS types

The following tables list various types used throughout the BLIS typed API.

Integer-based types

| BLIS integer type | Type definition | Used to represent... |
|:------------------|:----------------|:---------------------|
| gint_t | int32_t or int64_t | general-purpose signed integer; used to define signed integer types. |
| dim_t | gint_t | matrix and vector dimensions. |
| inc_t | gint_t | matrix row/column strides and vector increments. |
| doff_t | gint_t | matrix diagonal offset: if k < 0, the diagonal begins at element (-k,0); otherwise the diagonal begins at element (0,k). |

Floating-point types

| BLIS type | BLIS char | Type definition | Used to represent... |
|:----------|:----------|:----------------|:---------------------|
| float | s | N/A | single-precision real numbers |
| double | d | N/A | double-precision real numbers |
| scomplex | c | struct { float real; float imag; } | single-precision complex numbers |
| dcomplex | z | struct { double real; double imag; } | double-precision complex numbers |

Enumerated parameter types

| trans_t | Semantic meaning: Corresponding matrix operand... |
|:--------|:---------------------------------------------------|
| BLIS_NO_TRANSPOSE | will be used as given. |
| BLIS_TRANSPOSE | will be implicitly transposed. |
| BLIS_CONJ_NO_TRANSPOSE | will be implicitly conjugated. |
| BLIS_CONJ_TRANSPOSE | will be implicitly transposed and conjugated. |

| conj_t | Semantic meaning: Corresponding matrix/vector operand... |
|:-------|:----------------------------------------------------------|
| BLIS_NO_CONJUGATE | will be used as given. |
| BLIS_CONJUGATE | will be implicitly conjugated. |

| side_t | Semantic meaning: Corresponding matrix operand... |
|:-------|:----------------------------------------------------|
| BLIS_LEFT | appears on the left. |
| BLIS_RIGHT | appears on the right. |

| uplo_t | Semantic meaning: Corresponding matrix operand... |
|:-------|:----------------------------------------------------|
| BLIS_LOWER | is stored in (and will be accessed only from) the lower triangle. |
| BLIS_UPPER | is stored in (and will be accessed only from) the upper triangle. |
| BLIS_DENSE | is stored as a full matrix (ie: in both triangles). |

| diag_t | Semantic meaning: Corresponding matrix operand... |
|:-------|:----------------------------------------------------|
| BLIS_NONUNIT_DIAG | has a non-unit diagonal that should be explicitly read from. |
| BLIS_UNIT_DIAG | has a unit diagonal that should be implicitly assumed (and not read from). |

Basic vs expert interfaces

The functions listed in this document belong to the "basic" interface subset of the BLIS typed API. There is a companion "expert" interface that mirrors the basic interface, except that it also contains at least one additional parameter that is only of interest to experts and library developers. The expert interfaces use the same name as the basic function names, except for an additional "_ex" suffix. For example, the basic interface for gemm is

void bli_?gemm
     (
       trans_t transa,
       trans_t transb,
       dim_t   m,
       dim_t   n,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

while the expert interface is:

void bli_?gemm_ex
     (
       trans_t transa,
       trans_t transb,
       dim_t   m,
       dim_t   n,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc,
       cntx_t* cntx,
       rntm_t* rntm
     );

The expert interface contains two additional parameters: a cntx_t* and a rntm_t*. Note that calling a function from the expert interface with the cntx_t* and rntm_t* arguments each set to NULL is equivalent to calling the corresponding basic interface. Specifically, a NULL value passed in for the cntx_t* results in a valid context being queried from BLIS, and a NULL value passed in for the rntm_t* results in the current global settings for multithreading being used.
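
For instance, under the prototypes above, the following minimal sketch (small column-stored matrices allocated on the heap) calls the expert interface with NULL for both extra arguments and then makes the equivalent basic call:

#include <stdlib.h>
#include "blis.h"

int main( void )
{
    dim_t  m = 4, n = 4, k = 4;
    double alpha = 1.0, beta = 0.0;

    // Column-stored matrices: unit row stride; column stride = number of rows.
    double* a = calloc( m * k, sizeof( double ) );
    double* b = calloc( k * n, sizeof( double ) );
    double* c = calloc( m * n, sizeof( double ) );

    // Passing NULL for the cntx_t* and rntm_t* makes this expert call
    // equivalent to the basic bli_dgemm() call below it.
    bli_dgemm_ex( BLIS_NO_TRANSPOSE, BLIS_NO_TRANSPOSE, m, n, k,
                  &alpha, a, 1, m,
                          b, 1, k,
                  &beta,  c, 1, m,
                  NULL, NULL );

    bli_dgemm( BLIS_NO_TRANSPOSE, BLIS_NO_TRANSPOSE, m, n, k,
               &alpha, a, 1, m,
                       b, 1, k,
               &beta,  c, 1, m );

    free( a ); free( b ); free( c );
    return 0;
}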

Context type

In general, it is permissible to pass in NULL for a cntx_t* parameter when calling an expert interface such as bli_dgemm_ex(). However, there are cases where NULL values are not accepted and may result in a segmentation fault. Specifically, the cntx_t* argument appears in the interfaces to the gemm, trsm, and gemmtrsm level-3 microkernels along with all level-1v and level-1f kernels. There, as a general rule, a valid pointer must be passed in. Whenever a valid context is needed, the developer may query a default context from the global kernel structure (if a context is not already available in the current scope):

cntx_t* bli_gks_query_cntx( void );

When BLIS is configured to target a configuration family (e.g. intel64, x86_64), bli_gks_query_cntx() will use cpuid or an equivalent heuristic to select and return the appropriate context. When BLIS is configured to target a singleton sub-configuration (e.g. haswell, skx), bli_gks_query_cntx() will unconditionally return a pointer to the context appropriate for the targeted configuration.

Runtime type

When calling one of the expert interfaces, a rntm_t (runtime) object can be used to convey a thread-local request for parallelism to the underlying implementation. Runtime objects are inherently thread-safe when they are declared as local (stack) variables or allocated via malloc(), initialized, and then passed into the expert interface of interest.

Notice that runtime objects have no analogue in most BLAS libraries, where you are forced to specify parallelism at a global level (usually via environment variables).

For more information on using rntm_t objects, please read the Multithreading documentation, paying close attention to the section on local setting of parallelism.
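
As a concrete illustration, the sketch below requests eight threads for a single gemm call via a stack-allocated rntm_t. The initialization and setter functions used here, bli_rntm_init() and bli_rntm_set_num_threads(), are not part of this document; they are described in the Multithreading documentation, so treat this as a sketch of the pattern rather than a substitute for that document.

#include <stdlib.h>
#include "blis.h"

int main( void )
{
    dim_t  m = 1000, n = 1000, k = 1000;
    double alpha = 1.0, beta = 0.0;

    double* a = calloc( m * k, sizeof( double ) );
    double* b = calloc( k * n, sizeof( double ) );
    double* c = calloc( m * n, sizeof( double ) );

    // Declare and initialize a runtime object on the stack, then request
    // (for example) eight threads for this call only. (bli_rntm_init() and
    // bli_rntm_set_num_threads() are documented in Multithreading.md.)
    rntm_t rntm;
    bli_rntm_init( &rntm );
    bli_rntm_set_num_threads( 8, &rntm );

    // Pass the runtime object via the expert interface; other threads and
    // other calls are unaffected by this thread-local request.
    bli_dgemm_ex( BLIS_NO_TRANSPOSE, BLIS_NO_TRANSPOSE, m, n, k,
                  &alpha, a, 1, m,
                          b, 1, k,
                  &beta,  c, 1, m,
                  NULL, &rntm );

    free( a ); free( b ); free( c );
    return 0;
}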

BLIS header file

All BLIS definitions and prototypes may be included in your C source file by including a single header file:

#include "blis.h"

Initialization and Cleanup

As of 9804adf, BLIS no longer requires explicit initialization and finalization at runtime. In other words, users do not need to call bli_init() before the application can make use of the library (and bli_finalize() after the application is finished with the library). Instead, all computational operations (and some non-computational functions) in BLIS will initialize the library on behalf of the user if it has not already been initialized. This change was made to simplify the user experience.

Application developers should keep in mind, however, that this new self-initialization regime implies the following: unless the library is explicitly finalized via bli_finalize(), it will, once initialized, remain initialized for the life of the application. This is likely not a problem in the vast majority of cases. However, a memory-constrained application that performs all of its dense linear algebra (DLA) up-front, for example, may wish to explicitly finalize the library after BLIS is no longer needed in order to free up memory for other purposes.

Similarly, an expert user may call bli_init() manually in order to control when the overhead of library initialization is incurred, even though the library would have self-initialized.

The interfaces to bli_init() and bli_finalize() are quite simple; they require no arguments and return no values:

void bli_init( void );
void bli_finalize( void );

Computational function reference

Notes for interpreting the following prototypes:

  • Any occurrence of ? should be replaced with s, d, c, or z to form an actual function name.
  • Any occurrence of ctype should be replaced with the actual C type corresponding to the datatype instance in question, while rtype should be replaced by the real projection of ctype. For example:
    • If we consider the prototype for bli_zaxpyv() below, ctype refers to dcomplex.
    • If we consider the prototype for bli_znormfv() below, ctype refers to dcomplex while rtype refers to double.
  • Any occurrence of itype should be replaced with the general-purpose signed integer type, gint_t.
  • All vector arguments have associated increments that follow them in the parameter list, typically listed as incX for a given vector x. The semantic meaning of a vector increment is "the distance, in units of elements, between any two adjacent elements in the vector."
  • All matrix arguments have associated row and column stride arguments that follow them, typically listed as rsX and csX for a given matrix X. Row strides are always listed first, and column strides are always listed second. The semantic meaning of a row stride is "the distance, in units of elements, to the next row (within a column)," and the meaning of a column stride is "the distance, in units of elements, to the next column (within a row)." Thus, unit row stride implies column-major storage and unit column stride implies row-major storage. (See the short example following these notes.)

Notes for interpreting function descriptions:

  • conjX() and transX() should be interpreted as predicates that capture the operand X with any value of conj_t or trans_t applied. For example:
    • conjx(x) refers to a vector x that is either conjugated or used as given.
    • transa(A) refers to a matrix A that is either transposed, conjugated and transposed, conjugated only, or used as given.
  • Any operand marked with conj() is unconditionally conjugated.
  • Any operand marked with ^T is unconditionally transposed. Similarly, any operand that is marked with ^H is unconditionally conjugate-transposed.
  • All occurrences of alpha, beta, and rho parameters are scalars.
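
The following short sketch illustrates the stride and increment conventions described above, using bli_drandm() and bli_dprintv() (both documented in the utility section below): a 3 x 4 column-stored matrix has unit row stride, and a single row or column of that matrix can be treated as a vector whose increment equals the corresponding stride.

#include "blis.h"

int main( void )
{
    // A 3 x 4 matrix stored in column-major order: unit row stride (rs = 1)
    // and column stride equal to the number of rows (cs = 3).
    dim_t  m = 3, n = 4;
    double a[ 3 * 4 ];
    inc_t  rs = 1, cs = m;

    bli_drandm( 0, BLIS_DENSE, m, n, a, rs, cs );

    // The second row of A, viewed as a vector of length n: its elements are
    // spaced cs elements apart, so the vector increment is cs.
    double* a_r1 = a + 1 * rs;
    bli_dprintv( "row 1 of A", n, a_r1, cs, "%5.2f", "" );

    // The second column of A, viewed as a vector of length m: increment rs.
    double* a_c1 = a + 1 * cs;
    bli_dprintv( "column 1 of A", m, a_c1, rs, "%5.2f", "" );

    return 0;
}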

Level-1v operations

Level-1v operations perform various level-1 BLAS-like operations on vectors (hence the v). Note: Most level-1v operations have a corresponding level-1v kernel through which they are primarily implemented.


addv

void bli_?addv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Perform

  y := y + conjx(x)

where x and y are vectors of length n.


amaxv

void bli_?amaxv
     (
       dim_t   n,
       ctype*  x, inc_t incx,
       dim_t*  index
     );

Given a vector x of length n, return via index the zero-based index of the element of x that contains the largest absolute value (or, in the complex domain, the largest complex modulus).

If NaN is encountered, it is treated as if it were a valid value that was smaller than any other value in the vector. If more than one element contains the same maximum value, the index of the latter element is returned via index.

Note: This function attempts to mimic the algorithm for finding the element with the maximum absolute value in the netlib BLAS routines i?amax().


axpyv

void bli_?axpyv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Perform

  y := y + alpha * conjx(x)

where x and y are vectors of length n, and alpha is a scalar.
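
For example, a double-precision real invocation with contiguous (unit-increment) vectors might look like the following sketch, which prints the result with bli_dprintv() (documented in the utility section):

#include "blis.h"

int main( void )
{
    dim_t  n = 5;
    double alpha = 2.0;
    double x[ 5 ] = { 1.0, 2.0, 3.0, 4.0, 5.0 };
    double y[ 5 ] = { 0.0, 0.0, 0.0, 0.0, 0.0 };

    // y := y + alpha * x, with contiguous vectors (unit increments).
    bli_daxpyv( BLIS_NO_CONJUGATE, n, &alpha, x, 1, y, 1 );

    bli_dprintv( "y after axpyv", n, y, 1, "%5.2f", "" );

    return 0;
}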


axpbyv

void bli_?axpbyv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy
     )

Perform

  y := beta * y + alpha * conjx(x)

where x and y are vectors of length n, and alpha and beta are scalars.


copyv

void bli_?copyv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Perform

  y := conjx(x)

where x and y are vectors of length n.


dotv

void bli_?dotv
     (
       conj_t  conjx,
       conj_t  conjy,
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  rho
     );

Perform

  rho := conjx(x)^T * conjy(y)

where x and y are vectors of length n, and rho is a scalar.


dotxv

void bli_?dotxv
     (
       conj_t  conjx,
       conj_t  conjy,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  beta,
       ctype*  rho
     );

Perform

  rho := beta * rho + alpha * conjx(x)^T * conjy(y)

where x and y are vectors of length n, and alpha, beta, and rho are scalars.


invertv

void bli_?invertv
     (
       dim_t   n,
       ctype*  x, inc_t incx
     );

Invert (take the reciprocal of) each element of an n-length vector x.


invscalv

void bli_?invscalv
     (
       conj_t  conjalpha,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx
     );

Perform

  x := ( 1.0 / conjalpha(alpha) ) * x

where x is a vector of length n, and alpha is a scalar.


scalv

void bli_?scalv
     (
       conj_t  conjalpha,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx
     );

Perform

  x := conjalpha(alpha) * x

where x is a vector of length n, and alpha is a scalar.


scal2v

void bli_?scal2v
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Perform

  y := alpha * conjx(x)

where x and y are vectors of length n, and alpha is a scalar.


setv

void bli_?setv
     (
       conj_t  conjalpha,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx
     );

Perform

  x := conjalpha(alpha)

That is, set all elements of an n-length vector x to scalar conjalpha(alpha).


subv

void bli_?subv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Perform

  y := y - conjx(x)

where x and y are vectors of length n.


swapv

void bli_?swapv
     (
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Swap corresponding elements of two n-length vectors x and y.


xpbyv

void bli_?xpbyv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy
     )

Perform

  y := beta * y + conjx(x)

where x and y are vectors of length n, and beta is a scalar.


Level-1d operations

Level-1d operations perform various level-1 BLAS-like operations on matrix diagonals (hence the d).

Most of these operations are similar to their level-1m counterparts, except that they only read and update matrix diagonals and therefore do not take any uplo arguments. Please see the corresponding level-1m operation for a description of the arguments.


addd

void bli_?addd
     (
       doff_t  diagoffa,
       diag_t  diaga,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

axpyd

void bli_?axpyd
     (
       doff_t  diagoffa,
       diag_t  diaga,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

copyd

void bli_?copyd
     (
       doff_t  diagoffa,
       diag_t  diaga,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

invertd

void bli_?invertd
     (
       doff_t  diagoffa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa
     );

invscald

void bli_?invscald
     (
       conj_t  conjalpha,
       doff_t  diagoffa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

scald

void bli_?scald
     (
       conj_t  conjalpha,
       doff_t  diagoffa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

scal2d

void bli_?scal2d
     (
       doff_t  diagoffa,
       diag_t  diaga,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

setd

void bli_?setd
     (
       conj_t  conjalpha,
       doff_t  diagoffa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

setid

void bli_?setid
     (
       doff_t   diagoffa,
       dim_t    m,
       dim_t    n,
       ctype_r* alpha,
       ctype*   a, inc_t rsa, inc_t csa
     );

Set the imaginary components of every element along the diagonal of a, as specified by diagoffa, to a scalar alpha. Note that the datatype of alpha must be the real projection of the datatype of a.


shiftd

void bli_?shiftd
     (
       doff_t  diagoffa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

Add a constant value alpha to every element along the diagonal of a, as specified by diagoffa.


subd

void bli_?subd
     (
       doff_t  diagoffa,
       diag_t  diaga,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

xpbyd

void bli_?xpbyd
     (
       doff_t  diagoffa,
       diag_t  diaga,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  beta,
       ctype*  b, inc_t rsb, inc_t csb
     );

Level-1m operations

Level-1m operations perform various level-1 BLAS-like operations on matrices (hence the m).


addm

void bli_?addm
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Perform

  B := B + transa(A)

where B is an m x n matrix, A is stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa and unit/non-unit nature of the diagonal specified by diaga. If uploa indicates lower or upper storage, only that part of matrix A will be referenced and used to update B.
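
The following sketch illustrates the diagonal offset and uplo arguments: it adds only the lower triangle of a randomized 4 x 4 matrix A (diagonal offset 0, non-unit diagonal, no transpose) to a zeroed matrix B, leaving the strictly upper triangle of B unchanged. It uses bli_drandm(), bli_dsetm(), and bli_dprintm(), all documented later in this reference.

#include "blis.h"

int main( void )
{
    dim_t  m = 4, n = 4;
    double a[ 16 ], b[ 16 ];
    double zero = 0.0;

    // Column-stored 4 x 4 matrices (rs = 1, cs = m).
    bli_drandm( 0, BLIS_DENSE, m, n, a, 1, m );
    bli_dsetm( BLIS_NO_CONJUGATE, 0, BLIS_NONUNIT_DIAG, BLIS_DENSE,
               m, n, &zero, b, 1, m );

    // B := B + A, reading A only from its lower triangle (diagonal offset 0,
    // non-unit diagonal, no transpose). The strictly upper triangle of B is
    // left untouched.
    bli_daddm( 0, BLIS_NONUNIT_DIAG, BLIS_LOWER, BLIS_NO_TRANSPOSE,
               m, n, a, 1, m, b, 1, m );

    bli_dprintm( "B after addm", m, n, b, 1, m, "%5.2f", "" );

    return 0;
}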


axpym

void bli_?axpym
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Perform

  B := B + alpha * transa(A)

where B is an m x n matrix, A is stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa and unit/non-unit nature of the diagonal specified by diaga. If uploa indicates lower or upper storage, only that part of matrix A will be referenced and used to update B.


copym

void bli_?copym
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Perform

  B := transa(A)

where B is an m x n matrix, A is stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa and unit/non-unit nature of the diagonal specified by diaga. If uploa indicates lower or upper storage, only that part of matrix A will be referenced and used to update B.


invscalm

void bli_?invscalm
     (
       conj_t  conjalpha,
       doff_t  diagoffa,
       uplo_t  uploa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := ( 1.0 / conjalpha(alpha) ) * A

where A is an m x n matrix stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa. If uploa indicates lower or upper storage, only that part of matrix A will be updated.


scalm

void bli_?scalm
     (
       conj_t  conjalpha,
       doff_t  diagoffa,
       uplo_t  uploa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := conjalpha(alpha) * A

where A is an m x n matrix stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa. If uploa indicates lower or upper storage, only that part of matrix A will be updated.


scal2m

void bli_?scal2m
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Perform

  B := alpha * transa(A)

where B is an m x n matrix, A is stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa and unit/non-unit nature of the diagonal specified by diaga. If uploa indicates lower or upper storage, only that part of matrix A will be referenced and used to update B.


setm

void bli_?setm
     (
       conj_t  conjalpha,
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa
     );

Set all elements of an m x n matrix A to conjalpha(alpha), where A is stored as a dense matrix, or lower- or upper- triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa and unit/non-unit nature of the diagonal specified by diaga. If uploa indicates lower or upper storage, only that part of matrix A will be updated.


subm

void bli_?subm
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Perform

  B := B - transa(A)

where B is an m x n matrix, A is stored as a dense matrix, or lower- or upper-triangular/trapezoidal matrix, as specified by uploa, with the diagonal offset of A specified by diagoffa and unit/non-unit nature of the diagonal specified by diaga. If uploa indicates lower or upper storage, only that part of matrix A will be referenced and used to update B.


Level-1f operations

Level-1f operations implement various fused combinations of level-1 operations (hence the f). Note: Each level-1f operation has a corresponding level-1f kernel through which it is primarily implemented.

Level-1f kernels are employed when optimizing level-2 operations.


axpy2v

void bli_?axpy2v
     (
       conj_t  conjx,
       conj_t  conjy,
       dim_t   m,
       ctype*  alphax,
       ctype*  alphay,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  z, inc_t incz
     );

Perform

  z := z + alphax * conjx(x) + alphay * conjy(y)

where x, y, and z are vectors of length m. The kernel, if optimized, is implemented as a fused pair of calls to axpyv.


dotaxpyv

void bli_?dotaxpyv
     (
       conj_t  conjxt,
       conj_t  conjx,
       conj_t  conjy,
       dim_t   m,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  rho,
       ctype*  z, inc_t incz
     );

Perform

  rho := conjxt(x^T) * conjy(y)
  z   := z + alpha * conjx(x)

where x, y, and z are vectors of length m and alpha and rho are scalars. The kernel, if optimized, is implemented as a fusion of calls to dotv and axpyv.


axpyf

void bli_?axpyf
     (
       conj_t  conja,
       conj_t  conjx,
       dim_t   m,
       dim_t   b,
       ctype*  alpha,
       ctype*  a, inc_t inca, inc_t lda,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy
     );

Perform

  y := y + alpha * conja(A) * conjx(x)

where A is an m x b matrix, x is a vector of length b, y is a vector of length m, and alpha is a scalar. The kernel, if optimized, is implemented as a fused series of calls to axpyv, where b is less than or equal to an implementation-dependent fusing factor specific to axpyf.


dotxf

void bli_?dotxf
     (
       conj_t  conjat,
       conj_t  conjx,
       dim_t   m,
       dim_t   b,
       ctype*  alpha,
       ctype*  a, inc_t inca, inc_t lda,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy
     );

Perform

  y := beta * y + alpha * conjat(A^T) * conjx(x)

where A is an m x b matrix, x is a vector of length m, y is a vector of length b, and alpha and beta are scalars. The kernel, if optimized, is implemented as a fused series of calls to dotxv, where b is less than or equal to an implementation-dependent fusing factor specific to dotxf.


dotxaxpyf

void bli_?dotxaxpyf
     (
       conj_t  conjat,
       conj_t  conja,
       conj_t  conjw,
       conj_t  conjx,
       dim_t   m,
       dim_t   b,
       ctype*  alpha,
       ctype*  a, inc_t inca, inc_t lda,
       ctype*  w, inc_t incw,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy,
       ctype*  z, inc_t incz
     );

Perform

  y := beta * y + alpha * conjat(A^T) * conjw(w)
  z :=        z + alpha * conja(A)    * conjx(x)

where A is an m x b matrix, w and z are vectors of length m, x and y are vectors of length b, and alpha and beta are scalars. The kernel, if optimized, is implemented as a fusion of calls to dotxf and axpyf.

Level-2 operations

Level-2 operations perform various level-2 BLAS-like operations.


gemv

void bli_?gemv
     (
       trans_t transa,
       conj_t  conjx,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy
     );

Perform

  y := beta * y + alpha * transa(A) * conjx(x)

where transa(A) is an m x n matrix, y is a vector of length m, and x is a vector of length n.
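
For example, a double-precision matrix-vector product with a column-stored 3 x 4 matrix might look like this minimal sketch:

#include "blis.h"

int main( void )
{
    dim_t  m = 3, n = 4;
    double alpha = 1.0, beta = 0.0;
    double a[ 3 * 4 ];
    double x[ 4 ] = { 1.0, 1.0, 1.0, 1.0 };
    double y[ 3 ];

    // Column-stored 3 x 4 matrix A with random entries.
    bli_drandm( 0, BLIS_DENSE, m, n, a, 1, m );

    // y := 0 * y + 1 * A * x  (no transpose on A, no conjugation of x).
    bli_dgemv( BLIS_NO_TRANSPOSE, BLIS_NO_CONJUGATE, m, n,
               &alpha, a, 1, m, x, 1, &beta, y, 1 );

    bli_dprintv( "y = A * x", m, y, 1, "%5.2f", "" );

    return 0;
}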


ger

void bli_?ger
     (
       conj_t  conjx,
       conj_t  conjy,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := A + alpha * conjx(x) * conjy(y)^T

where A is an m x n matrix, and x and y are vectors of length m and n, respectively.


hemv

void bli_?hemv
     (
       uplo_t  uploa,
       conj_t  conja,
       conj_t  conjx,
       dim_t   m,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy
     );

Perform

  y := beta * y + alpha * conja(A) * conjx(x)

where A is an m x m Hermitian matrix stored in the lower or upper triangle as specified by uploa, and y and x are vectors of length m.


her

void bli_?her
     (
       uplo_t  uploa,
       conj_t  conjx,
       dim_t   m,
       rtype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := A + alpha * conjx(x) * conjx(x)^H

where A is an m x m Hermitian matrix stored in the lower or upper triangle as specified by uploa, and x is a vector of length m.

Note: The floating-point type of alpha is always the real projection of the floating-point types of x and A.


her2

void bli_?her2
     (
       uplo_t  uploa,
       conj_t  conjx,
       conj_t  conjy,
       dim_t   m,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := A + alpha * conjx(x) * conjy(y)^H + conj(alpha) * conjy(y) * conjx(x)^H

where A is an m x m Hermitian matrix stored in the lower or upper triangle as specified by uploa, and x and y are vectors of length m.


symv

void bli_?symv
     (
       uplo_t  uploa,
       conj_t  conja,
       conj_t  conjx,
       dim_t   m,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  x, inc_t incx,
       ctype*  beta,
       ctype*  y, inc_t incy
     );

Perform

  y := beta * y + alpha * conja(A) * conjx(x)

where A is an m x m symmetric matrix stored in the lower or upper triangle as specified by uploa, and y and x are vectors of length m.


syr

void bli_?syr
     (
       uplo_t  uploa,
       conj_t  conjx,
       dim_t   m,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := A + alpha * conjx(x) * conjx(x)^T

where A is an m x m symmetric matrix stored in the lower or upper triangle as specified by uploa, and x is a vector of length m.


syr2

void bli_?syr2
     (
       uplo_t  uploa,
       conj_t  conjx,
       conj_t  conjy,
       dim_t   m,
       ctype*  alpha,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       ctype*  a, inc_t rsa, inc_t csa
     );

Perform

  A := A + alpha * conjx(x) * conjy(y)^T + conj(alpha) * conjy(y) * conjx(x)^T

where A is an m x m symmetric matrix stored in the lower or upper triangle as specified by uploa, and x and y are vectors of length m.


trmv

void bli_?trmv
     (
       uplo_t  uploa,
       trans_t transa,
       diag_t  diaga,
       dim_t   m,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  x, inc_t incx
     );

Perform

  x := alpha * transa(A) * x

where A is an m x m triangular matrix stored in the lower or upper triangle as specified by uploa with unit/non-unit nature specified by diaga, and x is a vector of length m.


trsv

void bli_?trsv
     (
       uplo_t  uploa,
       trans_t transa,
       diag_t  diaga,
       dim_t   m,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  y, inc_t incy
     );

Solve the linear system

  transa(A) * x = alpha * y

where A is an m x m triangular matrix stored in the lower or upper triangle as specified by uploa with unit/non-unit nature specified by diaga, and x and y are vectors of length m. The right-hand side vector operand y is overwritten with the solution vector x.


Level-3 operations

Level-3 operations perform various level-3 BLAS-like operations. Note: All level-3 operations are implemented in terms of a handful of level-3 microkernels. Please see the Kernels Guide for more details.


gemm

void bli_?gemm
     (
       trans_t transa,
       trans_t transb,
       dim_t   m,
       dim_t   n,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transb(B)

where C is an m x n matrix, transa(A) is an m x k matrix, and transb(B) is a k x n matrix.
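
A complete, minimal example of a double-precision gemm call with column-stored operands (unit row stride; column stride equal to the number of rows) might look like the following sketch; a row-stored matrix would instead use a row stride equal to the number of columns and a unit column stride.

#include <stdlib.h>
#include "blis.h"

int main( void )
{
    dim_t  m = 4, n = 5, k = 3;
    double alpha = 1.0, beta = 0.0;

    // Column-stored matrices: rs = 1, cs = number of rows.
    double* a = malloc( m * k * sizeof( double ) );
    double* b = malloc( k * n * sizeof( double ) );
    double* c = calloc( m * n, sizeof( double ) );

    bli_drandm( 0, BLIS_DENSE, m, k, a, 1, m );
    bli_drandm( 0, BLIS_DENSE, k, n, b, 1, k );

    // C := 0 * C + 1 * A * B.
    bli_dgemm( BLIS_NO_TRANSPOSE, BLIS_NO_TRANSPOSE, m, n, k,
               &alpha, a, 1, m,
                       b, 1, k,
               &beta,  c, 1, m );

    bli_dprintm( "C = A * B", m, n, c, 1, m, "%5.2f", "" );

    free( a ); free( b ); free( c );
    return 0;
}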


gemmt

void bli_?gemmt
     (
       uplo_t  uploc,
       trans_t transa,
       trans_t transb,
       dim_t   m,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transb(B)

where C is an m x m matrix, transa(A) is an m x k matrix, and transb(B) is a k x m matrix. This operation is similar to bli_?gemm() except that it only updates the lower or upper triangle of C as specified by uploc.


hemm

void bli_?hemm
     (
       side_t  sidea,
       uplo_t  uploa,
       conj_t  conja,
       trans_t transb,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * conja(A) * transb(B)

if sidea is BLIS_LEFT, or

  C := beta * C + alpha * transb(B) * conja(A)

if sidea is BLIS_RIGHT, where C and B are m x n matrices and A is a Hermitian matrix stored in the lower or upper triangle as specified by uploa. When sidea is BLIS_LEFT, A is m x m, and when sidea is BLIS_RIGHT, A is n x n.


herk

void bli_?herk
     (
       uplo_t  uploc,
       trans_t transa,
       dim_t   m,
       dim_t   k,
       rtype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       rtype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transa(A)^H

where C is an m x m Hermitian matrix stored in the lower or upper triangle as specified by uploc and transa(A) is an m x k matrix.

Note: The floating-point types of alpha and beta are always the real projection of the floating-point types of A and C.


her2k

void bli_?her2k
     (
       uplo_t  uploc,
       trans_t transa,
       trans_t transb,
       dim_t   m,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       rtype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transb(B)^H + conj(alpha) * transb(B) * transa(A)^H

where C is an m x m Hermitian matrix stored in the lower or upper triangle as specified by uploc and transa(A) and transb(B) are m x k matrices.

Note: The floating-point type of beta is always the real projection of the floating-point types of A and C.


symm

void bli_?symm
     (
       side_t  sidea,
       uplo_t  uploa,
       conj_t  conja,
       trans_t transb,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * conja(A) * transb(B)

if sidea is BLIS_LEFT, or

  C := beta * C + alpha * transb(B) * conja(A)

if sidea is BLIS_RIGHT, where C and B are m x n matrices and A is a symmetric matrix stored in the lower or upper triangle as specified by uploa. When sidea is BLIS_LEFT, A is m x m, and when sidea is BLIS_RIGHT, A is n x n.


syrk

void bli_?syrk
     (
       uplo_t  uploc,
       trans_t transa,
       dim_t   m,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transa(A)^T

where C is an m x m symmetric matrix stored in the lower or upper triangle as specified by uploc and transa(A) is an m x k matrix.


syr2k

void bli_?syr2k
     (
       uplo_t  uploc,
       trans_t transa,
       trans_t transb,
       dim_t   m,
       dim_t   k,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transb(B)^T + alpha * transb(B) * transa(A)^T

where C is an m x m symmetric matrix stored in the lower or upper triangle as specified by uploc and transa(A) and transb(B) are m x k matrices.


trmm

void bli_?trmm
     (
       side_t  sidea,
       uplo_t  uploa,
       trans_t transa,
       diag_t  diaga,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Perform

  B := alpha * transa(A) * B

if sidea is BLIS_LEFT, or

  B := alpha * B * transa(A)

if sidea is BLIS_RIGHT, where B is an m x n matrix and A is a triangular matrix stored in the lower or upper triangle as specified by uploa with unit/non-unit nature specified by diaga. When sidea is BLIS_LEFT, A is m x m, and when sidea is BLIS_RIGHT, A is n x n.


trmm3

void bli_?trmm3
     (
       side_t  sidea,
       uplo_t  uploa,
       trans_t transa,
       diag_t  diaga,
       trans_t transb,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb,
       ctype*  beta,
       ctype*  c, inc_t rsc, inc_t csc
     );

Perform

  C := beta * C + alpha * transa(A) * transb(B)

if sidea is BLIS_LEFT, or

  C := beta * C + alpha * transb(B) * transa(A)

if sidea is BLIS_RIGHT, where C and transb(B) are m x n matrices and A is a triangular matrix stored in the lower or upper triangle as specified by uploa with unit/non-unit nature specified by diaga. When sidea is BLIS_LEFT, A is m x m, and when sidea is BLIS_RIGHT, A is n x n.


trsm

void bli_?trsm
     (
       side_t  sidea,
       uplo_t  uploa,
       trans_t transa,
       diag_t  diaga,
       dim_t   m,
       dim_t   n,
       ctype*  alpha,
       ctype*  a, inc_t rsa, inc_t csa,
       ctype*  b, inc_t rsb, inc_t csb
     );

Solve the linear system with multiple right-hand sides

  transa(A) * X = alpha * B

if sidea is BLIS_LEFT, or

  X * transa(A) = alpha * B

if sidea is BLIS_RIGHT, where X and B are m x n matrices and A is a triangular matrix stored in the lower or upper triangle as specified by uploa with unit/non-unit nature specified by diaga. When sidea is BLIS_LEFT, A is m x m, and when sidea is BLIS_RIGHT, A is n x n. The right-hand side matrix operand B is overwritten with the solution matrix X.
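
For example, the following sketch solves A * X = B for a small lower triangular A, overwriting the column-stored right-hand side matrix B with the solution:

#include "blis.h"

int main( void )
{
    dim_t  m = 3, n = 2;
    double alpha = 1.0;

    // Lower triangular 3 x 3 matrix A, column-stored; only the lower
    // triangle and diagonal are read.
    double a[ 3 * 3 ] = { 2.0, 1.0, 1.0,
                          0.0, 2.0, 1.0,
                          0.0, 0.0, 2.0 };

    // Right-hand sides B (3 x 2), column-stored; overwritten with X such
    // that A * X = alpha * B.
    double b[ 3 * 2 ] = { 2.0, 3.0, 4.0,
                          4.0, 5.0, 6.0 };

    bli_dtrsm( BLIS_LEFT, BLIS_LOWER, BLIS_NO_TRANSPOSE, BLIS_NONUNIT_DIAG,
               m, n, &alpha, a, 1, m, b, 1, m );

    bli_dprintm( "X = inv(A) * B", m, n, b, 1, m, "%5.2f", "" );

    return 0;
}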


Utility operations


asumv

void bli_?asumv
     (
       dim_t   n,
       ctype*  x, inc_t incx,
       rtype*  asum
     );

Compute the sum of the absolute values of the fundamental elements of vector x. The resulting sum is stored to asum.

Note: The floating-point type of asum is always the real projection of the floating-point type of x. Note: This function attempts to mimic the algorithm for computing the absolute vector sum in the netlib BLAS routines *asum().


norm1m

normfm

normim

void bli_?norm[1fi]m
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rs_a, inc_t cs_a,
       rtype*  norm
     );

Compute the one-norm (bli_?norm1m()), Frobenius norm (bli_?normfm()), or infinity norm (bli_?normim()) of the elements in an m x n matrix A. If uploa is BLIS_LOWER or BLIS_UPPER then A is assumed to be lower or upper triangular, respectively, with the main diagonal located at offset diagoffa. The resulting norm is stored to norm.

Note: The floating-point type of norm is always the real projection of the floating-point type of A.


norm1v

normfv

normiv

void bli_?norm[1fi]v
     (
       dim_t   n,
       ctype*  x, inc_t incx,
       rtype*  norm
     );

Compute the one-norm (bli_?norm1v()), Frobenius norm (bli_?normfv()), or infinity norm (bli_?normiv()) of the elements in a vector x of length n. The resulting norm is stored to norm.

Note: The floating-point type of norm is always the real projection of the floating-point type of x.


mkherm

void bli_?mkherm
     (
       uplo_t  uploa,
       dim_t   m,
       ctype*  a, inc_t rs_a, inc_t cs_a
     );

Make an m x m matrix A explicitly Hermitian by copying the conjugate of the triangle specified by uploa to the opposite triangle. Imaginary components of diagonal elements are explicitly set to zero. It is assumed that the diagonal offset of A is zero.


mksymm

void bli_?mksymm
     (
       uplo_t  uploa,
       dim_t   m,
       ctype*  a, inc_t rs_a, inc_t cs_a
     );

Make an m x m matrix A explicitly symmetric by copying the triangle specified by uploa to the opposite triangle. It is assumed that the diagonal offset of A is zero.


mktrim

void bli_?mktrim
     (
       uplo_t  uploa,
       dim_t   m,
       ctype*  a, inc_t rs_a, inc_t cs_a
     );

Make an m x m matrix A explicitly triangular by preserving the triangle specified by uploa and zeroing the elements in the opposite triangle. It is assumed that the diagonal offset of A is zero.


fprintv

void bli_?fprintv
     (
       FILE*   file,
       char*   s1,
       dim_t   m,
       ctype*  x, inc_t incx,
       char*   format,
       char*   s2
     );

Print a vector x of length m to file stream file, where file is a file pointer returned by the standard C library function fopen(). The caller may also pass in a global file pointer such as stdout or stderr. The strings s1 and s2 are printed immediately before and after the output (respectively), and the format specifier format is used to format the individual elements. For valid format specifiers, please see documentation for the standard C library function printf().

Note: For complex datatypes, the format specifier is applied to both the real and imaginary components individually. Therefore, you should use format specifiers such as "%5.2f", but not "%5.2f + %5.2f".


fprintm

void bli_?fprintm
     (
       FILE*   file,
       char*   s1,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rs_a, inc_t cs_a,
       char*   format,
       char*   s2
     );

Print an m x n matrix A to file stream file, where file is a file pointer returned by the standard C library function fopen(). The caller may also pass in a global file pointer such as stdout or stderr. The strings s1 and s2 are printed immediately before and after the output (respectively), and the format specifier format is used to format the individual elements. For valid format specifiers, please see documentation for the standard C library function printf().

Note: For complex datatypes, the format specifier is applied to both the real and imaginary components individually. Therefore, you should use format specifiers such as "%5.2f", but not "%5.2f + %5.2f".


printv

void bli_?printv
     (
       char*   s1,
       dim_t   m,
       ctype*  x, inc_t incx,
       char*   format,
       char*   s2
     );

Print a vector x of length m to standard output. This function call is equivalent to calling bli_?fprintv() with stdout as the file pointer.


printm

void bli_?printm
     (
       char*   s1,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rs_a, inc_t cs_a,
       char*   format,
       char*   s2
     );

Print an m x n matrix a to standard output. This function call is equivalent to calling bli_?fprintm() with stdout as the file pointer.


randv

void bli_?randv
     (
       dim_t   n,
       ctype*  x, inc_t incx
     );

Set the elements of a vector x of length n to random values on the interval [-1,1).

Note: For complex datatypes, the real and imaginary components of each element are randomized individually and independently of one another.


randm

void bli_?randm
     (
       doff_t  diagoffa,
       uplo_t  uploa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rs_a, inc_t cs_a
     );

Set the elements of an m x n matrix A to random values on the interval [-1,1). If uploa is BLIS_LOWER or BLIS_UPPER, then additional scaling occurs so that the resulting matrix is diagonally dominant. Specifically, the diagonal elements (identified by diagonal offset diagoffa) are shifted so that they lie on the interval [1,2) and the off-diagonal elements (in the triangle specified by uploa) are scaled by 1.0/max(m,n).

Note: For complex datatypes, the real and imaginary components of each off-diagonal element are randomized individually and independently of one another.


sumsqv

void bli_?sumsqv
     (
       dim_t   n,
       ctype*  x, inc_t incx,
       rtype*  scale,
       rtype*  sumsq
     );

Compute the sum of the squares of the elements in a vector x of length n. The result is computed in scaled form, and in such a way that it may be used repeatedly to accumulate the sum of the squares of several vectors.

The function computes scale_new and sumsq_new such that

  scale_new^2 * sumsq_new = x[0]^2 + x[1]^2 + ... + x[n-1]^2 + scale_old^2 * sumsq_old

where, on entry, scale and sumsq contain scale_old and sumsq_old, respectively, and on exit, scale and sumsq contain scale_new and sumsq_new, respectively.

Note: This function attempts to mimic the algorithm for computing the Frobenius norm in the netlib LAPACK routine ?lassq().
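
For example, the scaled form allows the sum of squares of two vectors to be accumulated across two calls, as in the following sketch. (The starting values scale = 1 and sumsq = 0 are chosen here so that the initial accumulated sum, scale^2 * sumsq, is zero.)

#include <math.h>
#include <stdio.h>
#include "blis.h"

int main( void )
{
    dim_t  n = 4;
    double x[ 4 ] = { 1.0, 2.0, 3.0, 4.0 };
    double y[ 4 ] = { 5.0, 6.0, 7.0, 8.0 };

    // Start the accumulation with scale = 1 and sumsq = 0, then fold in both
    // vectors; the running result always satisfies
    // scale^2 * sumsq = sum of squares processed so far.
    double scale = 1.0, sumsq = 0.0;
    bli_dsumsqv( n, x, 1, &scale, &sumsq );
    bli_dsumsqv( n, y, 1, &scale, &sumsq );

    // The Frobenius norm of the concatenated vector is scale * sqrt(sumsq).
    printf( "norm = %f\n", scale * sqrt( sumsq ) );

    return 0;
}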


getsc

void bli_getsc
     (
       ctype*  chi,
       double* zeta_r,
       double* zeta_i
     )

Copy the real and imaginary values from the scalar object chi to zeta_r and zeta_i. If chi is stored as a real type, then zeta_i is set to zero. (If chi is stored in single precision, the corresponding elements are typecast/promoted during the copy.)


getijv

err_t bli_?getijv
     (
       dim_t   i,
       ctype*  x, inc_t incx,
       double* ar,
       double* ai
     )

Copy the real and imaginary values at the ith element of vector x to ar and ai. For real domain invocations, only ar is overwritten and ai is left unchanged. (If x contains elements stored in single precision, the corresponding elements are typecast/promoted during the copy.) Note that the object-based analogue of getijv does bounds checking of the vector element offset i against the vector length while the typed functions specified above do not (since the vector length is not given).


getijm

err_t bli_?getijm
     (
       dim_t   i,
       dim_t   j,
       ctype*  b, inc_t rs_b, inc_t cs_b,
       double* ar,
       double* ai
     )

Copy the real and imaginary values at the (i,j) element of object b to ar and ai. For real domain invocations, only ar is overwritten and ai is left unchanged. (If b contains elements stored in single precision, the corresponding elements are typecast/promoted during the copy.) Note that the object-based analogue of getijm does bounds checking of the matrix element offsets (i,j) against the matrix dimensions while the typed functions specified above do not (since the matrix dimensions are not given).


setsc

void bli_setsc
     (
       double* zeta_r,
       double* zeta_i,
       ctype*  chi
     );

Copy real and imaginary values zeta_r and zeta_i to the scalar object chi. If chi is stored as a real type, then zeta_i is ignored. (If chi is stored in single precision, the contents are typecast/demoted during the copy.)


setijv

err_t bli_?setijv
     (
       double  ar,
       double  ai,
       dim_t   i,
       ctype*  x, inc_t incx
     );

Copy real and imaginary values ar and ai to the ith element of vector object x. For real domain invocations, only ar is copied and ai is ignored. (If x contains elements stored in single precision, the corresponding elements are typecast/demoted during the copy.) Note that the object-based analogue of setijv does bounds checking of the vector element offset i against the vector length while the typed functions specified above do not (since the vector length is not given).


setijm

err_t bli_?setijm
     (
       double  ar,
       double  ai,
       dim_t   i,
       dim_t   j,
       ctype*  b, inc_t rs_b, inc_t cs_b
     );

Copy real and imaginary values ar and ai to the (i,j) element of object b. For real domain invocations, only ar is copied and ai is ignored. (If b contains elements stored in single precision, the corresponding elements are typecast/demoted during the copy.) Note that the object-based analogue of setijm does bounds checking of the matrix element offsets (i,j) against the matrix dimensions while the typed functions specified above do not (since the matrix dimensions are not given).


eqsc

void bli_?eqsc
     (
       conj_t conjchi,
       ctype* chi,
       ctype* psi,
       bool*  is_eq
     );

Perform an element-wise comparison between scalars chi and psi and store the boolean result in the bool pointed to by is_eq. If conjchi indicates a conjugation, chi will be implicitly conjugated for purposes of the comparison.


eqv

void bli_?eqv
     (
       conj_t  conjx,
       dim_t   n,
       ctype*  x, inc_t incx,
       ctype*  y, inc_t incy,
       bool*   is_eq
     );

Perform an element-wise comparison between length n vectors x and y and store the boolean result in the bool pointed to by is_eq. If conjx indicates a conjugation, x will be implicitly conjugated for purposes of the comparison.


eqm

void bli_?eqm
     (
       doff_t  diagoffa,
       diag_t  diaga,
       uplo_t  uploa,
       trans_t transa,
       dim_t   m,
       dim_t   n,
       ctype*  a, inc_t rs_a, inc_t cs_a,
       ctype*  b, inc_t rs_b, inc_t cs_b,
       bool*   is_eq
     )

Perform an element-wise comparison between matrices A and B and store the boolean result in the bool pointed to by is_eq. Here, B is an m x n matrix and A is stored as a dense matrix, or as a lower- or upper-triangular/trapezoidal matrix with arbitrary diagonal offset and unit or non-unit diagonal. If diaga indicates a unit diagonal, the diagonals of both matrices will be ignored for purposes of the comparison. If uploa indicates lower or upper storage, only that part of matrix A will be referenced in the comparison. If transa indicates a conjugation and/or transposition, then A will be conjugated and/or transposed for purposes of the comparison.

Level-3 microkernels

Note: The * in the level-3 microkernel function names shown below reflects the fact that there is no exact naming convention required for the microkernels, except that they must begin with bli_?. We strongly recommend, however, that the microkernel function names include the name of the microkernel itself. For example, the gemm microkernel should be named with the prefix bli_?gemm_ and the trsm microkernels should be named with the prefixes bli_?trsm_l_ (lower triangular) and bli_?trsm_u_ (upper triangular).


gemm microkernel

void bli_?gemm_*
     (
       dim_t               k,
       ctype*     restrict alpha,
       ctype*     restrict a1,
       ctype*     restrict b1,
       ctype*     restrict beta,
       ctype*     restrict c11, inc_t rsc, inc_t csc,
       auxinfo_t* restrict data,
       cntx_t*    restrict cntx
     );

Perform

  C11 := beta * C11 + alpha * A1 * B1

where C11 is an MR x NR matrix, A1 is an MR x k "micropanel" matrix stored in packed (column-stored) format, B1 is a k x NR "micropanel" matrix in packed (row-stored) format, and alpha and beta are scalars. The storage of C11 is specified by its row and column strides, rsc and csc.

Please see the Kernel Guide for more information on the gemm microkernel.


trsm microkernels

void bli_?trsm_l_*
     (
       ctype*     restrict a11,
       ctype*     restrict b11,
       ctype*     restrict c11, inc_t rsc, inc_t csc,
       auxinfo_t* restrict data,
       cntx_t*    restrict cntx
     );

void bli_?trsm_u_*
     (
       ctype*     restrict a11,
       ctype*     restrict b11,
       ctype*     restrict c11, inc_t rsc, inc_t csc,
       auxinfo_t* restrict data,
       cntx_t*    restrict cntx
     );

Perform

  B11 := inv(A11) * B11
  C11 := B11

where A11 is an MR x MR lower or upper triangular matrix stored in packed (column-stored) format, B11 is an MR x NR matrix stored in packed (row-stored) format, and C11 is an MR x NR matrix stored according to row and column strides rsc and csc.

Please see the Kernel Guide for more information on the trsm microkernel.


gemmtrsm microkernels

void bli_?gemmtrsm_l_*
     (
       dim_t               k,
       ctype*     restrict alpha,
       ctype*     restrict a10,
       ctype*     restrict a11,
       ctype*     restrict b01,
       ctype*     restrict b11,
       ctype*     restrict c11, inc_t rs_c, inc_t cs_c,
       auxinfo_t* restrict data,
       cntx_t*    restrict cntx
     );

void bli_?gemmtrsm_u_*
     (
       dim_t               k,
       ctype*     restrict alpha,
       ctype*     restrict a12,
       ctype*     restrict a11,
       ctype*     restrict b21,
       ctype*     restrict b11,
       ctype*     restrict c11, inc_t rs_c, inc_t cs_c,
       auxinfo_t* restrict data,
       cntx_t*    restrict cntx
     );

Perform

  B11 := alpha * B11 - A10 * B01
  B11 := inv(A11) * B11
  C11 := B11

if A11 is lower triangular, or

  B11 := alpha * B11 - A12 * B21
  B11 := inv(A11) * B11
  C11 := B11

if A11 is upper triangular.

Please see the Kernel Guide for more information on the gemmtrsm microkernel.

Query function reference

BLIS allows applications to query information about how BLIS was configured. The bli_info_ API provides several categories of query routines. Most values are returned as a gint_t, which is a signed integer. The size of this integer can be queried through a special routine that returns the size in a character string:

char* bli_info_get_int_type_size_str( void );

Note: All of the bli_info_ functions are always thread-safe, no matter how BLIS was configured.

General library information

The following routine returns the address of the full BLIS version string:

char* bli_info_get_version_str( void );

Specific configuration

The following routine returns a unique ID of type arch_t that identifies the currently active configuration:

arch_t bli_arch_query_id( void );

This is most useful when BLIS is configured with multiple configurations. (When linking to multi-configuration builds of BLIS, you don't know for sure which configuration will be used until runtime, since the configuration-specific parameters are not loaded until after calling a heuristic to detect the hardware--usually based on the CPUID instruction.)

Once the configuration's ID is known, it can be used to query a string that contains the name of the configuration:

char* bli_arch_string( arch_t id );
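
For example, a minimal sketch that reports the active sub-configuration along with a couple of other library properties described in this section:

#include <stdio.h>
#include "blis.h"

int main( void )
{
    // Query the active configuration's ID, then convert it to a string.
    arch_t id = bli_arch_query_id();
    printf( "active sub-configuration: %s\n", bli_arch_string( id ) );

    // The version string and integer type size are also available.
    printf( "BLIS version: %s\n", bli_info_get_version_str() );
    printf( "integer size: %s bits\n", bli_info_get_int_type_size_str() );

    return 0;
}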

General configuration

The following routines return various general-purpose constants that affect the entire framework. All of these settings default to sane values, which can then be overridden by the configuration in bli_config.h. If they are absent from a particular configuration's bli_config.h header file, then the default value is used, as specified in frame/include/bli_config_macro_defs.h.

gint_t bli_info_get_int_type_size( void );
gint_t bli_info_get_num_fp_types( void );
gint_t bli_info_get_max_type_size( void );
gint_t bli_info_get_page_size( void );
gint_t bli_info_get_simd_num_registers( void );
gint_t bli_info_get_simd_size( void );
gint_t bli_info_get_simd_align_size( void );
gint_t bli_info_get_stack_buf_max_size( void );
gint_t bli_info_get_stack_buf_align_size( void );
gint_t bli_info_get_heap_addr_align_size( void );
gint_t bli_info_get_heap_stride_align_size( void );
gint_t bli_info_get_pool_addr_align_size( void );
gint_t bli_info_get_enable_stay_auto_init( void );
gint_t bli_info_get_enable_blas( void );
gint_t bli_info_get_blas_int_type_size( void );

Kernel information

Micro-kernel implementation type query

The following routines allow the caller to obtain a string that identifies the implementation type of each microkernel that is currently active (ie: part of the currently active configuration, as identified by bli_arch_query_id()).

char* bli_info_get_gemm_ukr_impl_string( ind_t method, num_t dt )
char* bli_info_get_gemmtrsm_l_ukr_impl_string( ind_t method, num_t dt )
char* bli_info_get_gemmtrsm_u_ukr_impl_string( ind_t method, num_t dt )
char* bli_info_get_trsm_l_ukr_impl_string( ind_t method, num_t dt )
char* bli_info_get_trsm_u_ukr_impl_string( ind_t method, num_t dt )

Possible implementation (ie: the ind_t method argument) types are:

  • BLIS_1M: Implementation based on the 1m method. (This is the default induced method when real domain kernels are present but complex kernels are missing.)
  • BLIS_NAT: Implementation based on "native" execution (ie: NOT an induced method).

Possible microkernel types (ie: the return values for bli_info_get_*_ukr_impl_string()) are:

  • BLIS_REFERENCE_UKERNEL ("refrnce"): This value is returned when the queried microkernel is provided by the reference implementation.
  • BLIS_VIRTUAL_UKERNEL ("virtual"): This value is returned when the queried microkernel is driven by the "virtual" microkernel provided by an induced method. This happens for any method value that is not BLIS_NAT (ie: native), but only applies to the complex domain.
  • BLIS_OPTIMIZED_UKERNEL ("optimzd"): This value is returned when the queried microkernel is provided by an implementation that is neither reference nor virtual, and thus we assume the kernel author would deem it to be "optimized". Such a microkernel may not be optimal in the literal sense of the word, but nonetheless is intended to be optimized, at least relative to the reference microkernels.
  • BLIS_NOTAPPLIC_UKERNEL ("notappl"): This value is returned usually when performing a gemmtrsm or trsm microkernel type query for any method value that is not BLIS_NAT (ie: native). That is, induced methods cannot be (purely) used on trsm-based microkernels because these microkernels perform a triangular inversion, which is not matrix multiplication.

Operation implementation type query

The following routines allow the caller to obtain a string that identifies the implementation (ind_t) that is currently active (ie: implemented and enabled) for each level-3 operation. Possible implementation types are listed in the section above covering microkernel implementation queries.

char* bli_info_get_gemm_impl_string( num_t dt );
char* bli_info_get_hemm_impl_string( num_t dt );
char* bli_info_get_herk_impl_string( num_t dt );
char* bli_info_get_her2k_impl_string( num_t dt );
char* bli_info_get_symm_impl_string( num_t dt );
char* bli_info_get_syrk_impl_string( num_t dt );
char* bli_info_get_syr2k_impl_string( num_t dt );
char* bli_info_get_trmm_impl_string( num_t dt );
char* bli_info_get_trmm3_impl_string( num_t dt );
char* bli_info_get_trsm_impl_string( num_t dt );

Clock functions


clock

double bli_clock
     (
       void
     );

Return the amount of time that has elapsed since some fixed time in the past. The return values of bli_clock() typically feature nanosecond precision, though this is not guaranteed.

Note: On Linux, bli_clock() is implemented in terms of clock_gettime() using the clockid_t value of CLOCK_MONOTONIC. On OS X, bli_clock() is implemented in terms of mach_absolute_time(). And on Windows, bli_clock() is implemented in terms of QueryPerformanceFrequency(). Please see frame/base/bli_clock.c for more details. Note: This function returns meaningless values when BLIS is configured with --disable-system.


clock_min_diff

double bli_clock_min_diff
     (
       double time_prev_min,
       double time_start
     );

This function computes an intermediate value, time_diff, equal to bli_clock() - time_start, and then tentatively prepares to return the minimum of time_diff and time_prev_min. If that minimum value is extremely small (close to zero), the function returns time_prev_min instead.

This function is meant to be used in conjunction with bli_clock() for performance timing within applications--specifically in loops where only the fastest timing is of interest. For example:

double t_save = DBL_MAX;
for ( int i = 0; i < 3; ++i )
{
   double t = bli_clock();
   bli_gemm( ... );
   t_save = bli_clock_min_diff( t_save, t );
}
double gflops = ( 2.0 * m * k * n ) / ( t_save * 1.0e9 );

This code calls bli_gemm() three times and computes the performance, in GFLOPS, of the fastest of the three executions.


Example code

BLIS provides lots of example code in the examples/tapi directory of the BLIS source distribution. The example code in this directory is set up like a tutorial, and so we recommend starting from the beginning. Topics include printing vectors and matrices and calling a representative subset of the computational level-1v, -1m, -2, -3, and utility operations documented above. Please read the README contained within the examples/tapi directory for further details.