This page was generated from tools/engine/tensor_sum.ipynb.

JIT Engine: Tensor Summation

This example goes over how to compile MLIR code into a function callable from Python.

The example MLIR code we’ll use here performs tensor summation.

Let’s first import some necessary modules and create an instance of our JIT engine.


import mlir_graphblas
import numpy as np

engine = mlir_graphblas.MlirJitEngine()
Using development graphblas-opt: /Users/pnguyen/code/mlir-graphblas/mlir_graphblas/src/build/bin/graphblas-opt

We’ll use the same set of passes to optimize and compile all of our examples below.


passes = [
    "--graphblas-structuralize",
    "--graphblas-optimize",
    "--graphblas-lower",
    "--sparsification",
    "--sparse-tensor-conversion",
    "--linalg-bufferize",
    "--func-bufferize",
    "--tensor-bufferize",
    "--finalizing-bufferize",
    "--convert-linalg-to-loops",
    "--convert-scf-to-cf",
    "--convert-memref-to-llvm",
    "--convert-math-to-llvm",
    "--convert-openmp-to-llvm",
    "--convert-arith-to-llvm",
    "--convert-math-to-llvm",
    "--convert-std-to-llvm",
    "--reconcile-unrealized-casts"
]
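Each entry in this list is just a command-line flag for graphblas-opt, so the same lowering can be reproduced outside of Python. The snippet below is a hypothetical sketch (the file name tensor_sum.mlir is made up, and it assumes the graphblas-opt binary is on your PATH), not something from the original notebook:


import subprocess

# Hypothetical sketch: apply the same pass pipeline to a standalone .mlir
# file. "tensor_sum.mlir" is a placeholder name, and this assumes
# graphblas-opt is on PATH.
cmd = ["graphblas-opt", "tensor_sum.mlir", *passes]
print(" ".join(cmd))
subprocess.run(cmd, check=True)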

We’ll use the following MLIR code, which sums all of the elements of a 2x3x5 tensor of 32-bit floats.


mlir_text = """
#trait_sum_reduction = {
  indexing_maps = [
    affine_map<(i,j,k) -> (i,j,k)>,
    affine_map<(i,j,k) -> ()>
  ],
  iterator_types = ["reduction", "reduction", "reduction"]
}

func @tensor_sum(%argA: tensor<2x3x5xf32>) -> f32 {
  %output_storage = linalg.init_tensor [] : tensor<f32>
  %reduction = linalg.generic #trait_sum_reduction
    ins(%argA: tensor<2x3x5xf32>)
    outs(%output_storage: tensor<f32>) {
      ^bb(%a: f32, %x: f32):
        %0 = arith.addf %x, %a : f32
        linalg.yield %0 : f32
  } -> tensor<f32>
  %answer = tensor.extract %reduction[] : tensor<f32>
  return %answer : f32
}"""

Let’s compile our MLIR code.


engine.add(mlir_text, passes)

['tensor_sum']

Let’s try out our compiled function.


# grab our callable
tensor_sum = engine.tensor_sum

# generate inputs
a = np.arange(30, dtype=np.float32).reshape([2,3,5])

# generate output
result = tensor_sum(a)

result

435.0

Let’s verify that our function works as expected. Since the input holds the integers 0 through 29, the expected sum is 29 * 30 / 2 = 435.


result == a.sum()

True
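Exact equality holds here because every intermediate sum of the integers 0 through 29 is exactly representable in float32. For arbitrary inputs, the compiled loop nest may accumulate in a different order than NumPy, so a tolerance-based comparison is the safer habit; this check is our suggestion, not part of the original notebook:


# Float32 accumulation order can differ between the JIT-compiled kernel
# and np.sum, so compare with a tolerance rather than exact equality.
rng = np.random.default_rng(seed=0)
b = rng.random((2, 3, 5), dtype=np.float32)
np.isclose(tensor_sum(b), b.sum(), rtol=1e-6)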

The examples up to this point have shown how to perform element-wise operations and reduction operations (e.g., summation).

With this knowledge, it’s fairly straightforward to implement matrix multiplication and dot product calculation. We’ll leave this as an exercise for the reader. If help is needed, it’s useful to know that matrix multiplication and dot product calculation are already implemented directly in the linalg dialect via the linalg.matmul and linalg.dot operations.
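As a hypothetical starting point for that exercise (the names and shapes here are ours, not from the library), NumPy can supply the ground-truth values that compiled linalg.matmul and linalg.dot kernels should reproduce:


# Ground-truth values for the exercise; any compiled matmul/dot kernel
# should match these (shapes chosen arbitrarily for illustration).
m = np.arange(6, dtype=np.float32).reshape(2, 3)
n = np.arange(12, dtype=np.float32).reshape(3, 4)
v = np.arange(5, dtype=np.float32)

expected_matmul = m @ n  # target output for a linalg.matmul-based kernel
expected_dot = v @ v     # target output for a linalg.dot-based kernel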

One useful skill worth practicing is inspecting how these linalg operations lower, both to see what’s going on under the hood and to compare the result with how we might have implemented things ourselves. The MLIR explorer can come in handy for this purpose.