This page was generated from tools/engine/sparse_vector_times_sparse_vector.ipynb.

JIT Engine: Sparse Vector x Sparse Vector

This example shows how to compile MLIR code that multiplies two sparse vectors element-wise.

Accomplishing this task mostly applies the knowledge from our previous tutorials on sparse tensors and dense tensors, so this will be more of a demonstration or example than a tutorial.

Let’s first import some necessary modules and generate an instance of our JIT engine.


import mlir_graphblas
import mlir_graphblas.sparse_utils
import numpy as np

engine = mlir_graphblas.MlirJitEngine()
Using development graphblas-opt: /Users/pnguyen/code/mlir-graphblas/mlir_graphblas/src/build/bin/graphblas-opt

This is the code we’ll use to multiply two sparse vectors.


mlir_text = """
#trait_mul_s = {
  indexing_maps = [
    affine_map<(i) -> (i)>,
    affine_map<(i) -> (i)>,
    affine_map<(i) -> (i)>
  ],
  sparse = [
    [ "S" ],
    [ "S" ],
    [ "D" ]
  ],
  iterator_types = ["parallel"],
  doc = "Sparse Vector Multiply"
}

#CV64 = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed" ],
  pointerBitWidth = 64,
  indexBitWidth = 64
}>

func @sparse_vector_multiply(%arga: tensor<8xf32, #CV64>, %argb: tensor<8xf32, #CV64>) -> tensor<8xf32> {
  %output_storage = linalg.init_tensor [8] : tensor<8xf32>
  %0 = linalg.generic #trait_mul_s
    ins(%arga, %argb: tensor<8xf32, #CV64>, tensor<8xf32, #CV64>)
    outs(%output_storage: tensor<8xf32>) {
      ^bb(%a: f32, %b: f32, %x: f32):
        %0 = arith.mulf %a, %b : f32
        linalg.yield %0 : f32
  } -> tensor<8xf32>
  return %0 : tensor<8xf32>
}
"""

These are the passes we’ll use.


passes = [
    "--sparsification",
    "--sparse-tensor-conversion",
    "--linalg-bufferize",
    "--func-bufferize",
    "--tensor-bufferize",
    "--finalizing-bufferize",
    "--convert-linalg-to-loops",
    "--convert-scf-to-cf",
    "--convert-memref-to-llvm",
    "--convert-math-to-llvm",
    "--convert-openmp-to-llvm",
    "--convert-arith-to-llvm",
    "--convert-math-to-llvm",
    "--convert-std-to-llvm",
    "--reconcile-unrealized-casts"
]

Let’s generate our Python function.


engine.add(mlir_text, passes)
sparse_vector_multiply = engine.sparse_vector_multiply

Let’s generate our inputs.


# equivalent to np.array([0.0, 1.1, 2.2, 3.3, 0, 0, 0, 7.7], dtype=np.float32)

indices = np.array([
    [0],
    [1],
    [2],
    [3],
    [7],
], dtype=np.uint64) # Coordinates
values = np.array([0.0, 1.1, 2.2, 3.3, 7.7], dtype=np.float32)
sizes = np.array([8], dtype=np.uint64)
sparsity = np.array([True], dtype=np.bool_)

a = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity)
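As a sanity check, we can reconstruct the dense vector that `a` represents from the same `indices` and `values` arrays (a sketch using plain NumPy, not part of the `MLIRSparseTensor` API):

```python
import numpy as np

indices = np.array([[0], [1], [2], [3], [7]], dtype=np.uint64)
values = np.array([0.0, 1.1, 2.2, 3.3, 7.7], dtype=np.float32)
size = 8

# scatter the stored values into a zero-initialized dense buffer
dense = np.zeros(size, dtype=np.float32)
dense[indices[:, 0]] = values
```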

# equivalent to np.array([0, 0, 0, 3.3, 4.4, 0, 0, 7.7], dtype=np.float32)

indices = np.array([
    [3],
    [4],
    [7],
], dtype=np.uint64) # Coordinates
values = np.array([3.3, 4.4, 7.7], dtype=np.float32)
sizes = np.array([8], dtype=np.uint64)
sparsity = np.array([True], dtype=np.bool_)

b = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity)

Let’s grab our result.


answer = sparse_vector_multiply(a, b)
answer

array([ 0.      ,  0.      ,  0.      , 10.889999,  0.      ,  0.      ,
        0.      , 59.289997], dtype=float32)

Let’s see if our results match what we would expect.


a_dense = np.array([0.0, 1.1, 2.2, 3.3, 0, 0, 0, 7.7], dtype=np.float32)
b_dense = np.array([0, 0, 0, 3.3, 4.4, 0, 0, 7.7], dtype=np.float32)
np_result = a_dense * b_dense

np_result

array([ 0.      ,  0.      ,  0.      , 10.889999,  0.      ,  0.      ,
        0.      , 59.289997], dtype=float32)

all(answer == np_result)

True
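For intuition: with two compressed inputs, the sparsifier generates a kernel that walks both index lists in step and multiplies only where an index appears in both. A pure-Python sketch of that two-pointer intersection (not the actual generated code):

```python
import numpy as np

def sparse_mul(idx_a, val_a, idx_b, val_b, size):
    """Element-wise product of two sparse vectors given as sorted index/value pairs."""
    out = np.zeros(size, dtype=np.float32)
    i = j = 0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            # index present in both inputs: multiply and store
            out[idx_a[i]] = val_a[i] * val_b[j]
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1  # index only in a: product is zero, skip
        else:
            j += 1  # index only in b: product is zero, skip
    return out

result = sparse_mul([0, 1, 2, 3, 7],
                    np.array([0.0, 1.1, 2.2, 3.3, 7.7], dtype=np.float32),
                    [3, 4, 7],
                    np.array([3.3, 4.4, 7.7], dtype=np.float32),
                    8)
```

This matches the dense product computed above while touching only the stored entries of each input.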