All types (`torch.DoubleTensor`, `torch.FloatTensor`, etc.) should have their sparse variants: `torch.SparseDoubleTensor`, `torch.SparseFloatTensor`, etc.
Copying between dense and sparse tensors should be done with the `:copy()` function.
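A minimal sketch of both copy directions, assuming the proposed `torch.SparseFloatTensor` type and that `:copy()` converts between layouts:

```lua
require 'torch'

-- Hypothetical type: torch.SparseFloatTensor is the proposed API, not existing code.
local dense  = torch.FloatTensor(4, 6):uniform()
local sparse = torch.SparseFloatTensor(4, 6)

sparse:copy(dense)                                  -- dense -> sparse
local dense2 = torch.FloatTensor(4, 6):copy(sparse) -- sparse -> dense
```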
The underlying BLAS has to be swappable with MKL/OpenBLAS/ATLAS, etc. Other math operations have to be implemented with CSparse.
The CUDA version has to be done with cuSPARSE, e.g. `torch.SparseCudaTensor` or `torch.SparseFloatCudaTensor`.
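Conversion to the GPU could mirror the dense API's `:cuda()` method; this is purely an assumption about the proposed interface:

```lua
-- Hypothetical: modeled on dense Tensor:cuda(); no sparse CUDA type exists yet.
local a  = torch.SparseFloatTensor(4, 6)
local ca = a:cuda()  -- would return a cuSPARSE-backed torch.SparseCudaTensor
```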
Sparse tensors have to be serializable.
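Serialization could go through the existing `torch.save`/`torch.load` entry points (these are real Torch functions; applying them to the proposed sparse type is an assumption):

```lua
require 'torch'

-- Assumes the proposed sparse type hooks into Torch's standard class serialization.
local a = torch.SparseFloatTensor(4, 6)
torch.save('sparse.t7', a)
local a2 = torch.load('sparse.t7')  -- should round-trip indices and values
```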
New constructors have to be added.
- Constructors
- Serialization
- Type conversion
- Full (dense) ↔ sparse conversion
- Pointwise ops (see the sketch after this list)
  - add, mul, div, cadd, cmul, cdiv, fill
  - min, max, dot
- BLAS levels 1, 2, 3
- Indexing
- LAPACK
- Comparison
- mean, std
- scatter, gather, norm, dist, renorm
- Batch BLAS
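A rough illustration of how some of the listed ops might read on the proposed type; every call below is hypothetical API, modeled on the existing dense Tensor methods:

```lua
-- Hypothetical usage, modeled on the dense Tensor API.
local s = torch.SparseFloatTensor(4, 6)
s:fill(1.0)          -- pointwise fill
s:mul(2.5)           -- scale all stored values in place
s:cmul(s)            -- elementwise multiply with another sparse tensor
local m = s:max()    -- reduction
local d = s:dot(s)   -- BLAS level 1 over the stored values
```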
require 'torch'

-- Each entry is {i, j, k, value}: three indices into a 4x6x9 tensor plus the value.
local indexes = {
  {3, 2, 4, 5.63},
  {1, 3, 4, 3.43},
  {3, 5, 6, 1.23},
}

-- Proposed constructor: a 4x6x9 sparse tensor, all entries implicitly zero.
a = torch.SparseFloatTensor(4, 6, 9)
a:set(indexes)

-- Dense <- sparse copy: materializes the three stored values, zeros elsewhere.
b = torch.FloatTensor(a:size()):copy(a)

-- print non-zero elements
print(a)
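Since `a` stores only the three explicit entries, `print(a)` would presumably list them as index/value tuples rather than the full 4x6x9 grid, while `b` ends up as an ordinary dense `torch.FloatTensor` with zeros everywhere else.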