All dense types torch.DoubleTensor, torch.FloatTensor, etc. should have sparse variants:
torch.SparseDoubleTensor, torch.SparseFloatTensor, etc.
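For example, a minimal sketch of the naming scheme; the constructors and their arguments below are assumptions about the proposed API, not an existing one:

```lua
-- Hypothetical constructors mirroring the dense API (assumed, not implemented):
local sd = torch.SparseDoubleTensor(1000, 1000)  -- sparse counterpart of torch.DoubleTensor
local sf = torch.SparseFloatTensor(1000, 1000)   -- sparse counterpart of torch.FloatTensor
```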
Copying between a dense and a sparse matrix should be done with the :copy() function.
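A hedged round-trip sketch, again assuming the sparse constructors above; only the dense tensors and their :copy() calls are part of the existing API:

```lua
-- Hypothetical dense <-> sparse round trip via :copy() (sparse side is assumed):
local dense = torch.zeros(4, 4)
dense[1][2] = 3.5
local sparse = torch.SparseDoubleTensor(4, 4):copy(dense)  -- dense -> sparse
local back   = torch.DoubleTensor(4, 4):copy(sparse)       -- sparse -> dense
```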
The underlying BLAS has to be swappable between MKL, OpenBLAS, ATLAS, etc. Other math operations have to be implemented with CSparse.
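The intent at the Lua level would be that the usual operators keep working: dense calls reach whichever BLAS is linked (MKL, OpenBLAS, ATLAS, ...), while sparse calls would be routed to CSparse routines underneath. The sparse overloads below are assumptions, not an existing API:

```lua
-- Hypothetical dispatch: dense ops hit the linked BLAS, sparse ops hit CSparse.
local D  = torch.rand(100, 100)
local x  = torch.rand(100)
local y1 = torch.mv(D, x)                      -- dense mat-vec, served by the linked BLAS (existing)
local S  = torch.SparseDoubleTensor(100, 100)  -- assumed sparse constructor
local y2 = torch.mv(S, x)                      -- sparse mat-vec, would call into CSparse (assumed)
```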