@vrilleup
Last active July 22, 2024 11:10
Spark/mllib SVD example
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg._
import org.apache.spark.{SparkConf, SparkContext}
// To use the latest sparse SVD implementation, build your spark-assembly after this
// change: https://github.com/apache/spark/pull/1378
// Input: TSV with 3 fields per line: rowIndex (Long), columnIndex (Int), weight (Double); indices start at 0
// Assume the number of rows is larger than the number of columns, and the number of columns is
// smaller than Int.MaxValue
// sc is a SparkContext defined in the job
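// Example input (hypothetical values; fields are tab-separated):
//   0    0    1.5
//   0    2    2.0
//   1    1    0.7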
val inputData = sc.textFile("hdfs://...").map { line =>
  val parts = line.split("\t")
  (parts(0).toLong, parts(1).toInt, parts(2).toDouble)
}
// Number of columns: largest column index plus one (indices start at 0);
// counting distinct indices would undercount if any column had no entries
val nCol = inputData.map(_._2).max() + 1
// Construct rows of the RowMatrix; SparseVector requires indices in ascending order
val dataRows = inputData.groupBy(_._1).map { case (rowIndex, entries) =>
  val (indices, values) = entries.toSeq.sortBy(_._2).map(e => (e._2, e._3)).unzip
  (rowIndex, Vectors.sparse(nCol, indices.toArray, values.toArray))
}
// Compute 20 largest singular values and corresponding singular vectors
val svd = new RowMatrix(dataRows.map(_._2).persist()).computeSVD(20, computeU = true)
// Write results to HDFS
// svd.V.toArray is column-major, so grouping by numRows yields V's columns; transposing yields its rows
val V = svd.V.toArray.grouped(svd.V.numRows).toList.transpose
sc.makeRDD(V, 1).zipWithIndex()
.map(line => line._2 + "\t" + line._1.mkString("\t")) // make tsv line starting with column index
.saveAsTextFile("hdfs://...output/right_singular_vectors")
// Note: zip pairs elements by partition and position, so both sides must preserve the same row order
svd.U.rows.map(row => row.toArray).zip(dataRows.map(_._1))
.map(line => line._2 + "\t" + line._1.mkString("\t")) // make tsv line starting with row index
.saveAsTextFile("hdfs://...output/left_singular_vectors")
sc.makeRDD(svd.s.toArray, 1)
.saveAsTextFile("hdfs://...output/singular_values")
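
As a usage sketch beyond the original job: the three factors can be recombined on the cluster via RowMatrix.multiply, which right-multiplies the distributed matrix by a local one. A minimal sketch, assuming an MLlib version that provides Matrices.diag and Matrix.transpose; `approx` is an illustrative name:

// Rank-20 approximation A_k = U * diag(s) * V^T, kept distributed as a RowMatrix
val approx = svd.U.multiply(Matrices.diag(svd.s)).multiply(svd.V.transpose)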
@gocanal commented Oct 8, 2015

Hello,
Thank you very much for sharing the code. I am looking for a way to compute a matrix inverse, and SVD could be one route. A couple of questions:

  1. Looking at the Spark API Doc for RowMatrix.computeSVD:
    matrix A (m x n) ... we assume n is smaller than m.
    Does this mean that computeSVD does not work for a square matrix?
  2. What if matrix A is bigger than the physical memory of a single node? Will the program distribute sub-matrices to different nodes in the Hadoop cluster?

Thank you very much,
canal
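
For reference on the inverse question: with A = U diag(s) V^T and nonzero singular values s, the Moore-Penrose pseudo-inverse is A+ = V diag(1/s) U^T, so it can be assembled from the factors this gist computes. A minimal sketch under the same assumptions as above; `invS` and `pinvT` are illustrative names:

// Pseudo-inverse from the factors: A+ = V * diag(1/s) * U^T.
// Computing the transpose A+^T = U * diag(1/s) * V^T keeps the work distributed.
val invS = Matrices.diag(Vectors.dense(svd.s.toArray.map(1.0 / _)))
val pinvT = svd.U.multiply(invS).multiply(svd.V.transpose) // row i of pinvT is column i of A+

Since only the top 20 singular triplets are used, this is a truncated (regularized) pseudo-inverse rather than an exact one.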

@Sayan21 commented Nov 2, 2015

Hello,

Could you point me to the article whose method you implemented? I want to go through the theory a little.

Sincerely,

Sayantan
