- STL-style version that uses reverse_iterator to simplify the boundary checks; it handles duplicate elements. See also: generating all permutations, next permutation, and the k-th permutation. A sketch of the next-permutation step follows this list.
- One-dimensional DP version, with O(N^2) time complexity and O(N) space complexity.
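The code for these snippets is not reproduced here. As an illustration of the first item, below is a minimal Python sketch of one step of the standard next-permutation algorithm (the one std::next_permutation implements); the reverse_iterator detail mentioned above is specific to the C++ version, while this sketch simply scans from the right. It handles duplicate elements without any special casing.

# Minimal sketch (not the gist's code): advance a list to its next
# lexicographic permutation in place; duplicates are handled naturally.
def next_permutation(a):
    # Find the rightmost index i with a[i] < a[i + 1].
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:               # a is in descending order: it was the last permutation
        a.reverse()
        return False
    # Find the rightmost element greater than a[i] and swap it with a[i].
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    # Reverse the suffix so it becomes the smallest possible arrangement.
    a[i + 1:] = reversed(a[i + 1:])
    return True

Generating all permutations then amounts to sorting the sequence and applying this step until it returns False; the k-th permutation can instead be computed directly from factorial digits.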
import numpy as np
from scipy import linalg, optimize

MAX_ITER = 100

def group_lasso(X, y, alpha, groups, max_iter=MAX_ITER, rtol=1e-6,
                verbose=False):
    """
    Linear least-squares with l2/l1 regularization solver.
    """
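The rest of the solver is truncated here. To illustrate the l2/l1 ("group lasso") penalty the docstring refers to, below is a minimal Python sketch of the objective and of the block soft-thresholding operator that proximal-style solvers apply to each group. It assumes groups is a list of index arrays, which may not match the gist's actual format, and it is not the gist's implementation.

# Hedged sketch (not the gist's code): the group-lasso objective and the
# block soft-thresholding step used by proximal solvers for it.
import numpy as np

def group_lasso_objective(X, y, w, alpha, groups):
    """0.5 * ||y - X w||^2 + alpha * sum_g ||w_g||_2, with groups as index arrays."""
    residual = y - X.dot(w)
    penalty = sum(np.linalg.norm(w[g]) for g in groups)
    return 0.5 * residual.dot(residual) + alpha * penalty

def block_soft_threshold(v, threshold):
    """Shrink the block v toward zero; the whole block is zeroed if its norm is small."""
    norm = np.linalg.norm(v)
    if norm <= threshold:
        return np.zeros_like(v)
    return (1.0 - threshold / norm) * v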
package topic

import spark.broadcast._
import spark.SparkContext
import spark.SparkContext._
import spark.RDD
import spark.storage.StorageLevel
import scala.util.Random
import scala.math.{ sqrt, log, pow, abs, exp, min, max }
import scala.collection.mutable.HashMap
import spark.SparkContext
import SparkContext._

/**
 * A port of [[http://blog.echen.me/2012/02/09/movie-recommendations-and-more-via-mapreduce-and-scalding/]]
 * to Spark.
 * Uses movie ratings data from the MovieLens 100k dataset found at [[http://www.grouplens.org/node/73]].
 */
object MovieSimilarities {
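The body of MovieSimilarities is not shown here. For illustration, the following plain-Python sketch (not the Spark code itself) computes the item-item cosine similarity that the referenced blog post builds from co-rated movie pairs, assuming the input is a list of (user, movie, rating) triples; the Spark version presumably expresses the same grouping and pair generation over an RDD of ratings.

# Hypothetical sketch of the similarity the job computes, in plain Python:
# cosine similarity between two movies over the users who rated both.
from collections import defaultdict
from itertools import combinations
from math import sqrt

def movie_similarities(ratings):
    """ratings: iterable of (user, movie, rating) -> {(movie1, movie2): cosine}."""
    by_user = defaultdict(dict)
    for user, movie, rating in ratings:
        by_user[user][movie] = rating
    dot = defaultdict(float)     # sum of r1 * r2 over co-raters
    sq1 = defaultdict(float)     # sum of r1^2 over co-raters
    sq2 = defaultdict(float)     # sum of r2^2 over co-raters
    for movies in by_user.values():
        for (m1, r1), (m2, r2) in combinations(sorted(movies.items()), 2):
            pair = (m1, m2)
            dot[pair] += r1 * r2
            sq1[pair] += r1 * r1
            sq2[pair] += r2 * r2
    return {pair: dot[pair] / sqrt(sq1[pair] * sq2[pair]) for pair in dot}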
#!/usr/bin/env python
from numpy import asmatrix, asarray, ones, zeros, mean, sum, arange, prod, dot, loadtxt
from numpy.random import random, randint
import pickle

MISSING_VALUE = -1  # a constant I will use to denote missing integer values

def impute_hidden_node(E, I, theta, sample_hidden):
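The function body is cut off at this point. As a small, hypothetical illustration of the sentinel convention described above (integer NumPy arrays cannot hold NaN, so -1 marks missing entries that later code masks out), unrelated to the actual impute_hidden_node logic:

# Hypothetical example of masking entries flagged with the MISSING_VALUE sentinel.
import numpy as np

MISSING_VALUE = -1

data = np.array([[2, MISSING_VALUE, 1],
                 [0, 3, MISSING_VALUE]])
observed = data != MISSING_VALUE            # boolean mask of known entries
col_means = np.array([data[observed[:, j], j].mean()
                      for j in range(data.shape[1])])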