inst/profile/shell.R:29:3: style: Variable and function names should be all lowercase.
sqlContext <- SparkR::sparkRSQL.init(sc)
^~~~~~~~~~
inst/profile/shell.R:30:24: style: Variable and function names should be all lowercase.
assign("sqlContext", sqlContext, envir=.GlobalEnv)
^~~~~~~~~~
inst/profile/shell.R:32:1: style: lines should not be more than 80 characters.
cat("\n Spark context is available as sc, SQL context is available as sqlContext\n")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I surveyed how Python's int and long types (under Python 2.6 and Python 3.4) are handled on the Java side. I created some simple methods to deal with int and long values in Python and Java, and then ran the unit tests.
- Full diff of the experiment: https://github.com/apache/spark/compare/master...yu-iskw:python-java-test
- I created some simple methods in PythonMLlibAPI.scala: https://github.com/apache/spark/compare/master...yu-iskw:python-java-test#diff-74976491aadaadfece9f332ea2daedbdR64
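The distinction matters because Java's int is a fixed 32-bit type while Python integers are arbitrary precision. A minimal sketch of the boundary being tested (the helper name is my own, not part of the patch above):

```python
# Java's int is signed 32-bit; values outside this range must map to a Java long.
JAVA_INT_MIN = -2**31
JAVA_INT_MAX = 2**31 - 1

def fits_java_int(n):
    """Return True if the Python integer n fits in a Java int without overflow."""
    return JAVA_INT_MIN <= n <= JAVA_INT_MAX

print(fits_java_int(2147483647))  # largest value a Java int can hold
print(fits_java_int(2147483648))  # one past the limit: needs a Java long
```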
./R/install-dev.sh && ./R/run-tests.sh
inst/tests/test_binary_function.R:33:1: style: Trailing whitespace is superfluous.
^~
inst/tests/test_binary_function.R:43:6: style: Put spaces around all infix operators.
rdd<- map(text.rdd, function(x) {x})
~^
inst/tests/test_binary_function.R:55:57: style: Trailing whitespace is superfluous.
cogroup.rdd <- cogroup(rdd1, rdd2, numPartitions = 2L)
^
inst/tests/test_binary_function.R:79:12: style: Use <-, not =, for assignment.
mockFile = c("Spark is pretty.", "Spark is awesome.")
^
inst/tests/test_binaryFile.R:23:10: style: Use <-, not =, for assignment.
mockFile = c("Spark is pretty.", "Spark is awesome.")
^
import os
import sys
from numpy import array
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

merge_list = [
    [0.0, 1.0, 0.866, 2],
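The snippet above is cut off after the first linkage row. A self-contained sketch of the same idea, with illustrative values standing in for the truncated data (SciPy's linkage-matrix rows are [cluster_i, cluster_j, distance, sample_count]):

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram

# A valid linkage matrix for 3 observations: clusters 0 and 1 merge first,
# then the merged result (cluster 3) joins cluster 2.
merge_list = np.array([
    [0.0, 1.0, 0.866, 2.0],
    [2.0, 3.0, 1.000, 3.0],
])

# no_plot=True returns the layout data without needing a display or matplotlib.
info = dendrogram(merge_list, no_plot=True)
print(info["ivl"])  # leaf labels in left-to-right plot order
```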
_REGION='ap-northeast-1'
_ZONE='ap-northeast-1b'
_VERSION='1.4.0'
_MASTER_INSTANCE_TYPE='r3.large'
_SLAVE_INSTANCE_TYPE='r3.8xlarge'
_SLAVES=5
_PRICE=1.0
_CLUSTER_NAME="spark-cluster-v${_VERSION}-${_SLAVE_INSTANCE_TYPE}x${_SLAVES}"
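These variables read like the inputs to a spark-ec2 launch. A hedged sketch of how they might be wired up (the key-pair name and identity file are placeholders, and the flag spellings should be checked against the spark-ec2 version in use):

```shell
./ec2/spark-ec2 \
  --key-pair=my-keypair \
  --identity-file=my-keypair.pem \
  --region=${_REGION} \
  --zone=${_ZONE} \
  --instance-type=${_SLAVE_INSTANCE_TYPE} \
  --master-instance-type=${_MASTER_INSTANCE_TYPE} \
  --slaves=${_SLAVES} \
  --spot-price=${_PRICE} \
  launch ${_CLUSTER_NAME}
```

With r3.8xlarge slaves, the spot price cap of 1.0 USD/hour keeps the bid well under the on-demand rate.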