- Install ctags-exuberant
- Add the Scala definitions listed in http://leonard.io/blog/2013/04/editing-scala-with-vim/ to ~/.ctags
- Install GNU GLOBAL:
- brew install global --with-exuberant-ctags
- Add the following line to your bash profile: export GTAGSCONF=/usr/local/share/gtags/gtags.conf
- Edit gtags.conf and, in the exuberant-ctags section, add the following line: :langmap=Scala:.scala:\
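After that edit, the exuberant-ctags entry in gtags.conf would look roughly like this. This is a sketch: the entry's label line and its other settings vary by GLOBAL version, so only the added langmap line comes from the steps above.

# gtags.conf excerpt (the entry's other settings are omitted here)
exuberant-ctags|setting to use Exuberant Ctags plug-in parser:\
	:langmap=Scala:.scala:\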
; State for a dual-role CapsLock: a quick tap sends Esc, a hold acts as Ctrl
g_LastCtrlKeyDownTime := 0
g_AbortSendEsc := false
g_ControlRepeatDetected := false

*CapsLock::
if (g_ControlRepeatDetected)
{
    return
}
1. Create ~/.ctags as follows (note: the regexes use ^[ \t]* so they only match declarations at the start of a line, ignoring leading whitespace):
--langdef=Scala
--langmap=Scala:.scala
--regex-Scala=/^[ \t]*class[ \t]*([a-zA-Z0-9_]+)/class \1/c,classes/
--regex-Scala=/^[ \t]*object[ \t]*([a-zA-Z0-9_]+)/object \1/o,objects/
--regex-Scala=/^[ \t]*trait[ \t]*([a-zA-Z0-9_]+)/trait \1/t,traits/
--regex-Scala=/^[ \t]*case[ \t]*class[ \t]*([a-zA-Z0-9_]+)/case class \1/m,case-classes/
--regex-Scala=/^[ \t]*abstract[ \t]*class[ \t]*([a-zA-Z0-9_]+)/abstract class \1/a,abstract-classes/
--regex-Scala=/^[^\*\/]*def[ \t]*([a-zA-Z0-9_]+)[ \t]*.*[:=]/f \1/f,functions/
#--regex-Scala=/^[^\*\/]*val[ \t]*([a-zA-Z0-9_]+)[ \t]*[:=]/\1/V,values/
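As a quick sanity check, a small Scala file like the sketch below (hypothetical names) exercises each pattern above; running ctags over it should emit one tag per declaration:

// Sample.scala - each declaration matches one of the ~/.ctags regexes above
abstract class Shape {              // tagged via the abstract-class pattern
  def area: Double                  // tagged via the def pattern (ends in :)
}

class Circle(r: Double) extends Shape {
  def area: Double = math.Pi * r * r
}

case class Point(x: Int, y: Int)    // tagged via the case-class pattern

trait Printable {                   // tagged via the trait pattern
  def print(): Unit = println(this)
}

object Main {                       // tagged via the object pattern
  def main(args: Array[String]): Unit = println(Point(1, 2))
}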
import org.apache.spark.sql.SparkSession

object SparkSessionS3 {
  // Create a SparkSession with optimizations to work with Amazon S3.
  def getSparkSession: SparkSession = {
    val spark = SparkSession
      .builder
      .appName("my spark application name")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.hadoop.fs.s3a.access.key", "my access key")
      // the s3a connector also needs the matching secret key (placeholder value)
      .config("spark.hadoop.fs.s3a.secret.key", "my secret key")
      .getOrCreate()
    spark
  }
}