Ganesh Chand ganeshchand
@ganeshchand
ganeshchand / mac_keyboard_shortcuts.md
Created December 4, 2023 20:48
keyboard shortcuts

Slack

| Command | Description |
| --- | --- |
| ⌘ / | Keyboard Shortcuts or Toggle to Help Panel |
| ⌘ G | Search |
| ⌘ N | New Message |
| ⌘ ⇧ N | New Canvas |
/* scala-cli script
This script is published as part of the blog - https://ganeshchand.com/blog/scala-cli-getting-started
The script demonstrates how to parameterize your scripts and share/run the script as github gist.
How to run this script:
scala-cli https://gist.github.com/ganeshchand/fa703cc5459aa92dd07210ea6d549765 -- "Scala is fun"
*/
// Greet with good morning, good afternoon, or good evening based on the time of day
val hour = java.time.LocalTime.now.getHour
val greeting = hour match {
  case h if h < 12 => "Good morning"
  case h if h < 17 => "Good afternoon"
  case _           => "Good evening"
}
// scala-cli scripts expose arguments passed after `--` as `args`
println(s"$greeting! ${args.mkString(" ")}")
@ganeshchand
ganeshchand / learn_scala_greetings.sc
Last active May 14, 2023 16:10
A Hello World Scala Script
println("Hello World, let us live in peace and harmony")
// Initial implementation: count rows to check whether there is any input data
import org.apache.spark.sql.DataFrame
val inputStockData: DataFrame = spark.read.json("/path/to/json/files")
val numInputRows = inputStockData.count()
val isInputDataEmpty = numInputRows == 0
if (!isInputDataEmpty) {
  // process input data
} else {
  // no input data; skip processing
}
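Counting every row just to answer a yes/or-no question forces a full scan of the input. On a DataFrame, `inputStockData.isEmpty` (or `inputStockData.head(1).isEmpty` on older Spark versions) stops after reading at most one row. The trade-off can be sketched with a plain Scala iterator standing in for the distributed dataset (illustrative only, not Spark code):

```scala
// A lazy "dataset" standing in for a DataFrame.
def data: Iterator[Int] = Iterator.range(0, 1000000)

// Like df.count > 0: consumes all one million elements just to get a number.
val nonEmptyByCount = data.size > 0

// Like !df.isEmpty: inspects at most one element, then stops.
val nonEmptyByHead = data.hasNext

println(nonEmptyByCount) // true
println(nonEmptyByHead)  // true
```

Both checks give the same answer; the second does a constant amount of work regardless of input size.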
/**
* Author: github.com/ganeshchand
* Date: 03/04/2021
 * Specifying a schema on read is mandatory or optional depending on which DataFrameReader you use:
 * spark.read is the batch DataFrame reader
 * spark.readStream is the streaming DataFrame reader
 * Let's write a quick test to see which reader requires us to specify a schema on read.
*/
// step1: Let's generate test dataset for csv, json, parquet, orc and delta
trait User {
  def name: String
}
// Plain constructor parameters do not implement the trait's abstract member;
// they must be declared `val`:
// class FreeUser(name: String, upgradeProbability: Double) extends User // doesn't compile
class FreeUser(val name: String, val upgradeProbability: Double) extends User
class PremiumUser(val name: String, val loyaltyPoints: Double) extends User
val user1 = new FreeUser("John", 0.75)
println(user1.name) // prints "John"
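A `FreeUser`/`PremiumUser` pair extending a common trait is the classic setup for type-based pattern matching. A minimal sketch (the `advertise` function and corrected field names are my own illustration, not from the gist):

```scala
trait User { def name: String }
class FreeUser(val name: String, val upgradeProbability: Double) extends User
class PremiumUser(val name: String, val loyaltyPoints: Double) extends User

// Dispatch on the runtime class of a User with a match expression.
def advertise(user: User): String = user match {
  case p: PremiumUser                            => s"Welcome back, ${p.name}!"
  case f: FreeUser if f.upgradeProbability > 0.5 => s"${f.name}, want to upgrade?"
  case f: FreeUser                               => s"Hi, ${f.name}"
}

println(advertise(new FreeUser("John", 0.75)))     // John, want to upgrade?
println(advertise(new PremiumUser("Jane", 120.0))) // Welcome back, Jane!
```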
@ganeshchand
ganeshchand / Mail.scala
Created March 7, 2017 06:38 — forked from mariussoutier/Mail.scala
Sending mails fluently in Scala
package object mail {
implicit def stringToSeq(single: String): Seq[String] = Seq(single)
implicit def liftToOption[T](t: T): Option[T] = Some(t)
sealed abstract class MailType
case object Plain extends MailType
case object Rich extends MailType
case object MultiPart extends MailType
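The two implicit conversions above are what make the DSL read fluently: a bare string is lifted to `Seq[String]` wherever a list of addresses is expected, and any value is lifted to `Option`. A quick illustration (the address is a placeholder):

```scala
import scala.language.implicitConversions

implicit def stringToSeq(single: String): Seq[String] = Seq(single)
implicit def liftToOption[T](t: T): Option[T] = Some(t)

// A single address is accepted where Seq[String] is expected...
val to: Seq[String] = "user@example.com"
// ...and a plain value where Option[T] is expected.
val subject: Option[String] = "Hello"

println(to)      // List(user@example.com)
println(subject) // Some(Hello)
```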
Ctrl + a go to the start of the command line
Ctrl + e go to the end of the command line
Ctrl + k delete from cursor to the end of the command line
Ctrl + u delete from cursor to the start of the command line
Ctrl + w delete from cursor to start of word (i.e. delete backwards one word)
@ganeshchand
ganeshchand / IntelliJ Out-of-memory Troubleshooting
Created July 18, 2016 05:47
IntelliJ Out-of-memory Troubleshooting
Check Memory usage:
println(sys.runtime.totalMemory())
println(sys.runtime.maxMemory())
println(sys.runtime.freeMemory())
Often, a Scala or Spark program will throw an OutOfMemoryError. When running locally, the Spark driver and executors run entirely inside the JVM that executes the code creating the SparkContext.
By that time, it's too late to obtain more Java heap than was allocated when the JVM started. You need to add an -Xmx argument (e.g. -Xmx1g) to the
command IntelliJ uses to launch the JVM that runs your code (the VM options field of the Run Configuration).
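To confirm the new heap setting took effect, print the maximum heap after relaunching; a quick sanity check (exact numbers vary by JVM and settings):

```scala
// Reports the maximum heap granted to this JVM, in megabytes.
val maxHeapMb = Runtime.getRuntime.maxMemory() / (1024 * 1024)
println(s"Max heap: $maxHeapMb MB") // with -Xmx1g, roughly 1024
```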