Ugo Bechameil (bigorn0)
You can use select with varargs, including *:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum
import spark.implicits._

df.select($"*" +: Seq("A", "B", "C").map(c =>
  sum(c).over(Window.partitionBy("ID").orderBy("time")).alias(s"cum$c")
) : _*)

This:
- Maps column names to window expressions with Seq("A", ...).map(...)
- Prepends $"*" (all existing columns) with +:
- Unpacks the combined sequence into varargs with : _*
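The `: _*` expansion at the heart of this trick is analogous to Python's `*` argument unpacking. A minimal stdlib sketch (the `select` helper here is hypothetical, standing in for DataFrame.select; only the unpacking mechanics are the point):

```python
# Hypothetical select(): accepts varargs, like DataFrame.select(cols: Column*)
def select(*cols):
    return list(cols)

base = ["*"]                                    # analog of $"*"
derived = [f"cum{c}" for c in ["A", "B", "C"]]  # analog of the mapped window expressions
print(select(*(base + derived)))  # → ['*', 'cumA', 'cumB', 'cumC']
```

As in the Scala version, the base column is prepended to the derived list first, and the single combined sequence is then spread into the call's varargs.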
package main

import (
	"encoding/json"
	"fmt"
	"github.com/ghodss/yaml"
	"io/ioutil"
	"path/filepath"
)

// Sketch completing the snippet: read a YAML file (hypothetical name) and print it as JSON.
func main() {
	data, err := ioutil.ReadFile(filepath.Join(".", "config.yaml"))
	if err != nil { panic(err) }
	j, err := yaml.YAMLToJSON(data) // ghodss/yaml converts YAML bytes to JSON bytes
	if err != nil { panic(err) }
	fmt.Println(string(json.RawMessage(j)))
}
#!/usr/bin/env node
// Replace the
// chmod +x pg-test.js
// npm install --save pg
// ./pg-test.js
// Minimal connectivity check (sketch); connection settings come from PG* env vars.
const { Client } = require('pg');
const client = new Client();
client.connect()
  .then(() => client.query('SELECT NOW()'))
  .then(res => { console.log(res.rows[0]); return client.end(); })
  .catch(err => { console.error(err); process.exit(1); });