from pyspark.sql import SparkSession
from pyspark.sql.functions import *

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('pyspark-test-run') \
    .getOrCreate()

spark.sparkContext.setLogLevel("ERROR")
from pyspark.sql import SparkSession
from pyspark.sql import Window
from pyspark.sql.functions import *

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('pyspark-test-run') \
    .getOrCreate()
1. How to find the second highest value in a Map<String, Integer>

import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

Map<String, Integer> books = new HashMap<>();
books.put("one", 1);
books.put("two", 22);
books.put("three", 333);
books.put("four", 4444);
books.put("five", 55555);
books.put("six", 666666);

// Sort the values in descending order, skip the maximum, and take the next one.
// Add .distinct() before sorting if duplicate values are possible.
Integer secondHighest = books.values().stream()
        .sorted(Comparator.reverseOrder())
        .skip(1)
        .findFirst()
        .orElse(null); // 55555
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val data = sc.parallelize(Seq(
  (101, "ram", "12-01-2021", 10001, 120.00),
  (102, "sam", "12-01-2021", 10002, 130.00),
  (101, "ram", "12-01-2021", 10003, 140.00),
  (103, "jam", "12-01-2021", 10004, 150.00),
  (101, "ram", "12-01-2021", 10005, 130.00),
  (103, "jam", "12-01-2021", 10006, 120.00),
  (102, "sam", "12-01-2021", 10007, 130.00)))
val dataDF = data.toDF("id", "name", "date", "transid", "amount")

// Number each customer's transactions, latest transid first.
val windowSpec = Window.partitionBy("id").orderBy(col("transid").desc)
val dataDF1 = dataDF.withColumn("row_number", row_number().over(windowSpec))

dataDF.printSchema
dataDF.show()
dataDF1.printSchema
dataDF1.show()

https://github.com/Thomas-George-T/Movies-Analytics-in-Spark-and-Scala
Hive query tuning techniques:
- Change the execution engine to Tez or Spark (put the Tez/Spark client jars on HADOOP_CLASSPATH)
- Partitioning - the PARTITIONED BY clause divides the table into partitions, one directory per partition-column value
- Bucketing - the CLUSTERED BY clause divides the table (or each partition) into a fixed number of buckets
- Join optimizations: map-side join, bucket map join, sorted bucket map join
- Use a suitable file format, e.g. ORC (Optimized Row Columnar); see the sketch after this list
- Indexing
- Vectorization, which works best together with ORC
- CBO (cost-based optimizer)
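As a rough illustration of the partitioning, bucketing, and ORC items above, here is a minimal PySpark sketch. The table name, columns, and bucket count are made up, and the HiveQL-style DDL is issued through spark.sql, which assumes a session created with enableHiveSupport(); exact DDL support varies by Hive/Spark version.

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master('local') \
    .appName('pyspark-test-run') \
    .enableHiveSupport() \
    .getOrCreate()

# Partitioned by order_date (one directory per value) and bucketed by customer_id
# into 8 buckets; storing as ORC also enables vectorized reads.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        transid INT,
        customer_id INT,
        amount DOUBLE
    )
    PARTITIONED BY (order_date STRING)
    CLUSTERED BY (customer_id) INTO 8 BUCKETS
    STORED AS ORC
""")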
import subprocess
from operator import add
from pyspark.sql import Row, SparkSession
from pyspark.sql import functions as f
from pyspark.sql.types import StructField, StringType, StructType

def getsparkwithhive():
    # enableHiveSupport() is what sets spark.sql.catalogImplementation to "hive"
    return SparkSession.builder.master('local').appName('pyspark-test-run').enableHiveSupport().getOrCreate()

def sparkwithhiveone():
    sparkwithhive = getsparkwithhive()
    try:
        assert sparkwithhive.conf.get("spark.sql.catalogImplementation") == "hive"
    except AssertionError:
        print("Hive catalog is not enabled for this session")
By default Hive uses the MR engine, but we can switch to Tez or even the Spark engine (in-memory computation).
But:
- Hive has the SQL-like HiveQL (HQL) and is the more natural fit when you are a SQL developer.
- Even though Hive has UDFs, it offers little room for core/complex business logic, whereas Spark has Spark SQL and lets you move from DataFrame to RDD and back to implement that logic (see the sketch below).
- Hive has no resume capability: a failed job starts over from the beginning.
- Hive cannot drop encrypted databases.
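To make the DataFrame-to-RDD round trip concrete, here is a minimal PySpark sketch; the data and the "complex" rule are invented for illustration.

from pyspark.sql import SparkSession

spark = SparkSession.builder.master('local').appName('pyspark-test-run').getOrCreate()

df = spark.createDataFrame(
    [(101, "ram", 120.0), (102, "sam", 130.0), (103, "jam", 150.0)],
    ["id", "name", "amount"])

# DataFrame -> RDD: drop down to Row objects for logic that is awkward in SQL/DSL.
def complex_rule(row):
    # stand-in for business logic that would be hard to express in HQL
    bonus = row.amount * 0.1 if row.name.startswith("r") else 0.0
    return (row.id, row.name, row.amount + bonus)

rdd = df.rdd.map(complex_rule)

# RDD -> DataFrame: back to the relational world for Spark SQL.
result = rdd.toDF(["id", "name", "amount_with_bonus"])
result.show()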
package com.kafkaconnectone;

import java.util.Map.Entry;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;