// jQuery script for validating user input
$(document).ready(function () {
    // Initialize the jQuery Validation plugin on the form
    $('#random-user').validate({
        rules: {
            count: {
                required: true
            }
        },
        submitHandler: function (form) {
            // Show a confirmation message once validation passes
            $('#random-user').append("Form submitted successfully!");
            form.submit();
        }
    });
});
#!/bin/bash
# Download the MySQL Connector/J archive, store it in the working folder, then unpack it for use:
tar zxvf mysql-connector-java-5.1.47.tar.gz
# Launch PySpark with the MySQL connector JAR on the driver classpath:
pyspark --driver-class-path mysql-connector-java-5.1.47/mysql-connector-java-5.1.47-bin.jar
# CSCI E-63 HW3 - Problem 3
# Author: Walter Yu
# Description: PySpark script to connect to a MySQL database, register the table and display the row count.
# Create SQL context and connect to MySQL over JDBC:
sqlContext = SQLContext(sc)
dfm = sqlContext.read.format("jdbc") \
    .option("url", "jdbc:mysql://localhost/retail_db") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .option("dbtable", "departments") \
    .option("user", "xxxxx") \
    .option("password", "xxxxx") \
    .load()
# Verify the schema, register a temporary view and display the row count:
dfm.printSchema()
dfm.createOrReplaceTempView("departments")
sqlContext.sql("SELECT COUNT(*) FROM departments").show()
'''
Author: Walter Yu
Course: CSCI E-63, Fall 2018
Assignment: HW3, Problem 2
References:
Slides 46-47, Lecture 3 Notes
SparkContext Tutorial: https://www.tutorialspoint.com/pyspark/pyspark_sparkcontext.htm
'''
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, Row, SparkSession
#!/bin/bash
# CSCI E63 HW4 - Walter Yu, Fall 2018
# Script commands to complete HW4
# P1 - Compress data files for transfer into the VM:
tar -zcvf retail-data.tar.gz ../e63-hw4-data-oreilly/data/retail-data
# P1 - Extract data file within the VM:
tar -zxvf prog-1-jan-2005.tar.gz -C /tmp
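The compress/extract steps above can also be sketched with Python's standard `tarfile` module; the file names here are hypothetical stand-ins for the assignment's data files, not the actual paths:

```python
import os
import tarfile
import tempfile

# Work in a throwaway directory with a hypothetical data file:
workdir = tempfile.mkdtemp()
data_path = os.path.join(workdir, "data.txt")
with open(data_path, "w") as f:
    f.write("sample retail data\n")

# Equivalent of `tar -zcvf archive.tar.gz data.txt`:
archive = os.path.join(workdir, "archive.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(data_path, arcname="data.txt")

# Equivalent of `tar -zxvf archive.tar.gz -C <dest>`:
dest = os.path.join(workdir, "extracted")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(dest)

with open(os.path.join(dest, "data.txt")) as f:
    print(f.read().strip())  # → sample retail data
```

The `"w:gz"` / `"r:gz"` modes correspond to the `-z` (gzip) flag in the shell commands.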
/*
CSCI E63 HW5 - Walter Yu, Fall 2018
Script commands to complete HW5
*/
-- Q1: Create tables:
create table stations(
    stations_id int(11),
    name varchar(64),
    latitude decimal(11),
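As a quick sanity check of the DDL above, the same column syntax can be exercised against an in-memory SQLite database (SQLite accepts `int(11)` and `decimal(11)` through type affinity); this is a local sketch, not the course's MySQL setup, and the statement is closed after `latitude` since the original snippet is truncated there:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Same leading columns as the stations DDL above:
cur.execute("""
    create table stations(
        stations_id int(11),
        name varchar(64),
        latitude decimal(11)
    )
""")

# Insert a sample row and display the row count, mirroring the HW pattern:
cur.execute("insert into stations values (1, 'Harvard Square', 42)")
count = cur.execute("select count(*) from stations").fetchone()[0]
print(count)  # → 1
```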
#!/bin/bash
# CSCI E63 HW5 - Walter Yu, Fall 2018
# Script commands to complete HW5
# P1 - Compress data files for transfer into the VM:
tar -zcvf retail-data.tar.gz ../e63-hw4-data-oreilly/data/retail-data
# P1 - Extract data file within the VM:
tar -zxvf prog-1-jan-2005.tar.gz -C /tmp
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StructuredNetworkWordCount").getOrCreate()

# Read a stream of lines from a local socket source:
lines = spark.readStream.format("socket") \
    .option("host", "localhost") \
    .option("port", 9999) \
    .load()

# Split the lines into words; explode gives one row per word:
words = lines.select(explode(split(lines.value, " ")).alias("word"))
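The split-and-explode step can be illustrated in plain Python: every line is split on spaces, each word becomes its own record, and the streaming aggregation then counts them. This is a local sketch of the logic with made-up input lines, not the streaming API itself:

```python
from collections import Counter

# Stand-in for a micro-batch of lines arriving on the socket:
lines = ["spark streams words", "spark counts words"]

# split(value, " ") then explode: one record per word
words = [word for line in lines for word in line.split(" ")]

# groupBy("word").count() equivalent:
word_counts = Counter(words)
print(word_counts["spark"])  # → 2
print(word_counts["words"])  # → 2
```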