one-to-many, many-to-one, many-to-many relations in Django
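The gist body is not previewed here; as a reminder, a minimal sketch of the three relation types (Author, Publisher and Book are illustrative names, not taken from the original gist):

from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Publisher(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    # many-to-one: many books point to one publisher
    # (seen from Publisher, the same field is the one-to-many side)
    publisher = models.ForeignKey(Publisher, on_delete=models.CASCADE, related_name='books')
    # many-to-many: a book can have several authors, an author several books
    authors = models.ManyToManyField(Author, related_name='books')
    title = models.CharField(max_length=200)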
> df = spark.createDataFrame(
      [(1, 0), (3, 0)],
      ("a", "b")
  )
> transf_column(df, F.col('a') + F.col('a'), 'a').show()
+---+---+
|  a|  b|
+---+---+
|  2|  0|
|  6|  0|
+---+---+
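transf_column itself is not shown in this preview; a minimal implementation consistent with the call and output above (the signature is assumed from usage, not taken from the gist) could be:

from pyspark.sql import Column, DataFrame

def transf_column(df: DataFrame, expr: Column, name: str) -> DataFrame:
    # replace (or add) column `name` with the given Column expression
    return df.withColumn(name, expr)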
// Object Property Value Shorthand
let cat = 'Miaow';
let dog = 'Woof';
let bird = 'Peet peet';
let someObject = {
  cat,
  dog,
  bird
};
// equivalent to { cat: cat, dog: dog, bird: bird }
>>> def f(fst, *rest):  # usually *args
...     for this in rest:
...         print(this)
...
>>> f(1, 2, 3, 4)
2
3
4
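For reference, the first positional argument is captured by fst and only the remainder (rest, a tuple) is printed, so a single-argument call prints nothing:

>>> f(1)   # rest == (), the loop body never runs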
# UDFs are applied to column elements, not to whole columns,
# but they take Columns as args (pyspark.sql.Column)
# and declare a return type from pyspark.sql.types
from pyspark.sql import functions as F
from pyspark.sql.types import StringType
>>> def f(c1, c2):
...     return str(c1) + str(c2)
...
>>> fu = F.udf(f, StringType())
>>> df = spark.createDataFrame([(1, 'a'), (1, 'b'), (2, 'd')], ['c1', 'c2'])
>>> df.withColumn('test', fu(df.c1, df.c2)).show()
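The show() call above should print something along these lines (the test values follow directly from str(c1) + str(c2)):

+---+---+----+
| c1| c2|test|
+---+---+----+
|  1|  a|  1a|
|  1|  b|  1b|
|  2|  d|  2d|
+---+---+----+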
# DATA ########################################################
''' test table
['str_col', 'int_col']
('abc', 1)
('def1', 2)
('def1', 3)
('def1', 3)
+-------+-------+
|str_col|int_col|
+-------+-------+
|    abc|      1|
|   def1|      2|
|   def1|      3|
|   def1|      3|
+-------+-------+
'''
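A minimal way to build that test table (assuming an existing SparkSession named spark, as in the other snippets):

df = spark.createDataFrame(
    [('abc', 1), ('def1', 2), ('def1', 3), ('def1', 3)],
    ['str_col', 'int_col']
)
df.show()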
if __name__ == '__main__':
    print('mod1 is executing by having been called directly')
else:
    # when imported, __name__ is the module's own name ('mod1'), not the importer's
    print('mod1 is executing by having been imported; __name__ is ' + __name__)

def f():
    if __name__ == '__main__':
        print('mod1.f is executing from module mod1, run directly')
    else:
        print('mod1.f is executing from module ' + __name__)
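A quick way to exercise both branches (mod2.py is a hypothetical companion module, not part of the original gist):

# mod2.py
import mod1   # runs mod1's top-level code: the 'imported' branch prints
mod1.f()      # prints the message from f's else branch

# $ python mod1.py   -> 'mod1 is executing by having been called directly'
# $ python mod2.py   -> the import-time message, then the message from mod1.f()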
def f(func=None):
    # call the function passed in, if any, then do f's own work
    if func is not None:
        func()
    print('executing f')

def g():
    print('executing g')
    return g
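Passing g into f then behaves as follows (deterministic from the definitions above):

>>> f(g)   # g runs first, then f's own print
executing g
executing f
>>> f()    # nothing passed in, only f's print runs
executing f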
scala
$ to reference this
java
a = false ? 1 : 0;
# Row, Column, DataFrame, and plain values are different concepts, and operating over
# DataFrames requires understanding these differences well.
#
# withColumn + UDF | the UDF must receive Column objects
# select + UDF     | the UDF behaves as a mapping over the selected columns
from pyspark.sql import SparkSession
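A minimal sketch of the two patterns described above (column and function names are illustrative, not from the original gist):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['c1', 'c2'])

tag = F.udf(lambda c1, c2: str(c1) + str(c2), StringType())

# withColumn + UDF: pass Column objects, keep the original columns and add the new one
df.withColumn('tag', tag(F.col('c1'), F.col('c2'))).show()

# select + UDF: the UDF acts like a mapping, returning only the selected expression
df.select(tag(df.c1, df.c2).alias('tag')).show()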