
One of my recurring Python problems was naming modules in my projects. Whenever I used the same name as a built-in or higher-level module, I got into trouble: when I tried to import the built-in or higher-level library in my code, only my own module was found, and I got an import error.

Here is a clear explanation. Assume I have a project structure like this:

* root
	* my_app
		* logging
			* __init__.py
		* my_logging_code.py
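
A minimal sketch of the failure, assuming the structure above (the file contents are hypothetical):

```python
# my_app/my_logging_code.py
import logging  # resolves to my_app/logging/, which shadows the standard library module

logging.basicConfig(level=logging.INFO)
# AttributeError: module 'logging' has no attribute 'basicConfig'
```

Running `python my_app/my_logging_code.py` puts `my_app` at the front of `sys.path`, so the local `logging` package wins over the built-in one.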

I was using master as a dev branch, and I thought that a separate dev branch was not necessary. I changed my mind.

I didn't need a dev branch, because my projects never needed urgent bugfix or hotfix releases. But not getting into any trouble without a dev branch doesn't mean everything is always going to be okay.

#### Case

Think about this case: you are on version 0.2.0, and you have started working on version 0.3.0 in master, your development branch. In the testing or production environment, some urgent bugs are raised, and they should be fixed as soon as possible. What is going to happen? Are you going to fix them in master too? Or are you going to fix them in a branch named something like bugfix-0.2.1 or hotfix-0.2.1 and then merge it into master? But master already contains new features for 0.3.0 which shouldn't be released yet, so we can't make such a release while master is also our development branch. These kinds of releases are called bugfix or hotfix versions.

### Solution

Use a separate dev branch for ongoing feature work and keep master matching the latest release; a hotfix branch can then always be cut from master without dragging in unreleased features.
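
A sketch of how the flow could look with a dev branch (branch and version names are illustrative):

```sh
# 0.3.0 work happens on dev; master still matches release 0.2.0
git checkout -b hotfix-0.2.1 master   # cut the hotfix from the released state
# ...fix the bug, commit...
git checkout master
git merge hotfix-0.2.1                # release 0.2.1 from master
git checkout dev
git merge hotfix-0.2.1                # carry the fix into 0.3.0 development
```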

Apache Hive is a project that provides a SQL DSL, HiveQL, on top of MapReduce in the Hadoop ecosystem. The mapper(s) and reducer(s) are produced by Hive according to the given SQL. It is an alternative to Apache Pig.

There are many built-in functions in Hive, but sometimes we need our own custom functions. These custom functions are called UDFs, which stands for user-defined functions.

UDFs can be written in any language that can be built into a jar. For example, if a UDF is written in Clojure, it still needs to be built as a jar in the end.

After we generate the jar file containing our UDF code, we need to put it into Hive's auxiliary library folder. This folder is defined as a folder that contains extra libraries for Hive. Hive validates and loads them, and also informs Hadoop MapReduce (YARN) about the libraries so they get loaded there as well, because our UDF code is actually invoked inside the MapReduce job, not by Hive itself.
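
One way to do that is pointing `hive.aux.jars.path` at the jar in hive-site.xml (the path below is a placeholder; the `HIVE_AUX_JARS_PATH` environment variable can be used instead):

```xml
<!-- hive-site.xml -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hive/auxlib/lettercount-udf.jar</value>
</property>
```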

Here is a simple UDF that counts the letters in a given text:

```java
package dal.ahmetdal.hive.udf.lettercount;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class LetterCounter extends UDF {

    // Hive calls evaluate() once per row; null input yields null output
    public Integer evaluate(final Text input) {
        if (input == null) return null;
        return input.toString().length();
    }
}
```
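
Before HiveQL can call the UDF, it has to be registered against the class; `count_letters` below is just the name we pick for it:

```sql
CREATE TEMPORARY FUNCTION count_letters AS 'dal.ahmetdal.hive.udf.lettercount.LetterCounter';
```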
Then it can be called like any built-in function:

```sql
select count_letters('name');
```

Pip is the package manager for Python. You can download Python libraries from Python package repositories like PyPI, and you can also install libraries directly from a git repository. That is the issue explained in this article.

I don't like memorizing things all the time, so I guess I couldn't work without the internet :). Whenever I need to install some Python library from a git repository, I see a lot of ways to do it, and it is really confusing. That must be why I can't memorize it: a very simple requirement is handled in too many confusing ways, and there shouldn't be so many. Some of them don't even work. At last, I decided to blog it.

As you may know, you can use two protocols, http and ssh, to work with git repositories. Using ssh instead of http may provide some ease of use: because of the nature of ssh, you can authenticate with your private/public keys, so you don't have to input your credentials all the time. But I'll be giving examples for both protocols below.
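
A minimal sketch of the variants (repository URL and ref are placeholders):

```sh
# over http(s)
pip install git+https://github.com/owner/repo.git

# over ssh, authenticating with your keys instead of typed credentials
pip install git+ssh://git@github.com/owner/repo.git

# pin a branch, tag, or commit with @
pip install git+https://github.com/owner/repo.git@v1.0.0
```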

To write unit tests for Hive queries, you can use HiveRunner, which runs your HQL against an embedded Hive. Add the dependencies to your pom.xml (the JUnit version is an assumption; HiveRunner 3.x works with JUnit 4):

```xml
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.klarna</groupId>
    <artifactId>hiverunner</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>
```
Here is an example HQL script that we want to test:

```sql
-- execute_student_count_report.hql
use mydatabase;

INSERT INTO TABLE student_count_report
SELECT
    school.school_name,
    count(student.student_id) AS cnt
FROM school
LEFT JOIN student ON student.school_id = school.school_id
GROUP BY school.school_name;
```
The test class starts with the following imports:

```java
package dal.ahmet.hive.unittest;

import com.klarna.hiverunner.HiveShell;
import com.klarna.hiverunner.StandaloneHiveRunner;
import com.klarna.hiverunner.annotations.HiveSQL;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
```
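
A minimal sketch of a test built on those imports; the schema DDL, the names, and the tab-separated result format are assumptions that follow the report script above:

```java
@RunWith(StandaloneHiveRunner.class)
public class StudentCountReportTest {

    // HiveRunner starts an embedded Hive and injects a shell; no setup scripts are passed here
    @HiveSQL(files = {})
    private HiveShell shell;

    @Before
    public void createSchema() {
        shell.execute("CREATE DATABASE mydatabase");
        shell.execute("CREATE TABLE mydatabase.school (school_id INT, school_name STRING)");
        shell.execute("CREATE TABLE mydatabase.student (student_id INT, school_id INT)");
        shell.execute("CREATE TABLE mydatabase.student_count_report (school_name STRING, cnt BIGINT)");
    }

    @Test
    public void countsStudentsPerSchool() {
        shell.execute("INSERT INTO TABLE mydatabase.school VALUES (1, 'School A')");
        shell.execute("INSERT INTO TABLE mydatabase.student VALUES (1, 1), (2, 1)");

        // the query under test, same as in execute_student_count_report.hql
        shell.execute("INSERT INTO TABLE mydatabase.student_count_report "
                + "SELECT school.school_name, count(student.student_id) AS cnt "
                + "FROM mydatabase.school school "
                + "LEFT JOIN mydatabase.student student ON student.school_id = school.school_id "
                + "GROUP BY school.school_name");

        // result rows come back as tab-separated strings
        java.util.List<String> rows =
                shell.executeQuery("SELECT school_name, cnt FROM mydatabase.student_count_report");
        Assert.assertEquals(1, rows.size());
        Assert.assertEquals("School A\t2", rows.get(0));
    }
}
```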