{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data analysis of Linkedin Skills and positions\n",
"Tingxiang Zhu & Mengyao Zhang"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction\n",
"\n",
"The aim of the project is to study through the **LinkedIn** data to find out the **relationships between personal skills and occupations fields**. We are going to analyze how certain types of skills contribute to a person’s chance of getting jobs in certain field. We also want to use a person’s education background as supporting information to see if the college a person studied in will add on the chances.\n",
"\n",
"The whole project can be divided into five parts \n",
"* data collecting: Collect data of personal skills and positions by scrapying Linkedin profile data\n",
"* pre-processing: Process the raw data we collected (most important)\n",
"* model training: Using model similar to TF-IDF to analysis relationship between skills and occupations\n",
"* model evaluating: Test model after training\n",
"* result visualizing: Using word-cloud and other ways to represent model results\n",
"\n",
"## Data Collection\n",
"\n",
"First of all, for analyzing, we need people’s skills and there current or past job information. Since LinkedIn doesn’t provide a public API that allows us to extract skill and job data directly, we are getting them through screen scraping. We start from a single person’s name and visit his or her homepage through linkedin url. Meanwhile, we store “also viewed” homepages as new entries for screen scraping. For each home page, we fetch all the listed skills, and titles for current and past work experiences. For each skill, we store the count of endorsement on that skill as a weight of that skill. We also fetch the company information for each period of work experience and the school (education) information for each person in case of future analysis usage. \n",
"\n",
"In all, we have scraped 7,000 lines of data from LinkedIn webpages. Here is one code snippet example of skills scrapy about how we scrap data from the website:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Code Snippet from project file linkedin.py\n",
"# Reference: https://github.com/idwaker/linkedin\n",
"with WebBus(browser) as bus:\n",
" bus.driver.get(LINKEDIN_URL)\n",
" login_into_linkedin(bus.driver, username)\n",
" iteration = 0;\n",
" idx = 0;\n",
" while iteration < 20000:\n",
" name = all_names[idx]\n",
" if name in all_names[:idx]:\n",
" idx += 1\n",
" continue\n",
" iteration += 1\n",
" idx += 1\n",
" click.echo(\"Getting ...\")\n",
" try:\n",
" search_input = bus.driver.find_element_by_id('main-search-box')\n",
" except NoSuchElementException:\n",
" continue\n",
" search_input.send_keys(name)\n",
" search_form = bus.driver.find_element_by_id('global-search')\n",
" search_form.submit()\n",
" profiles = []\n",
" results = None\n",
" try:\n",
" results = bus.driver.find_element_by_id('results-container')\n",
" except NoSuchElementException:\n",
" continue\n",
" links = results.find_elements_by_xpath(link_title)\n",
"\n",
" # get all the links before going through each page\n",
" links = [link.get_attribute('href') for link in links]\n",
" for link in links:\n",
" # XXX: This whole section should be separated from this method\n",
" bus.driver.get(link)\n",
" overview = None\n",
" overview_xpath = '//div[@class=\"profile-overview-content\"]'\n",
" try:\n",
" overview = bus.driver.find_element_by_xpath(overview_xpath)\n",
" except NoSuchElementException:\n",
" click.echo(\"No overview section skipping this user\")\n",
" continue\n",
" skills = ''\n",
" skills_xpath = '//a[@class=\"endorse-item-name-text\"]'\n",
" try:\n",
" skills_summary = overview.find_elements_by_xpath(skills_xpath)\n",
" except NoSuchElementException:\n",
" skills = ''\n",
" else:\n",
" try:\n",
" skills_list = [skill.text for skill in skills_summary]\n",
" for i in skills_list:\n",
" skills += str(i)+','\n",
" except Exception, e:\n",
" pass\n",
" name_elements = None\n",
" name_list = []\n",
" name_xpath = '//a[contains(@href,\"trk=prof-sb-browse_map-name\")]'\n",
" try:\n",
" name_elements = overview.find_elements_by_xpath(name_xpath)\n",
" except NoSuchElementException:\n",
" name_list = ''\n",
" print \"failed\"\n",
" else:\n",
" for element in name_elements:\n",
" try:\n",
" if str(element.text.strip().lower()) != '' and (str(element.text.strip().lower()) not in name_list) and (str(element.text.strip().lower()) not in all_names):\n",
" name_list.append(str(element.text.strip().lower()))\n",
" except Exception:\n",
" continue\n",
" if len(name_list) != 0:\n",
" with open(\"list_of_names.csv\",\"a+\") as f:\n",
" wr = csv.writer(f,delimiter=\"\\n\")\n",
" wr.writerow(name_list)\n",
" all_names = collect_names(infile)\n",
" data = {\n",
" 'skills':skills\n",
" }\n",
" profiles.append(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data is stored in a csv file.By loading them into pandas dataframe, we can have a brief look at the data. (Full code: data_process.py)\n",
" "
]
},
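{
"cell_type": "markdown",
"metadata": {},
"source": [
"The loading step itself lives in data_process.py; a minimal sketch of what it might look like (the file name below is a placeholder, not the project's actual file name):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Minimal loading sketch (placeholder file name); the real loading code is in data_process.py\n",
"import pandas as pd\n",
"\n",
"dataframe = pd.read_csv('linkedin_profiles.csv')  # one row per scraped LinkedIn profile"
]
},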
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print dataframe.dtypes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"fullname object\n",
"locality object\n",
"industry object\n",
"current summary object\n",
"past summary object\n",
"education object\n",
"skills object\n",
"endorsements object\n",
"positions object\n",
"dtype: object"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"print dataframe.head(5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"head fullname locality \\\n",
"0 Tingxiang Zhu (Star) Pittsburgh, Pennsylvania \n",
"1 Jiaming Ni (Oscar) Pittsburgh, Pennsylvania \n",
"2 Jiaming Ni Shanghai City, China \n",
"3 Jiaming Ni China \n",
"4 Yuqi Wang (yuki) Pittsburgh, Pennsylvania \n",
"\n",
" industry current summary \\\n",
"0 Information Technology and Services NaN \n",
"1 Information Technology and Services NaN \n",
"2 Mechanical or Industrial Engineering Honeywell Aerospace \n",
"3 Broadcast Media Shanghai Media Group \n",
"4 Internet NaN \n",
"\n",
" past summary \\\n",
"0 DaoCloud.io, 10years.me, Hand Enterprise Solut... \n",
"1 NetEase \n",
"2 SKF Global Technical Center China, Donghua Uni... \n",
"3 Shanghai Meda Group \n",
"4 Rakuten, Hundsun Technologies Inc. \n",
"\n",
" education \\\n",
"0 Carnegie Mellon University \n",
"1 Carnegie Mellon University \n",
"2 Donghua University \n",
"3 NaN \n",
"4 Carnegie Mellon University - H. John Heinz III... \n",
"\n",
" skills \\\n",
"0 Cloud Computing,Python,Hadoop,SQL,Start-ups,in... \n",
"1 Python,Java,Shell Scripting,MySQL,MapReduce,Ha... \n",
"2 Testing,Engineering,NI LabVIEW,Matlab,Manufact... \n",
"3 NaN \n",
"4 Java,Linux,Microsoft Office,Databases,HTML,Mic... \n",
"\n",
" endorsements \\\n",
"0 5,6,5,5,2,2,2,3,1,2,1,1,1,1, \n",
"1 8,8,6,6,4,3,2,0,0,0,0,0,0,0,0,0,0, \n",
"2 2,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0, \n",
"3 NaN \n",
"4 20,14,13,12,9,6,6,5,4,3,3, \n",
"\n",
" positions \n",
"0 Software Engineer Intern,Co-Founder,Business I... \n",
"1 Software Development Intern, \n",
"2 Advanced Manufacturing Engineer,Hard Machining... \n",
"3 Researcher,Researcher,Researcher,Editor, \n",
"4 Software Engineer,Android Developer Intern, "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By acting some simply aggregation, the some statistics we can get from the raw data is as listed below."
]
},
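{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the kind of aggregation involved (not the exact project code; it assumes the 'skills' and 'positions' columns shown above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Sketch of the aggregation behind the statistics below (not the exact project code)\n",
"from collections import Counter\n",
"\n",
"skill_counter = Counter()\n",
"position_counter = Counter()\n",
"for _, row in dataframe.dropna(subset=['skills', 'positions']).iterrows():\n",
"    skill_counter.update(s.strip().lower() for s in row['skills'].split(',') if s.strip())\n",
"    position_counter.update(p.strip() for p in row['positions'].split(',') if p.strip())\n",
"\n",
"print(len(skill_counter))      # number of different skills\n",
"print(len(position_counter))   # number of different job titles\n",
"print([p for p, _ in position_counter.most_common(10)])  # most popular positions"
]
},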
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Number of different skills:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"9541"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Number of different job titles:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"num of positions: 23994"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most popular positions (TOP 10):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"['Director', 'Manager', 'Software Engineer', 'Consultant', 'Vice President', 'Owner', 'Project Manager', 'President', 'Intern', 'Founder']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"People with top number of skills:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"Christina Quinones"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"His/Her experiences:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"Oracle Applications,Oracle E-Business Suite,ERP,Oracle,CRM,Business Process,Testing,Business Analysis,Project Management,Visio,Microsoft Excel,Data Analysis,Oracle CRM,Analysis,Leadership,Troubleshooting,Program Management,Management,Financial Modeling,Cloud Computing,Oracle Order Management,Sales,Lean Process/DFSS Green...,MS Access, Excel, Word,Shoretel Administration,Project Management,Strategy,Agile Methodologies,Hedge Funds,Fixed Income,Software Development,Derivatives,SDLC,Management,Equities,SQL,Software Project...,.NET,Asset Managment,Consulting,Bloomberg,C#,Data Warehousing,Software Engineering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most well mentioned skills overall:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"['management', 'leadership', 'strategy', 'marketing', 'project management', 'social media', 'business development', 'strategic planning', 'program management', 'sales']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most well mentioned skills among people who have been “Software Engineer”:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"{'software development': 16, 'xml': 11, 'java': 11, 'javascript': 10, 'sql': 10, 'agile methodologies': 9, 'c#': 9, 'ajax': 8, 'linux': 8, 'scrum': 8}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Pre-processing\n",
"\n",
"### Raw Data Problems\n",
"\n",
"However, many problems exist in the raw data. \n",
"* **Invalid data problem**\n",
" * The homepage of some Linkedin user are written in languages other then English.\n",
" * Some positions is empty.\n",
" * Some users did not fill in any skills.\n",
" \n",
" \n",
"* **Job title similarity problem: **\n",
" ***This is a very tough nut to crack in this project. ***For example, some people use “Software Engineer” as their job titles while others use “Software Developer”. Since the meaning of these positions are similar, we want to simply merge those similar types of job as one. Likewise, we need to apply the merging to similar skills such as “teamwork” and “teamworking”. The school names are mostly in standard form."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"### Pre-processing of data\n",
"#### Drop invalid data\n",
"\n",
"The pre-processing task includes removing invalid data, unifying the expression of job positions and skills, and categorizing them. \n",
"\n",
"* Remove invalid data\n",
"Since we focus on the trend between skills and positions, we mainly deal with these two columns. We drop the NaN records and store them as dictionary for later processing.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"## Code Snippet from file data_process.py\n",
"def read_skill_and_position(df):\n",
" # drop NaN records\n",
" df = df.dropna(subset=['positions', 'skills'])\n",
" position_dic = {}\n",
" positions_total = []\n",
" top_skill_num = 0\n",
" for index, row in df.iterrows():\n",
" positions = row['positions'][:-1].split(\",\")\n",
" skills = row['skills'][:-1].split(\",\")\n",
" skills = [skill.lower() for skill in skills if skill]\n",
" if len(skills) > top_skill_num:\n",
" top_person = index\n",
" top_skill_num = len(skills)\n",
" positions_total.extend(positions)\n",
" for position in positions:\n",
" if position_dic.has_key(position):\n",
" position_dic[position.strip()] += Counter(skills)\n",
" else:\n",
" position_dic[position.strip()] = Counter(skills)\n",
" positions_set = set(positions_total)\n",
" position_counter = Counter(positions_total)\n",
" return position_dic, position_counter, top_person, positions_set"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After process of data, we can get positions lists and skill lists for later analysis."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"Snippet of positions list:\n",
" \n",
"['Senior Security Architect', ' Police Central e-Crime Unit', 'Consulting Services Manager', \n",
" 'Cape Sharp Tidal', 'Director of Training', 'Regional Sales Director Middle East',\n",
" 'Manager- Business Planning and Analysis', 'Product & Services Manager', \n",
" ' Marketing & Business Operations/Vice President', ' Development Manager', \n",
" 'Temporary', \"Vice President of America's Region Marketing\", 'Senior Consultant']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"Snippet of skills list:\n",
" \n",
"['java script', 'ministry leadership', \"children's books\", \n",
" 'probability', 'garden coaching', 'maple', 'logicworks', 'permanent life insurance',\n",
" 'activity coordination', 'computer arithmetic', 'dodaf', 'mathematical programming', \n",
" 'tech sherpa', 'energy work', 'member of agcas...', 'goldsmithing', 'asp.net web api',\n",
" 'metal card', 'bathroom accessories', 'google website optimizer', \n",
" 'iar ezbedded workbench', 'undercover', 'icefaces', 'gifting strategies']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Clustering positions and skills data:\n",
"\n",
"As we said above about the \"job tital similarity problem\", we can apply some natural language processing such as lemmatizing and word similarity analysis. Since we have 23994 different positions and 9541 different skills right now, We expect to have much less types of skills and job positions remained after the processing. Therefore, we may categorize them manually. \n",
"\n",
"So, the actual problem here is how to cluster a list of words like positions and skills?\n",
"\n",
"From the course we know that we can represent strings in a numerical vector. One typical approach is to combine k-means clustering with Distance, but how to represent means of string? We normally use weight called as TF-IDF weight, but that is mostly related to the area of \"text document\" clustering. We have no documents, but we want to clustering single words.\n",
"\n",
"We first need to convert word to vector:\n",
"* ** Method 1:** using **edit distance** to calculate similarity between words:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# http://stats.stackexchange.com/questions/123060/clustering-a-long-list-of-strings-words-into-similarity-groups\n",
"words = np.asarray(list(positions_set)) #So that indexing with a list will work\n",
"# https://pypi.python.org/pypi/editdistance , a faster algorithm than Levenshtein distance\n",
"lev_similarity = -1*np.array([[editdistance.eval(w1,w2) for w1 in words] for w2 in words])\n",
"affprop = sklearn.cluster.AffinityPropagation(affinity=\"precomputed\", damping=0.5)\n",
"affprop.fit(lev_similarity)\n"
]
},
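{
"cell_type": "markdown",
"metadata": {},
"source": [
"To read the clusters out of the fitted model, one can group the words by `affprop.labels_` and label each cluster by its exemplar, along the lines of the Stack Exchange answer referenced above (a sketch, not taken verbatim from our project files):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Sketch: print every cluster as \"exemplar: members\"\n",
"for cluster_id in np.unique(affprop.labels_):\n",
"    exemplar = words[affprop.cluster_centers_indices_[cluster_id]]\n",
"    members = np.unique(words[np.nonzero(affprop.labels_ == cluster_id)])\n",
"    print(' - *%s:* %s' % (exemplar, ', '.join(members)))"
]
},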
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we get the result like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
" - *Vice President:* Asst Vice-President, Doctoral Student, Global President, Graduate Student, President, Senior Vice President, Vice President, Vice-President\n",
" - *PGA Assistant Golf Professional:* Assistant Golf Professional, PGA Assistant Golf Professional, PGA Head Golf Professional, PGA Lead Assistant Golf Professional\n",
" - *Vice President - Account:* President & Author, Executive Vice President - Account, General Ledger Accountant, Vice President - Account, Vice President of Peabody Hall\n",
" - *Accounting Specialist:* Accounting Assistant, Accounting Specialist, Application Specialist, Communications Specialist, Hard Machining Specialist, Laboratory Specialist, Staffing Specialist\n",
" - *Assistant Manager:* Account Technical Manager, Assistant Client Manager, Assistant Director, Assistant Engineer, Assistant Executive, Assistant Location Manager, Assistant Manager, Business Alliance Manager, Engineering Manager, Install Sales Manager, Operations Manager, Sales Associate and Cashier\n",
" - *Graduate Student Researcher:* Final Year Project Researcher, Graduate Student Instructor, Graduate Student Researcher, Student Researcher, Undergraduate Researcher, Undergraduate independent research\n",
" - * Economics Department:* Economics Department, Pharmacy Department, Economics 10 Teaching Fellow, Economics Tutor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It seems ok but when we analysis it carefully, we can find out that this method only take account of the string similarity in the word.\n",
"For us, we need the similarity of the meaning between the words. \n",
"So, we come up another method:\n",
"\n",
"* Method 2: Using wiki pedia search list word to cluster the positons. Since when we search wikipeida, we can get a list of words that has the close meaning. We used think that may be a good idea since the list of wiki must be edited manually. We expect higher accuracy. For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"wikipedia.search(\"obama\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"[u'Barack Obama', u'Family of Barack Obama', u'Michelle Obama', u'Crush on Obama', u'Barack Obama Sr.', u'Obama logo', u'Presidency of Barack Obama', u'Barack Obama in comics', u'Obama']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But when we cluster all positions using this method, we only drop about 200 positions with 20000 postions left.\n",
"The result is really bad."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* Method 3: We can search every position words in WikiPedia and then get all the wiki documents as the training material. Finally we can get the vector of words. This method can let us calculate similarity between words by their real meaning instead of just the surface similarity of string.\n",
"In this process, we use [**Gensim**](https://radimrehurek.com/gensim/) python library find the similarity. Since it can automatically extract semantic topics from raw document. \n",
"Here is the code snippet to get Wiki documents and get similarity:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Code Snippet from project file wiki_word2vec_test.py\n",
"# Reference :http://www.shuang0420.com/2016/05/30/gensim-word2vec%E5%AE%9E%E6%88%98/\n",
"# -*- coding: utf-8 -*-\n",
"import gensim\n",
"from gensim.models import Phrases\n",
"from gensim.corpora import WikiCorpus\n",
"from gensim.models import Word2Vec\n",
"from gensim.models.word2vec import LineSentence\n",
"from stop_words import get_stop_words\n",
"import os\n",
"import logging\n",
"import re\n",
"import multiprocessing\n",
"import sys\n",
"reload(sys)\n",
"sys.setdefaultencoding('utf-8')\n",
"\n",
"# logging information\n",
"logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')\n",
"logging.root.setLevel(level=logging.INFO)\n",
"\n",
"# get input file, text format\n",
"inp = sys.argv[1]\n",
"input = open(inp, 'r')\n",
"output = open('output_word.seq', 'w')\n",
"# remove the stop words\n",
"stop_words = get_stop_words('en')\n",
"\n",
"# read file and separate words\n",
"for line in input.readlines():\n",
" line=line.strip('\\n')\n",
" new_line = ''\n",
" for i in line.split(' '):\n",
" if i.isalpha():\n",
" if i.lower() not in stop_words:\n",
" new_line += i.lower() + ' '\n",
" output.write(new_line)\n",
"\n",
"output.close()\n",
"output= open('output_word.seq', 'r')\n",
"model = Word2Vec(LineSentence(output), size=100, window=3, min_count=5,workers=multiprocessing.cpu_count())\n",
"# # save model\n",
"model.save('output_word.model')\n",
"model.save_word2vec_format('output_word.vector', binary=False)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After save the model we can use this training model get the similarity we need:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# test\n",
"model=gensim.models.Word2Vec.load('output_word.model')\n",
"vocab = list(model.vocab.keys())\n",
"print model.similarity('engineer', 'consultant')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"0.512911466709"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Besides, we also tried some other methods of clutering these words such as using phrases model instead words model. However, the clustring result is not good as the words model. \n",
"Finally, after we get the similarity between the words of positions, it's much easier for us to clustering all the positions.\n",
"We found a job field list as follows. We used this list to compare similarity with all the positions in our data. Since some positions are not single word, we iterate compare each word in phrases then get the highest similarity as its category.\n",
"\n",
"Code Snippets:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\n",
"# Code Snippet from project file get_new_positions.py \n",
"model=gensim.models.Word2Vec.load('output_word.model')\n",
"count = 0\n",
"category_list = ['software','product', 'consultant','analyst', 'accounting','manufacturing','sports','banking','fundraiser','fashion','information',\n",
"'operation', 'market', 'reporter', 'sales', 'finance', 'hr', 'public','technology','business']\n",
"for word in positions_set:\n",
" old_word = word\n",
" new_position = ''\n",
" word = re.sub('[^0-9a-zA-Z]+', ' ', word)\n",
" word_ist = word.split()\n",
" max_similarity = 0\n",
" for i in word_ist:\n",
" for j in category_list:\n",
" simi = 0\n",
" try:\n",
" simi = model.similarity(i.lower(), j)\n",
" except:\n",
" continue\n",
" if simi >= max_similarity:\n",
" max_similarity = simi\n",
" new_position = j\n",
" if new_position not in category_list:\n",
" new_position = 'other'\n",
" count += 1\n",
" new_positions_dict[old_word] = new_position"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Snippet of final old-positon: new-postions dictionary:\n",
"\n",
"['Software Developer Engineer in Test': 'software', 'Commercial Restructuring Executive': 'finance', 'Partner/Creative': 'fashion', ' Web 1.0': 'software', 'Web Content Producer': 'reporter', 'Senior Law Enforcement Policy Analyst for US Army': 'analyst', 'Short-term attachment': 'fundraiser', 'CHRO & EVP Human Resources': 'hr', ' Business Developmen'Lab Intern': 'fundraiser', 'People & Culture Product Manager': 'product', 'Java/UI Developer': 'software']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We generally checked the text, most category is reasonable. \n",
"Using this new postions list, we can process our analysis model better.\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## Analysis Model\n",
"\n",
"There are two type of models we used in the study on LinkedIn profiles. \n",
"* **Skill Model** to transform skill set and position set of each person into numeric vectors\n",
"* **Learning Model** to learn from the training data to perdict a person's position set from his/her skills\n",
"\n",
"### Skill Model\n",
"\n",
"The skill model aims to change the lists of string skills and positions into numeric vectors so that they can be accepted by the learning model. \n",
"\n",
"#### Feature Vector\n",
"\n",
"The skills for people is somewhat similar to words in natual language processing. Person A may shares some skills with person B and some other skills with person C, which is just like the occurence of words in documents. The more skills two people have in common, the more likely they obtain the same position, as how we classify documents by word frequency. \n",
"\n",
"Therefore, se apply the idea in natural language processing to quantify the skills and positions. That is, the TF-IDF matrix. In NLP, we use the term frequency (TF) to evaluate the weight of a word in a document. In skill processing, we use the endorsement on that skill (SE) alternatively, since endorsement in a way shows us how well the person really master the skill. Inverse document frequency (IDF) is used to evaluate the importance of a term by how rare it appears among documents. Accordingly, we use inverse person frequency (IPF) to weigh the importance of a skill. A skill is more generally mastered by everyone (Microsoft for instance), it is less likely that it can represent certain type of job positions. The detailed correspondence of TF-IDF and our skill model is shown in the table below.\n",
"\n",
"![Named Entity Recognition](http://i.imgur.com/urNysdM.png)\n",
"\n",
"A union of all skills acquired by each person forms the whole skill set, and we can build the skill vector based on the skill set. \n",
"\n",
"For instance, if the whole skill set is $['marketing','leadership','microsoft','python','data analysis','management']$ with IPF $[2, 1, 0.1, 2, 4, 2]$. \n",
"\n",
"A person with skill $['data analysis', 'microsoft','leadership']$ and endorsement $[10, 15, 4]$ has skill vector: $[0, 4, 1.5, 0, 40 ,0]$\n",
"\n",
"\n",
"We have also tried matrix using only SE without IPF, the classification result is slightly lower to the one with IPF. Therefore, SE-IPF is our final model for features.\n",
"\n",
"#### Label Vector\n",
"\n",
"One challenge part of our skill analysis study is that each person has had different type of positions through their lives.Therefore, the class label should be a set of positions instead of one single position. That is, certain input will get multiple labels for both training and testing data.\n",
"\n",
"In this case, we use ad-hoc representation for labels. The pre-processing already generates us a position set: \n",
"\n",
"$['product','fashion', 'finance', 'business', 'reporter', 'hr', 'sales', 'fundraiser', 'accounting', 'operation', 'manufacturing',\\\\ 'technology', 'analyst', 'market', 'information', 'consultant', 'sports', 'banking', 'other', 'public', 'software']$ \n",
"\n",
"Thus, the position vector is build according to this set. A position taken by a person (at any time through his/her life) is marked 1 in position vector, otherwise it is marked 0.\n",
"\n",
"For instance, a person has taken job as $['software', 'business','analyst']$ will be assigned label vector $[0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1]$.\n",
"\n",
"We mark the position 1 as long as it is taken by a person, no matter when has he/she taken the position or how many times he/she taken that position.\n",
"\n",
"### Learning Model\n",
"\n",
"The fact that a set of skills can lead to multiple positions make our classification task a multilabel classification task. We expect to see multiple possible positions when we do the prediction on a certain set of skills. Therefore, we use the OneVsRestClassifier in sklearn.multiclass which provides the multilabel function.\n",
"\n",
"For the internal estimators of the classfier, we have test linear SVM, Multinomial Naive Bayes, Gaussian Naive Bayes and Orthogonal Matching Pursuit. The classification results for those different estimators are listed in the table below. The classication results for using only SE without IPF is also included.\n",
"\n",
"[![Alt text](img/skill_graph_sample.png)](http://example.net/)\n",
"\n",
"We have also tried to implement our own Multinomial Naive Bayes classifier but the result is far from comparable with sklearn. Therefore we end up selecting the classifiers in sklearn.\n",
"\n",
"Notice in the result table, there are average precision and recall as well as f1-score. Considering the actuall situation that a person may want to use the prediction on his/her skills to decide which position to apply for, it would be preferred to have more possible positions with some of them not so related instead of having all positions related but miss some potential positions. Therefore, we think we should value recall over precision. At last, we choose Multinomial Naive Bayes with highest recall.\n",
"\n",
"## Result\n",
"\n",
"### Position Predict\n",
"\n",
"With the model developed in previous section, we are able to predict a person's potential position given his/her skills and corresponding endorsements.\n",
"\n",
"For instance, Given a person's skill as $['java', 'python', 'programming', 'visual studio']$ with endorsements $[1,1,1,1]$, the prediction result will be $['software', 'information', 'reporter']$.\n",
"\n",
"While the report doesn't seem very related, the top related positions according to our common sense - software and information are in the predict output. Moreover, probably a good programmer can become a good reporter!\n",
"\n",
"### Skill graph\n",
"\n",
"One thing good about Naive Bayes classifier is that is actually predict from the conditional probability per feature per class and it records the conditional probabilities in classifier variables. Therefore, we can access those probabilities and find out what skills take more weights on obtaining a certain position.\n",
"\n",
"To visualize the importance of skills in certain positions, we use wordgraph package in python to build a skill graph for each position based on the skills' conditional probabilities. Some sample figures are shown as below.\n",
"\n",
"\n",
"![](http://i.imgur.com/lKUEM7u.png)\n",
"\n",
"From the graph, we can get some idea about which skills are essential when you want to apply for certain positions.\n",
"\n",
"For instance, software jobs emphasize skills on software development. Knowledge about leading technologies such as cloud computing, data center will also add to the chance of entering such field. On the other hand, public jobs require for skills like public relations, and dealing with the media. Accounting jobs asks more for skills on accounting and finance report.\n",
"\n",
"### Conclusion & Future Works###\n",
"\n",
"#### Conclusions\n",
"\n",
"In this project, we scrapy the data from Linkedin Website to analysis how skills meet the requirements of positions in specific fields. In this project, we finished the completed process of \"practical data science\". \n",
"\n",
"We begin from getting raw data from Linkedin, and convert the data to better expression for latter analysis. Then we used model to train the training data and do prediction using test data. Finally, we use the popular word-cloud finish the data visualization. \n",
"\n",
"By doing the whole process step by step, we also tried a lot of new models and libraries to optimize the final model.\n",
"\n",
"The final complete model provides two functionalities:\n",
"\n",
"** 1. Predict potential positions given a set of skills **\n",
"\n",
"** 2. Illustrate the important skill in each of the 21 field by word graph **\n",
"\n",
"Using this model, we can identify that there do exist certain pattern between the skills and job positions, which is consistent with our original assumption.\n",
"\n",
"#### Future Works\n",
"\n",
"There are still some points to improve for our model.\n",
"\n",
"1. During data pre-processing, we filter and classify only the position data into 21 different fields to improve classification. In fact, the skills data also includes duplicate skills expressed in different words. Getting those data pre-processed may help further improve the result.\n",
"\n",
"2. We use all the skills when training the model, while some of them have little impact on the results. Applying some feature selection algorithm and reduce the dimension may be helpful.\n",
"\n",
"3. When using the positions as label, we mark a position as 1 if a person has taken that position regardless of how often and when did that person take it. Taking these information into consideration may help further improve the classification.\n",
"\n",
"4. One thing we noticed is that the trainning error of all tried estimators are nearly 0. That is, the training result suffers from overfitting. Having more trainning samples may help solving the problem.\n",
"\n",
"If we have time, we would like to try out those method in the future to see if they can acheive a higher accuracy."
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"Reference: \n",
"* http://www.tfidf.com/\n",
"* https://github.com/idwaker/linkedin\n",
"* http://www.shuang0420.com/2016/05/30/gensim-word2vec%E5%AE%9E%E6%88%98/\n",
"* https://radimrehurek.com/gensim/\n",
"* https://github.com/amueller/word_cloud"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [Root]",
"language": "python",
"name": "Python [Root]"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 0
}