# Import libraries required for the neural network
import torch
# Vision-specific library containing our dataset; helps define how our images are loaded
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim

# Create the image transform applied when we load the dataset of PIL images
transform = transforms.Compose(
    [transforms.ToTensor()])  # Convert a ``PIL Image`` or ``numpy.ndarray`` to a tensor
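As a rough sketch of what ``ToTensor`` does under the hood (the array shapes here are illustrative, not from the original dataset): an H x W x C ``uint8`` image becomes a C x H x W float tensor scaled to [0, 1].

```python
import numpy as np
import torch

# Illustrative 32x32 RGB image as a uint8 array (H, W, C)
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

# Equivalent of transforms.ToTensor(): move channels first, scale to [0, 1]
tensor = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
print(tensor.shape)  # torch.Size([3, 32, 32])
```

This is why later layers in a torchvision pipeline can assume channel-first float input.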
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 10]  # Set our plot size
# Import required libraries, including gensim and the stdlib helpers below
import pprint
import sys
from collections import defaultdict
import gensim
from gensim import corpora
import os

# Initialize variables
directory = str(input("Enter directory for documents: "))  # Wait for input directory
document = ""
corpus = ""

# Utility functions
def doc_to_cor(document):
    # Append the document text to a .cor file named after the directory
    fedit = open(directory.replace('/', '') + ".cor", "a")
    fedit.write(document)
    fedit.close()
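To make the intent concrete, here is a hypothetical, self-contained sketch of the same idea: walk a directory and append each document to a single ``.cor`` corpus file. The helper name and file layout are illustrative, not from the original script.

```python
import os
import tempfile

# Hypothetical helper: append every file in `directory` to one .cor corpus file
def build_corpus(directory, cor_path):
    with open(cor_path, "a", encoding="utf-8") as fedit:
        for name in sorted(os.listdir(directory)):
            if name.endswith(".cor"):
                continue  # skip the corpus file itself
            path = os.path.join(directory, name)
            if os.path.isfile(path):
                with open(path, encoding="utf-8") as f:
                    fedit.write(f.read().strip() + "\n")

# Usage against a throwaway directory with two toy documents
with tempfile.TemporaryDirectory() as d:
    for i, text in enumerate(["first doc", "second doc"]):
        with open(os.path.join(d, "doc%d.txt" % i), "w", encoding="utf-8") as f:
            f.write(text)
    cor = os.path.join(d, "corpus.cor")
    build_corpus(d, cor)
    with open(cor, encoding="utf-8") as f:
        lines = f.read().splitlines()
print(lines)  # ['first doc', 'second doc']
```

Opening the corpus in append mode, as the original does, lets repeated runs accumulate documents rather than overwrite them.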
responsibilities ability to control & maintain large scale data coming from different sources. mining data from primary and secondary sources, then reorganizing said data in a format that can be easily read by either human or machine. using statistical tools to interpret data sets, paying particular attention to trends and patterns that could be valuable for diagnostic and predictive analytics efforts. preparing reports for executive leadership that effectively dashboard, communicate trends, patterns, and predictions using relevant data. records macros & knowledge of basic programming of vb through excel sheets
responsibilities collecting and interpreting data analyzing results reporting the results back to the relevant members of the business identifying patterns and trends in data sets working alongside teams within the business or the management team to establish business needs defining new data collection and analysis processes controlling existing database processing weekly and mo |
import paho.mqtt.client as mqtt
import time

def on_message(client, userdata, message):
    print("message received", str(message.payload.decode("utf-8")))
    print("message topic =", message.topic)

broker = "{Broker IP Address}"
username = "{MQTT Client Username}"
password = "{MQTT Client Password}"

# Wire up the callback and connect (typical paho-mqtt flow)
client = mqtt.Client()
client.username_pw_set(username, password)
client.on_message = on_message
client.connect(broker)
client.subscribe("{MQTT Topic}")
client.loop_forever()
# This is a sample pandas script which you can integrate with your Tableau Prep flow
# 'import pandas as pd' can be skipped, as pandas is already loaded on your server
# 'df' is the input dataset you connected
def get_data(df):
    return df.head(100)

def drop_duplicates(df):
    return df.drop_duplicates()
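To see what the two helpers do, here is a quick run against a toy frame (assuming pandas is available locally; the data is illustrative):

```python
import pandas as pd

def get_data(df):
    return df.head(100)

def drop_duplicates(df):
    return df.drop_duplicates()

# Three rows, one of which is an exact duplicate
df = pd.DataFrame({"id": [1, 1, 2], "value": ["a", "a", "b"]})
print(len(drop_duplicates(df)))  # 2
print(len(get_data(df)))         # 3
```

In a Prep flow, Tableau calls the chosen function with your connected dataset as ``df`` and uses the returned frame downstream.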