Update CentOS and reboot (you will need to log in again):
sudo yum update -y
sudo reboot
# *-----------------------------------------------------------------
# | PROGRAM NAME: ex matching.py
# | DATE: 6/25/21
# | CREATED BY: MATT BOGARD
# | PROJECT FILE:
# *-----------------------------------------------------------------
# | PURPOSE: very basic matching and IPTW analysis with balance diagnostics
# *-----------------------------------------------------------------
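Since the file's stated purpose is matching and IPTW analysis, a minimal illustration of the inverse probability of treatment weighting formula may be useful. The function name and toy data below are hypothetical sketches, not taken from the original script:

```python
import numpy as np

def iptw_weights(T, p):
    """Unstabilized IPTW weights from a 0/1 treatment indicator T
    and estimated propensity scores p: treated units are weighted
    by 1/p, controls by 1/(1-p)."""
    return T / p + (1 - T) / (1 - p)

# Toy example: two treated and two control units.
T = np.array([1, 1, 0, 0])
p = np.array([0.8, 0.5, 0.5, 0.2])
print(iptw_weights(T, p))  # [1.25 2.   2.   1.25]
```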
distributed:
  version: 2
  scheduler:
    bandwidth: 100000000    # 100 MB/s estimated worker-worker bandwidth
  worker:
    memory:
      target: 0.90     # target fraction to stay below
      spill: False     # fraction at which we spill to disk (False disables spilling)
      pause: 0.80      # fraction at which we pause worker threads
      terminate: 0.95  # fraction at which we terminate the worker
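Note that setting `target` (0.90) above `pause` (0.80) is unusual; Dask's shipped defaults keep target < spill < pause < terminate. As a quick sanity check, the fractions can be turned into byte thresholds for a hypothetical worker; the 4 GiB limit below is an assumption for illustration, not part of the config:

```python
# Hypothetical per-worker memory limit (an assumption, not in the config above).
memory_limit = 4 * 2**30  # 4 GiB

# Byte thresholds implied by the fractions in the config.
target_bytes = 0.90 * memory_limit     # start freeing managed memory above this
pause_bytes = 0.80 * memory_limit      # pause worker threads above this
terminate_bytes = 0.95 * memory_limit  # kill and restart the worker above this

for name, value in [("target", target_bytes),
                    ("pause", pause_bytes),
                    ("terminate", terminate_bytes)]:
    print(f"{name}: {value / 2**30:.2f} GiB")
```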
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

# Adapted from https://www.kaggle.com/ogrellier/python-target-encoding-for-categorical-features
class TargetEncoder(BaseEstimator, TransformerMixin):
    def __init__(self, columns, noise_level=0):
        self.columns = columns
        self.noise_level = noise_level
        self.maps = {}
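The snippet above stops after the constructor. A minimal sketch of how fit/transform for a mean target encoder might continue is shown below; this standalone class and its toy data are illustrative assumptions, not the original Kaggle implementation:

```python
import numpy as np
import pandas as pd

class SimpleTargetEncoder:
    """Minimal mean target encoder: maps each category to the mean of y."""
    def __init__(self, columns, noise_level=0):
        self.columns = columns
        self.noise_level = noise_level
        self.maps = {}

    def fit(self, X, y):
        # Store the per-category target means for each configured column.
        for col in self.columns:
            self.maps[col] = y.groupby(X[col]).mean()
        return self

    def transform(self, X):
        X = X.copy()
        for col in self.columns:
            X[col] = X[col].map(self.maps[col])
            if self.noise_level > 0:
                # Optional multiplicative noise to reduce overfitting.
                X[col] *= 1 + self.noise_level * np.random.randn(len(X))
        return X

df = pd.DataFrame({"city": ["a", "a", "b", "b"]})
y = pd.Series([1.0, 0.0, 1.0, 1.0])
enc = SimpleTargetEncoder(columns=["city"]).fit(df, y)
print(enc.transform(df)["city"].tolist())  # [0.5, 0.5, 1.0, 1.0]
```

In practice the encoding would be fit on out-of-fold data to avoid target leakage, which is what the noise and smoothing tricks in the linked Kaggle kernel address.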
| """ | |
| Dynamic Routing Between Capsules | |
| https://arxiv.org/abs/1710.09829 | |
| """ | |
| import torch | |
| import torch.nn as nn | |
| import torch.optim as optim | |
| import torch.nn.functional as F | |
| import torchvision.transforms as transforms |
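Capsule networks replace per-unit ReLUs with the paper's "squash" non-linearity, v = (||s||^2 / (1 + ||s||^2)) * s / ||s||, which shrinks short vectors toward zero and long vectors toward unit length while preserving direction. A framework-agnostic NumPy sketch (the torch model itself is not shown in this fragment):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squared norm along the capsule dimension.
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    # Scale factor: (||s||^2 / (1 + ||s||^2)) / ||s||.
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

v = squash(np.array([3.0, 4.0]))   # ||s|| = 5, so ||v|| = 25/26 ≈ 0.96
print(np.linalg.norm(v))
```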
# -*- coding: utf-8 -*-
"""
Created on Mon Sep 23 23:16:44 2017
@author: Marios Michailidis
This is an example that performs stacking to improve mean squared error.
This example uses 2 base learners (a linear regression and a random forest)
and linear regression (again) as a meta learner to achieve the best score.
The initial train data are split into 2 halves to commence the stacking.
"""
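The stacking recipe described in the docstring can be sketched as follows. The synthetic data and hyperparameters are assumptions for illustration, and in a full implementation the meta learner would be scored on a further holdout set rather than in-sample:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Synthetic regression data (assumed for illustration).
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.randn(200)

# Split the training data into two halves, as the docstring describes.
X1, X2 = X[:100], X[100:]
y1, y2 = y[:100], y[100:]

# Fit the two base learners on the first half ...
base = [LinearRegression(),
        RandomForestRegressor(n_estimators=50, random_state=0)]
for m in base:
    m.fit(X1, y1)

# ... and build meta-features from their predictions on the second half.
meta_X = np.column_stack([m.predict(X2) for m in base])

# Linear regression again as the meta learner.
meta = LinearRegression().fit(meta_X, y2)
preds = meta.predict(meta_X)
print("meta MSE:", mean_squared_error(y2, preds))
```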