Harley Son (EJSohn)

@EJSohn
EJSohn / README.md
Created August 7, 2019 07:43 — forked from leonardofed/README.md
A curated list of AWS resources to prepare for the AWS Certifications


A curated list of awesome AWS resources you need to prepare for all five AWS Certifications. This gist will include: open source repos, blogs and blog posts, ebooks, PDFs, whitepapers, video courses, free lectures, slides, sample tests and many other resources.


import csv
import comware  # HPE Comware platform module, available on the device itself


class HPEConf(object):
    def __init__(self, input_filenames, output_filename):
        self.input_filenames = input_filenames
        self.output_filename = output_filename

    def _writerows(self, data):
        # Append the parsed rows to the output CSV file.
        with open(self.output_filename, 'a') as f:
            csv.writer(f).writerows(data)
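A quick way to see the CSV-append pattern that `_writerows` relies on, without HPE hardware or the `comware` module — the file name and rows below are hypothetical stand-ins:

```python
import csv
import os
import tempfile

# Hypothetical output path; the real class writes to self.output_filename.
path = os.path.join(tempfile.gettempdir(), "hpe_conf_demo.csv")
if os.path.exists(path):
    os.remove(path)

rows = [
    ["hostname", "interface", "vlan"],
    ["switch-01", "GigabitEthernet1/0/1", "100"],
]

# Same pattern as _writerows: open the output file in append mode and
# hand the whole list of rows to csv.writer at once.
with open(path, "a", newline="") as f:
    csv.writer(f).writerows(rows)
```

Opening in append mode lets successive calls accumulate rows from each input file into one output CSV.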
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
from helper import batches  # Helper function created in Mini-batching section


def print_epoch_stats(epoch_i, sess, last_features, last_labels):
    """
    Print cost and validation accuracy of an epoch
    """
    # cost, accuracy, features, labels and the valid_* arrays come from
    # the surrounding script.
    current_cost = sess.run(cost, feed_dict={features: last_features, labels: last_labels})
    valid_accuracy = sess.run(accuracy, feed_dict={features: valid_features, labels: valid_labels})
    print('Epoch: {:<4} - Cost: {:<8.3} Valid Accuracy: {:<5.3}'.format(
        epoch_i, current_cost, valid_accuracy))
/*
 * Superuser processes are usually more important, so we make it
 * less likely that we kill those.
 */
if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_ADMIN) ||
    p->uid == 0 || p->euid == 0)
        points /= 4;
/*
 * We don't want to kill a process with direct hardware access.
 */
if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_RAWIO))
        points /= 4;

/* Long-running tasks are considered more valuable: divide the score
 * by the square roots of the task's CPU time and run time. */
s = int_sqrt(cpu_time);
if (s)
        points /= s;
s = int_sqrt(int_sqrt(run_time));
if (s)
        points /= s;
/*
 * Niced processes are most likely less important, so double
 * their badness points.
 */
if (task_nice(p) > 0)
        points *= 2;
/*
 * Processes which fork a lot of child processes are likely
 * a good choice. We add the vmsize of the childs if they
 * have an own mm. This prevents forking servers to flood the
 * machine with an endless amount of childs
 */
...
if (chld->mm != p->mm && chld->mm)
        points += chld->mm->total_vm;

/*
 * The memory size of the process is the basis for the badness.
 */
points = p->mm->total_vm;
/*
 * oom_badness - calculate a numeric value for how bad this task has been
 * @p: task struct of which task we should calculate
 * @uptime: current uptime in seconds
 *
 * The formula used is relatively simple and documented inline in the
 * function. The main rationale is that we want to select a good task
 * to kill when we run out of memory.
 *
 * Good in this context means that:
 * 1) we lose the minimum amount of work done
 * 2) we recover a large amount of memory
 * 3) we don't kill anything innocent of eating tons of memory
 * 4) we want to kill the minimum amount of processes (one)
 * 5) we try to kill the process the user expects us to kill, this
 *    algorithm has been meticulously tuned to meet the principle
 *    of least surprise ... (be careful when you change it)
 */
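Putting the code fragments above together, the scoring heuristic can be modeled in a few lines of Python. This is a simplified sketch — the field names, argument shapes, and the ordering of the adjustments are stand-ins for the kernel's `task_struct` handling, not the kernel code itself:

```python
from math import isqrt

def badness(total_vm, child_vms, cpu_time, run_time, is_root, is_niced):
    """Toy model of the 2.6-era badness() score: higher means killed first."""
    # The memory size of the process is the basis for the badness.
    points = total_vm
    # Forking servers accumulate their children's vm sizes.
    points += sum(child_vms)
    # Long-running tasks are considered more valuable.
    s = isqrt(cpu_time)
    if s:
        points //= s
    s = isqrt(isqrt(run_time))
    if s:
        points //= s
    # Niced processes are likely less important: double their points.
    if is_niced:
        points *= 2
    # Superuser processes are protected: quarter their points.
    if is_root:
        points //= 4
    return points

# A root-owned daemon scores lower than an equal-sized user process,
# so the user process is killed first.
root_score = badness(100000, [], 400, 10000, True, False)
user_score = badness(100000, [], 400, 10000, False, False)
```

Note how multiplicative adjustments keep memory footprint dominant: a process must be roughly four times larger than a root-owned one to score the same.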
$ dmesg | grep "Out of memory"
[ 3635.537538] Out of memory: Kill process 21580 (jupyter-noteboo) score 37 or sacrifice child
[ 3636.822607] Out of memory: Kill process 22714 (jupyter-noteboo) score 37 or sacrifice child
[ 3643.006328] Out of memory: Kill process 24976 (jupyter-noteboo) score 37 or sacrifice child
[ 3654.916468] Out of memory: Kill process 26118 (jupyter-noteboo) score 37 or sacrifice child
[ 3658.712286] Out of memory: Kill process 28364 (jupyter-noteboo) score 35 or sacrifice child
[ 3666.654763] Out of memory: Kill process 30603 (jupyter-noteboo) score 36 or sacrifice child
[ 3685.390829] Out of memory: Kill process 30620 (jupyter-noteboo) score 36 or sacrifice child
$ free -m
              total        used        free      shared  buff/cache   available
Mem:            990         851          20          24         119          54
Swap:             0           0           0