Siraj Raval llSourcell

https://docs.google.com/presentation/d/1dinjf2mZ1SkRnzJo6fL-ZfGyGDBI4JWeoGsnBj7hNNg/edit?usp=sharing
@llSourcell
llSourcell / manual_control.py
Created December 18, 2020 15:41
replace 'manual_control.py' in the following repo with this version: https://github.com/maximecb/gym-miniworld
#!/usr/bin/env python3
"""
This script allows you to manually control the simulator
using the keyboard arrows.
"""
import sys
import argparse
import pyglet
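
Only the header of this gist survives in the preview above. As a rough stand-in (not the gist's actual code), the sketch below wires pyglet arrow-key events to a gym-miniworld environment; the environment id and the key-to-action numbering are assumptions, and it targets the older 4-tuple gym step API.

# Rough sketch only (not the gist's code). The env id and the action numbering
# (0 = turn left, 1 = turn right, 2 = move forward) are assumptions.
import gym
import gym_miniworld  # registers the MiniWorld-* environments
import pyglet
from pyglet.window import key

env = gym.make('MiniWorld-Hallway-v0')
env.reset()
env.render()

# Small helper window that receives the key events
window = pyglet.window.Window(width=300, height=60, caption='controls')
KEY_TO_ACTION = {key.LEFT: 0, key.RIGHT: 1, key.UP: 2}

@window.event
def on_key_press(symbol, modifiers):
    if symbol == key.ESCAPE:
        pyglet.app.exit()
        return
    action = KEY_TO_ACTION.get(symbol)
    if action is not None:
        obs, reward, done, info = env.step(action)
        env.render()
        if done:
            env.reset()

pyglet.app.run()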
fn main() {
    // Step 0 - Collect data
    let x: [f64; 8] = [44.0, 46.0, 59.0, 67.0, 74.0, 85.0, 97.0, 43.0];
    let y: [f64; 8] = [1200.0, 2400.0, 3200.0, 3000.0, 5000.0, 4000.0, 6000.0, 6050.0];
    println!("Total profit per month: {:?}", y);

    // Step 1 - Compute the average of x
    let x_mean: f64 = x.iter().sum::<f64>() / x.len() as f64;
    println!("Average of x: {}", x_mean);
}
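
The gist cuts off after Step 1, but the step comments read like the start of a simple least-squares line fit (slope = cov(x, y) / var(x), intercept = y_mean - slope * x_mean). A minimal NumPy sketch of where those steps appear to lead, on the same data (my reading, not the gist's actual continuation):

import numpy as np

# Same data as the Rust snippet above
x = np.array([44.0, 46.0, 59.0, 67.0, 74.0, 85.0, 97.0, 43.0])
y = np.array([1200.0, 2400.0, 3200.0, 3000.0, 5000.0, 4000.0, 6000.0, 6050.0])

x_mean, y_mean = x.mean(), y.mean()                                      # Step 1
slope = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)  # Step 2
intercept = y_mean - slope * x_mean                                      # Step 3
print(f"fit: y = {intercept:.2f} + {slope:.2f} * x")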

Game 1: Avoid the Hooks

Goal

  • Dedicate 60 hours/week to learning a subject

Feedback System

  • Pomodoro Score

Rules

  • Find or Build the idea gateway
@llSourcell
llSourcell / cuda.h
Last active February 22, 2020 22:31
//
// onehiddenlayerperceptron.cu
// onehiddenlayerperceptron
//
// Created by Sergei Bugrov on 8/21/17.
// Copyright © 2017 Sergei Bugrov. All rights reserved.
//
#include <stdio.h>
#by MBT on StackOverflow
import numpy as np
# D_in is input dimension;
# H is hidden dimension;
# D_out is output dimension.
Batch_Size, D_in, H, D_out = 12, 1000, 100, 10
# Create random input and output data
x = np.random.randn(Batch_Size, D_in)
#by MBT on StackOverflow
import numpy as np
# D_in is input dimension;
# H is hidden dimension;
# D_out is output dimension.
Batch_Size, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = np.random.randn(Batch_Size, D_in)
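
Both NumPy fragments stop right after creating the inputs. For context, here is a self-contained sketch of the one-hidden-layer network this setup is usually the start of; the target data, weight initialization, learning rate, and loop count below are filler assumptions, not the gist's code:

import numpy as np

Batch_Size, D_in, H, D_out = 64, 1000, 100, 10
x = np.random.randn(Batch_Size, D_in)
y = np.random.randn(Batch_Size, D_out)   # random targets (assumed)

# Randomly initialize weights (assumed)
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: x -> hidden (ReLU) -> prediction
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Squared-error loss
    loss = np.square(y_pred - y).sum()

    # Backward pass: gradients of the loss w.r.t. w2 and w1
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h = grad_y_pred.dot(w2.T) * (h > 0)
    grad_w1 = x.T.dot(grad_h)

    # Plain gradient-descent update
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

print("final loss:", loss)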
#pytorch team
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()
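
The fragment above ends right after zeroing the gradients. Here is a self-contained sketch of the full loop shape it belongs to; the tiny model, dummy dataset, and hyperparameters are placeholders, not the gist's:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Dummy data: 256 samples, 20 features, 3 classes (placeholder)
features = torch.randn(256, 20)
targets = torch.randint(0, 3, (256,))
trainloader = DataLoader(TensorDataset(features, targets), batch_size=32, shuffle=True)

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data            # get the inputs
        optimizer.zero_grad()            # zero the parameter gradients

        outputs = net(inputs)            # forward
        loss = criterion(outputs, labels)
        loss.backward()                  # backward
        optimizer.step()                 # update

        running_loss += loss.item()
    print(f"epoch {epoch + 1}: avg loss {running_loss / len(trainloader):.3f}")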
#by deepmind
import tensorflow as tf
import sonnet as snt

#setup a 'module'
mlp = snt.Sequential([
    snt.Linear(1024),
    tf.nn.relu,
    snt.Linear(10),
])
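
A Sonnet 2 module built this way is just a callable whose variables are created on first use; a quick usage sketch (the batch and feature sizes are arbitrary placeholders):

# Call the module on a dummy batch; parameters are created lazily on this first call.
images = tf.random.normal([8, 784])
logits = mlp(images)   # shape [8, 10]
print(logits.shape)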