I hereby claim:
- I am tomlisankie on github.
- I am tomlisankie (https://keybase.io/tomlisankie) on keybase.
- I have a public key ASDtfZsxg2L_K02VywkZHxguyaA6ktChswzMW5rFWuwOrwo
To claim this, I am signing this object:
# Was watching this (https://www.youtube.com/watch?v=ymsbTVLbyN4&t=3749s) and Abelson was talking about how you could just make a cons cell with a lambda. Remembered Python had lambdas (although they're one-line lambdas smh) and figured "why not"

def cons(a, b):
    return lambda x: a if (x == 1) else b

def car(cell):
    return cell(1)

def cdr(cell):
    return cell(2)
lisp = input("Enter your Lisp code: ")

# Splits a Lisp expression into tokens and strips them of whitespace.
def makeTokens(lisp):
    lispTokens = lisp.replace("(", " ( ").replace(")", " ) ").split()
    return [token.strip() for token in lispTokens]

# If the token is a number, return it as the appropriate type (int or float).
# Otherwise, return it as a string.
def createAtom(token):
    try:
        return int(token)
    except ValueError:
        try:
            return float(token)
        except ValueError:
            return token
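The natural next step for this tokenizer is a reader that turns the flat token list into nested Python lists (in the style of Peter Norvig's "lispy"). The helper name `readFromTokens` is mine, not part of the original gist; the tokenizer and atom logic are repeated so the sketch runs standalone:

```python
def makeTokens(lisp):
    return lisp.replace("(", " ( ").replace(")", " ) ").split()

def createAtom(token):
    try:
        return int(token)
    except ValueError:
        try:
            return float(token)
        except ValueError:
            return token

# Consume tokens recursively: "(" opens a new sublist, ")" closes it,
# and everything else becomes an atom.
def readFromTokens(tokens):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(readFromTokens(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    elif token == ")":
        raise SyntaxError("unexpected )")
    else:
        return createAtom(token)

print(readFromTokens(makeTokens("(+ 1 (* 2 3.5))")))  # → ['+', 1, ['*', 2, 3.5]]
```

Once expressions parse into nested lists, evaluation is a straightforward recursive walk over that structure.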
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np

# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
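From here, the script builds lookups between characters and integer indices so each character can be fed to the RNN as a one-hot vector. A standalone sketch of that step, using a small sample string in place of `input.txt`:

```python
import numpy as np

# Sample text standing in for the contents of input.txt.
data = "hello world"
chars = sorted(set(data))  # sorting makes the indexing deterministic
vocab_size = len(chars)

# Map each character to an index and back again.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}

# A character becomes a one-hot column vector of length vocab_size,
# which is the input format the RNN's matrix multiplies expect.
x = np.zeros((vocab_size, 1))
x[char_to_ix["h"]] = 1
print(int(x.sum()))  # → 1
```

The round trip `ix_to_char[char_to_ix[ch]] == ch` holds for every character in the vocabulary, which is what lets the sampling loop turn predicted indices back into text.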
# fit_model code:
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import GridSearchCV

def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
        decision tree regressor trained on the input data [X, y]. """
    regressor = DecisionTreeRegressor()
    params = {'max_depth': range(1, 11)}
    # One straightforward scoring choice; swap in another metric if the
    # project specifies one.
    scoring_fn = make_scorer(mean_squared_error, greater_is_better=False)
    grid = GridSearchCV(regressor, params, scoring=scoring_fn)
    grid.fit(X, y)
    return grid.best_estimator_
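A minimal standalone sketch of the grid search the docstring describes, on toy data; the depth range and the R² scoring choice are assumptions for illustration, not necessarily the project's exact setup:

```python
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer, r2_score
from sklearn.model_selection import GridSearchCV

# Toy regression data standing in for the real [X, y].
X, y = make_regression(n_samples=200, n_features=4, noise=0.1, random_state=0)

# Search tree depths 1..10, scoring each candidate with R^2.
params = {'max_depth': range(1, 11)}
scorer = make_scorer(r2_score)
grid = GridSearchCV(DecisionTreeRegressor(random_state=0), params, scoring=scorer)
grid.fit(X, y)

# GridSearchCV refits the best candidate on the full data.
best = grid.best_estimator_
print(best.get_params()['max_depth'])
```

The returned `best_estimator_` is an already-fitted `DecisionTreeRegressor`, so it can be used for prediction directly.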
//
// Phoneme.swift
//

import Foundation

class Phoneme {
    private var isAVowelPhoneme = false
    private var phoneme = ""
}