# IDA (disassembler) and Hex-Rays (decompiler) plugin for Apple AMX
#
# WIP research. (This was edited to add more info after someone posted it to
# Hacker News. Click "Revisions" to see full changes.)
#
# Copyright (c) 2020 dougallj
# Based on Python port of VMX intrinsics plugin:
# Copyright (c) 2019 w4kfu - Synacktiv
import Foundation
import Python
import TensorFlow

public struct MyModel: Layer {
    public var conv1d: Conv1D<Float>
    public var dense1: Dense<Float>
    public var dropout: Dropout<Float>
    public var denseOut: Dense<Float>

    @differentiable
    public func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
        // Pass the input through each stored layer in order.
        return input.sequenced(through: conv1d, dense1, dropout, denseOut)
    }
}
""" | |
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
BSD License | |
""" | |
import numpy as np | |
# data I/O | |
data = open('input.txt', 'r').read() # should be simple plain text file | |
chars = list(set(data)) | |
data_size, vocab_size = len(data), len(chars) |
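The lines above only build the raw vocabulary; min-char-rnn's next step is the character-to-index lookup tables used to one-hot encode inputs. A minimal sketch of that step, using an inline string in place of `input.txt`:

```python
import numpy as np

# Sample corpus standing in for input.txt.
data = "hello world"
chars = sorted(set(data))  # sorted for a deterministic vocabulary order
vocab_size = len(chars)

# Bidirectional lookup tables, as in min-char-rnn.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}

# One-hot encode a single character as an RNN input column vector.
x = np.zeros((vocab_size, 1))
x[char_to_ix['h']] = 1
```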
import tensorflow as tf

# 1st layer parameters of our network
X = tf.Variable([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs; each row is a single input example.
W = tf.Variable([[1, 1], [1, 1]])  # weights between the input layer and the hidden layer.
c = tf.Variable([[0], [-1]])       # biases for the hidden units.

# 2nd layer parameters of our network
w = tf.Variable([[1], [-2]])           # weights between the hidden layer and the output layer.
b = tf.Variable([[0], [0], [0], [0]])  # biases for the output units.

# 1st feedforward propagation
XW = tf.matmul(X, tf.transpose(W))
XW_c = tf.add(XW, tf.transpose(c))
a_XW_c = tf.nn.relu(XW_c)

# 2nd feedforward propagation
a_XW_c_w = tf.matmul(a_XW_c, w)
a_XW_c_w_b = tf.add(a_XW_c_w, b)

init = tf.global_variables_initializer()

# Launch the TensorFlow graph session
with tf.Session() as sess:
    # initialize all the variables
    sess.run(init)
    print(sess.run(a_XW_c_w_b))  # [[0], [1], [1], [0]] — XOR of each input row
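The two-layer forward pass above can be checked without a TensorFlow session. A sketch in plain NumPy, using the same parameter values as the snippets (these are the well-known hand-picked weights that solve XOR exactly):

```python
import numpy as np

# Same parameters as the tf.Variable definitions above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # one input example per row
W = np.array([[1, 1], [1, 1]])                  # input -> hidden weights
c = np.array([[0], [-1]])                       # hidden biases
w = np.array([[1], [-2]])                       # hidden -> output weights
b = np.zeros((4, 1), dtype=int)                 # output biases

# Layer 1: affine transform followed by ReLU.
h = np.maximum(X @ W.T + c.T, 0)
# Layer 2: affine transform to the single output unit.
y = h @ w + b

print(y.ravel())  # -> [0 1 1 0], i.e. XOR of the two inputs in each row
```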