@joshuakfarrar
Last active March 15, 2022 09:31
# nnfs.io
import numpy as np
# a single sample of 4 inputs, fed to a layer of 3 neurons
inputs = [1, 2, 3, 2.5]
weights = [[0.2, 0.8, -0.5, 1], [0.5, -0.91, 0.26, -0.5], [-0.26, -0.27, 0.17, 0.87]] # one row of 4 weights per neuron
biases = [2, 3, 0.5] # one bias per neuron
np.matmul(weights, inputs) # weighted sums only; biases get added further down
# array([ 2.8 , -1.79 , 1.885])
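The matmul above gives only the weighted sums; the layer's actual output also adds each neuron's bias. A quick sketch of the same computation with the biases included:

```python
import numpy as np

inputs = [1, 2, 3, 2.5]
weights = [[0.2, 0.8, -0.5, 1], [0.5, -0.91, 0.26, -0.5], [-0.26, -0.27, 0.17, 0.87]]
biases = [2, 3, 0.5]

# weighted sums plus the per-neuron biases = the layer's output
outputs = np.matmul(weights, inputs) + biases
# outputs == [4.8, 1.21, 2.385]
```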
# what about batching input data?
# enter numpy.ndarray.transpose (also available as the .T attribute):
inputs = [[1, 2, 3, 2.5], [2, 5, -1, 2], [-1.5, 2.7, 3.3, -0.8]]
# >>> np.array(inputs)
# array([[ 1. , 2. , 3. , 2.5],
# [ 2. , 5. , -1. , 2. ],
# [-1.5, 2.7, 3.3, -0.8]])
# >>> np.array(inputs).transpose()
# array([[ 1. , 2. , -1.5],
# [ 2. , 5. , 2.7],
# [ 3. , -1. , 3.3],
# [ 2.5, 2. , -0.8]])
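To see why the transpose is needed, check the shapes: matmul requires the inner dimensions to agree, and weights are (3 neurons, 4 weights) while the batch is (3 samples, 4 features). A small shape check:

```python
import numpy as np

inputs = [[1, 2, 3, 2.5], [2, 5, -1, 2], [-1.5, 2.7, 3.3, -0.8]]
weights = [[0.2, 0.8, -0.5, 1], [0.5, -0.91, 0.26, -0.5], [-0.26, -0.27, 0.17, 0.87]]

print(np.array(weights).shape)             # (3, 4): 3 neurons, 4 weights each
print(np.array(inputs).shape)              # (3, 4): 3 samples, 4 features each
print(np.array(inputs).transpose().shape)  # (4, 3): features now line up with the weights
```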
# after transposing, each column is one sample, so the whole batch
# can be pushed through the layer in a single matrix product
np.matmul(weights, np.array(inputs).transpose()) # (3, 4) @ (4, 3) -> (3 neurons, 3 samples)
# array([[ 2.8 , 6.9 , -0.59 ],
# [-1.79 , -4.81 , -1.949],
# [ 1.885, -0.3 , -0.474]])
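In this neurons-by-samples orientation the biases still have to be added, one per row; broadcasting them as a column vector gives the same numbers as the sample-major form below, just transposed. A sketch:

```python
import numpy as np

inputs = [[1, 2, 3, 2.5], [2, 5, -1, 2], [-1.5, 2.7, 3.3, -0.8]]
weights = [[0.2, 0.8, -0.5, 1], [0.5, -0.91, 0.26, -0.5], [-0.26, -0.27, 0.17, 0.87]]
biases = [2, 3, 0.5]

# (3 neurons, 3 samples): each row is one neuron across the whole batch
neuron_major = np.matmul(weights, np.array(inputs).transpose())

# biases are per neuron, so broadcast them as a column vector, one per row
neuron_major += np.array(biases)[:, np.newaxis]

# transposing back gives one row per sample: identical to dot(inputs, weights.T) + biases
sample_major = np.dot(inputs, np.array(weights).T) + biases
print(np.allclose(neuron_major.T, sample_major))  # True
```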
# πŸ‘πŸ‘πŸ‘ encore πŸ‘πŸ‘πŸ‘
layer_outputs = np.dot(inputs, np.array(weights).T) + biases # one row per sample, one column per neuron
# >>> layer_outputs
# array([[ 4.8 , 1.21 , 2.385],
# [ 8.9 , -1.81 , 0.2 ],
# [ 1.41 , 1.051, 0.026]])
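Wrapped up as a layer object, in the spirit of nnfs.io — the class name, init scaling, and the (n_inputs, n_neurons) weight orientation here are illustrative choices, not necessarily the book's exact code:

```python
import numpy as np

class DenseLayer:
    """Minimal dense-layer sketch (illustrative; not the book's exact class)."""
    def __init__(self, n_inputs, n_neurons):
        # storing weights as (n_inputs, n_neurons) lets forward() skip the transpose
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        # one row per sample, one column per neuron
        self.output = np.dot(inputs, self.weights) + self.biases

layer = DenseLayer(4, 3)
layer.forward(np.array([[1, 2, 3, 2.5], [2, 5, -1, 2], [-1.5, 2.7, 3.3, -0.8]]))
print(layer.output.shape)  # (3, 3)
```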
# questions? comment below!
# pif.gov πŸ‡ΊπŸ‡Έ
@joshuakfarrar (Author) commented:
Want to become a better programmer? Learn Functional Programming in Scala and enough category theory to build RESTful web services from scratch with http4s! πŸ˜‚

https://www.manning.com/books/functional-programming-in-scala
