Bootstrap knowledge of LLMs as fast as possible, with a bias/focus toward GPT.
Avoid being a link dump; provide only valuable, well-tuned information.
Cover neural-network links before starting with transformers.
# To download all your programming assignments, including all files and notebooks, follow these steps:
# 1 - Go to the root tree folder, for instance: https://hub.coursera-notebooks.org/user/${user_token}/tree/
# 2 - Open the terminal by clicking the + button in the right-hand corner
# 3 - Enter the following command in the terminal:
tar cvfz allassignments.tar.gz *
# 4 - The previous command will create a gzipped tarball named allassignments.tar.gz containing all your programming assignments
# 5 - Select allassignments.tar.gz and download it
# 6 - Enjoy, and don't forget to delete it afterward ;-)
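The same archive can also be produced programmatically with Python's standard-library `tarfile` module; this is an equivalent sketch of the `tar cvfz` command above:

```python
import os
import tarfile

# Create a gzip-compressed archive of everything in the current directory,
# mirroring: tar cvfz allassignments.tar.gz *
with tarfile.open('allassignments.tar.gz', 'w:gz') as tar:
    for name in os.listdir('.'):
        if name != 'allassignments.tar.gz':  # don't archive the archive itself
            tar.add(name)
```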
mkdir opencv
cd opencv
sudo apt-get update -y && sudo apt-get upgrade -y
sudo apt-get install build-essential cmake pkg-config -y
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev -y
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev -y
sudo apt-get install libxvidcore-dev libx264-dev -y
sudo apt-get install libgtk2.0-dev libgtk-3-dev -y
sudo apt-get install libatlas-base-dev gfortran -y
import argparse
import psutil
import tensorflow as tf
from typing import Dict, Any, Callable, Tuple

## Data Input Function
def data_input_fn(data_param,
                  batch_size: int = None,
                  shuffle=False) -> Callable[[], Tuple]:
    """Return the input function to get the test data.
    (The original snippet was truncated here; the body below is a sketch
    using the tf.data API.)"""
    def _input_fn():
        dataset = tf.data.Dataset.from_tensor_slices(data_param)
        if shuffle:
            dataset = dataset.shuffle(buffer_size=1000)
        if batch_size is not None:
            dataset = dataset.batch(batch_size)
        return dataset
    return _input_fn
# Mostly copied from https://keras.io/applications/#usage-examples-for-image-classification-models
# Changed to use InceptionV3 instead of ResNet50; note that InceptionV3
# expects 299x299 inputs rather than ResNet50's 224x224.
from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np

model = InceptionV3()
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(299, 299))
x = np.expand_dims(image.img_to_array(img), axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
#renderdoc.LoadLogfile('D:\gta5_frames\GTAVLauncher_2017.02.20_11.53.06_frame9280.rdc')
config = {}
# where we find the Python libraries
#config['py_lib_dir'] = 'C:\\Program Files\\Anaconda3\\Lib\\'
config['py_lib_dir'] = 'C:\\Program Files\\Anaconda2\\Lib\\'
#import os
#from os import listdir
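A minimal sketch of how such a config might be consumed, assuming the intent is to make the configured library directory importable; the key name comes from the snippet above, but the usage is an assumption:

```python
import os
import sys

config = {}
# where we find the Python libraries (path taken from the snippet above)
config['py_lib_dir'] = 'C:\\Program Files\\Anaconda2\\Lib\\'

# Hypothetical usage: add the directory to the import path if it exists
if os.path.isdir(config['py_lib_dir']):
    sys.path.append(config['py_lib_dir'])
```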
If you want a run-down of the 1.3 changes and the design decisions behind those changes, check out the LonestarElixir Phoenix 1.3 keynote: https://www.youtube.com/watch?v=tMO28ar0lW8

To use the new phx.new project generator, you can install the archive with the following command:

$ mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez

Phoenix v1.3.0 is a backwards-compatible release with v1.2.x. To upgrade your existing 1.2.x project, simply bump your phoenix dependency in mix.exs:
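The dependency bump would look roughly like this in mix.exs (the exact version string is an example):

```elixir
defp deps do
  [{:phoenix, "~> 1.3.0"},
   # ...your other dependencies...
  ]
end
```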
defaults write xcodebuild PBXNumberOfParallelBuildSubtasks 4
defaults write xcodebuild IDEBuildOperationMaxNumberOfConcurrentCompileTasks 4
defaults write com.apple.xcode PBXNumberOfParallelBuildSubtasks 4
defaults write com.apple.xcode IDEBuildOperationMaxNumberOfConcurrentCompileTasks 4
Max Goldstein | July 30, 2015 | Elm 0.15.1
In Elm, signals always have a data source associated with them. Window.dimensions is exactly what you think it is, and you can't send your own events on it. You can derive your own signals from these primitives using map, filter, and merge, but the timing of events is beyond your control.
This becomes a problem when you try to add UI elements. We want to be able to add checkboxes and dropdown menus, and to receive the current state of these elements as a signal. So how do we do that?
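One answer in this version of Elm is the mailbox, which pairs a signal with an address you can send events to. A rough sketch of the API, shown here only as an illustration:

```elm
import Signal

-- A mailbox bundles a signal with an address that UI elements
-- can send their events to (sketch of the Elm 0.15 API)
check : Signal.Mailbox Bool
check =
  Signal.mailbox False

-- check.signal  : Signal Bool          -- the checkbox state over time
-- check.address : Signal.Address Bool  -- where the checkbox sends clicks
```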