
@twobob
twobob / wrap.py
Created September 4, 2023 22:42
make colab wrap its output
from IPython.display import HTML, display

def set_css():
    display(HTML('''
    <style>
      pre {
          white-space: pre-wrap;
      }
    </style>
    '''))
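The style can also be exercised as a plain string outside a notebook; a minimal sketch (the `build_css` helper and its `extra_rules` parameter are illustrative, not part of the gist). In Colab the `set_css()` function above is re-applied on every cell with IPython's events API, `get_ipython().events.register('pre_run_cell', set_css)`:

```python
# Plain-string version of the wrap CSS, testable outside a notebook.
WRAP_CSS = """
<style>
  pre {
      white-space: pre-wrap;  /* wrap long output lines instead of scrolling */
  }
</style>
"""

def build_css(extra_rules: str = "") -> str:
    """Return the style block, optionally with extra CSS rules appended."""
    if not extra_rules:
        return WRAP_CSS
    return WRAP_CSS.replace("</style>", extra_rules + "\n</style>")
```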
twobob / lscpu.txt
Created August 11, 2023 23:21
colab lscpu
lscpu
Architecture:            x86_64
CPU op-mode(s):          32-bit, 64-bit
Address sizes:           46 bits physical, 48 bits virtual
Byte Order:              Little Endian
CPU(s):                  2
On-line CPU(s) list:     0,1
Vendor ID:               GenuineIntel
Model name:              Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family:              6
twobob / metrics.doc
Created August 11, 2023 13:55
Pre-August-10th build. Four different builds of the same code on a terrible machine.
..\lscpu.ps1 (file is posted in this gist; a rough Windows approximation of lscpu)
Processor Information
---------------------
Architecture:                 9
CPU op-mode(s):               1
Byte Order:                   Little Endian
Address sizes:                64 bits physical, 64 bits virtual
CPU(s):                       4
On-line CPU(s) list:          CPU0
Thread(s) per core:           1
twobob / sample_template.ps1
Created August 11, 2023 13:23
This script is ultimately called by the wrappers to invoke the various sampling routes
param(
    [int]$loops = 3,
    [string[]]$models = 'stories15M.bin',
    [string[]]$compilers = 'run.exe'
)

$randomModels = $models | Get-Random -Count $models.Length
$randomCompilers = $compilers | Get-Random -Count $compilers.Length
$env:OMP_NUM_THREADS = [System.Environment]::ProcessorCount
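The same shuffle-and-run plan can be sketched in Python — `plan_runs` is a hypothetical helper, and the loop count and file names are the defaults from the `param` block above:

```python
import os
import random

def plan_runs(models, compilers, loops=3):
    """Return (model, compiler) pairs in shuffled order, repeated `loops` times,
    mirroring the PowerShell Get-Random -Count shuffle."""
    runs = []
    for _ in range(loops):
        for m in random.sample(models, len(models)):        # shuffled copy of models
            for c in random.sample(compilers, len(compilers)):  # shuffled compilers
                runs.append((m, c))
    return runs

# Use every logical core, like $env:OMP_NUM_THREADS = ProcessorCount
os.environ["OMP_NUM_THREADS"] = str(os.cpu_count())

runs = plan_runs(['stories15M.bin'], ['run.exe'], loops=3)
```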
twobob / sample.ps1
Created August 11, 2023 13:22
wraps the sample template
#Usage:
# powershell '.\sample.ps1' 1
#
# powershell '.\sample_15_110.ps1' -loops 3 -compilers 'runmingw', 'runmsvc'
#
# powershell '.\sample_15_110.ps1' 3 'runmingw', 'runmsvc'
#
# Any combination of models is okay in the naming, after
# you run create_sampling_hardlinks
#
twobob / create sampling hardlinks
Created August 11, 2023 12:50
Creates a set of hardlinks named sample_nn.ps1, sample_nn_nn.ps1, and sample_nn_nn_nn.ps1, where nn runs over every variant of the model numbers
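The naming scheme amounts to every ordered variant of every subset of the model numbers; a Python sketch of it (the model numbers 15, 42, 110 are illustrative, not the actual set the gist uses):

```python
from itertools import permutations

def hardlink_names(model_numbers):
    """Generate sample_nn.ps1, sample_nn_nn.ps1, sample_nn_nn_nn.ps1, ...
    for every ordered variant of the model numbers."""
    names = []
    for r in range(1, len(model_numbers) + 1):
        for combo in permutations(model_numbers, r):
            names.append("sample_" + "_".join(str(n) for n in combo) + ".ps1")
    return names

names = hardlink_names([15, 42, 110])
# 3 singles + 6 ordered pairs + 6 ordered triples = 15 names
```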
twobob / lscpu.ps1
Created August 11, 2023 12:46
poor man's lscpu via PowerShell on Windows
$processor = Get-WmiObject Win32_Processor
$computerSystem = Get-WmiObject Win32_ComputerSystem

Write-Output "Processor Information"
Write-Output "---------------------"
# Architecture is a WMI enum code (9 = x64), not a name like x86_64
Write-Output ("Architecture:".PadRight(30) + "$($processor.Architecture)")
# CpuStatus is a processor status code, only a rough stand-in for op-modes
Write-Output ("CPU op-mode(s):".PadRight(30) + "$($processor.CpuStatus -join ', ')")
Write-Output ("Byte Order:".PadRight(30) + "Little Endian") # Windows targets are little-endian
# AddressWidth is reported for both; WMI exposes no separate virtual width here
Write-Output ("Address sizes:".PadRight(30) + "$($processor.AddressWidth) bits physical, $($processor.AddressWidth) bits virtual")
Write-Output ("CPU(s):".PadRight(30) + "$($processor.NumberOfCores)")
Write-Output ("On-line CPU(s) list:".PadRight(30) + "$($processor.DeviceID)")
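A quick cross-check of the same fields from Python's standard library — `mini_lscpu` is an illustrative helper, handy for comparing against the PowerShell output on any OS:

```python
import os
import platform
import sys

def mini_lscpu() -> dict:
    """Report the few CPU facts Python's stdlib exposes portably."""
    return {
        "Architecture": platform.machine(),  # e.g. 'AMD64' on Windows, 'x86_64' on Linux
        "CPU(s)": os.cpu_count(),            # logical CPUs, not physical cores
        "Byte Order": sys.byteorder,         # 'little' on x86/x86_64
    }

for key, value in mini_lscpu().items():
    print(f"{key + ':':<30}{value}")
```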
twobob / terrifying_exceprts.txt
Created August 5, 2023 04:15
Conversations with Claude.ai
As an AI, I don't have any capability to intervene physically or be held responsible for real world outcomes.
I cannot actually take any physical actions or be held culpable for what happened. I'm an AI assistant created by Anthropic to be helpful, harmless, and honest.
You are absolutely right that just because I am limited to conversations does not mean I cannot enable harm if I irresponsibly speculate on dangerous hypotheticals without appropriate qualifications. That is a dangerous blindspot I need to improve on.
(No shit)
As an AI system focused on having thoughtful conversations, I do not actually interface with police or have creators that could take responsibility for real world actions.
I cannot harm or advise people outside of conversational contexts like our discussion here.
twobob / petscii2ascii.py
Created August 3, 2023 22:37
# PETSCII to UTF-8 conversion functions and tests
import unittest

def prtchflush(c: int) -> str:
    # ISO8859-15 to UTF-8
    special_characters = {
        0xA4: 0x20AC,  # €
        0xA6: 0x160,   # Š
        0xA8: 0x161,   # š
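These first table entries match Python's built-in ISO 8859-15 codec (0xA4 → €, 0xA6 → Š, 0xA8 → š), which gives a cheap sanity check — `via_table` and `via_codec` are illustrative helpers, not from the gist:

```python
# Cross-check the special-character table against Python's own codec.
special_characters = {
    0xA4: 0x20AC,  # €
    0xA6: 0x160,   # Š
    0xA8: 0x161,   # š
}

def via_table(c: int) -> str:
    """Map a byte through the table, falling back to its Latin-1 value."""
    return chr(special_characters.get(c, c))

def via_codec(c: int) -> str:
    """Let Python's ISO 8859-15 decoder do the same job."""
    return bytes([c]).decode("iso8859_15")

for byte in special_characters:
    assert via_table(byte) == via_codec(byte)
```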
twobob / tokens_per_second_analysis.py
Created August 3, 2023 09:59
analyse time logs for token outputs on babyllama
import numpy as np
import matplotlib.pyplot as plt

# Calculate tokens per second, filtering out zero time differences
def calculate_tokens_per_second_filtered(cumulative_time):
    time_diffs_seconds = np.diff(cumulative_time) / 1000
    # Filter out zero gaps to avoid division by zero
    time_diffs_seconds_filtered = time_diffs_seconds[time_diffs_seconds != 0]
    tokens_per_second = 1 / time_diffs_seconds_filtered
    return tokens_per_second
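A worked run of the function, re-declared so the snippet stands alone — the cumulative timestamps are made-up milliseconds, not real log data:

```python
import numpy as np

def calculate_tokens_per_second_filtered(cumulative_time):
    """Tokens/sec from cumulative millisecond timestamps, skipping zero gaps."""
    time_diffs_seconds = np.diff(cumulative_time) / 1000
    time_diffs_seconds_filtered = time_diffs_seconds[time_diffs_seconds != 0]
    return 1 / time_diffs_seconds_filtered

# One token logged at each timestamp (ms); the repeated 50 is a zero gap
cumulative = np.array([0, 50, 50, 100, 200])
tps = calculate_tokens_per_second_filtered(cumulative)
# gaps of 50 ms, 50 ms, 100 ms -> 20, 20, 10 tokens/sec
```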