Goals: add links to reasonable, well-written explanations of how stuff works. No hype, and no vendor content if possible. Practical first-hand accounts of running models in prod are eagerly sought.
There's a neat writeup I stumbled across recently, "Reproducible codesigning on Apple Silicon" by Keith Smiley, about some gotchas in compiling a binary repeatably, so that it always produces the exact same byte output (and therefore checksums to the exact same hash), even when compiled on a different Mac.
While applying the suggestions from the post, I ran into a few other corner cases that I wanted to document more explicitly somewhere.
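A quick way to check whether a build is reproducible is to build the same target twice and compare checksums. Here's a minimal sketch of that check (my own, not from the post; the compiler invocation and file names are illustrative):

```sh
# Build the same source twice into separate outputs (flags/paths are illustrative).
clang -O2 -o mytool-a main.c
clang -O2 -o mytool-b main.c

# A reproducible build prints the same SHA-256 for both binaries.
shasum -a 256 mytool-a mytool-b
```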
Footnote 2 from that blog post is important:
```sh
cd /sys/devices/pci0000\:00/0000\:00\:03.0/0000\:09\:00.0/0000\:0a\:10.0/0000\:0e\:00.0/
systemctl stop kubelet
systemctl stop docker
# Writing 1 to the device's sysfs "remove" node detaches it from the PCI bus.
echo 1 > remove
systemctl start docker
systemctl start kubelet
```
As far as I can tell, you can't do it conveniently: git-rebase simply does not offer an option to preserve the committer date. It always preserves the author date (unless you pass --ignore-date, or its alias --reset-author-date), but there is no way to make it keep the committer date short of crafting a manual script.
The best you can do is make the committer date equal to the author date. Fairly recently (2020 Q4, Git 2.29), git-rebase --interactive gained support for the --committer-date-is-author-date flag; before that, there was no way to influence the committer date at all with an interactive rebase. Note that this flag does not preserve the committer date. It merely sets the committer date equal to the author date.
You might be thinking "well, isn't that effectively preserving the committer date, since normally the committer date is always equal to the author date?" Normally, you would be correct. However, the two dates diverge whenever history is rewritten: amending, cherry-picking, or an earlier rebase all refresh the committer date while leaving the author date untouched.
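For instance, on Git 2.29 or newer (the branch name below is illustrative):

```sh
# Rebase onto main, setting each rewritten commit's committer date
# equal to its author date (requires Git 2.29+).
git rebase -i --committer-date-is-author-date main

# Inspect both dates to see the effect: %ad is the author date, %cd the committer date.
git log -3 --format='%h  author: %ad  committer: %cd'
```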
```python
import numpy as np
import tensorflow as tf


def keras_model_memory_usage_in_bytes(model, *, batch_size: int):
    """
    Return the estimated memory usage of a given Keras model in bytes.
    This includes the model weights and layers, but excludes the dataset.

    The model shapes are multiplied by the batch size, but the weights are not.

    Args:
        model: A Keras model.
        batch_size: The batch size you intend to run the model with. If you
            have already specified the batch size in the model itself, pass 1.
    """
    dtype_size = tf.as_dtype(tf.keras.backend.floatx()).size
    # Activation memory: one output tensor per layer, scaled by the batch size.
    activation_elems = 0
    for layer in model.layers:
        out_shapes = layer.output_shape
        if not isinstance(out_shapes, list):  # multi-output layers yield a list
            out_shapes = [out_shapes]
        for shape in out_shapes:
            activation_elems += np.prod([dim or 1 for dim in shape])
    # Weight memory: allocated once, independent of the batch size.
    weight_elems = sum(tf.keras.backend.count_params(w) for w in model.weights)
    return int(dtype_size * (batch_size * activation_elems + weight_elems))
```
```python
import torch
import numpy as np
import matplotlib.pyplot as plt
import cv2


def select_top_predictions(predictions, threshold):
    """Keep only the detections whose confidence score exceeds `threshold`."""
    idx = (predictions["scores"] > threshold).nonzero().squeeze(1)
    new_predictions = {}
    for k, v in predictions.items():
        # Index every field (boxes, labels, scores, ...) with the surviving rows.
        new_predictions[k] = v[idx]
    return new_predictions
```
```python
import time, traceback


def every(delay, task):
    """Call `task` every `delay` seconds, skipping slots if it falls behind."""
    next_time = time.time() + delay
    while True:
        time.sleep(max(0, next_time - time.time()))
        try:
            task()
        except Exception:
            # Keep the loop alive even if one invocation of task() fails.
            traceback.print_exc()
        # Advance to the next slot; skip slots entirely if we're behind schedule.
        next_time += (time.time() - next_time) // delay * delay + delay
```
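A quick way to use it without blocking the main thread (my own sketch, not part of the snippet) is to push the loop onto a daemon thread:

```python
import threading

# Print a heartbeat every 5 seconds on a background daemon thread,
# so the infinite loop doesn't block (or outlive) the main program.
threading.Thread(
    target=lambda: every(5, lambda: print("tick")),
    daemon=True,
).start()
```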
```sh
# Disable bold text for the GNOME Terminal profile with the given <id>.
dconf write /org/gnome/terminal/legacy/profiles:/:<id>/allow-bold false
```
```go
// This shows an example of how to generate a SSH RSA Private/Public key pair and save it locally
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a 4096-bit RSA private key.
	key, err := rsa.GenerateKey(rand.Reader, 4096)
	if err != nil {
		log.Fatal(err)
	}

	// Save the private key as PEM-encoded PKCS#1, readable only by the owner.
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", pemBytes, 0o600); err != nil {
		log.Fatal(err)
	}

	// Save the public key in OpenSSH authorized_keys format.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}
```