Tristan Rice d4l3k

⛰️
Hi
View GitHub Profile
@d4l3k
d4l3k / reference.md
Last active July 25, 2019 19:10
Dropbike.ca API
@d4l3k
d4l3k / output zmqtest
Last active February 5, 2019 19:57
ZMQ TCP PUSH/PULL message drop on TCP reset
tristanr@tristanr-arch ~/D/zmqtest> go build -v .; and sudo ./zmqtest
2019/02/05 11:55:44 zmqtest.go:93: sent 0
2019/02/05 11:55:44 zmqtest.go:124: received 0
2019/02/05 11:55:44 zmqtest.go:93: sent 100000
2019/02/05 11:55:44 zmqtest.go:124: received 100000
2019/02/05 11:55:44 zmqtest.go:93: sent 200000
2019/02/05 11:55:44 zmqtest.go:93: sent 300000
2019/02/05 11:55:44 zmqtest.go:124: received 200000
2019/02/05 11:55:44 zmqtest.go:93: sent 400000
2019/02/05 11:55:44 zmqtest.go:93: sent 500000
package main

import (
    "bytes"
    "encoding/binary"
    "flag"
    "io"
    "io/ioutil"
    "log"
    "os"
@d4l3k
d4l3k / ap.fbs
Last active June 15, 2020 06:03
Autopilot neural network weights flatbuffer schema file from Tesla Model 3 - version 2018.32.6
namespace fbs.ap;

table Root {
    layers : [Layer];
}

table Layer {
    name : string;
    weights : [Weights];
}
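A hedged sketch of how such a schema might be consumed from Python, assuming flatc --python has been run on ap.fbs and a serialized buffer is at hand; the accessor names (GetRootAsRoot, LayersLength, Layers, Name) follow the usual flatc Python codegen and may differ by flatbuffers version, and weights.bin is a hypothetical file name, not something from the gist.

from fbs.ap.Root import Root

# Hypothetical serialized buffer extracted from the firmware image.
with open("weights.bin", "rb") as f:
    buf = bytearray(f.read())

root = Root.GetRootAsRoot(buf, 0)
for i in range(root.LayersLength()):
    layer = root.Layers(i)
    print(layer.Name())   # layer name as bytes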
package main

import (
    "archive/zip"
    "io"
    "log"
    "os"
    "path/filepath"
    "strings"

Mounting LVM partition from disk image

$ sudo losetup /dev/loop0 ./MTFC64GJVDN-4M_1BIT@LFBGA169_3731.BIN
$ sudo partx --update /dev/loop0
$ sudo lvmdiskscan
  /dev/nvme0n1         [    <476.94 GiB]
  /dev/loop0           [      59.28 GiB]
  /dev/mapper/cryptlvm [    <237.13 GiB] LVM physical volume
  /dev/nvme0n1p1       [     100.00 MiB]
@d4l3k
d4l3k / load_kallsyms.py
Last active December 29, 2020 02:58
Ghidra script to load kallsyms as labels
# load a kallsyms table
#@author d4l3k
#@category kernelscripts
USER_DEFINED = ghidra.program.model.symbol.SourceType.USER_DEFINED
baseAddress = currentProgram.getImageBase()
print("base address", baseAddress)
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
"""
For distributed training, TorchX relies on the scheduler's gang scheduling
capabilities to schedule ``n`` copies of nodes. Once launched, the application
is expected to be written in a way that leverages this topology, for instance,
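The docstring is truncated in the preview; as a rough, non-TorchX-specific illustration of leveraging that topology, a worker can pick the gang layout up from the environment the launcher sets and hand it to torch.distributed:

import torch.distributed as dist

# Illustration only: init_method="env://" reads RANK, WORLD_SIZE,
# MASTER_ADDR and MASTER_PORT from the environment set by the launcher.
dist.init_process_group(backend="gloo", init_method="env://")
print("rank", dist.get_rank(), "of", dist.get_world_size())
dist.destroy_process_group()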
@d4l3k
d4l3k / LICENSE
Created February 17, 2023 18:03
A PyTorch implementation of torch_gather_nd with support for multiple batch and channel dims.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
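The preview only shows the license text; for context, the basic single-batch, single-channel semantics of gather_nd can be sketched in a few lines. gather_nd_simple below is a hypothetical helper for illustration, not the gist's implementation, which additionally handles multiple batch and channel dims.

import torch

def gather_nd_simple(params, indices):
    # indices: (..., index_depth) integer tensor; each row selects one element
    # (or slice) of params by indexing its leading dimensions.
    return params[indices.long().unbind(dim=-1)]

params = torch.arange(12).reshape(3, 4)
indices = torch.tensor([[0, 1], [2, 3]])
print(gather_nd_simple(params, indices))  # tensor([ 1, 11])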
Traceback (most recent call last):
  File "/home/rice/Developer/test_torch_export.py", line 11, in <module>
    compiled = export(foo, args=tuple())
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rice/Developer/pytorch/torch/export/__init__.py", line 440, in export
    return export__RC__(f, args, kwargs, dynamic_shapes=dynamic_shapes)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rice/Developer/pytorch/torch/_export/__init__.py", line 252, in export__RC__
    return _export(f, args, kwargs, constraints=constraints)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^