start new:
tmux
start new with session name:
tmux new -s myname
"""Kernel K-means""" | |
# Author: Mathieu Blondel <[email protected]> | |
# License: BSD 3 clause | |
import numpy as np | |
from sklearn.base import BaseEstimator, ClusterMixin | |
from sklearn.metrics.pairwise import pairwise_kernels | |
from sklearn.utils import check_random_state |
I'm going to cover a simple but effective utility for managing state and transitions (aka a workflow). We often need to store the state (status) of a model, and it should only be in one state at a time.
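The idea above can be sketched as a tiny state machine. This is a minimal illustration, not the actual utility being covered; the state names and the `Workflow` class are made up for the example:

```python
class Workflow:
    """A model is in exactly one state at a time and may only move
    along explicitly allowed transitions."""

    # Map of current state -> set of states it may move to.
    TRANSITIONS = {
        "draft": {"submitted"},
        "submitted": {"approved", "rejected"},
        "rejected": {"draft"},
        "approved": set(),  # terminal state
    }

    def __init__(self, state="draft"):
        self.state = state

    def transition(self, new_state):
        # Reject any move that is not in the allowed-transitions table.
        if new_state not in self.TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
        return self.state
```

The key design choice is that the transition table is data, not code: adding a state or an edge is a one-line change, and every illegal move fails loudly.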
# main
llama-index
langchain
/// Fix a huggingface tokenizer to which tokens have been added after training.
///
/// Adding tokens after training via `add_special_tokens` leads to them being added to the
/// `added_tokens` section but not to the `model.vocab` section. This yields warnings like:
/// ```
/// [2023-10-17T07:54:05Z WARN tokenizers::tokenizer::serialization] Warning: Token '<|empty_usable_token_space_1023|>' was expected to have ID '129023' but was given ID 'None'
/// ```
/// The code in this file ensures that all tokens from `added_tokens` are also placed into
/// `model.vocab`. This fixes the warning and does not change the tokenizer's behavior.
This list was compiled from https://web.archive.org/web/20230911135706/https://docs.docker.com/desktop/release-notes/, retrieved on 2024-05-24.

It appears links for updates later than 4.22.1 are currently (as of early Aug 2024) being kept up to date on https://docs.docker.com/desktop/release-notes. I'm not going to continuously update this, but others may in the comments below, so check that link or the comments if you need a version later than 4.22.1.

The information may not be correct in all cases, or may have changed since archive.org archived the page. At the time of posting, I spot-checked a few links and they appeared to be good, but really, all I've done is copy, paste, and visually format the information I found on archive.org, so there's no warranty that it's good.

If the download links don't work, archive.org sometimes has the download archived, and you can try adding https://web.archive.org/web/20230911135706/ to the beginning of the URL. For instance, as of this writing, the 4.22.1 Windows download i
# Copyright (c) 2023 kglspl
# MIT License (the same as: https://github.com/kglspl/ppmparser/blob/master/LICENSE)

import h5py


class H5FS(object):
    def __init__(self, filename, mode):
        self.filename = filename
        self.f = h5py.File(filename, mode)
        self.dset = None
# install DSPy: pip install dspy
import dspy

# Ollama is now compatible with OpenAI APIs.
#
# To get this to work you must include `model_type='chat'` in the `dspy.OpenAI` call;
# if you do not include this you will get an error.
#
# I have also found that `stop='\n\n'` is required to get the model to stop generating
# text after the answer is complete, at least with mistral.
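Putting the notes above together, the configuration might look like the sketch below. The endpoint URL, API key placeholder, and model name are assumptions for a default local Ollama install; adjust them to your setup:

```python
import dspy

# Point dspy's OpenAI client at Ollama's OpenAI-compatible endpoint.
ollama_lm = dspy.OpenAI(
    api_base="http://localhost:11434/v1/",  # default local Ollama address (assumed)
    api_key="ollama",                       # Ollama ignores the key, but one must be set
    model="mistral",
    model_type="chat",                      # required, per the note above
    stop="\n\n",                            # stops generation after the answer (with mistral)
)

dspy.settings.configure(lm=ollama_lm)
```

This is a configuration sketch, not tested against every DSPy version; older and newer releases name the LM client differently.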
''' in .env file:
GITHUB_TOKEN="YOUR_GH_CLASSIC_TOKEN"
OPENAI_API_KEY="YOUR_OPENAI_KEY"
ACTIVELOOP_TOKEN="YOUR_ACTIVELOOP_TOKEN"
DATASET_PATH="hub://YOUR_ORG/repository_vector_store"

Need to install llama-index >= 0.10.0, python-dotenv, and llama-index-readers-github >= 0.1.5.
'''