1) Create a branch from the tag

    git branch {tagname}-branch {tagname}
    git checkout {tagname}-branch

2) Include the fix manually if it's just a small change:

    git add .
    git commit -m "Fix included"

   or cherry-pick the commit, whichever is easier:

    git cherry-pick {num_commit}
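For example, with a hypothetical tag `v1.2.3` and a hypothetical fix commit `abc1234` (both placeholders, not values from this document), the whole flow looks like this:

```bash
# Branch off the tagged release (v1.2.3 is a placeholder tag name).
git branch v1.2.3-branch v1.2.3
git checkout v1.2.3-branch

# Either commit the fix by hand...
git add .
git commit -m "Fix included"

# ...or cherry-pick an existing fix commit (abc1234 is a placeholder hash).
git cherry-pick abc1234
```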
<?php
class My_Custom_My_Account_Endpoint {
	/**
	 * Custom endpoint name.
	 *
	 * @var string
	 */
	public static $endpoint = 'my-custom-endpoint';

	// ... rest of the class omitted here.
}
No need for Homebrew or anything like that. Works with https://www.git-tower.com and the command line.

- Install https://gpgtools.org -- I'd suggest doing a customized install and deselecting GPGMail.
- Create or import a key -- see below for https://keybase.io
- Run `gpg --list-secret-keys` and look for `sec`; use that key ID for the next step.
- Configure `git` to use GPG -- replace the key with the one from `gpg --list-secret-keys` (a sketch follows below).
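A minimal sketch of that configuration step, assuming the key ID reported by `gpg --list-secret-keys` is `ABCD1234` (a placeholder):

```bash
# Tell git which key to sign with (replace ABCD1234 with your own key ID).
git config --global user.signingkey ABCD1234

# Sign every commit by default instead of passing -S each time.
git config --global commit.gpgsign true
```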
The link below provides the source, reference link, and relevant quote:
https://github.com/usnistgov/800-63-3/blob/nist-pages/sp800-63b/sec5_authenticators.md
> Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets. Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation deployment for Veeam Parameter Retrieval solution.
Resources:
  # API Gateway Configuration
  ApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: !Sub ${AWS::StackName}-API
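If you save the template to a file, it can be deployed with the AWS CLI; the file name and stack name below are placeholders, not values taken from this document:

```bash
# Sanity-check the template, then create/update the stack (names are placeholders).
aws cloudformation validate-template --template-body file://veeam-parameter-retrieval.yaml
aws cloudformation deploy \
  --template-file veeam-parameter-retrieval.yaml \
  --stack-name veeam-parameter-retrieval \
  --capabilities CAPABILITY_NAMED_IAM   # only needed if the full template creates IAM resources
```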
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text-prediction model similar to GPT-2, or to GPT-3 before it has been fine-tuned. It should also be possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is possible to run LLaMA 13B with a 6GB graphics card now (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.
- Clone llama.cpp from git; I am on commit `08737ef720f0510c7ec2aa84d7f70c691073c35d` (see the build sketch below).
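A sketch of the clone and the build that follows, assuming an NVIDIA CUDA toolchain is already installed. The `LLAMA_CUBLAS` make flag and the `--n-gpu-layers` option reflect how llama.cpp worked around that commit and may have changed since; the model path and layer count are only illustrative:

```bash
# Clone and pin to the commit mentioned above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d

# Build with cuBLAS so transformer layers can be offloaded to the GPU.
make LLAMA_CUBLAS=1

# Run a quantized 13B model, offloading part of the layers to fit in ~6 GB of VRAM.
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 20 -p "Hello"
```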