Largely based on the TensorFlow 1.6 gist and the TensorFlow 1.7 gist for Xcode; this should hopefully simplify things a bit.
- NVIDIA Web-Drivers 387.10.10.10.30.103 for 10.13.4
- CUDA-Drivers 387.178
- CUDA 9.1 Toolkit
import torch
import torch.nn as nn
import torch.optim as optim

# Create target distribution (fixed)
target_logits = torch.randn(10)
target_log_probs = torch.log_softmax(target_logits, dim=0)

# Create learnable distribution
learnable_logits = nn.Parameter(torch.rand_like(target_logits))  # Initialize randomly
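The snippet above only defines the two distributions. A minimal training loop that pulls the learnable distribution toward the fixed target might look like the following sketch (assuming the standard `nn.KLDivLoss` convention, where the input is log-probabilities and the target is probabilities; the optimizer, learning rate, and step count are illustrative choices, not from the original):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)  # for reproducibility of this sketch

# Fixed target distribution
target_logits = torch.randn(10)
target_probs = torch.softmax(target_logits, dim=0)

# Learnable logits, optimized directly
learnable_logits = nn.Parameter(torch.rand(10))
optimizer = optim.Adam([learnable_logits], lr=0.1)
loss_fn = nn.KLDivLoss(reduction="batchmean")

for step in range(200):
    optimizer.zero_grad()
    log_probs = torch.log_softmax(learnable_logits, dim=0)
    # KLDivLoss expects log-probabilities as input and probabilities as target
    loss = loss_fn(log_probs, target_probs)
    loss.backward()
    optimizer.step()

final_loss = loss.item()
```

After a few hundred steps the learned softmax distribution should closely match the target and the KL loss should be near zero.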
<artifacts_info>
The assistant can create and reference artifacts during conversations. Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.

# Good artifacts are...
- Substantial content (>15 lines)
- Content that the user is likely to modify, iterate on, or take ownership of
- Self-contained, complex content that can be understood on its own, without context from the conversation
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Content likely to be referenced or reused multiple times
# make sure you don't have any soon-to-be-forgotten version of vim installed
$ sudo apt-get remove --purge vim vim-runtime vim-gnome vim-tiny vim-gui-common

# Install dependencies
$ sudo apt-get install build-essential cmake
$ sudo apt-get install python3-dev

# Optional: so vim can be uninstalled again via `dpkg -r vim`
$ sudo apt-get install checkinstall
pragma solidity ^0.4.24;

// ----------------------------------------------------------------------------
// Sample token contract
//
// Symbol        : LCST
// Name          : LCS Token
// Total supply  : 100000
// Decimals      : 2
// Owner Account : 0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe
Ninj0r admin, [Oct 20, 2017, 9:18:55 AM]:
It's a three-step process:
1) Start listening to the stream and buffering the messages
2) Get a depth snapshot
3) Replay the buffered messages and the live messages.
Depth updates have two variables, u and U.
U is the initial updateId, and u is the final updateId. Multiple updates can be "compressed" into a single update that comes out via the web socket stream.
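The replay step of the three-step process above can be sketched in Python. The field names `U` and `u` follow the description in the chat; the gap check (requiring the first replayed event to straddle the snapshot's last update id) is an illustrative sanity check reflecting common exchange guidance, not something stated verbatim above:

```python
def sync_order_book(snapshot_last_update_id, buffered_events):
    """Replay buffered depth events against a snapshot.

    Each event is a dict with 'U' (initial updateId) and 'u' (final
    updateId).  Events fully covered by the snapshot are dropped; the
    rest are returned in order for application to the local book.
    """
    applied = []
    for event in buffered_events:
        if event["u"] <= snapshot_last_update_id:
            continue  # already reflected in the snapshot
        applied.append(event)

    # Sanity check: the first replayed event should straddle the snapshot,
    # i.e. U <= lastUpdateId + 1 <= u; otherwise there is a gap in the
    # stream and a fresh snapshot is needed.
    if applied and not (
        applied[0]["U"] <= snapshot_last_update_id + 1 <= applied[0]["u"]
    ):
        raise RuntimeError("gap between snapshot and stream; refetch snapshot")
    return applied


events = [{"U": 1, "u": 3}, {"U": 4, "u": 7}, {"U": 8, "u": 9}]
replayed = sync_order_book(5, events)  # drops the first event, keeps the rest
```

In a real client the returned events would then be applied to the local order book, and live messages appended to the same queue.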
https://github.com/aancel/admin/wiki/VirtualGL-on-Ubuntu
https://virtualgl.org/About/Introduction
When you use ssh with X forwarding, you might have noticed that you cannot execute programs that require 3D acceleration. That's where VirtualGL comes into play.
If you work across many computers (and even otherwise!), it's a good idea to keep a copy of your setup on the cloud, preferably in a git repository, and clone it on another machine when you need it.
Thus, you should keep the .vim directory along with your .vimrc version-controlled.
But when plugins are installed inside .vim/bundle (if you use pathogen) or inside .vim/pack (if you use Vim 8's native packages), each plugin is its own git repository; to be able to update the plugins individually while also versioning your vim configuration as a whole, you need git submodules.
Initialize a git repository inside your .vim directory, add everything (including the vimrc), commit and push to a GitHub/BitBucket/GitLab repository:
cd ~/.vim
from tensorflow.python.client import device_lib

def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

get_available_gpus()