Type the commands below:

```sh
brew update
brew install redis
```

To have launchd start redis now and restart at login:

```sh
brew services start redis
```
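To verify the server is up, ping it with redis-cli (installed as part of the Homebrew formula); a healthy install replies with PONG:

```sh
redis-cli ping
```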
```php
<?php
// Open a file pointer to standard output
$fp = fopen( 'php://output', 'w' );
if ( $fp ) {
    // Write the BOM character sequence to fix UTF-8 in Excel
    fputs( $fp, chr(0xEF) . chr(0xBB) . chr(0xBF) );
    // Write the rest of the CSV to the file
    fputcsv( $fp, array( 'id', 'name' ) );
    fclose( $fp );
}
```
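If the CSV is served as a download, you would typically send the usual headers before writing anything. A minimal sketch, assuming a download context (the filename is a placeholder):

```php
<?php
// Announce the CSV and its filename to the browser before any output
header( 'Content-Type: text/csv; charset=utf-8' );
header( 'Content-Disposition: attachment; filename="export.csv"' );
```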
Code is clean if it can be understood easily – by everyone on the team. Clean code can be read and enhanced by a developer other than its original author. With understandability comes readability, changeability, extensibility and maintainability.
```dart
import 'package:flutter/material.dart';
import 'mjpeg_view.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      // Home widget assumed to come from mjpeg_view.dart above
      home: MjpegView(),
    );
  }
}
```
| #include "Arduino.h" | |
| #include "esp_camera.h" | |
| #include "ESPAsyncWebServer.h" | |
| typedef struct { | |
| camera_fb_t * fb; | |
| size_t index; | |
| } camera_frame_t; | |
| #define PART_BOUNDARY "123456789000000000000987654321" |
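For context, this boundary is typically combined into the content type of the multipart MJPEG stream, along these lines (the variable name here is an assumption, following the common esp32-camera streaming examples):

```cpp
// "multipart/x-mixed-replace" tells the browser to replace each frame with
// the next one; the boundary marks where each JPEG part ends
static const char* STREAM_CONTENT_TYPE = "multipart/x-mixed-replace;boundary=" PART_BOUNDARY;
```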
```env
_APP_ENV=production
_APP_LOCALE=en
_APP_OPTIONS_ABUSE=enabled
_APP_OPTIONS_FORCE_HTTPS=disabled
_APP_OPENSSL_KEY_V1=your-secret-key
_APP_DOMAIN=localhost
_APP_DOMAIN_TARGET=localhost
_APP_CONSOLE_WHITELIST_ROOT=enabled
_APP_CONSOLE_WHITELIST_EMAILS=
_APP_CONSOLE_WHITELIST_IPS=
```
This worked as of 14 May 2023; the instructions will probably require updating in the future.
LLaMA is a text prediction model similar to GPT-2 and to the base version of GPT-3 that has not been fine-tuned yet. It should also be possible to run fine-tuned versions (like Alpaca or Vicuna) with this; those versions are more focused on answering questions.
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which allows you to pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM; a sample invocation is sketched below.
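For example, with a cuBLAS-enabled build, the -ngl / --n-gpu-layers flag controls how many layers are offloaded (the model path below is a placeholder for your own quantized file):

```sh
# Offload 32 transformer layers to the GPU; lower the number if you run out of VRAM
./main -m ./models/13B/ggml-model-q4_0.bin -ngl 32 -p "Building a website can be done in 10 simple steps:"
```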
08737ef720f0510c7ec2aa84d7f70c691073c35d. Latest versions of these scripts are available in the git repository https://github.com/jcmvbkbc/esp32-linux-build