Put SearchTrait.php in the app directory, then use the SearchTrait in your model, like so:
use App\SearchTrait;
use Illuminate\Database\Eloquent\Model;
class Article extends Model
{
    use SearchTrait;
}
version: '3.1'

services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 80:80
    environment:
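Once the environment section is filled in with your database settings, the stack can be brought up with (a minimal sketch, assuming the file is saved as docker-compose.yml):

# start the WordPress service in the background
docker-compose up -d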
# For context, these directives typically sit inside a PHP location block;
# the location pattern and fastcgi_pass target below are assumptions, so
# adjust them to match your own PHP-FPM setup.
location ~ \.php(/|$) {
    # regex to split $uri to $fastcgi_script_name and $fastcgi_path
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # Check that the PHP script exists before passing it
    try_files $fastcgi_script_name =404;
    # Bypass the fact that try_files resets $fastcgi_path_info
    # see: http://trac.nginx.org/nginx/ticket/321
    set $path_info $fastcgi_path_info;
    fastcgi_param PATH_INFO $path_info;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
# /etc/logrotate.d/cloudflared
/var/log/cloudflared/cloudflared.log {
    rotate 7
    daily
    compress
    missingok
    notifempty
}
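To check the new rule without actually rotating anything, logrotate's debug flag performs a dry run:

# dry run: print what would be done, but don't rotate or update state
logrotate -d /etc/logrotate.d/cloudflared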
Ramp up your Kubernetes development, CI-tooling or testing workflow by running multiple Kubernetes clusters on Ubuntu Linux with KVM and minikube.
In this tutorial we will combine the popular minikube tool with Linux's Kernel-based Virtual Machine (KVM) support. It is a great way to re-purpose an old machine that you found on eBay or have gathering dust under your desk. An Intel NUC would also make a great host for this tutorial if you want to buy some new hardware. Another popular angle is to use a bare-metal host in the cloud, and I've provided some details on that below.
We'll set up all the tooling so that you can build one or many single-node Kubernetes clusters, then deploy applications such as OpenFaaS to them using familiar tooling like helm. I'll then show you how to access the Kubernetes clusters from a remote machine such as your laptop.
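As a rough sketch of where we're headed (the profile names here are just examples), each cluster lives in its own minikube profile backed by the KVM driver:

# create two single-node clusters backed by KVM
# (older minikube releases spell the flag --vm-driver)
minikube start --driver=kvm2 -p cluster-1
minikube start --driver=kvm2 -p cluster-2

# list the profiles and point kubectl at one of them
minikube profile list
kubectl config use-context cluster-2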
# remove a specific file from the git cache
git rm --cached filename
# remove all files from the git cache
git rm -r --cached .
git add .
git commit -m ".gitignore is now working"
Rule:
{
  "printWidth": 100,
  "singleQuote": true,
  "trailingComma": "all"
}
# start app.js with pm2, using the project's local babel-node as the interpreter
pm2 start app.js --interpreter ./node_modules/.bin/babel-node
This is part of the first Node.js web scraper I created with axios and cheerio. I took out all of the logic, since I only wanted to showcase how a basic setup for a Node.js web scraper would look.
const cheerio = require('cheerio'),
  axios = require('axios'),
  url = `<url goes here>`;
// fetch the page and load the HTML so it can be queried like jQuery;
// the actual scraping logic has been removed
axios.get(url).then(({ data }) => cheerio.load(data));