Defining an Eloquent model (Laravel will assume the DB table name is the plural of the class name and that the primary key is named "id"):
class Shop extends Eloquent {}
Using a custom table name:
protected $table = 'my_shops';
#!/bin/bash
# hosted at https://gist.github.com/Mark-Booth/5058384
# forked from https://gist.github.com/lth2h/4177524 @ ae184f1 by mark.booth
# forked from https://gist.github.com/jehiah/1288596 @ e357c1e by lth2h
# ideas from https://github.com/kortina/bakpak/blob/master/bin/git-branches-vs-origin-master
# this prints out some branch status
# (similar to the '... ahead' info you get from git status)
# example:
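The heart of such a script is asking git, for each local branch, how far it is ahead of or behind its upstream. A minimal sketch of that idea (an illustration, not the actual code from the gist above):

# For every local branch that has an upstream, print how far ahead/behind it is.
# Illustrative sketch only; branch iteration and output format are simplified.
for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
  upstream=$(git rev-parse --abbrev-ref "$branch@{upstream}" 2>/dev/null) || continue
  read behind ahead < <(git rev-list --left-right --count "$upstream...$branch")
  echo "$branch: $ahead ahead, $behind behind $upstream"
done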
One of the best ways to reduce complexity (read: stress) in web development is to minimize the differences between your development and production environments. After being frustrated by attempts to unify the approach to SSL on my local machine and in production, I went looking for a workflow that would make the protocol invisible to me across all environments.
Most workflows make the following compromises:
Use HTTPS in production but HTTP locally. This is annoying because it makes the environments inconsistent, and the protocol choices leak up into the stack. For example, your web application needs to understand the underlying protocol when using the secure flag for cookies. If you don't get this right, your HTTP development server won't be able to read the cookies it writes, or worse, your HTTPS production server could pass sensitive cookies over an insecure connection.
Use production SSL certificates locally. This is annoying
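One common ingredient of a unified workflow (an illustration, not necessarily the approach this article settles on) is generating a throwaway self-signed certificate for local development instead of copying production keys around:

# Generate a self-signed certificate for local HTTPS development.
# Hostname, key size, and lifetime are placeholder choices.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout localhost.key -out localhost.crt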
I have spent quite a bit of time figuring out automounts of NFS shares in OS X...
Somewhere along the line, Apple decided allowing mounts directly into /Volumes should not be possible:
/etc/auto_master (see last line):
#
# Automounter master map
#
+auto_master # Use directory service
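Since /Volumes is off limits, the usual workaround is to point an automount map at a different directory. A sketch of that setup (the map name, server, share, and mount point below are placeholders):

# Add a direct map to /etc/auto_master (placeholder map name: auto_nfs)
echo "/-    auto_nfs    -nobrowse,nosuid" | sudo tee -a /etc/auto_master

# Describe the NFS share in /etc/auto_nfs ("server" and "share" are placeholders)
echo "/mnt/share -fstype=nfs,resvport,rw nfs://server/share" | sudo tee /etc/auto_nfs

# Flush the automounter cache and remount
sudo automount -cv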
Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's quite easy to make mistakes or not know what you should do when you're first learning the process. I certainly had considerable trouble with it at first, and I found a lot of the information on GitHub and around the internet to be rather piecemeal and incomplete: part of the process described here, another part there, common hangups in a different place, and so on.
In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.
Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo or j
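Once the fork is cloned, the rest of the cycle described above looks roughly like this (remote, branch, and user names are illustrative):

# Clone your fork and track the original project as "upstream"
git clone https://github.com/yourname/project.git
cd project
git remote add upstream https://github.com/original-owner/project.git

# Do your work on a topic branch
git checkout -b my-feature
git commit -am "Describe the change"

# Stay in sync with upstream, push, then open the pull request on GitHub
git fetch upstream
git rebase upstream/master
git push origin my-feature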
# Some discussions on logging from docker: using logstash, using Papertrail
A lot of this boils down to whether you want a single-process or multi-process (systemd, supervisord, etc.) container...
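In the single-process case, one option is to skip in-container log shippers entirely and hand stdout/stderr to a Docker logging driver; for example (the syslog endpoint is a placeholder):

# Send the container's stdout/stderr to a remote syslog endpoint
# (which Papertrail or a logstash syslog input can ingest); address is a placeholder.
docker run --log-driver=syslog \
  --log-opt syslog-address=tcp://logs.example.com:514 \
  my-app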
The idea of providing a dedicated, encapsulated development environment that is basically the same for all of your team's members has been around for a while. For quite some time, Vagrant seemed to be the state-of-the-art solution for such needs. In combination with Puppet, for example, one could easily set up a fully working VM with all dependencies installed and configured. But Vagrant is kind of a monstrosity. There are so many things that could possibly go wrong while provisioning a Vagrant machine. Also, if you happen to have a lot of projects using Vagrant, the VMs probably eat up a lot of your precious SSD space. The small containerized images built with Docker seemed an interesting alternative, but they were rather hard to maintain in the past.
Fear no more! docker-compose has you covered. But there a
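As a sketch of what that looks like in practice (the app/db split, images, and ports are placeholder choices, not a recommended stack):

# Write a minimal docker-compose.yml for a containerized dev environment
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - "8000:8000"
  db:
    image: postgres
EOF

# Build and start the whole environment
docker-compose up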