THE HISTORY OF /ETC FOLDER IN LINUX/UNIX
In the initial days of UNIX development there was a folder for each type of data, like the /bin folder for all your executable binaries,
the /boot folder for all booting-related information,
and the /dev folder for all hardware devices attached to the machine.
But people then needed a place for files that fit none of these categories: a config file, a data file, a socket file, or some other kind of file. So they created a folder to keep all these files in and named it /etc (short for "etcetera"). As time passed the meaning of this folder changed, but not the name "etc". Today /etc is the central location where all your configuration files live, and it can be treated as the nerve centre of your Linux/Unix machine.
Coming back to our /etc folder explanation, one post on this is not sufficient, and the number of files/folders in this directory will depend on the applications you install. Below are some files/folders commonly found on most Linux/Unix machines.
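As a minimal illustration, here is a Python sketch that checks which of a few well-known /etc files exist on the current machine; the list is illustrative, not exhaustive:

```python
import os

# A few configuration files commonly found under /etc on
# Linux/Unix machines (illustrative, not exhaustive).
COMMON_ETC_FILES = [
    "/etc/passwd",       # local user accounts
    "/etc/fstab",        # filesystems mounted at boot
    "/etc/hosts",        # static hostname-to-IP mappings
    "/etc/resolv.conf",  # DNS resolver configuration
    "/etc/crontab",      # system-wide scheduled jobs
]

for path in COMMON_ETC_FILES:
    status = "present" if os.path.exists(path) else "missing"
    print(f"{path}: {status}")
```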
Docker originally used LinuX Containers (LXC), but later switched to runC (formerly known as libcontainer), which runs in the same operating system as its host. This allows it to share a lot of the host operating system resources. Also, it uses a layered filesystem (AuFS) and manages networking.
AuFS is a layered file system, so you can have a read only part and a write part which are merged together. One could have the common parts of the operating system as read only (and shared amongst all of your containers) and then give each container its own mount for writing.
So, let's say you have a 1 GB container image; if you wanted to use a full VM, you would need 1 GB times the number of VMs you want. With Docker and AuFS you can share the bulk of that 1 GB between all the containers, and even with 1000 containers you might still use only a little over 1 GB of space for the containers' OS (assuming they are all running the same OS image).
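To make the storage math concrete, here is a rough back-of-the-envelope calculation in Python; the per-container writable-layer size is an assumption and varies by workload:

```python
# Full VMs each carry their own copy of the 1 GB image, while
# containers on a layered filesystem share one read-only base
# and add only a thin writable layer each.
base_image_gb = 1.0
instances = 1000
writable_layer_gb = 0.01  # assumed per-container writes; varies by workload

vm_total = base_image_gb * instances
container_total = base_image_gb + writable_layer_gb * instances

print(f"Full VMs:   {vm_total:.0f} GB")        # 1000 GB
print(f"Containers: {container_total:.0f} GB")  # ~11 GB
```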
A full virtualized system gets its own set of resources allocated to it, with minimal sharing between host and guest.

I will try to explain it in very simple words.

Virtualization

Virtual machines have a full OS with its own memory management, installed with the associated overhead of virtual device drivers. In a virtual machine, resources are emulated for the guest OS by the hypervisor, which makes it possible to run many instances of one or more operating systems in parallel on a single machine (or host). Every guest OS runs as an individual entity, separate from the host system.

Docker Containers

Docker containers, on the other hand, are executed by the Docker engine rather than by a hypervisor. Containers are therefore smaller than virtual machines and enable faster start-up with better performance; they offer less isolation, but greater compatibility is possible because they share the host's kernel.
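As a quick demonstration of the shared kernel, the sketch below runs `uname -r` inside an Alpine container and compares it with the host's kernel release; it assumes the Docker SDK for Python is installed (`pip install docker`) and a Docker daemon is running. The two values should match, which would not be the case inside a VM running its own kernel:

```python
import platform

import docker  # Docker SDK for Python

client = docker.from_env()

# The container reports the same kernel release as the host,
# because containers share the host kernel instead of booting
# their own OS the way a VM does.
container_kernel = client.containers.run("alpine", "uname -r", remove=True)
print("host kernel:     ", platform.release())
print("container kernel:", container_kernel.decode().strip())
```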

OS X is the platform, Darwin is the operating system, and XNU is the kernel. Namely, the XNU kernel is the core piece of software that provides resource management, hardware abstraction, and scheduling. Darwin consists of the XNU kernel and the basic software run by the kernel to provide a UNIX environment. OS X is built atop Darwin and provides a collection of frameworks and services that implement the user interface and main application libraries. Darwin and XNU are open source software, but the frameworks that make up the OS X platform on top of Darwin are not.
Ubuntu and other distributions are based on the Linux kernel and the GNU software suite. Ubuntu is the platform, Linux + GNU is the operating system, and Linux is the kernel (more or less). Unlike OS X, Ubuntu doesn't have proprietary frameworks; everything is open source, and the distinction between the OS and the platform is blurred as a result. That's why it's called a distribution: what distinguishes Ubuntu from other Linux distributions is mostly the selection of software and configuration it ships.
Vagrant and Docker are different beasts. Docker is a two-part shell/management layer for building and running virtual Linux containers, based on LXC.
The great thing about Docker is that it is light-weight (because it relies on shared-kernel Linux containers) and it is distribution agnostic. While the kernel between all instances is shared (but isolated from the host and each other), the user space for different instances can be based on different Linux distributions.
Vagrant, on the other hand, is a wonderful tool for automatically provisioning multiple virtual machines, each with their own configuration managed with Puppet and/or Chef. For its virtualisation it can use different providers. Originally the default provider was VirtualBox, but it now supports many more, including VMware Fusion and even Amazon EC2.
Interestingly, Vagrant has a Docker provider now, so you can use vagrant to manage your Docker builds and deployments.
Docker is still limited in its flexibility: 'everything is an image'.
What is a snapshot?
A snapshot preserves the state and data of a virtual machine at a specific point in time.
The state includes the virtual machine’s power state (for example, powered-on, powered-off, suspended).
The data includes all of the files that make up the virtual machine. This includes disks, memory, and other devices, such as virtual network interface cards.
A virtual machine provides several operations for creating and managing snapshots and snapshot chains. These operations let you create snapshots, revert to any snapshot in the chain, and remove snapshots. You can create extensive snapshot trees.
In VMware Infrastructure 3 and vSphere 4.x, the virtual machine snapshot delete operation combined the consolidation of the data with the deletion of the file. This caused issues when the snapshot files were removed from the Snapshot Manager but the consolidation failed: the virtual machine was left running on snapshots, and the user might not notice until the datastore filled up with multiple snapshots.
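The snapshot chain described above can be modeled as a simple linked structure. The toy Python sketch below is purely illustrative; the class and method names are hypothetical, not vSphere's actual API, which manages delta disks and memory files underneath these operations:

```python
# A toy model of a snapshot chain: create, revert, delete.
class Snapshot:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # previous snapshot in the chain

class VirtualMachine:
    def __init__(self):
        self.current = None  # tip of the snapshot chain

    def create_snapshot(self, name):
        self.current = Snapshot(name, parent=self.current)

    def revert_to(self, snapshot):
        # Discard later state; the VM now runs from this snapshot.
        self.current = snapshot

    def chain(self):
        node, names = self.current, []
        while node:
            names.append(node.name)
            node = node.parent
        return list(reversed(names))

vm = VirtualMachine()
vm.create_snapshot("clean-install")
base = vm.current
vm.create_snapshot("after-patching")
print(vm.chain())   # ['clean-install', 'after-patching']
vm.revert_to(base)
print(vm.chain())   # ['clean-install']
```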
http://blog.firsthand.ca/2011/01/upgrading-git-via-macports-166-to-1735.html
https://guide.macports.org/chunked/installing.macports.html
http://docs.gz.ro/node/330
https://stackoverflow.com/questions/34102448/i-want-to-use-git-via-macports-instead-of-apple-git
https://stackoverflow.com/questions/8957862/how-to-upgrade-git-to-latest-version-on-os-x
Quick Primer On DNS
DNS is the directory of the Internet. Whenever you click a link, send an email, or open a mobile app, often one of the first things that has to happen is your device looking up the address of a domain. There are two sides of the DNS network: Authoritative (the content side) and Resolver (the consumer side).
Every domain needs to have an Authoritative DNS provider. Cloudflare, since our launch in September 2010, has run an extremely fast and widely-used Authoritative DNS service. 1.1.1.1 doesn't (directly) change anything about Cloudflare's Authoritative DNS service.
On the other side of the DNS system are resolvers. Every device that connects to the Internet needs a DNS resolver. By default, these resolvers are set automatically by whatever network you're connecting to. So, for most Internet users, when they connect to an ISP, a coffee shop wifi hot spot, or a mobile network, the network operator dictates which DNS resolver to use.
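To see what choosing your own resolver looks like in practice, here is a small sketch using dnspython (assumed installed via `pip install dnspython`; `resolve` is the dnspython 2.x call, older versions use `query`) that sends lookups to 1.1.1.1 instead of the network's default:

```python
import dns.resolver

# Point the resolver at 1.1.1.1 explicitly instead of whatever
# the network operator configured in /etc/resolv.conf.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]

answer = resolver.resolve("example.com", "A")
for record in answer:
    print(record.address)
```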
DNS's Privacy Problem
The problem is that whichever resolver the network assigns gets to see, and potentially log, every domain you look up, and that traffic has historically been sent unencrypted.

The first and most important classification: HDDs versus SSDs.

HDD: Hard Disk Drive: This is a bunch of magnetic discs spinning very fast, with a head reading and writing data on the magnetic surface of the disks. These are larger and cheaper, but also slower, since the head has to reposition itself every time you need data from a different location.

SSD: Solid State Drive: You can consider these a bigger version of pen drives. They store data in memory (flash) chips, which means they don't have any moving parts. This allows higher speeds, but also makes them expensive. Also, they don't come in the capacities HDDs come in.
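A rough way to feel the difference is to time sequential reads against random reads of the same file, as in the Python sketch below. Note this is only indicative: the OS page cache can mask the gap on a freshly written file, so on an HDD the effect shows most clearly with a cold cache:

```python
import os
import random
import time

# Read the same file sequentially and then at random offsets and
# compare wall-clock time. On an SSD the two numbers are close;
# on an HDD, random access forces head repositioning and is slower.
PATH = "testfile.bin"
BLOCK = 4096
BLOCKS = 4096  # 16 MiB file

with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

with open(PATH, "rb") as f:
    start = time.perf_counter()
    for i in range(BLOCKS):          # sequential pass
        f.seek(i * BLOCK)
        f.read(BLOCK)
    sequential = time.perf_counter() - start

    offsets = list(range(BLOCKS))
    random.shuffle(offsets)
    start = time.perf_counter()
    for i in offsets:                # random pass
        f.seek(i * BLOCK)
        f.read(BLOCK)
    rand = time.perf_counter() - start

print(f"sequential: {sequential:.3f}s, random: {rand:.3f}s")
os.remove(PATH)
```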


See this too: http://www.omnisecu.com/tcpip/osi-model.php. OSI stands for Open Systems Interconnection. It was developed by the ISO (International Organization for Standardization) in 1984. It is a 7-layer architecture, with each layer having specific functionality to perform. All seven layers work collaboratively to transmit data from one person to another across the globe.


1. Physical Layer (Layer 1): The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual physical connection between the devices, and it carries information in the form of bits. When receiving data, this layer takes the incoming signal, converts it into 0s and 1s, and sends them to the Data Link layer, which puts the frame back together.
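To make the layering concrete, here is a toy Python sketch of encapsulation: each layer wraps the payload from the layer above with its own header, and the physical layer finally renders the frame as bits. The header contents are made up purely for illustration:

```python
# Toy encapsulation down the OSI stack. Real headers carry
# addresses, ports, checksums, etc.; these tags are placeholders.
def encapsulate(payload: bytes) -> str:
    segment = b"TCP|" + payload          # Transport (Layer 4)
    packet = b"IP|" + segment            # Network (Layer 3)
    frame = b"ETH|" + packet + b"|CRC"   # Data Link (Layer 2)
    # Physical layer (Layer 1): the frame becomes a stream of bits.
    return "".join(f"{byte:08b}" for byte in frame)

bits = encapsulate(b"hello")
print(bits[:32], "...")  # first 32 bits of the frame on the wire
```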