$ aptitude update && aptitude safe-upgrade
$ update-initramfs -vtuk 2.6.24-9-pve
$ /usr/sbin/update-grub # update grub
$ cat /boot/grub/grub.cfg # check the result
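After rebooting, it is worth confirming that the new kernel is the one actually running:
$ uname -r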
http://openhelp.net/tag/quota/
We run into a permission problem:
$ date -s 03:42:56
date: cannot set date: Operation not permitted
Stop the container:
$ vzctl stop <veid>
Stopping container ...
Container was stopped
Enable the sys_time capability:
$ vzctl set <veid> --save --capability sys_time:on
Saved parameters for CT <veid>
Restart the container:
$ vzctl start <veid>
Starting container ...
Container is mounted
... (rest of the start-up output)
Replace the zoneinfo file, then set the time:
$ rm /etc/localtime
$ ln -s /usr/share/zoneinfo/Europe/Paris /etc/localtime
$ date -s 03:45:37
Sat Jun 25 03:45:37 CEST 2011
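On Debian-based containers it is also worth updating /etc/timezone so the change stays consistent with the tzdata packaging:
$ echo "Europe/Paris" > /etc/timezone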
http://manpages.ubuntu.com/manpages/karmic/man8/vzctl.8.html
The config files live in /etc/default/fw-rules/openvz
To open a new port to reach a virtual machine:
- Copy the file /etc/default/fw-rules/openvz/vz-backup.conf
- Open port 8080 (for example) so the virtual machine's web server is reachable from the Internet: CreateApp NAME=backup ZONE=zOne EXT=tcp:8080 INT=10.10.101.2:80
See this excellent doc: http://www.fridu.org/fulup-posts/40-hosting-a-sysadmin/52-openvz-virtualization#openvz
http://wiki.openvz.org/Updating_Debian_template
$ cd /vz/private/555
$ tar --numeric-owner -czf /vz/template/cache/debian-4.0-i386-minimal.tar.gz .
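To check that the repacked template is usable, a throwaway container can be created from it (CT ID 999 is arbitrary here):
$ vzctl create 999 --ostemplate debian-4.0-i386-minimal
$ vzctl start 999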
Ubuntu 10.04 template: http://blog.bodhizazen.net/linux/download-ubuntu-10-04-openvz-templates/
Q: I want to route several IPs to the same VM.
A: This is not yet possible in the Proxmox interface; you have to use the OpenVZ commands:
$ vzctl set 101 --ipadd DEUXIEME_IP --save
Replace 101 with the ID of the VPS, visible in the web interface or with:
$ vzlist --all
For more info on the OpenVZ network interface, see http://wiki.openvz.org/Virtual_network_device
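Several addresses can go in one call by repeating --ipadd (placeholder IPs below):
$ vzctl set 101 --ipadd 192.0.2.10 --ipadd 192.0.2.11 --save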
$ iptables -t nat -A PREROUTING -p tcp -d 188.155.36.X --dport 80 -i eth0 -j DNAT --to-destination 10.10.101.2
$ iptables -t nat -A POSTROUTING -s 10.10.101.3 -o eth0 -j SNAT --to 188.155.36.X
http://wiki.openvz.org/Using_NAT_for_container_with_private_IPs
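To verify the NAT rules are in place (with packet counters) and that forwarding is enabled on the host:
$ iptables -t nat -L -n -v
$ sysctl net.ipv4.ip_forward # must be 1 for NAT to work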
# RAM is 4k pages, so 131072*4k = 512M
$ vzctl set 1 --save --vmguarpages 131072
$ vzctl set 1 --save --oomguarpages 131072
$ vzctl set 1 --save --privvmpages 131072:196608
--vmguarpages 512M:600M will translate to 131072:153600 (barrier:limit)
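The MB-to-pages conversion can be done directly in the shell (4 KiB pages, as above):
$ echo $((512 * 1024 / 4)) # 512M -> 131072 pages
$ echo $((600 * 1024 / 4)) # 600M -> 153600 pages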
- http://wiki.openvz.org/Resource_shortage
- http://wiki.openvz.org/Setting_UBC_parameters
- http://wiki.openvz.org/Memory_page
- http://wiki.openvz.org/UBC_configuration_examples_table
Q: $ vzctl stop VE always ends with Unable to stop container: operation timed out
A: Check for a memory problem:
$ vzctl exec 101 cat /proc/user_beancounters
Then look at the kmemsize, privvmpages and numproc lines.
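A non-zero failcnt (last column) flags the resources that actually hit their limit; a minimal sketch, quoted so the awk script survives vzctl's inner shell:
$ vzctl exec 101 "awk 'NR > 2 && \$NF > 0' /proc/user_beancounters"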
Q: $ pvectl vzset 103 --disk 10 returns Container already locked
$ /usr/bin/pvectl vzset 103 --disk 10
vzctl set 103 --diskspace 10485760:11534336 --diskinodes 2000000:2200000 --save
Container already locked
unable to set parameters - command failed - 2304
A: Remove the file that is locking the VM:
$ mv /var/lib/vz/lock/103.lck /tmp/
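Before removing it, check that no vzctl/vzdump process still holds the lock; the .lck file appears to contain the PID of the locker (an assumption, judging from its size):
$ cat /var/lib/vz/lock/103.lck
$ ps ax | grep -E 'vz(ctl|dump)'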
- http://www.proxmox.com/forum/archive/index.php/t-913.html
- http://wiki.openvz.org/Traffic_shaping_with_tc
- http://wiki.openvz.org/Traffic_accounting_with_iptables
- http://tcng.sourceforge.net/
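For reference, a backup like the one restored below can be created with the classic vzdump (same path and VEID as the example):
$ vzdump --dumpdir /space/backup 777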
Restore the above backup to CT 600:
$ vzrestore /space/backup/vzdump-777.tar 600
Or:
$ vzdump --restore /space/backup/vzdump-777.tar 600
http://wiki.openvz.org/Backup_of_a_running_container_with_vzdump
$ vzmigrate --online <host> VEID
http://wiki.openvz.org/Checkpointing_and_live_migration
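A more complete invocation, verbose and removing the private area from the source once the move succeeds (host and VEID are placeholders):
$ vzmigrate -v --online -r yes 192.0.2.20 101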
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 2,0G 763M 1,1G 41% / (ext3)
/dev/mapper/pve-lv1 178G 789M 169G 1% /var/lib/vz (LVM)
/dev/mapper/pve-dump 49G 180M 46G 1% /var/lib/vz/dump (LVM)
/dev/mapper/pve-freespace 2,0G 35M 1,9G 2% /var/freespace (LVM)
After installation:
$ umount /var/freespace
$ lvremove /dev/pve/freespace
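The 2 GB freed this way return to the pve volume group and can be handed to another LV, e.g. to grow /var/lib/vz (assuming ext3 on pve/lv1, which supports online growth):
$ lvextend -l +100%FREE /dev/pve/lv1
$ resize2fs /dev/pve/lv1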
Another example:
http://www.maximegaillard.com/2251-proxmox-faire-des-sauvegardes-en-mode-snapshot.html
When shrinking, resize the filesystem before the LV (the reverse order would destroy data):
$ umount /var/lib/vz
$ fsck -fC /dev/pve/data
$ resize2fs -p /dev/pve/data 900G
$ lvm lvresize /dev/pve/data --size 900G
$ mount /var/lib/vz
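Then check the new sizes:
$ df -h /var/lib/vz
$ lvs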
$ vgscan
Reading all physical volumes. This may take a while...
Found volume group "pve" using metadata type lvm2
$ vgchange -a y pve
(makes the pve volume group available)
$ lvdisplay
--- Logical volume ---
LV Name /dev/pve/lv1
VG Name pve
$ mkdir /mnt/pve/lv1
$ mount /dev/pve/lv1 /mnt/pve/lv1
http://www.novell.com/coolsolutions/appnote/19386.html
Processes stuck in state D (uninterruptible sleep), as shown by ps asxf | grep D:
UID PID PENDING BLOCKED IGNORED CAUGHT STAT TTY TIME COMMAND
0 9413 0000000000000000 0000000000000000 0000000000000000 0000000000000000 S+ pts/0 0:00 | \_ grep --color=auto D
0 9266 0000000000000000 0000000000000000 0000000000001000 0000000000014000 S ? 0:00 | | | \_ tcpserver -vHRD 127.0.0.1 89 /home/vpopmail/bin/vpopmaild
0 22909 0000000000000100 0000000000000000 fffffffe57f0d8fc 00000000280b2603 Ds ? 0:06 init [2]
0 22984 0000000000000100 0000000000000000 0000000000000000 0000000000004002 D ? 0:12 \_ [init-logger]
1 23176 0000000000000100 0000000000000000 0000000000011000 0000000000000002 Ds ? 0:00 \_ /sbin/portmap
103 23246 0000000000000100 0000000000004002 0000000000001000 00000001800006e0 Dl ? 0:11 \_ /usr/bin/mongod --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongodb.log --config /etc/mongodb.conf run
0 23265 0000000000000100 0000000000000000 0000000001001206 0000000180014c21 Dl ? 0:00 \_ /usr/sbin/rsyslogd -c4
1 23274 0000000000000100 0000000000000000 0000000000000000 0000000000014003 Ds ? 0:00 \_ /usr/sbin/atd
0 23294 0000000000000100 0000000000000000 0000000000000000 0000000000010001 Ds ? 0:01 \_ /usr/sbin/cron
0 23305 0000000000000100 0000000000000000 0000000001001001 0000000180004002 Dl ? 3:11 \_ /usr/bin/python /usr/bin/fail2ban-server -b -s /var/run/fail2ban/fail2ban.sock
0 23316 0000000000000100 0000000000000000 0000000000001000 0000000018016a07 Ds ? 0:00 \_ nginx: master process /usr/sbin/nginx
33 23317 0000000000000100 0000000000000000 0000000000001000 0000000018016a07 D ? 0:26 | \_ nginx: worker process
33 23318 0000000000000100 0000000000000000 0000000000001000 0000000018016a07 D ? 0:25 | \_ nginx: worker process
33 23319 0000000000000100 0000000000000000 0000000000001000 0000000018016a07 D ? 0:27 | \_ nginx: worker process
[... 22 more nginx: worker process lines, all in state D ...]
33 23393 0000000000000100 0000000000000000 0000000000001000 0000000018016a07 D ? 0:27 | \_ nginx: worker process
0 23452 0000000000002100 0000000000000000 0000000000001000 000000018001644f Ds ? 0:03 \_ /usr/lib/postfix/master
101 23460 0000000000002100 0000000000000000 0000000001001000 0000000180002000 D ? 0:01 | \_ qmgr -l -t fifo -u
0 23461 0000000000000100 0000000000000000 0000000000001000 0000000180014005 Ds ? 0:00 \_ /usr/sbin/sshd
0 6143 0000000000000100 0000000000000000 0000000001001000 0000000180014807 Ds ? 2:54 \_ /usr/bin/python /usr/local/bin/supervisord
33 6146 0000000000000100 0000000000000000 0000000001001000 0000000188314a07 D ? 1:50 \_ /etc/virtualenv/sites/.virtualenvs/nasa/bin/python /etc/virtualenv/sites/.virtualenvs/nasa/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6155 0000000000000100 0000000000000000 0000000001001000 0000000188304206 D ? 0:10 | \_ /etc/virtualenv/sites/.virtualenvs/nasa/bin/python /etc/virtualenv/sites/.virtualenvs/nasa/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6156 0000000000000100 0000000000000000 0000000001001000 0000000188304206 D ? 0:10 | \_ /etc/virtualenv/sites/.virtualenvs/nasa/bin/python /etc/virtualenv/sites/.virtualenvs/nasa/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6147 0000000000000100 0000000000000000 0000000001001000 0000000188314a07 D ? 1:51 \_ /etc/virtualenv/sites/.virtualenvs/mumblr/bin/python /etc/virtualenv/sites/.virtualenvs/mumblr/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6157 0000000000000100 0000000000000000 0000000001001000 0000000188304206 D ? 0:11 | \_ /etc/virtualenv/sites/.virtualenvs/mumblr/bin/python /etc/virtualenv/sites/.virtualenvs/mumblr/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6158 0000000000000100 0000000000000000 0000000001001000 0000000188304206 D ? 0:11 | \_ /etc/virtualenv/sites/.virtualenvs/mumblr/bin/python /etc/virtualenv/sites/.virtualenvs/mumblr/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6148 0000000000000100 0000000000000000 0000000001001000 0000000188314a07 D ? 1:51 \_ /etc/virtualenv/sites/.virtualenvs/heping/bin/python /etc/virtualenv/sites/.virtualenvs/heping/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6159 0000000000000100 0000000000000000 0000000001001000 0000000188304206 D ? 0:09 \_ /etc/virtualenv/sites/.virtualenvs/heping/bin/python /etc/virtualenv/sites/.virtualenvs/heping/bin/gunicorn_django -c deploy/gunicorn.conf.py
33 6160 0000000000000100 0000000000000000 0000000001001000 0000000188304206 D ? 0:10 \_ /etc/virtuale
$ for i in $(ps axo stat,pid | awk '$1 ~ /^D/ {print $2}'); do kill -9 $i; done
This does not kill them: tasks in state D sit in uninterruptible sleep inside the kernel and ignore all signals, including SIGKILL.
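To see where the stuck tasks are blocked in the kernel (often a dead NFS mount or a failing block device), the wchan field helps; a sketch:
$ ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'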
$ vzctl restart 104
Container already locked
$ ls -l /vz/lock/104.lck
-rw------- 1 root root 7 nov 10 23:58 /vz/lock/104.lck
$ rm /vz/lock/104.lck
rm: remove regular file `/vz/lock/104.lck'? y
$ vzctl chkpnt 104 --kill
Can not join cpt context 0: No such file or directory