Enabling WordPress SSL using Cloudflare Flexible SSL

Enabling SSL within WordPress is not a simple task if you rely on the “Flexible SSL” offering from Cloudflare. You will end up with infinite SSL redirect loops and will likely be unable to reach your admin interface without editing config files.

Getting this to work, with the help of a plugin, is relatively straightforward.

First, you need to install the following plugin on your WordPress installation: https://wordpress.org/plugins/cloudflare-flexible-ssl/

Next, within the CloudFlare configuration for your domain, browse to “Page Rules” – you want to create a new Page Rule:

Enter http://www.<your domain>.<tld>/* – for example http://www.cb-net.co.uk/* – then, from the drop-down box, select “Always use HTTPS.” Finally, click “Save and Deploy.”

That’s it – browse to your site’s URL and confirm the traffic is automatically redirected to HTTPS.

You do not have to change the WordPress config – to re-enable HTTP access remove the CloudFlare rule.
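Once the rule is deployed, a quick way to confirm the redirect from the command line (assuming curl is installed; substitute your own domain for the example below):

```shell
# Request the plain-HTTP URL and show only the status line and Location header;
# a working Page Rule answers with a 301 pointing at the HTTPS URL
curl -sI http://www.cb-net.co.uk/ | grep -iE '^(HTTP|location)'
```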

Enabling Duo Dual / Multi-Factor Authentication (MFA) for Guacamole Docker

Before proceeding, be sure to check out my post on getting guacamole up and running using Docker images – here.

Updated 12/04; reflected availability of 0.9.12-incubating version of guacamole.

First, you’ll need to register for a Free Duo account, go to: https://duo.com/

Create a new Auth API application: Dashboard > Applications > Protect an Application > Web SDK

  • Scroll down, under Settings and change the name to “Guacamole,” or something of your choice.
  • Copy out the following information (you’ll need this for the guacamole.properties file):
    • Integration Key
    • Secret Key
    • API hostname

Finally, generate a duo “application key” on your docker host – note you do not have to input this anywhere on your Duo configuration.

dd if=/dev/random count=1 | sha256sum
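On some hosts /dev/random can block while gathering entropy; a sketch using /dev/urandom instead, which also captures the key into a variable and sanity-checks its length (the APP_KEY variable name is my own choice):

```shell
# Generate a 64-character hex application key from 32 bytes of kernel randomness
APP_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | sha256sum | awk '{print $1}')
echo "$APP_KEY"
# The key should always be exactly 64 hex characters
printf '%s' "$APP_KEY" | wc -c
```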

Now, from your docker host, we will create a skeleton extensions directory and guacamole.properties file that will be passed through to the guacamole docker image. Don’t worry – we only add the Duo-specific config/extension files here; the docker images sort out the rest for us!

We will create this skeleton home directory under: /var/docker/config/guacamole/

# From Docker HOST execute these commands

cd ~/
mkdir -p /var/docker/config/guacamole/extensions/

wget http://apache.mirrors.tds.net/incubator/guacamole/0.9.12-incubating/binary/guacamole-auth-duo-0.9.12-incubating.tar.gz

tar zxvf guacamole-auth-duo-0.9.12-incubating.tar.gz

mv guacamole-auth-duo-0.9.12-incubating/guacamole-auth-duo-0.9.12-incubating.jar /var/docker/config/guacamole/extensions/

cd /var/docker/config/guacamole
vi guacamole.properties

### Duo MFA Config
duo-api-hostname: <as per duo config>
duo-integration-key: <as per duo config>
duo-secret-key: <as per duo config>
duo-application-key: <generate using command above>

# Now save/ close the text file
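If you prefer not to use vi, the same properties file can be written non-interactively with a heredoc – the values below are placeholders you would swap for your own details from the Duo dashboard (the hostname format shown is illustrative):

```shell
# Write the Duo section of guacamole.properties (placeholder values shown)
cat > guacamole.properties <<'EOF'
### Duo MFA Config
duo-api-hostname: api-XXXXXXXX.duosecurity.com
duo-integration-key: DIXXXXXXXXXXXXXXXXXX
duo-secret-key: replace-with-secret-key
duo-application-key: replace-with-generated-key
EOF
# Confirm all four duo- properties are present
grep -c '^duo-' guacamole.properties
```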

Finally, we’ll drop and recreate the guacamole docker container with Duo support – note this will stop access and terminate any running sessions. Note how we pass the config folder through to the container and then reference its in-container path via the GUACAMOLE_HOME environment variable.

Be sure to verify the syntax of this command – i.e.:

  • Database name
  • Database user account/ password
  • guacd and mysql linked docker container names

docker stop guacamole
docker rm guacamole

docker run --name guacamole --link guacd:guacd --link guac-mysql:mysql \
-e MYSQL_DATABASE='guacamole' \
-e MYSQL_USER='guacamole' \
-v /var/docker/config/guacamole:/config \
-e GUACAMOLE_HOME=/config \
-e MYSQL_PASSWORD='<your password>' \
-d -p 8080:8080 guacamole/guacamole

The guacamole container should now be started and you should be able to login/ assign MFA to your guacamole account.

Be sure to clear your browser cache, as otherwise you will be presented with an error when logging on.

Hey, where did my VM LAN connectivity go?!

I was overdue my monthly patching on my KVM/QEMU host running Ubuntu 16.10, so decided to diligently update packages and reboot last night… at least that was the plan.

After the host came back up I noted that all of my KVM-based workload lost network connectivity. Host <> VM was working, but VM<>VM and LAN<>VM were failing.

I confirmed the bridge config (br0) had survived the upgrade, that the bridge was up, and in fact showed the VMs being attached…  so I started to dig deeper.

I then came across this guide on bridge setup: https://wiki.libvirt.org/page/Networking#Debian.2FUbuntu_Bridging – the main stand-out was this section:

Finally add the ‘/etc/sysctl.conf’ settings

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
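On newer Ubuntu releases these settings can also live in a standalone drop-in file under /etc/sysctl.d/ – a sketch that writes and verifies them (the 99-bridge-nf.conf filename is my own choice, and the file is written to the current directory here so you can inspect it before copying it into place):

```shell
# Write the three bridge-netfilter settings to a drop-in sysctl file
cat > 99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
# All three settings should be present
grep -c '^net.bridge' 99-bridge-nf.conf
# Then: sudo cp 99-bridge-nf.conf /etc/sysctl.d/ && sudo sysctl --system
```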

Closely followed by:

To ensure that the bridge sysctl settings get loaded on boot, add this line to ‘/etc/rc.local’ just before the ‘exit 0’ line. This is a work around for Ubuntu bug #50093.

Closely followed by a note on circumventing Path MTU Discovery issues with MSS clamping:

 *** Sample rc.local file ***
 /sbin/sysctl -p /etc/sysctl.conf
 iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS  --clamp-mss-to-pmtu
 exit 0

I had done neither of these steps previously, and the host had been running for a few months/ various updates without issue.

Now, getting rc.local working on Ubuntu 16.10 was… “interesting” to say the least, and even when working, rc.local executes too early for this to be a “clean” fix. I’ll share my steps, and eventual workaround, but it isn’t pretty!

The Workaround

Create service file for rc-local :

 sudo vi /etc/systemd/system/rc-local.service

Contents as below:

[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target


Now create the rc.local file:

vi /etc/rc.local

Contents as below (keep the header and the “sleep 10” line, without this the script will fail):

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.

sleep 10
/sbin/sysctl -p /etc/sysctl.conf
#echo "0" > /proc/sys/net/bridge/bridge-nf-call-iptables
#echo "0" > /proc/sys/net/bridge/bridge-nf-call-ip6tables
#echo "0" > /proc/sys/net/bridge/bridge-nf-call-arptables

iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
exit 0

Enable execution on the rc.local file:

chmod +x /etc/rc.local

Now reload the systemd daemon, enable rc-local and, when finished, reboot:

systemctl daemon-reload
systemctl enable rc-local


Ubuntu Linux and OpenVPN Client, using UFW to force traffic via VPN tunnel interface

Updated 24/07/17; included startup configuration to ensure automatic docker container connectivity via VPN post reboot/ startup.

First, you’ll need to obtain your “.ovpn” configuration file from your VPN provider. Find and replace <config file> with the name of the file excluding the file extension.

There are two stages to this guide:

  1. VPN Client Connection/ Configuration
  2. UFW Firewall Configuration (to ensure traffic can only use VPN and prevent DNS Leak)

If you are using docker containers be sure to check the considerations at the end of this article.

VPN Client Connection/ Configuration

Prepare the configuration file:

# Disable IPv6
echo "#disable ipv6" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Move config file to /etc/openvpn
mv <config file>.ovpn /etc/openvpn/<config file>.conf

# Create .secrets file
echo '<username>' >> /etc/openvpn/.secrets
echo '<password>' >> /etc/openvpn/.secrets
chmod 600 /etc/openvpn/.secrets

# Modify config file
vi /etc/openvpn/<config file>.conf

# Ensure you have no other auth-user-pass lines defined
auth-user-pass .secrets

# Add redirect-gateway to force traffic down the TUN interface
redirect-gateway def1

# Add DNS server update script execution to config file
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

# Save/close config file
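A quick sanity check that the edited config now contains everything added above – sketched against a sample file here, since <config file> differs per provider (redirect-gateway def1 being the usual directive for forcing traffic down the tunnel):

```shell
# Build a sample of the directives the edited client config should now contain
cat > client.conf.sample <<'EOF'
auth-user-pass .secrets
redirect-gateway def1
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
EOF
# Count the expected directives (run the same grep against your real .conf)
grep -cE '^(auth-user-pass|redirect-gateway|script-security|up |down )' client.conf.sample
```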

We’ll now configure OpenVPN client to automatically connect to this VPN interface on startup:

# Edit /etc/default/openvpn, un-comment AUTOSTART="all" then save/close 
vi /etc/default/openvpn

# Start / enable openvpn service at boot
sudo systemctl start openvpn
sudo systemctl enable openvpn

Confirm the VPN tunnel is up and public IP is that of the VPN provider:

# Look for a tun0 interface, if found you are connected!

# Check public IP is "hidden"/ different vs. machine not on the VPN
curl ipinfo.io/ip

UFW Firewall Configuration

We’ll configure UFW to allow outbound traffic only via the VPN provider tunnel interface (“tun0” in this example).

First we must enable forwarding by setting DEFAULT_FORWARD_POLICY="ACCEPT":

vi /etc/default/ufw
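The change needed in that file is to the DEFAULT_FORWARD_POLICY line – shown here as a sed transform on the stock line so you can see the before/after without opening vi (the same sed can be run with -i against the real file):

```shell
# The stock line in /etc/default/ufw...
echo 'DEFAULT_FORWARD_POLICY="DROP"'
# ...needs to become ACCEPT
echo 'DEFAULT_FORWARD_POLICY="DROP"' | sed 's/"DROP"/"ACCEPT"/'
```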

Next we’ll configure the necessary UFW rules to facilitate the outbound traffic to the VPN provider, but block everything else. You’ll need to change:

  • <VPN server IP> to the IPv4 address as per the “remote xx.xx.xx.xx” line in your OpenVPN configuration file.
  • <VPN server port> to the remote OpenVPN port (usually 1194 or 443)
  • <LAN subnet> to the network/subnet that represents your network – e.g. 192.168.0.0/24

# Defaults
ufw default deny incoming
ufw default deny outgoing

# Allow SSH from local LAN
ufw allow from <LAN subnet> to any port 22

# UFW rule to ensure we only hit the VPN
ufw allow out to <VPN server IP> port <VPN server port>
ufw allow out to <LAN subnet>
ufw allow out on tun0

# Enable the firewall
ufw enable

If DNS lookups on the client fail, you’ve missed the configuration lines below in your OpenVPN configuration file. As above, you want to force DNS lookups over the VPN – otherwise you’re leaking DNS requests, reducing the privacy value of your VPN.

# Add DNS server update script execution to config file 
script-security 2 
up /etc/openvpn/update-resolv-conf 
down /etc/openvpn/update-resolv-conf

Considerations when using Docker Containers

If you want your docker containers to sit behind the VPN, ensure you use the “--net=host” argument. As per: https://docs.docker.com/engine/userguide/networking/default_network/container-communication/

If your containers have no internet connectivity at startup it is likely because docker started before the VPN connection was established. Credits for this solution here.

sudo mkdir /etc/systemd/system/docker.service.d/
sudo touch /etc/systemd/system/docker.service.d/depend.conf
sudo vi /etc/systemd/system/docker.service.d/depend.conf

# New conf file should only contain the lines below
[Unit]
After=openvpn.service
Requires=openvpn.service

# Now save the file and exit vim

sudo systemctl daemon-reload

# Test container connectivity following a reboot

Docker’s forward rules permit all external source IPs by default.

By default containers will use the docker0 interface, and thus when your VPN goes down they will still have external/ internet access. This only applies when using the default docker0 interface, not when binding the container to the “host” network.

My experience shows that the default docker forward rule associated with the docker0 interface overrides any UFW rule (i.e. as defined above).

You can test container connectivity using the commands below:

# Test there is NO connectivity from container when VPN is down
systemctl stop openvpn
docker exec -it <container_name> /bin/bash

# Test there IS connectivity from container when VPN is up
systemctl start openvpn
docker exec -it <container_name> /bin/bash

Running guacamole from a Docker Container on Ubuntu 16.04 LTS / 16.10

Updated Feb 2017 to reflect guacamole/guacd and guacamole/guacamole Docker images, rather than the glyptodon images.

Updated 26/05/17 to include Tomcat hardening, as-per https://www.owasp.org/index.php/Securing_tomcat

Want to get multi-factor authentication? Check out my post here for Docker support/ deployment of Duo MFA for Guacamole.

I’ve been looking at how I can move some/ all of my QEMU virtualised workloads to docker containers – the main drivers behind this being:

  • Reducing the administrative overhead of updating an additional operating system
  • Reducing the compute overhead of running an additional operating system on top of the host O/S

I also looked at whether this solution would run in a docker-enabled Ubuntu 16.04 LXD container and, whilst the mysql and guacamole images downloaded, the guacd image failed with an “operation not permitted” error, meaning I was unable to use the image inside an LXD container.

I use Apache guacamole for remote access to my infrastructure and, on finding there were guacamole containers for the client and server elements, I thought I would look to move this workload from a dedicated Ubuntu Server 16.04 LTS Virtual Machine to a docker container.

This guide assumes you have installed docker as outlined here: http://www.cb-net.co.uk/linux/installing-docker-on-ubuntu-16-04-lts-16-10/

Downloading / Deploying the Container

Be sure to define/ update the commands below with:

  • A new mysql root user password (find and replace <root password> )
  • A new mysql guacamole user password (find and replace <guac user password> )

We will now create/ configure and start three containers:

  1. A mysql database instance: guac-mysql
  2. A guacamole-server container: guacd
  3. A guacamole-client container: guacamole

# Pull the guacamole (and related) docker images
sudo docker pull guacamole/guacd
sudo docker pull guacamole/guacamole
sudo docker pull mysql 

# Create script to prepare MySQL Database
docker run --rm guacamole/guacamole /opt/guacamole/bin/initdb.sh --mysql > initdb.sql

# Make a scripts folder to pass-through to container
mkdir /tmp/scripts
cp initdb.sql /tmp/scripts

# Create/ start mysql instance
docker run --name guac-mysql -v /tmp/scripts:/tmp/scripts -e MYSQL_ROOT_PASSWORD=<root password> -d mysql:latest 
history -c

# Create mysql db, user and prepare mysql instance for guacamole
docker exec -it guac-mysql /bin/bash

mysql -u root -p'<root password>'
CREATE DATABASE guacamole;
CREATE USER 'guacamole' IDENTIFIED BY '<guac user password>';
GRANT SELECT,INSERT,UPDATE,DELETE ON guacamole.* TO 'guacamole';
FLUSH PRIVILEGES;
quit

cat /tmp/scripts/initdb.sql | mysql -u root -p'<root password>' guacamole
history -c

# Now ctrl-d to exit docker container shell

# Start guacd 
docker run --name guacd -d guacamole/guacd

# Start guacamole client
docker run --name guacamole --link guacd:guacd --link guac-mysql:mysql \
-e MYSQL_DATABASE='guacamole' \
-e MYSQL_USER='guacamole' \
-e MYSQL_PASSWORD='<guac user password>' \
-d -p 8080:8080 guacamole/guacamole

# Harden tomcat, as-per https://www.owasp.org/index.php/Securing_tomcat
sudo docker exec -it guacamole /bin/bash

sed -i 's/redirectPort="8443"/redirectPort="8443" server="" secure="true"/g' /usr/local/tomcat/conf/server.xml

sed -i 's/<Server port="8005" shutdown="SHUTDOWN">/<Server port="-1" shutdown="SHUTDOWN">/g' /usr/local/tomcat/conf/server.xml
rm -Rf /usr/local/tomcat/webapps/docs/
rm -Rf /usr/local/tomcat/webapps/examples/
rm -Rf /usr/local/tomcat/webapps/manager/
rm -Rf /usr/local/tomcat/webapps/host-manager/
chmod -R 400 /usr/local/tomcat/conf
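The two sed substitutions above can be checked safely outside the container first – here they are run against sample lines matching the stock Tomcat server.xml, so you can see the before/after:

```shell
# Connector line: add server="" and secure="true" after redirectPort
echo '<Connector port="8080" protocol="HTTP/1.1" redirectPort="8443"/>' \
  | sed 's/redirectPort="8443"/redirectPort="8443" server="" secure="true"/g'

# Server line: disable the shutdown port by setting it to -1
echo '<Server port="8005" shutdown="SHUTDOWN">' \
  | sed 's/<Server port="8005" shutdown="SHUTDOWN">/<Server port="-1" shutdown="SHUTDOWN">/g'
```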

You can now browse to http://<docker host IP>:8080/guacamole/ and login using the credentials guacadmin/guacadmin.

Managing the Containers

Replace “guac-mysql” below with the other container names used above to manage guacd, guacamole or guac-mysql independently:

# Start a container
sudo docker start guac-mysql
# Stop a container
sudo docker stop guac-mysql
# Hard-stop a container
sudo docker kill guac-mysql
# Restart a container
sudo docker restart guac-mysql
# List all running containers
sudo docker ps
# List all running AND non-running containers
sudo docker ps -a
# Remove a container
sudo docker rm guac-mysql
# Remove the mysql docker image
sudo docker rmi mysql
# Review logs for container
sudo docker logs -f guac-mysql

Running Plex from a Docker Container on Ubuntu 16.04 LTS / 16.10

Updated 24/12/16 : Plex now have an official Docker container, this guide has been updated to use this.

Updated 28/03/17 : Automatic updates not working on your container when restarting? Make sure you specify the correct tag for the docker image. If you are a plexpass subscriber pull the “plexinc/pms-docker:plexpass” image, if not pull the “plexinc/pms-docker:public” image.

Updated 22/05/17: Issue with NFS volumes not being mounted when docker service started led to media being unavailable without container/ docker restart. Changing volumes passed-through to container to include “:shared” resolves this issue.

I’ve been looking at the merits of moving my Plex server workload from a dedicated KVM/QEMU virtual machine to a docker container on the host server itself. The reasons for doing this were as below:

  • Reducing the administrative overhead of updating an additional operating system
  • Reducing the compute overhead of running an additional operating system on top of the host O/S

An additional benefit of running Plex in this manner is that on restarting the container the latest version of Plex is automatically pulled and deployed, making updates in future very, very simple.

This guide assumes you have installed docker as outlined here: http://www.cb-net.co.uk/linux/installing-docker-on-ubuntu-16-04-lts-16-10/

Downloading / Deploying the Container

To download and deploy the container you will need:

  • A storage location for the Plex configuration directory – this is persistent, i.e. it will survive containers being deleted/ recreated. The size of this will vary based on how large your media library is. On libraries with several terabytes of media you’re looking at tens of gigabytes of storage.
  • One or multiple mount points/ folder locations that contains your media (in this example there are multiple “-v” definitions that represent paths to TV shows, movies etc. you can have as many of these as you want)

Note below, I have used the “plexinc/pms-docker:plexpass” docker image to ensure automatic updates on container restart work. If you are not a plexpass subscriber ensure you change this to “public.” If you do not specify a tag, automatic updates on the container will not work.

# Pull the plexinc/pms-docker image
sudo docker pull plexinc/pms-docker:plexpass

# Get a Plex Claim Token by going to this URL and replace <CLAIM> below
# https://www.plex.tv/claim/

# Create a new plex docker container
docker create \
--name plex \
--net=host \
-e TZ=Europe/London \
-e PUID=1000 -e PGID=1000 \
-v <path to config>:/config \
-v <path to music>:/data/music:shared \
-v <path to tv series>:/data/tvshows:shared \
-v <path to movies>:/data/movies:shared \
-v <path to home videos>:/data/homevideos:shared \
plexinc/pms-docker:plexpass

# Configure the plex container to restart automatically (e.g. on host reboot)
docker update --restart=always plex

# Start the plex container
docker start plex

You’ll notice I have defined volumes as “:shared” which enables NFS volumes passed-through to the container to be mounted after the container starts without issue.

You can now browse to http://<docker host IP>:32400 and login using your plex.tv account.

If you want to use the PlexPass version of Plex modify the server settings and restart the container using the command shown below.

Managing the Plex Container

# Start the plex container
sudo docker start plex
# Stop the plex container
sudo docker stop plex
# Hard-stop the plex container
sudo docker kill plex
# Restart (and auto-update) the plex container
sudo docker restart plex
# List all running containers
sudo docker ps
# List all running AND non-running containers
sudo docker ps -a
# Remove the plex container (note you can redploy and will not lose anything/ config wise)
sudo docker rm plex
# Remove the plex docker image
sudo docker rmi plexinc/pms-docker:plexpass
# Review logs for plex container
sudo docker logs -f plex

Installing Docker on Ubuntu Server 16.04 LTS / 16.10

I’ve been looking at moving some of my application/ process workloads from KVM/ QEMU Virtual Machines to docker containers – simply to reduce unnecessary overhead and complexity.

Installing docker on Ubuntu Server 16.04 or 16.10 is surprisingly straightforward. It is also possible (in my experience) to run docker alongside KVM/QEMU on the host server, as well as running docker containers within KVM/QEMU virtual machines.

# Update host O/S
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates

# Create apt source for docker
sudo apt-key adv \
--keyserver hkp://ha.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D 
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

# Pre-reqs installation
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

# Install docker
sudo apt-get update
sudo apt-get install docker-engine
sudo service docker start

# UFW config
sudo ufw status

# If UFW is enabled, modify /etc/default/ufw and set DEFAULT_FORWARD_POLICY="ACCEPT"
vi /etc/default/ufw

# Set docker to start on boot
sudo systemctl enable docker

Changing the guacamole MySQL User Password

From an SSH shell execute the following commands to change the MySQL user password:

mysql -u root -h localhost -p'<root password>'
USE guacamole;
SET PASSWORD FOR 'guacamole'@'localhost' = PASSWORD('<new password>');

Now modify /etc/guacamole/guacamole.properties ensuring you update the value for “mysql-password: <password>”

# MySQL properties
mysql-hostname: localhost
mysql-port: 3306
mysql-database: guacamole
mysql-username: guacamole
mysql-password: <password>

Finally restart tomcat8

systemctl restart tomcat8

Creating an SMB / CIFS Share on Ubuntu 16.04 LTS

As part of my KVM/QEMU setup I needed to be able to copy media from a Windows PC to a Linux Ubuntu Server, for ease I wanted to do this via SMB/ CIFS – I was also using NFS for linux <> linux file sharing, so opted to use the same bind mount point for both NFS and SMBD.

Relevant /etc/fstab entries as below:

# Physical drive mount point
/dev/sdb1 /mnt/media1 ext4 defaults 0 2

# Media1; NFS/ SMBD mountpoint
/mnt/media1 /export/media1 none bind 0 0

Install smbd on Ubuntu server:

apt-get install samba
cp /etc/samba/smb.conf ~

Create a samba login for your user account – in my case, my linux user account was “chris” – you will use this to access the share from your windows client (expect a prompt for username/ password when you browse to the share):

sudo smbpasswd -a chris

Configure smbd – modify paths/ username accordingly:

vi /etc/samba/smb.conf
# add this to the bottom of the file

[media1]
comment = My Share
path = /export/media1
browseable = yes
valid users = chris
read only = no
create mask = 0755

Finally, start SMBD and set to start automatically on boot – once started, you should be able to browse to “\\<IP/DNS Name of Ubuntu Server>”

# start smbd
service smbd restart

# enable smbd on boot
systemctl enable smbd

Creating and Managing VMs in Ubuntu Server 16.10

I recently bought and set up an Asrock DeskMini with an i5 6400 (it will take any desktop S1151 65W CPU), 16GB RAM and an SSD to act as a KVM/QEMU host at home. At some point I’ll share the end-to-end setup, but for now suffice to say I am really happy with this very flexible and tiny box of computing power!

I did however want to share some useful VM setup and management commands I have been using, some useful background information:

  • Host O/S Ubuntu Server 16.10 – with the “Virtual Machine host” option selected at install time.
  • Network configuration as below:

apt-get install -y bridge-utils

cat /etc/network/interfaces

 # The loopback network interface
 auto lo
 iface lo inet loopback

 # The primary network interface
 auto br0
 iface br0 inet static
             address <host IP>
             netmask <netmask>
             gateway <gateway IP>
             bridge_ports enp0s31f6
             bridge_stp off
             bridge_fd 0

  • Storage layout as below:
 ├─sdb4 /var/kvm/images
 ├─sdb2 [SWAP]
 ├─sdb3 part /
 └─sdb1 part /boot/efi

Install virtinst:

apt-get install virtinst

Create directories to store vhd files and iso files used for installing VMs

mkdir -p /var/kvm/images/vm
mkdir -p /var/kvm/images/iso


Creating New Virtual Machines

Ubuntu Server 16.04 LTS (from an ISO)

First download Ubuntu server 16.04 LTS ISO:

cd /var/kvm/images/iso
wget http://releases.ubuntu.com/16.04.1/ubuntu-16.04.1-server-amd64.iso

Create the virtual machine – requires the use of sudo – this machine will have 4 vCPUs, 4GB RAM and a 32GB vhd.

virt-install \
 --virt-type=kvm \
 --hvm \
 --name vlinux1 \
 --ram 4096 \
 --disk path=/var/kvm/images/vm/vlinux1.qcow2,size=32,bus=virtio,format=qcow2 \
 --vcpus=4 \
 --os-type linux \
 --os-variant ubuntu16.04 \
 --network bridge=br0 \
 --graphics vnc,listen= \
 --noautoconsole \
 --console pty,target_type=serial \
 --cdrom /var/kvm/images/iso/ubuntu-16.04.1-server-amd64.iso

Complete the installation using TightVNC (or another VNC client) to connect via <kvmhostip>:5900

Once completed, the virtual machine will shut down. Once shut down, the VNC graphics will no longer function, even when you restart the machine.

You can configure persistent VNC-based graphics by modifying the XML file associated with the virtual machine – stored under /etc/libvirt/qemu/

vi /etc/libvirt/qemu/vlinux1.xml

# Now remove all lines relating to <graphics> and replace with single line as below:

<graphics type='vnc' port='-1' autoport='yes' listen=''/>

When you start the machine, you’ll be able to reconnect via VNC. The port number automatically increments for every VM configured like this, based on the order in which the machines are started.
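The display-to-port mapping is simply 5900 plus the display number – a tiny helper to work out which TCP port a given VM’s VNC will land on (the vnc_port function name is my own):

```shell
# VNC TCP port = 5900 + display number
vnc_port() { echo $((5900 + $1)); }
vnc_port 0   # first auto-assigned display -> 5900
vnc_port 1   # second -> 5901
```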


Managing Virtual Machines

Use virsh to manage and configure VMs – examples below – all require the use of sudo.

# List all virtual machines (running and shut off)
virsh list --all

# Start / power-on virtual machine "vlinux1"
virsh start vlinux1

# Reset virtual machine "vlinux1"
virsh reset vlinux1

# Hard power-off virtual machine "vlinux1"
virsh destroy vlinux1

# Shutdown virtual machine "vlinux1"
virsh shutdown vlinux1

# Remove from inventory virtual machine "vlinux1"
virsh undefine vlinux1

# Set power-on at host start-up for virtual machine "vlinux1"
virsh autostart vlinux1

# Display VM information for virtual machine "vlinux1"
virsh dominfo vlinux1