how to mount a qcow2 image

August 14, 2018 - Reading time: ~1 minute

Step 1 - Enable NBD on the host

modprobe nbd max_part=8

Step 2 - Connect the QCOW2 as a network block device

qemu-nbd --connect=/dev/nbd0 /var/lib/vz/images/100/vm-100-disk-1.qcow2

Step 3 - List partitions inside the QCOW2

fdisk -l /dev/nbd0

Step 4 - Mount the partition from the VM

mount /dev/nbd0p1 /mnt/somepoint/

You can also mount the filesystem so its files appear owned by a normal, i.e. non-root, user:

mount /dev/nbd0p1 /mnt/somepoint -o uid=$UID,gid=$(id -g)

Note that the uid/gid mount options only work for filesystems that don't store Unix ownership themselves, such as vfat or ntfs.

Step 5 - After you're done, unmount and disconnect

umount /mnt/somepoint/
qemu-nbd --disconnect /dev/nbd0
rmmod nbd
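The steps above can be wrapped in a pair of small helper functions. This is only a sketch: the function names are mine, /dev/nbd0 and the p1 partition suffix are assumptions that may differ on your system, and everything here needs root.

```shell
# Sketch wrapping steps 1-4 and step 5. /dev/nbd0 and the p1 partition
# suffix are assumptions; adjust for your image. Run as root.
mount_qcow2() {
    local img=$1 mnt=$2 dev=${3:-/dev/nbd0}
    modprobe nbd max_part=8
    qemu-nbd --connect="$dev" "$img"
    fdisk -l "$dev"                # inspect the partitions first
    mount "${dev}p1" "$mnt"        # then mount the first one
}

umount_qcow2() {
    local mnt=$1 dev=${2:-/dev/nbd0}
    umount "$mnt"
    qemu-nbd --disconnect "$dev"
    rmmod nbd
}
```

Usage would be e.g. mount_qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 /mnt/somepoint, then umount_qcow2 /mnt/somepoint when done.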

reload .bashrc settings without logging out and back in

August 10, 2018 - Reading time: 4 minutes

You can enter the long form command:

source ~/.bashrc

or you can use the shorter version of the command:

. ~/.bashrc

or you could use:

exec bash

To complement and contrast the commands above: . ~/.bashrc (or source ~/.bashrc) and exec bash both effectively reload ~/.bashrc, but there are differences:

  • . ~/.bashrc or source ~/.bashrc will preserve your current shell session:

    • Except for the modifications that reloading ~/.bashrc into the current shell (sourcing) makes, the current shell process and its state are preserved, which includes environment variables, shell variables, shell options, shell functions, and command history.
  • exec bash, or, more robustly, exec "$BASH"[1], will replace your current shell with a new instance, and therefore only preserve your current shell's environment variables (including ones you've defined ad hoc, in-session).

    • In other words: any ad-hoc changes to the current shell in terms of shell variables, shell functions, shell options, and command history are lost.

Depending on your needs, one or the other approach may be preferred.
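The difference is easy to see non-interactively (a sketch; FOO and BAR are arbitrary names I picked): a plain shell variable does not survive exec, while an exported environment variable does.

```shell
# FOO is a plain shell variable, BAR is exported into the environment.
# After exec, the replacement shell sees only BAR.
bash -c 'FOO=shellvar; export BAR=envvar; exec "$BASH" -c "echo \"FOO=\${FOO:-unset} BAR=\$BAR\""'
# prints: FOO=unset BAR=envvar
```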


[1] exec bash could in theory execute a different bash executable than the one that started the current shell, if it happens to exist in a directory listed earlier in the $PATH. Since special variable $BASH always contains the full path of the executable that started the current shell, exec "$BASH" is guaranteed to use the same executable.
A note on the double quotes around $BASH: double-quoting ensures that the variable value is used as-is, without interpretation by Bash; if the value has no embedded spaces or other shell metacharacters (unlikely in this case), you don't strictly need the double quotes, but using them is a good habit to form.


boot drops to an (initramfs) prompt/busybox

July 23, 2018 - Reading time: ~1 minute

At the initramfs console, I entered the exit command to leave the shell. The same console was presented again, but this time it named the exact partition that had become corrupted.

BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash) 
Enter 'help' for a list of built-in commands.

(initramfs) exit

/dev/mapper/ubuntu--vg-root: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options) 
fsck exited with status code 4. 
The root filesystem on /dev/mapper/ubuntu--vg-root requires a manual fsck. 

BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs) fsck /dev/mapper/ubuntu--vg-root -y

fsck from util-linux 2.27.1
e2fsck 1.42.13 (17-May-2015)
/dev/mapper/ubuntu--vg-root contains a file system with errors, check forced.

After the check completed, I rebooted the system.

BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs) reboot

If reboot doesn't work, try exit.

And that's it: the system booted back up without any filesystem errors.


how to download specific files from some url path with wget

July 11, 2018 - Reading time: ~1 minute

wget -r -l1 --no-parent -A ".deb" http://www.shinken-monitoring.org/pub/debian/

-r          recurse
-l1         to a maximum depth of 1
--no-parent ignore links to a higher directory
-A ".deb"   accept only files matching this suffix (wget also takes glob patterns like "*.deb")


generate a list of a site's URLs using wget

June 12, 2018 - Reading time: ~1 minute

You can use wget to generate a list of the URLs on a website.

Spider example.com, writing URLs to urls.txt, filtering out common media files (css, js, etc.):

wget --spider -r http://www.example.com 2>&1 | grep '^--' | awk '{ print $3 }' | grep -v '\.\(css\|js\|png\|gif\|jpg\|JPG\)$' > urls.txt

Note that the resulting list contains duplicate URLs.
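If the duplicates bother you, piping the list through sort -u is enough. A small sketch with made-up URLs:

```shell
# sort -u sorts the list and drops duplicate lines in one pass.
printf 'http://www.example.com/a\nhttp://www.example.com/b\nhttp://www.example.com/a\n' | sort -u
# prints:
# http://www.example.com/a
# http://www.example.com/b
```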

If you mirror instead of spidering, you seem to get a more comprehensive list without duplicates:

wget -m http://www.example.com 2>&1 | grep '^--' | awk '{ print $3 }' | grep -v '\.\(css\|js\|png\|gif\|jpg\|JPG\)$' > urls.txt

This will download all pages of the site into a directory with the same name as the domain.


eject / safely remove vs umount

April 10, 2018 - Reading time: ~1 minute

If you are using systemd, then use the udisksctl utility with its power-off option:

power-off

Arranges for the drive to be safely removed and powered off. On the OS side this includes ensuring that no process is using the drive, then requesting that in-flight buffers and caches are committed to stable storage.

I would recommend first unmounting all filesystems on that USB drive. This can also be done with udisksctl, so the steps would be:

udisksctl unmount -b /dev/sda1
udisksctl power-off -b /dev/sda

If you are not using systemd, then the good old udisks should work:

udisks --unmount /dev/sda1
udisks --detach /dev/sda
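The two udisksctl steps can be combined into a small helper. A sketch for systemd/udisks2 systems: the function name is mine, and the ?* glob assumes partitions named like /dev/sda1, /dev/sda2, and so on.

```shell
# Sketch: unmount every partition of a drive, then power the whole drive off.
# Assumes partition device names like /dev/sda1, /dev/sda2, ...
safe_remove() {
    local disk=$1
    for part in "$disk"?*; do
        [ -b "$part" ] || continue          # skip if the glob didn't match
        udisksctl unmount -b "$part" || true
    done
    udisksctl power-off -b "$disk"
}
```

Usage would be e.g. safe_remove /dev/sda.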