First tests inside the virtual machine
Boot from disk image
The first objective of the second stage was to run the virtual LibreMesh inside a virtual machine.
The tool used to perform the virtualization was QEMU, and the operating system chosen to run inside the virtual machine was Debian 11.
To do this, the following commands were run from a console on the host:
- sudo apt install qemu qemu-utils qemu-system-x86 qemu-system-gui # install QEMU
- qemu-img create debian.img 10G # create the hard disk image
- wget https://cdimage.debian.org/cdimage/daily-builds/daily/arch-latest/amd64/iso-cd/debian-testing-amd64-netinst.iso # download the boot image
- qemu-system-x86_64 -hda debian.img -cdrom debian-testing-amd64-netinst.iso -boot d -m 512 # run the virtual machine
Inside the virtual machine it was necessary to install applications again, such as qemu-system-x86 and git, and to clone the LibreMesh repository (https://github.com/libremesh/lime-packages) with the corresponding updates. In addition, necessary tools such as ansible, clusterssh, ifconfig and bridge-utils were installed.
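The guest-side preparation above can be collected into a small script. The following is a sketch under the assumption that the package names match Debian 11's repositories (ifconfig is provided by the net-tools package); the script is written to a file and only syntax-checked here, since installing packages requires root and network access.

```shell
# Sketch of the guest-side setup described above (package names assumed).
cat > guest-setup.sh <<'EOF'
#!/bin/sh
set -e
apt update
apt install -y qemu-system-x86 git ansible clusterssh net-tools bridge-utils
git clone https://github.com/libremesh/lime-packages.git
EOF
# Only check the syntax; running it needs root and network access.
sh -n guest-setup.sh && echo "guest-setup.sh: syntax OK"
```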
As we did before on the host, the next step was to do the following tests on the VM:
– Start a node: just as it was done on the host, to start a virtual LibreMesh node the qemu_dev_start script from the lime-packages repository was executed, and it worked without problems. However, it should be noted that accessing the LimeApp from a browser on the host is impossible at this point, since there is no network path from the host to a node inside the virtual machine, or vice versa.
– Give internet access to the node: since Debian used SLIRP as its default network backend, a DHCP server was already configured, so the virtual nodes had internet access.
However, this network backend has some limitations:
– ICMP traffic doesn’t work by default (so you can’t ping from inside a guest)
– On Linux hosts, ping from within the guest can be made to work, but needs some initial setup
– The guest is not directly accessible from the host or from an external network
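A common workaround for that last limitation (not the path taken here) is SLIRP's explicit port forwarding: QEMU's hostfwd option maps a host port to a guest port. A sketch follows, with arbitrary port numbers; it is written to a file and only syntax-checked, since actually booting the VM needs the disk image.

```shell
# Sketch: reach a service inside a SLIRP guest via port forwarding.
# Host port 8080 is forwarded to guest port 80 (both chosen arbitrarily).
cat > run-slirp-fwd.sh <<'EOF'
#!/bin/sh
exec qemu-system-x86_64 -hda debian.img -m 512 \
  -netdev user,id=n0,hostfwd=tcp::8080-:80 \
  -device e1000,netdev=n0
EOF
# Only check the syntax; booting needs the debian.img disk image.
sh -n run-slirp-fwd.sh && echo "run-slirp-fwd.sh: syntax OK"
```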
– Run the node cloud: when the LibreMesh node cloud was run on the host, there had been a problem with the DHCP server trying to listen on a port that was already in use.
As mentioned in the previous point, Debian used SLIRP as its network backend, so the same port-in-use problem would arise again. This is where the need arose to run the Debian guest with two tap interfaces passed to it, so that an IP address and internet access could be configured manually and statically.
Settings on the virtual machine
Once the first tests were done, the next goal was to be able to access, from the host browser, the LimeApp of a node created inside the VM.
This was achieved by changing the configuration with which the Debian guest was started and by making the connections specified below.
Connection between Host and Debian Guest:
To solve this, a bridge between the network interfaces of the host and the Debian guest was created.
The idea was to bring up the virtual Debian with two tap backends passed to it, one for the LAN and one for the WAN. The lan tap emulates the host being connected by Ethernet cable to some node of the network, and the wan tap emulates the network’s internet connection.
Thus, the following commands were executed on the host:
- ip link add name bridge_tap type bridge
- ip addr add 10.13.0.2/16 dev bridge_tap
- ip link set bridge_tap up
- ip tuntap add name lan0 mode tap
- ip link set lan0 master bridge_tap
- ip link set lan0 up
- ip tuntap add name wan0 mode tap
- ip addr add 126.96.36.199/24 dev wan0
- ip link set wan0 up
- iptables -t nat -A POSTROUTING -o wlo1 -j MASQUERADE
- iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
- iptables -A FORWARD -i wan0 -o wlo1 -j ACCEPT
- echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
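Between test runs it can be handy to undo the host-side setup above. The following sketch collects the reverse commands into a script (syntax-checked only, since they require root); wlo1 is the host's wireless interface name from the rules above and will differ on other machines.

```shell
# Sketch: teardown counterpart of the host-side bridge/NAT setup above.
cat > teardown.sh <<'EOF'
#!/bin/sh
iptables -D FORWARD -i wan0 -o wlo1 -j ACCEPT
iptables -D FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -D POSTROUTING -o wlo1 -j MASQUERADE
ip link del wan0
ip link del lan0
ip link del bridge_tap
EOF
# Only check the syntax; the commands themselves need root.
sh -n teardown.sh && echo "teardown.sh: syntax OK"
```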
Finally, the virtual machine was run with the following command:
- qemu-system-x86_64 \
  -hda debian.img -enable-kvm -cpu host -smp cores=2 -m 2048 \
  -netdev tap,id=hostnet0,ifname=lan0,script=no,downscript=no \
  -device e1000,netdev=hostnet0 \
  -netdev tap,id=hostnet1,ifname=wan0,script=no,downscript=no \
  -device e1000,netdev=hostnet1
It was also necessary to modify the guest's /etc/network/interfaces file, manually assigning static IP addresses to the virtual machine's lan and wan interfaces: an address within the 10.13.0.0/16 range for the lan interface and 188.8.131.52/24 for the wan interface.
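A sketch of what the resulting /etc/network/interfaces stanzas might look like inside the guest. The interface names (ens3, ens4) and the concrete addresses are assumptions, chosen to fall in the ranges given above:

```
# /etc/network/interfaces (sketch) in the Debian guest.
# Interface names and exact addresses are assumptions.
auto ens3
iface ens3 inet static
    address 10.13.0.3
    netmask 255.255.0.0

auto ens4
iface ens4 inet static
    address 188.8.131.52
    netmask 255.255.255.0
```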
Also, since the internet connection had a default route, the connman.service process had to be stopped and disabled so that the statically assigned route would be used instead.
For this, it was executed:
- sudo systemctl stop connman.service
- sudo systemctl disable connman.service
And in the /etc/resolved.conf file, the line where DNS appeared was uncommented and set to the IP of Google’s public DNS, so that Debian would have access to the internet.
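For reference, the uncommented line would look roughly like this; 8.8.8.8 is Google's public resolver, and the [Resolve] section header is an assumption based on the systemd-resolved configuration format:

```
[Resolve]
DNS=8.8.8.8
```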
Connection between Debian Guest and LibreMesh Virtuals
The configuration here was achieved by creating another bridge inside Debian and attaching to it the Debian guest's lan tap interface together with the lan interface of one of the nodes in the cloud. In this case “Host A” was chosen, and the following was executed (since this was a test prior to a final solution involving every node, any host could have been used):
- ip link add name bridge_lime type bridge
- ip link set bridge_lime up
- ip link set ens3 master bridge_lime
- ip link set lm_A_hostA_0 master bridge_lime
Then, on the host, an IP address in the node’s range was added to the previously created bridge (bridge_tap):
- ip addr add 10.235.0.43/16 dev bridge_tap
In this first stage, several advances were made: understanding the problems of testing LibreMesh on an arbitrary computer, choosing the tools, and resolving those problems by creating a cleaner environment for running mesh networks.
With this in place, it was possible to access, from the host browser, the LimeApp of a node brought up in the virtual Debian.
What will be sought in the near future is to improve internet access for the cloud nodes, and to automate the Debian installation and all of the configuration achieved here in scripts, so that it works on different hosts.
Thanks for reading!