Videoodyssee Project Update

Hello folks 👋

The first phase of my GSoC has been pretty exciting and challenging. Together with my mentor, I decided to complete the video processing part of the project in this first phase.

Before starting work on the project, my mentor Andi Bräu and I figured out a video processing workflow, so that we get a bird's-eye view of the project and can identify which systems need to be implemented.

Video Processing Workflow


Systems Involved in the project:

1. Videoodyssee Frontend: a React frontend application for users to submit video data, plus an admin dashboard for the admins.

2. Videoodyssee API: a Node.js REST API implemented using the Express framework.

3. Videopipeline: a GoCD server with a processing pipeline to process and publish the videos.

Video pipeline

After evaluating several CI/CD tools we chose GoCD to build the pipeline, as it is the best fit for the video processing pipeline we are looking to build.

Tasks completed:

  • We created a config repo on GitHub to store the pipeline code, so that whenever we change the pipeline code the GoCD server automatically pulls the changes and builds the new pipeline.
  • Automated the installation of the GoCD server and GoCD agent on our remote machines using Ansible playbooks.
  • Implemented the processing pipeline up to the video encoding step.
  • Adapted the previous video processing Bash scripts to make them work with the current GoCD pipeline.

Videoodyssee Frontend

For users to submit video details, we need a frontend application that takes the data from the user and sends it to the REST API, which in turn triggers the pipeline via the GoCD API to start the video upload workflow.

We chose React to implement the frontend application as it is quick and easy to work with. The application will have an upload form for normal users and an admin dashboard for administration tasks like approving/rejecting videos, updating video details, etc.

Tasks Completed:

  • Completed the video upload form so that a user can submit the details of a new video.
  • Automated the deployment of the frontend application to GitHub pages using GitHub Actions by implementing a deploy workflow.
  • Completed the UI design of the admin dashboard.

Below is how the admin dashboard will look:

Admin Dashboard

Videoodyssee API

We used the Node.js Express framework to implement the REST API that handles requests from the Videoodyssee frontend. We chose Express because it makes implementing a REST API quick and easy.

Tasks completed:

  • Implemented a route that takes the video details from the frontend and triggers the GoCD processing pipeline.
  • Automated the deployment process of the API using Ansible by implementing the videoodyssee-api playbook.

Conclusion:

Overall, the first phase of GSoC has been very exciting and challenging for me, and my mentor Andi Bräu really helped me a lot with design choices and development. I hope the second phase of GSoC will be as exciting as the first.

Tasks for the second phase:

  • Completing the remaining part of the processing pipeline up to the publishing step.
  • Automating the pipeline deployment process using Ansible.
  • Implementing unit testing in the REST API.
  • Completing the admin dashboard frontend and backend.

Try LibreMesh without having a router – Midterm

First tests inside the virtual machine

Boot from disk image

The first objective in the second stage was to run the virtual LibreMesh in a virtual machine.

The tool used to perform the virtualization was QEMU, and the operating system chosen to run inside the virtual machine was Debian 11.

To do this, the following commands were run from the host's console:

  • sudo apt install qemu qemu-utils qemu-system-x86 qemu-system-gui  // QEMU installation
  • qemu-img create debian.img 10G  // create the hard disk image
  • wget https://cdimage.debian.org/cdimage/daily-builds/daily/arch-latest/amd64/iso-cd/debian-testing-amd64-netinst.iso  // download the boot image
  • qemu-system-x86_64 -hda debian.img -cdrom debian-testing-amd64-netinst.iso -boot d -m 512  // run the virtual machine

In the virtual machine it was necessary to install applications such as qemu-system-x86 and git again, and to clone the LibreMesh repository (https://github.com/libremesh/lime-packages) with the corresponding updates. In addition, necessary tools such as ansible, clusterssh, ifconfig and bridge-utils were installed.

As we did before on the host, the next step was to do the following tests on the VM:

– Start a node: just as on the host, the qemu_dev_start script from the lime-packages repository was executed to start a virtual LibreMesh node, and it worked without problems. However, it should be noted that accessing the LimeApp through the browser on the host is impossible, as there is no way to reach the node inside the virtual machine from the host, or vice versa.

– Give internet access to the node: since Debian used SLIRP as the default network backend, it already had a DHCP server configured, so any virtual node had access to the internet.

However, the use of this network backend had some limitations such as:

– ICMP traffic doesn’t work (so you can’t ping inside a guest)

– On Linux hosts, ping works from within the guest, but needs some initial setup

– the guest is not directly accessible from the host or external network

– Run the node cloud: when running the LibreMesh node cloud on the host, there was a problem: the DHCP server tried to start on a port that was already in use.

As mentioned in the previous point, Debian used SLIRP as the network backend, so the port-in-use problem would arise again. Hence the need to run the Debian guest with two tap interfaces passed to it, so that an IP address and internet access can be configured manually and statically.

Settings on the virtual machine

Once the first tests were done, the next goal was to be able to access the LimeApp of a node that was created inside the VM from the host browser.

This was achieved by changing the configuration with which Debian Guest was started and making the connections specified below.

Connection between Host and Debian Guest:

To solve this, a bridge between the network interfaces of the host and the Debian Guest was created.

The idea was to bring up the virtual Debian by passing two backend taps to it, one for the lan and one for the wan. The lan tap would emulate the host being connected by Ethernet cable to some node of the network, and the wan tap would emulate the network's internet connection.

Thus, the following commands were executed on the host:

  • ip link add name bridge_tap type bridge
  • ip addr add 10.13.0.2/16 dev bridge_tap
  • ip link set bridge_tap up
  • ip tuntap add name lan0 mode tap
  • ip link set lan0 master bridge_tap
  • ip link set lan0 up
  • ip tuntap add name wan0 mode tap
  • ip addr add 172.99.1.1/24 dev wan0
  • ip link set wan0 up
  • iptables -t nat -A POSTROUTING -o wlo1 -j MASQUERADE
  • iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
  • iptables -A FORWARD -i wan0 -o wlo1 -j ACCEPT
  • echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

Finally, the virtual machine was run with the following command:

  • qemu-system-x86_64 \
  • -hda debian.img -enable-kvm -cpu host -smp cores=2 -m 2048 \
  • -netdev tap,id=hostnet0,ifname="lan0",script=no,downscript=no \
  • -device e1000,netdev=hostnet0 \
  • -netdev tap,id=hostnet1,ifname="wan0",script=no,downscript=no \
  • -device e1000,netdev=hostnet1

It was also necessary to modify the /etc/network/interfaces file by manually assigning IP addresses to the VM's lan and wan taps, within the range 10.13.0.0/16 for the lan tap and 172.99.0.0/24 for the wan tap.
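
What that configuration might look like inside the guest is sketched below. The interface names (ens3 for the lan tap, ens4 for the wan tap) and the exact addresses are assumptions chosen for illustration, with the host-side wan0 address 172.99.1.1 as gateway; the values used in the project may differ.

# /etc/network/interfaces inside the Debian guest (illustrative sketch)
auto ens3
iface ens3 inet static
    address 10.13.0.3
    netmask 255.255.0.0        # lan tap, same 10.13.0.0/16 range as bridge_tap on the host

auto ens4
iface ens4 inet static
    address 172.99.1.2
    netmask 255.255.255.0      # wan tap
    gateway 172.99.1.1         # host-side wan0 address, NATed to the internet via iptables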

Also, as the existing internet connection had a default route, we had to stop and disable the connman.service process so that the route assigned together with the static IPs could take effect.

For this, it was executed:

  • sudo systemctl stop connman.service
  • sudo systemctl disable connman.service

In the /etc/resolved.conf file, the line where DNS appears was uncommented and set to the IP of Google's public DNS, so that Debian would have internet access.

Connection between Debian Guest and LibreMesh Virtuals

The configuration here was achieved by creating another bridge within Debian and bridging Debian's own lan tap interface with the lan interface of any of the nodes within the cloud. In this case, "Host A" was chosen and the following was executed (since this is a test before arriving at a final solution involving every node, it could have been done on any host):

  • ip link add name bridge_lime type bridge
  • ip link set bridge_lime up
  • ip link set ens3 master bridge_lime
  • ip link set lm_A_hostA_0 master bridge_lime

Then, on the host, an IP address in the node's range was added to the previously created bridge (bridge_tap):

  • ip addr add 10.235.0.43/16 dev bridge_tap

Conclusion

In this first stage, several advances were achieved: understanding the problems of testing LibreMesh on an arbitrary computer, choosing the tools, and resolving those problems to create a cleaner environment for running mesh networks.

With this achieved, it was possible to access the LimeApp of a node brought up inside the virtual Debian from the host's browser.

In the near future, the goals are to improve how a cloud node gets access to the internet, and to automate the Debian installation and all the configuration achieved so far in scripts, so that it works on different hosts.

Thanks for reading!

Update on Traffic Monitoring and Classification Via XDP/eBPF – GSoC22

The first phase of my GSoC journey focused on getting packet-level statistics. The work I have done can be described as follows: (1) building the OpenWrt testbed to run XDP code, (2) following the issues and threads in the OpenWrt forum and community to get familiar with the barriers to running eBPF code on OpenWrt, (3) leveraging XDP kernel code from the official xdp-project to collect data on wireless traffic, (4) implementing my own XDP kernel code and user space loader to collect statistics like throughput, and (5) designing two scenarios, co-channel interference and channel fading, to validate the variation of packet-level statistics. The following sections describe my explorations in detail. (All tests were performed on a ThinkPad X201i running x86_64 OpenWrt.)

XDP Capacity

Like many eBPF developers on OpenWrt, I encountered lots of barriers when setting up a user space loader: big- and little-endian problems, XDP and eBPF library chaos, architecture-related issues, and so on. All of these problems matter for our implementation of an eBPF/XDP traffic monitoring tool, because we cannot bypass the official XDP support.

Previously, we had just one native XDP loader on OpenWrt – iproute2. It is indeed a way to load an XDP object file, but it does nothing about user space code, which provides the most convenient way to manipulate different statistics.

When cross-compiling the XDP kernel program to a BPF object file, the implicit include paths of the library headers cause lots of chaos.

Getting the data collected by kernel code

Since official xdp-tools support arrived on OpenWrt only recently, there are two ways to collect packet-level information.

  1. Collecting data from /sys/kernel/debug/tracing/trace. Debug-level printing is one way to retrieve data collected by kernel programs, specifically by using the bpf_trace_printk helper function. However, outputting information to debugfs works a little differently for an XDP kernel program. The main reason is that XDP kernel objects are driven by XDP events, i.e. packet ingress: we are only able to call bpf_trace_printk when packets come in, which limits the flexibility of polling statistics. So collecting data from debugfs can be seen as a trade-off between official xdp-tools support and collection capability.

The procedure for this approach can be described as follows:

  • cross-compile the XDP kernel program to an object file using the OpenWrt SDK or even the host clang
  • upload the object file to the OpenWrt router and load it using iproute2 or the xdp-loader from the xdp-tools package
  • fetch the data from debugfs and do post-processing
  2. Collecting data via a user space xdp-loader. Another way to collect XDP kernel data, which is also demonstrated by the xdp-project, is to implement a user space loader that loads the kernel program and fetches statistics simultaneously. My method is to leverage xdp-tools' APIs, such as attach_xdp_program provided in its util library, to implement a user space xdp-loader. The reason is that xdp-tools is not yet stable, while porting xdp_load_and_stats.c to OpenWrt is equivalent to manipulating the xdp-tools package, which has already been done by others. I followed the PR "xdp-tools: include staging_dir bpf-headers to fix compiling with sdk" by PolynomialDivision (openwrt/openwrt#10223 on GitHub) to get my xdp-loader working on x86_64. In user space, the struct related to packet-level data collection is:
struct record {
    __u64 rx_bytes;
    __u64 rx_packets;
    __u64 pps;  /* packets per second */
};

The user space loader looks roughly like this:

   static bool load_xdp_stats_program(...) {
       ...
       if (do_load) {
           err = attach_xdp_program(prog, &opt->iface, opt->mode, pin_root_path);
       }
       ...
       if (do_unload) {
           err = detach_xdp_program(prog, &opt->iface, mode, pin_root_path);
       }
       ...
       stats_poll();
   }

As mentioned above, we have tried two ways to build up an entire system that runs XDP code to collect packet-level information. Method (1) is more of a hacky approach, while method (2) is still in progress.
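
For illustration, below is a minimal sketch of the kernel-side counterpart of method (2): an XDP program that counts packets and bytes into a per-CPU array map, which the user space loader then reads and turns into pps. It follows common libbpf conventions; the map and function names are assumptions made up for this sketch, not the exact code used in the project.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* same layout as the user space struct record shown above */
struct record {
    __u64 rx_bytes;
    __u64 rx_packets;
    __u64 pps;   /* computed later in user space from two samples */
};

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, struct record);
} stats_map SEC(".maps");

SEC("xdp")
int xdp_stats_prog(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    __u32 key = 0;
    struct record *rec = bpf_map_lookup_elem(&stats_map, &key);

    if (rec) {
        rec->rx_packets++;
        rec->rx_bytes += data_end - data;
        /* for method (1), a bpf_trace_printk() call here would expose the
         * counters via /sys/kernel/debug/tracing/trace instead */
    }
    return XDP_PASS;   /* monitoring only, never drop traffic */
}

char _license[] SEC("license") = "GPL";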

Dedicated use cases

Due to the rate adaptation mechanism in the mac80211 subsystem, there are many situations that affect the wireless transmission link and thereby the throughput of the wireless network cards, which is reflected in the number of packets and bytes received and sent. We designed two scenarios, co-channel interference and channel fading, both related to packet-level statistics, which could serve as our future classification samples.

For the co-channel interference scenario, we have two routers, each paired with a laptop. In the baseline without co-channel interference, router A is set to channel 1, and laptop C and router A form a wireless link loaded with iperf3; router B and laptop D are powered off. In the scenario with co-channel interference, router B is set to channel 3, which causes spectral overlap.

For the channel fading scenario, we use a transmitter-receiver pair and change the distance between them to observe the variation in throughput; we also place obstacles in the line of sight to change the transmission state of the wireless link.

Conclusion

It has to be said that running eBPF code on OpenWrt is a unique experience, and I have also seen the community's efforts towards official eBPF support in OpenWrt. It has been a tough journey trying different user space loaders and building up a monitoring environment. Since I spent a lot of time setting up my first eBPF environment and getting the entire XDP program running on my OpenWrt PC, the schedule for the next phase will be tight.

Next phase of GSoC is clear:

  1. There is other information that the XDP hook does not provide, like signal strength, SNR, etc. I will explore eBPF's capabilities to get such data.
  2. We still have no bug-free XDP support on OpenWrt so far. I will participate in the community to work on this.

Thanks for reading!

GSoC 2022 – Implement elRepo.io unit testing – Midterm evaluation

Hi Freifunk and GSoC communities!

This first month of GSoC was totally exciting! Together with my mentors, we faced a lot of challenges implementing unit testing for elRepo.io. In the following lines I'm going to describe all the work done!

Milestones accomplished:

– Refactored the retroshare-dart-wrapper to be more testable and implemented null safety in it.

https://gitlab.com/andrearuizrull/retroshare-wrapper-dart/-/merge_requests/1

– Implement null-safety for elRepo-lib.

https://gitlab.com/andrearuizrull/elrepo-lib/-/merge_requests/1

– Implement null-safety for elRepo-android

https://gitlab.com/andrearuizrull/elRepo.io-android/-/merge_requests/1

– Start developing unit tests

https://gitlab.com/andrearuizrull/elRepo.io-android/-/blob/feature/unit_testing/test/ui/

Narrated step by step progress

The first steps, obviously, were to familiarize myself with the elRepo.io stack and its libraries, get my Flutter environment working, and compile elRepo.io for the first time… The interaction with my mentors was key in these steps, for example when deciding which libraries to use for testing.

Once I started to develop tests on this branch, we realized that, in Dart, static and top-level functions are not easily mockable. My mentors suggested that a refactor of the retroshare-wrapper was needed in order to write the tests, because we had to mock the API calls.

So we designed a new wrapper, trying to keep compatibility with the other pieces of elRepo.io. We used Dart's http.Client as inspiration to create the new RsClient, and we implemented it.

This refactor forced us to raise the minimum Dart SDK version and migrate all the code to null safety, which subsequently made us upgrade the Dart SDK and migrate all the other projects to null safety as well.

Along the way, we solved some TODOs in the code, fixed a few bugs, and wrote some simple tests to check that, with the refactor, the API calls can be mocked as we need.

After this big refactor we manually tested the app until everything worked properly and there were no more null safety errors.

Finally, I started to write unit tests for elRepo-android, starting from the login screen, and magically, the first test passed!

But testing is a whole world of its own: I had to study how to test things like the Navigator, how to avoid testing platform-related behaviour (which should be covered by the integration tests), how to mock classes using generated code, how to mock the providers, etc…

A lot of interesting stuff! 🤓

Some thoughts

This first phase brought me some conclusions and ideas I would like to share.

The test for the login success story was very difficult for me to implement: a lot of API calls are made, with a lot of spaghetti code: a function in elRepo-android calls a function in elRepo-lib that calls a function in the retroshare-dart-wrapper. All this architecture is needed for the app, but some questions arose:

– If we mock only the API calls, the tests are larger and more difficult to implement, but they also test elrepo-lib and the retroshare-dart-wrapper.

– elRepo-lib still uses top-level functions, which are not mockable. Mocking elrepo-lib directly could boost our test implementations on the app side, but it needs a big refactor; I have to discuss with my mentors whether this is a priority now.

– These difficult-to-test stories are flags that point to where the code could be improved and split up. This will improve performance, scalability and maintainability.

Now I'm waiting for the next meeting with my mentors for instructions on how to push the development forward.

Midterm Evaluations: Call A Friend.

Initial Goals:

Starting off with the Meshenger app, we have a Master branch with all the features working and a Development branch with added stability and necessary changes but a broken call feature.

The goal of GSoC'22 was to fix the call-related issues, convert the codebase to Kotlin, design and implement a new UI, and make a new release of the app.

Progress so far:

Working on the first phase of GSoC'22, I started off by reading the required documentation and existing code and understanding the architecture and workflow of the app.

Then I started with the UI design in Figma; in about two weeks I was ready with a new UI design for the app: https://drive.google.com/file/d/1G3oJXJlj8jKSiGBmmtsG1CSDQ6ipTAtr/view?usp=sharing

After completing the UI design in Figma, I worked on converting the Master branch and Development branch code to Kotlin, so we can more easily understand how the calling feature works and fix it for the Development branch. I am currently investigating how the WebRTC library is used in the Master branch so I can follow the same approach in the Development branch to fix the call feature.

Later on I implemented the new UI in the source code of the Development branch and added a feature so that if the user skips the add-a-name option, they are given a random username.

Then I fixed the app crashes along with the Dark theme.

Currently I am debugging the app to figure out why the calling feature isn’t working.

In the next phase I will be working on fixing the call feature, then go through testing, and then make a new release on F-Droid and even on the Play Store.

Update on TX-power control in WiFi networks – GSoC’22

Hi, Jonas here again! The first period is over and thus it's time for an update on my GSoC'22 project 'TX-power control in WiFi networks'. In this blog post I will cover what I have done and achieved so far, some initial testing and evaluations, and the next steps for the remaining project time.

(1) Linux kernel structures for TPC

The major objective of the past weeks was to create a foundation for TPC in the Linux kernel. As I already explained in my first blog post, the Linux kernel, i.e. the mac80211 layer, has only rudimentary support for TPC per virtual interface, but not per packet or per MRR stage.

To create this foundation, it is necessary to both modify existing kernel structures and develop new structures for TX-power annotation and related information. Although the project just aims at providing TPC per packet, I decided to be more-or-less future-proof and extend this to TPC per MRR stage. There are several wifi chipsets supporting this already, and there will be more in the future.

Preliminary considerations

A major challenge when trying to implement new extensions in the network stack is the existing sk_buff structure. This structure represents a socket buffer (SKB), meaning anything related to network packets, their control and status information, the data etc. For historical and performance reasons this structure has a fixed size, has fixed-size members and is aligned properly for cache lines. But this makes it pretty hard to introduce any extensions, e.g. new members per packet as this would violate the size-constraints and lead to either a huge performance loss or a not-working network stack. This applies to both the control path and the status path. A solution for this and how this is handled is provided in a).

Another aspect that needs to be considered are the different TPC capabilities of wifi chipsets. They often differ in:

  • TPC supported? / per packet / per MRR stage
  • TX-power levels and power step size

Instead of using a 'smallest common denominator' solution for all wifi chipsets, which would not make use of the extended capabilities of some chipsets but only a common subset for all, another approach was chosen. To achieve the best coverage of all the different capabilities, the structures and annotations for TPC are designed to be as abstract as possible.

(a) TX-power annotation

To annotate TX-power per packet and per MRR stage, the kernel structure ieee80211_sta_rates is used. This structure was introduced several years ago to overcome the limitations of sk_buff and the fixed-size control buffer for each network packet. In contrast to the control buffer, this structure is not attached to each sk_buff but rather attached to an STA and can be filled by RC algorithm and read in the TX path of a driver asynchronously. Prior to this method, the driver had to call the RC algorithm each time to retrieve the rateset for a packet.

struct ieee80211_sta_rates {
	struct rcu_head rcu_head;
	struct {
		s8 idx;
		u8 count;
		u8 count_cts;
		u8 count_rts;
		u16 flags;
+               u8 tx_power_idx;
	} rate[IEEE80211_TX_RATE_TABLE_SIZE];
};

To the internal rate array, which already allows passing information per MRR stage, a member for the TX-power index is added. The unsigned 8-bit data type is large enough at the moment but can easily be widened if necessary.

For backwards compatibility and supporting TPC with probing, a modification of the previously mentioned control buffer per packet is also necessary. Although this does not allow an annotation per MRR stage, it is still required for compatibility reasons as some drivers still do not use the new rate table, and for probing. Probing still uses the information directly embedded into the control buffer instead of the rate table for the first rate entry / MRR stage or applies this information to the whole packet. To support this, a new member for TX-power is added to the control buffer structure:

struct ieee80211_tx_info {
    ...
    struct {
        struct ieee80211_tx_rate rates[IEEE80211_TX_MAX_RATES];
        ...
+       u8 tx_power_idx;
    } control;
    ...
};
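
To illustrate how the two annotation paths fit together, the following simplified sketch shows how a driver's TX path could resolve the TX-power index, with the per-STA rate table taking precedence and the control-buffer value as a fallback for probing frames and drivers without rate-table support. The helper name and exact logic are assumptions for illustration, not actual mac80211 or driver code.

#include <net/mac80211.h>

/* Sketch only: name and logic are illustrative assumptions. */
static u8 example_get_tx_power_idx(struct ieee80211_sta *sta,
                                   struct sk_buff *skb)
{
    struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
    struct ieee80211_sta_rates *rates;
    u8 idx = info->control.tx_power_idx;    /* per-packet fallback / probing */

    rcu_read_lock();
    rates = sta ? rcu_dereference(sta->rates) : NULL;
    if (rates)
        idx = rates->rate[0].tx_power_idx;  /* first MRR stage */
    rcu_read_unlock();

    return idx;
}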

(b) Supporting different TX-power capabilities

To support different TX-power levels, ranges and step widths, the TX-power in the mac80211 layer is always specified as an index into a list of TX-power levels. This way, the mac80211 layer and RC algorithms can handle different capabilities in an abstract way, reducing code complexity and keeping possible performance effects to a minimum.

The list of supported TX-power levels is provided by the driver at time of initialization. Instead of populating a dynamically sized list with all possible TX-power levels, which would require space for each level, a driver needs to provide so-called ‘TX-power range descriptors’. A corresponding C implementation of such a descriptor for wireless TX-power was developed:

struct ieee80211_hw_tx_power_range {
    u8 start_idx;
    s8 start_pwr;
    u16 n_levels;
    s8 pwr_step;
};

With this structure, the driver can define several TX-power level ranges by specifying a starting index, a starting power, the number of levels in the range and a step width. TX-power levels are always specified in 0.25 dBm steps to be able to define fine-grained power levels. Drivers can define several TX-power ranges, each with a different step width, with power ascending or descending as the indices increase, etc.

struct ieee80211_hw {
    ...
+   struct ieee80211_hw_tx_power_range *tx_power_ranges;
+   u8 n_power_ranges;
};

A pointer and a length indicator are added to the ieee80211_hw structure; these must be filled by a wireless driver before registering a new wifi device with the mac80211 layer.
These changes modify what was introduced by one of the commits I mentioned in my first blog post. After some implementation progress we realized that this is much better than what we initially planned to use, which was in fact a dynamically sized list containing all supported power levels.

(c) Other modifications

Some wifi chipsets do not allow any kind of TPC at all, so performing calculations and keeping statistics for such interfaces should be avoided. To achieve this, two new support flags were added; one of them needs to be set before registering the wifi hardware to indicate TPC support. These flags are called IEEE80211_HW_SUPPORTS_TPC_PER_PACKET and IEEE80211_HW_SUPPORTS_TPC_PER_MRR. If neither flag is set, the driver and/or the wifi hardware does not allow or support TPC. TPC algorithms then do not need to be initialized or, in the case of a joint rate and TX-power algorithm, the algorithm can deactivate its TX-power part.

Due to the changes introduced with the commits mentioned in my first blog post, the usage of rate_info is preferred over ieee80211_tx_rate. But most parts of mac80211 and also most drivers still use ieee80211_tx_rate, and so far there was no utility function for converting between ieee80211_tx_rate and rate_info. Thus, this implementation provides such a utility function, especially as it is required by the TPC implementation in ath9k. It is included in the mac80211 layer and is thus available to all parts of the wireless stack, including other drivers.

(2) TPC support in ath9k for Atheros 802.11 a/b/g/n chipsets

To verify the implementation and to provide a first step towards TPC supported by several wireless drivers, the ath9k wireless driver, which is responsible for Atheros 802.11 a/b/g/n AR9xxx chipsets, is extended to make use of the new mac80211 capabilities. TPC support in ath9k was already partially implemented before, but has been disabled up to now.

ath9k has a fairly simple power range. It supports 0 dBm up to 32 dBm in 0.5 dBm steps, so the power range is linear and the values 0 … 63 can be used directly as TX-power indices, making it easy to set and read the TX-power. This range can be described with a single instance of the aforementioned TX-power range descriptor, as sketched below. Overall, TPC in ath9k is rather easy to implement in the control path.
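
As a sketch of what this could look like (assuming the structures and flags introduced above; the variable and function names are hypothetical), ath9k's linear range might be announced to mac80211 roughly as follows. Power values are given in 0.25 dBm units, so a 0.5 dBm step becomes pwr_step = 2:

static struct ieee80211_hw_tx_power_range ath9k_tx_power_range = {
    .start_idx = 0,    /* first TX-power index */
    .start_pwr = 0,    /* 0 dBm, in 0.25 dBm units */
    .n_levels  = 64,   /* indices 0 ... 63 */
    .pwr_step  = 2,    /* 0.5 dBm per index, in 0.25 dBm units */
};

static void ath9k_setup_tx_power_ranges(struct ieee80211_hw *hw)
{
    hw->tx_power_ranges = &ath9k_tx_power_range;
    hw->n_power_ranges  = 1;

    /* announce per-packet and per-MRR TPC support before ieee80211_register_hw() */
    ieee80211_hw_set(hw, SUPPORTS_TPC_PER_PACKET);
    ieee80211_hw_set(hw, SUPPORTS_TPC_PER_MRR);
}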

A bit more challenging was the status path, especially after receiving an ACK for a packet and before the TX status is reported. For performance reasons, this is already done asynchronously. Completed SKBs are filled with information, attached to a queue and then later asynchronously dequeued and processed for TX status report. Due to the already mentioned size limitations in the SKB, TX-power could not be easily reported the same way as rates are. As a workaround, a new structure was created for this purpose but also for future extensions.

struct ath_tx_status_ext {
    u8 tx_power_idx[4];
};

In the status buffer in SKB, the driver can place pointers to internal data for several purposes. Thankfully, ath9k didn’t make use of all these pointers, thus a reference to an instance of this structure can be placed in the SKB and is then available when the TX status report queue is processed. The structure can be extended in the future and has no size limitations.

(3) Setting fixed TX-power with debugfs

Appropriate structures for minstrel_ht and a TPC algorithm have not yet been implemented, as this will be part of the following weeks. To be able to already test TPC and to provide a way to set a fixed TX-power for other purposes, minstrel_ht was modified to already accept a fixed TX-power for all packets, set via debugfs. Following the Unix philosophy 'everything is a file', debugfs is a simple way for kernel modules to interact with user space applications by providing debug information or accepting parameters. Debugfs can be used like a filesystem: reading from a file to get debug information, or writing to a file to set e.g. a fixed rate or a fixed TX-power for wireless drivers. Many kernel modules make use of debugfs, e.g. the minstrel_ht RC algorithm or the ath9k driver.

The fixed TX-power set via debugfs is written to the rate table of an STA on each update and then used by the wireless driver. minstrel_ht already uses a debugfs-file to support fixed rate setting, thus another file is added to support the same for TX-power.

echo 63 > /sys/kernel/debug/ieee80211/phy0/rc/fixed_txpwr_idx

By writing, e.g., 63 to the file mentioned above, the TX-power will be set to the maximum possible power for ath9k in all packets that retrieve the TX-power from minstrel_ht. Similarly, when writing 0 to the same file, all affected packets will use the least possible TX-power. This may not be the final path of the file; there will likely be a file per STA, not per PHY, and it may be located in a slightly different subdirectory.

(4) First tests and evaluations

Some tests were already performed with an APU board with an ath9k wifi chipset acting as the access point. This is a very basic but rather realistic setup: the clients are not particularly close to the AP; there is one wall and around 3-6 meters of distance in between. Three clients were used:

  • iPhone 11
  • Xiaomi MiR 4A
  • Xiaomi Redmi AC1200

iPerf3 is used to generate traffic and measure throughput depending on the currently set TX-power. tcpdump is used on a monitor interface to measure the RSSI for all captured packets depending on the currently set TX-power. Below you can see two measurement plots of the experiments with iPhone 11 and Xiaomi MiR 4A. For the Redmi the measurement did not lead to any remarkable change in throughput.

Plots of throughput and RSSI in relation to adjusted TX-power from AP to client (left: iPhone 11, right: Xiaomi MiR 4A)

Both plots show an increase in RSSI when the TX-power is increased up to a specific point. Also the throughput usually increases, but this also depends on the actual noise, connection quality and other disturbances. Especially in case of the MiR 4A, the connection was rather bad and the throughput fluctuated more often due to instabilities, etc.

For all clients, it can be seen that after reaching a TX-power index above 45, the measured RSSI won't increase anymore and just fluctuates due to noise. This is likely because of TX-power regulations: e.g., in Germany and many other countries the maximum TX-power for channel 36 in the 5 GHz band is limited to 23 dBm, which corresponds to a TX-power index of 46. Although a higher index can be set, the actual TX-power is limited by this regulation. This will later also be included in the TX status to avoid any confusing or incorrect information.

Another observation can be made when an increasing TX-power index still leads to a higher RSSI but the throughput stays at the same level. In fact, this means that further increasing the power has no positive effect and does not lead to more throughput. It just causes more interference in a network consisting of multiple devices, thus decreasing the overall network performance and "wasting" energy. This of course heavily depends on the capabilities of AP and STA, as seen in the tests: the throughput of smaller devices, which have, e.g., fewer and smaller antennas, decreases much more with decreasing TX-power than that of bigger devices, which have, e.g., more and larger antennas. For the Redmi AC1200, for example, no remarkable decrease in throughput could be measured while decreasing the TX-power.

(5) Conclusion and outlook

After this first period, the project has achieved some remarkable progress. A mac80211 implementation with the appropriate modifications and new structures was developed. By making use of this implementation in the ath9k driver and performing some tests with different clients, it can be seen that it works: the TX-power can be adjusted and has an immediate effect. Also, the major point of TPC could be observed: TX-power in wifi networks can be decreased for several STAs without a remarkable decrease in throughput, but with lower overall interference in the whole network and thus higher overall network performance.

For the second period, there are several goals:

  • optimization of the mac80211 and ath9k implementations
  • implementing TPC in mt76 driver for Mediatek wifi chipsets
  • propose changes as patches to the Linux kernel mailing list
  • testing and validation of TPC
  • implementation of TPC algorithm

minstrel_ht will be used as the starting point to implement a joint rate and TX-power control algorithm that tries to find the best rate-power combination for an STA. For this, some structures and calculations inside minstrel_ht have to be modified. In addition, setting a fixed TX-power will be possible per STA; in the current state it is only possible to set it per interface. This requires some further adjustments to the debugfs usage inside minstrel_ht.

As the implementation progress goes on, tests and evaluations are becoming more and more important to see whether the implementation performs as expected and which performance and/or stability gains are possible by using TPC in combination with RC. There will be extended tests with appropriate TX-power and signal measurements, also covering overall WiFi network throughput.

Thanks for reading!

Update: Completing the Retroshare Web Interface – GSoC’22(First Phase)

Repository/Pull Requests

Progress

The Retroshare Web Interface is being developed as a part of the Google Summer of Code program. Weekly meetings with my mentor Cyril Soler and community involvement have led to steady progress on this project. The primary focus is to provide as many features as possible from the Retroshare Qt interface.

Initial Weeks:

  • Solved some of the previously existing issues with the Mail tab.
  • Got familiar with the workflow and setup.
  • It took me some time to become familiar with the Mithril JS frontend framework due to its limited documentation.

Here are some key developments done in the Web Interface:

  • Channel Tab: Navigation for My, Subscribed, Popular channels and Other Channels. Search for any channel is also provided.
Channel Tab

This is Channel View: Here you can view all channel details and the posts and add posts.

Subscribe and Search facility is also provided.

Channel View
Add Post with Files.

This is Post View:

View/Download all the files. Add/View comments and their replies with all details.

You can upvote/downvote and reply to any comment. The replies are also displayed in a staircase manner.

Post View
Add Reply/Comment

Forum Tab (in progress)

Navigation for Subscribed, Popular and My forums is provided along with Search forums Option.

Forum View: View all the details about the forum. Add New Thread and view all the threads.

Subscribe Option is also provided.

Forum View

Thread View (in progress)

Here you can view all the threads and the replies.

Further work (to be done): add thread/reply, and a mark read/unread facility.

Thread View

Thanks to the structured, clean code and the similar functionality, the Boards tab is also being developed by community member @Defnax, based on the Channels tab.

Work to be Done

The second phase of GSOC’22 will see the following developments:

  • The Forums tab will be developed primarily in the next phase of the program.
  • After Forums, the Files tab will be fixed and some functionality will be added.
  • If time permits, the Configuration tab will also get attention, to provide the facility to change config options from the web interface itself.

Concluding Thoughts:

The mentors and I feel the progress on this project has been steady. The first phase of GSoC'22 has been fruitful and has led me to learn many new things and technologies which I would never have explored on my own. There were obstacles and difficulties at the start, but slowly, with constant communication and the availability of my mentor and the community, the development is now stable.

Thank You!

Update on Minstrel TX Rate Control in User space – GSoC ’22

Hi everyone! As the first evaluation of GSoC ’22 is almost here, I’m writing this blog post to provide a detailed update on the progress of the Minstrel HT WiFi rate control in user space. If you are unfamiliar with WiFi rate control, then you can have a look at my first blog post.

1) Addition of new estimators

WiFi rate control algorithms usually comprise averaging filters/estimators that update the packet transmission statistics with respect to newer packet counts in real time. As such, the Minstrel HT rate control also has an estimator, which is currently a variant of the Butterworth filter.

Butterworth Filter

Previously, the user space Minstrel HT consisted of only the Exponentially Weighted Moving Average (EWMA) filter. However, the current kernel Minstrel HT algorithm has replaced EWMA with a new estimator based on the SuperSmoother (Butterworth) filter developed by John F. Ehlers. As such, the Butterworth filter has now been added to the user space Minstrel HT with the period set to 16.

The Butterworth filter formula is used to calculate the average success probability of a data rate in Minstrel HT. Here, curr_prob denotes the success probability of a data rate in the current update interval (50 milliseconds), and for the first sample, new_avg is set to that first success probability, as no previous average probability exists.
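
For reference, the standard form of Ehlers' SuperSmoother with period P = 16 is reproduced below. This is a reconstruction based on the filter's published definition and is an assumption rather than the exact formula used here; the kernel and user space implementations approximate it in fixed-point arithmetic, and Ehlers' original formulation additionally averages the two most recent inputs.

\begin{align*}
a_1 &= e^{-\sqrt{2}\,\pi / P}, \qquad
c_2 = 2 a_1 \cos\!\left(\frac{\sqrt{2}\,\pi}{P}\right), \qquad
c_3 = -a_1^{2}, \qquad
c_1 = 1 - c_2 - c_3, \\
\mathit{new\_avg}_t &= c_1 \cdot \mathit{curr\_prob}_t
  + c_2 \cdot \mathit{new\_avg}_{t-1}
  + c_3 \cdot \mathit{new\_avg}_{t-2}.
\end{align*}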

Exponentially Discounted Averaging and Variance

In addition to the Butterworth filter, an exponentially discounted filter has also been implemented in user space Minstrel HT for research purposes. Consider two data rates with the following statistics, as packet counts during the last 't' timesteps: rate1 with 5 attempts and 4 successes, and rate2 with 350 attempts and 280 successes. The success probability of both of these rates is 80%; however, rate2 seems more reliable as it is based on more observations.

This new filter can discount with respect to the number of observations and also with respect to the time of these observations, using two different discounting parameters: α, β ∈ [0,1]. The formula of the Exp. Discounted filter shown below is for incremental calculation and, at time t0, the following values are initialised to 0: μ0 = s0 = W0 = t0 = 0.
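
One common incremental formulation that matches this description, with a per-observation discount α and a per-time-step discount β, is sketched below. This is an illustrative assumption, not necessarily the exact implementation used here:

\begin{align*}
W_t &= \alpha\,\beta^{\Delta t}\,W_{t-1} + 1, \qquad
s_t = \alpha\,\beta^{\Delta t}\,s_{t-1} + x_t, \qquad
q_t = \alpha\,\beta^{\Delta t}\,q_{t-1} + x_t^{2}, \\
\mu_t &= \frac{s_t}{W_t}, \qquad
\sigma_t^{2} = \frac{q_t}{W_t} - \mu_t^{2},
\end{align*}

where x_t is the new success probability sample and Δt is the time elapsed since the previous observation.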

By choosing α and β we can trade off between emphasizing the number of observations and how recent these observations are. The extreme cases are ignoring the number of observations (α = 1) and ignoring the time steps (β = 1). Currently, the value of β is set to 1 in the user space Minstrel HT and the value of α is dynamic: it depends on the number of observations in the current update interval.

2) Changes to the output

Prior to GSoC '22, the output of the user space Minstrel HT was a simple printout of the rate statistics dictionary during every update interval, which wasn't easy to read or interpret. The output has now been changed to match the kernel Minstrel HT debug output.

Human Readable Statistics (rc_stats)

The human-readable rate statistics table shows the average success probability and average throughput for each data rate using the three filters implemented in user space Minstrel HT, namely Exponentially Weighted Moving Average (EWMA), Exponentially Discounted filter, and the Butterworth filter.

Statistics as CSV file (rc_stats_csv)

Along with the human-readable statistics, at the end of every update interval, the user space Minstrel HT also stores the rate table in a CSV format to the ‘rc_stats_csv’ file. The CSV file can then be used to analyze the performance of different filters and also compare the user space Minstrel HT with its kernel counterpart.

The rate table is separated by the delimiter '*' between update intervals. Furthermore, along with the rate table, the CSV format also includes a timestamp at the top to indicate when the rate table was written to the CSV file.

File Structure for the output

By default, the output of Minstrel HT in user space is saved to a "data" folder inside the same directory that contains the Python-WiFi-Manager driver script. Running the manager and Minstrel HT on two access points, for example, results in an output structure inside the "data" folder with a separate set of output files per access point.

3) Compatibility with the updated Python-WiFi-Manager

As described in the first blog post, the user space Minstrel HT relies on the WiFi-Manager to perform rate control functionalities in user space. At the end of June, the WiFi-Manager package was restructured, resulting in Minstrel HT being incompatible with the updated version. As such, the user space Minstrel HT has been reprogrammed for compatibility.

4) Configuration File

The user space Minstrel HT package now includes a configuration script named 'config.py' which can be used to modify the settings of the rate control, such as disabling output, changing properties of Minstrel HT, and the parameters of the different averaging filters.

Additionally, the configuration script lets the user select the filter to use for rate control. The table below describes some of the configuration parameters:

Config Parameter – Description
rate_stats_csv_output – Denotes whether to save rc stats as CSV
rate_stats_output – Denotes whether to save rc stats in a human-readable table format
EWMA_stats – Denotes whether to show EWMA stats in the rate table output
Butterworth_stats – Denotes whether to show Butterworth filter stats in the rate table output
Exp_Disc_stats – Denotes whether to show Exponentially Discounted Averaging and Variance stats in the rate table output
Filter_EWMA – Denotes whether to use EWMA for rate control
Filter_Butterworth – Denotes whether to use the Butterworth filter for rate control
Filter_Exp_Disc – Denotes whether to use Exponentially Discounted Averaging and Variance for rate control

Note: only one of Filter_EWMA, Filter_Butterworth, and Filter_Exp_Disc can be True at a time; the user space Minstrel HT will not execute if this constraint is not fulfilled and will instead hand control back to the kernel Minstrel HT.

5) First analysis of the estimators

After the implementation of the two additional filters described in (1), an experiment was conducted for 10 minutes, and the rate stats output collected in the CSV file was analyzed using the 'seaborn' visualization library in Python. The goal of the experiment was to compare the estimated throughput of the three filters: Exponentially Weighted Moving Average, Exponentially Discounted, and the Butterworth filter.

Description of the Experiment

The experiment was conducted on a custom Banana Pi router, with a MediaTek 7622 WiFi chip to be exact. The user space Minstrel HT was run from a MacBook Pro 13-inch 2020 (base model) over the wireless link. Furthermore, an iperf3 connection was also set up between the MacBook Pro and the custom Banana Pi router for the entirety of the experiment. The experiment yielded rate statistics for a total of 24 data rates.

Results

For conciseness, this sub-section only consists of a few handpicked results from the experiment. The average throughput achieved by each data rate was plotted against time in a line graph, box plot, and scatter plot.

Concluding Thoughts

The first phase of GSoC '22 has been mostly about making the user space Minstrel HT compatible with the updated WiFi-Manager, implementing new filters/estimators, and also making the output more readable and convenient for analysis. Furthermore, the user space Minstrel HT has gained a configuration script allowing users to customize the rate control with the desired settings. The first phase ended with a first analysis of the estimated throughput using the three different filters: Exponentially Weighted Moving Average (EWMA), Exponentially Discounted, and Butterworth.

The second phase of GSoC '22 will mostly entail research and analysis of the user space Minstrel HT. Along with that, the user space Minstrel HT will be further extended with functionalities from the kernel variant, such as calculating the number of retransmissions, the random sample table, and reducing the number of spatial streams.

Thanks for reading!

Try LibreMesh without having a router-GSoC 2022

Introduction

Hello everyone, my name is Irina and in this edition of GSoC 2022 I am working on the project "Try LibreMesh without having a router" with Altermundi. I will start this post by explaining what LibreMesh is, some of the obstacles that appeared when we were testing it, some of the solutions we applied to those problems, the knowledge acquired during community bonding, and the objectives for the next weeks.

What is LibreMesh?

LibreMesh is a framework for creating OpenWrt-based firmware for wireless mesh nodes. LibreMesh works in a decentralized way and is used as a base for community networks. These mesh networks allow the connected nodes to route each other’s traffic.

Problems when testing LibreMesh

In the first steps, it was necessary to install QEMU and download the LibreRouterOS images and their dependencies. LibreRouterOS has an automatic configuration that makes it possible to run virtual mesh networks on any computer. The LibreMesh test was then based on the following steps:

1. Run a node: this was done without major issues.

However, we discovered a bug while running the node: when you close the virtual node, its network interface isn't removed from the host's network configuration. This is because the script that shuts the node down does not remove that network interface.

2. Set up a node with internet access: for this it was necessary to install dnsmasq. There were some inconveniences here, since giving this access required a port that was occupied by a dnsmasq process and systemd-resolved, so it was necessary to free that port by killing those processes. By doing that, the host loses its DNS capabilities, so we're planning to solve this bug by allowing the user to give the node internet access without killing those processes.

3. Set up a cloud of nodes: it was necessary to install ansible and clusterssh.

When running the node cloud, it threw an error because it couldn't find the location where the image files should be. To solve this, we modified some lines of the qemu_cloud_start.yml script:

  • rootfs: files/generic-rootfs.tar.gz
  • ramfs: files/ramfs.bzImage

adding the prefix "./" to each one:

  • rootfs: ./files/generic-rootfs.tar.gz
  • ramfs: ./files/ramfs.bzImage

In the same script it was also necessary to change a few more lines in order to use the LibreRouterOS images.

Another problem that appeared was that the script contained calls to ifconfig:

def ifconfig(self, cmd):
    return self.module.run_command(['ifconfig'] + cmd)

Since ifconfig is not installed by default in Ubuntu versions later than 18.04, we had to install it with these commands:

  • sudo apt-get update
  • sudo apt-get install net-tools

Period Community bonding

During this period I had my first contact with my mentor Germán Ferrero and was accompanied by my advisor Tomás Assenza, who helped me understand and solve the different problems mentioned above.

Each meeting has been very effective and useful, since I was able to familiarize myself with and learn about virtualization. The goals to be met in the following weeks were set out very clearly in the plan, so that all the goals can be achieved by the end of the project.

That is why I appreciate the kindness, willingness, support, and pleasant help of both of them.

Goals for next weeks

– Explore different virtualization and containerization technologies such as Linux Containers, Docker, and VirtualBox.

– Run a node on each of these tools in order to compare which of them turns out to be the easiest for this purpose.

Minstrel TX Rate Control in User space – GSoC ’22

Introduction

Hi everyone! I'm Prashiddha. I have recently graduated from Jacobs University Bremen with a BSc in Computer Science and minors in Robotics and in Global Economics and Management. For the past year, I have been involved in the research and development of open-source software at SupraCoNeX, primarily focusing on facilitating rate control in user space, which will soon be public.

For GSoC’22, I’ll be working on implementing and testing Minstrel HT, the default WiFi rate control for Linux-based OpenWRT OS, in user space. This first blog post intends to cover details on the necessary background to understand the project and its implementation.

What is WiFi Rate Control?

A typical WiFi network consists of at least a sender and a receiver that communicate through the propagation of radio frequencies within the license-free ISM band. The radio waves carry information in binary as an encoding, and the sender devices can choose from several modulation and transmission parameters such as coding rate, bandwidth, and guard interval. The choice of a transmission scheme between the WiFi devices determines the theoretical network throughput or data rate. A metric called the Modulation Coding Scheme (MCS) Index has been defined to help better understand the WiFi data rates and the RF environment of the network. The MCS index is based on the parameters of the transmission schemes mentioned above.

With newer IEEE 802.11 standards such as IEEE 802.11ax, there are hundreds of available MCS rates for transmission. At first glance, it may seem like maximum data transmission could be easily attained using only those rates which yield the highest theoretical throughput. However, the modulations which achieve high data rates only work best when the link between the WiFi devices is robust. Furthermore, compared to wired-based communication, the wireless communication channel demonstrates higher dynamics and is prone to interference, especially if multiple wireless devices share the medium uncoordinated. As such, the performance of WiFi networks is far from optimal, and there have been significant efforts to develop WiFi rate control algorithms that dynamically adapt transmission data rates in response to the varying wireless channel conditions.

Motivation

In Linux-based OpenWrt WiFi devices, the mac80211 subsystem in kernel space is responsible for rate control. This includes the implementation of rate control algorithms like Minstrel HT in the kernel space. The kernel space provides full access to the device's hardware and memory; development of modules is hence subject to the risk of complete system failure due to bugs or failures in particular modules and submodules. Additionally, development in kernel space is restricted to the use of integer operations. Due to the instability and risk involved in accessing the floating-point unit, floating-point operations are avoided in kernel space. Lastly, the capabilities for prototyping and debugging required for research and development are highly restricted in this space.

Given the limitations and lack of ease of development in kernel space, the need for a user space rate control algorithm is apparent. With this, my GSoC’22 project will focus on implementing a user space variant of Minstrel HT with experiments designed to compare the performance with its kernel space counterpart.

Deliverables of the project

The end goals of my GSoC ’22 projects are as follows:

  • Software Architecture of the user space Minstrel HT implementation in Python.
  • Proper Documentation and Guide on working with the Minstrel HT package.
  • Ready to run demo script to showcase the potential of user space Minstrel HT.
  • Detailed analysis of WiFi rate control experiments for performance comparison between kernel and user space Minstrel HT.

What’s been already done?

Prior to GSoC ’22, I had already been heavily involved in the implementation of Python-WiFi-Manager, which acts as an intermediary to provide necessary information and functionalities for WiFi rate control in user space. The WiFi-Manager package is still under development and will soon be released as an open-source OpenWRT package. Furthermore, as a part of my bachelor thesis, I’ve already implemented a working version of the user space Minstrel HT in Python using WiFi-Manager.

Since the rate control algorithm in user space needs to be designed such that it can work on multiple access points simultaneously, initially, multiple experiments were conducted to evaluate various methods of parallelizing rate control from the WiFi-Manager, namely async task, thread pool, and process pool. The results have indicated async task to be the best scheme for parallelizing rate control from WiFi-Manager.

With this, the user space Minstrel HT in Python has been developed to be executed as an async task, and the first results seem to indicate comparable performance with its kernel counterpart. However, the first experiments were far from foolproof, and better-designed experiments are required to yield a concrete result.

What’s next?

The user space Minstrel HT is not complete and still requires implementation of additional features in order to be identical to the kernel Minstrel HT in terms of functionality.

  • Changing the output of user space Minstrel HT to a live printout of the rate statistics table.
  • Adding option to store the output in a CSV file to aid in comparison with the kernel Minstrel HT.
  • Extending user space Minstrel HT with functionalities from the kernel variant such as calculating the number of retransmissions, the random sample table, and reducing the number of spatial streams.
  • Adding a new estimator called Butterworth Filter which is currently used by the kernel Minstrel HT.
  • Modifying user space Minstrel HT to accommodate for various WiFi rate control experiments if needed.

Concluding Thoughts

With the start of the GSoC '22 coding period, I'll begin by modifying the output of user space Minstrel HT to a printout of live rate statistics, along with the option to save it in a CSV file, identical to the kernel Minstrel HT. This will be followed by extending the user space Minstrel HT with the remaining functionalities from the kernel variant. In the end, the project will mainly focus on experiments for performance analysis of Minstrel HT in user space.

With this, I'd like to conclude the first blog on the user space Minstrel HT GSoC '22 project with freifunk. Please feel free to reach out and connect with me. 🙂

Thanks for reading!