GSoC ’23: Final Report on Joint Power and Rate Control in Userspace

Hello, everyone! In this concluding blog post, I’m excited to highlight the achievements in the field of Joint Power and Rate Control in User Space, and also offer insights into the future of this research and development endeavor. If this is your first encounter with my work, I strongly encourage you to explore the introductory blog posts from GSoC ’22 and GSoC ’23, as they provide a comprehensive overview of resource allocation in IEEE 802.11 networks.

GSoC ’22 blog posts: Introduction Mid-Term Final

GSoC ’23 blog posts: Introduction Mid-Term

Passive-Minstrel-HT in user space

After the first half of the GSoC ’23 coding timeline, I made further modifications to the passive user space Minstrel-HT so that it is more robust and also compatible with the new WPCA API. In my mid-term report, I conducted passive measurements on a link between a BananaPi with an MT7615 chip as the Access Point (AP) and a Xiaomi Redmi 4A Gigabit Edition with an MT7621 chip as the Station (STA). However, it’s worth noting that this setup did not support aggregation in the transmission frames.

Since the kernel Minstrel-HT also considers the real-time Aggregate MAC Protocol Data Unit (AMPDU) length to calculate the estimated throughput, it was crucial to compare the behaviour of the user space Minstrel-HT (Py-Minstrel-HT) with its kernel counterpart in an aggregation context. To do this, the experiment setup was changed to involve two identical TP-Link WDR4900 routers, both equipped with ATH9K chips. One router operated as an Access Point (AP), and the other served as a Station (STA).

Initially, the rate selection between kernel Minstrel-HT and Py-Minstrel-HT for the pure ATH9K link with frame aggregation showed a significantly higher disparity than previous experiments on the MT76 link, where errors were consistently below 2%. Upon closer examination, it became evident that during the refactoring of Python-WiFi-Manager and Py-Minstrel-HT, a change in how the AMPDU length was calculated had been introduced, resulting in incorrect values.

Furthermore, upon observing a consistently higher error rate associated with the maximum probability rate annotated at the end of the Multi-Rate Retry chain (MRR), I discovered a peculiarity in the kernel Minstrel-HT code. Specifically, when encountering initial attempt statistics for a rate that wasn’t used, the code assigned a previous average success probability of 0. This behavior occurred exclusively in situations where data rates had inherited their success probabilities from higher rates within the same group. It remains uncertain whether this aspect of the code logic was intentional or not.

To enhance the robustness of the measurements, I made adjustments to the execution of Passive Py-Minstrel-HT. Now, before factoring in transmission data, it follows a new sequence. Initially, it explicitly sets the lowest supported data rate, then resets the kernel rate statistics, and subsequently waits until the effect of the kernel rate statistics becomes evident in the API output trace. This revised approach ensures strict alignment between the transmission statistics in user space and those in kernel space right from the outset. As a result of these modifications, the rate selection between the kernel and user space Minstrel-HT is now identical.
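
To make that sequence concrete, here is a minimal sketch of the startup alignment. All names (set_rate, reset_kernel_stats, wait_for_stats_reset) are placeholders for illustration, not the actual WiFi-Manager API:

# Illustrative startup alignment for the passive Py-Minstrel-HT (sketch only;
# the function names are hypothetical)
def align_with_kernel(ap, sta):
    lowest = sta.supported_rates[0]
    ap.set_rate(sta, lowest)          # 1. pin the lowest supported rate
    ap.reset_kernel_stats(sta)        # 2. zero the kernel rate statistics
    ap.wait_for_stats_reset(sta)      # 3. block until the reset is visible
                                      #    in the API output trace
    # only now start consuming transmission data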

In the following section, I present two comparison experiments conducted between kernel Minstrel-HT and Py-Minstrel-HT. The results are presented in terms of the number of errors, i.e. discrepancies in rate selection between the kernel and user space Minstrel-HT, and the percentage of errors observed at each Multi-Rate Retry (MRR) stage. The MRR stages, numbered from 0 to 4, are populated with rates ranked in descending order of their estimated throughput, with the final stage (4) reserved for the maximum probability rate to enhance robustness.

Experiment 1

MRR Setting info:
0 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
1 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
2 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
3 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
4 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}

Experiment 2

MRR Setting info:
0 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
1 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
2 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
3 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
4 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}

Given the congruent behavior observed between the user space and kernel space Minstrel-HT in terms of statistics collection and rate selection, this sets a robust foundation for expanding the capabilities of the user space Minstrel-HT with power control. This extension will allow for a performance comparison between the user space Minstrel-HT and kernel Minstrel-HT, offering valuable insights into their respective impacts.

Extending Minstrel-HT with Power Control (Joint Controller)

In this section, I will detail the implemented joint power and rate controller, which draws its inspiration from Minstrel-Blues, a framework originally developed by Prof. Thomas Hühn in 2011. The joint controller has demonstrated its effectiveness by notably reducing interference and enhancing spatial reuse, particularly in scenarios involving multiple access points that utilize the joint controller. The subsequent sub-sections will present the algorithm in a format mirroring the structure of the proposed joint controller, facilitating a straightforward comparison between the two.

Configurable Parameters

The parameters listed in the table are the precise variable names used when specifying the rate control options (rc_opts) to Py-Minstrel-HT via WiFi-Manager.

Initialisation

During initialisation, the lowest supported rates are set using the reference power, unless Py-Minstrel-HT is executed with fixed power mode. Given that the implementation already incorporates a reference power, the ceiling mode has been eliminated from the extended controller.

Updating Rate Statistics

During each update interval, the extended Minstrel-HT algorithm updates the statistics for all the rates and power levels currently in use. Moreover, it also updates the success probability for unused rates at each power level, provided that the rate group contains at least one higher rate with attempt statistics from the same group and the same power level.

Following the statistics update, the selection of the reference power and sample power for all rates is adjusted based on their success probabilities and the constraints specified in the rc_opts parameters. If the success probability at the sample power is within dec_prob_tol of that at the reference power, the sample power is decreased by pwr_dec. On the other hand, if the success probability at the sample power falls below that of the reference power by more than inc_prob_tol, the sample power is increased by pwr_inc to increase throughput. The reference power is updated in the same way, but with the tolerances based on a 100% (1.0) success probability.

For example, if the success probability of the reference power is 0.95, the dec_prob_tol is 0.1, and the pwr_dec is 1, then the reference power will decrease by 1 dB because the probability of the reference power is greater than (1.0 - 0.1) = 0.9. Finally, the best power level for all the rates is set to sample_power + opt_pwr_offset.
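
As a rough sketch of this update rule (my own illustration, using the rc_opts names above; prob() stands in for the estimated success probability lookup):

# Per-rate power update, run once per update interval (illustrative sketch)
def update_power_levels(ref_pwr, smpl_pwr, prob, opts):
    # sample power: judged against the reference power's success probability
    if prob(smpl_pwr) >= prob(ref_pwr) - opts['dec_prob_tol']:
        smpl_pwr -= opts['pwr_dec']    # good enough: probe a lower power
    elif prob(smpl_pwr) < prob(ref_pwr) - opts['inc_prob_tol']:
        smpl_pwr += opts['pwr_inc']    # too lossy: raise power for throughput
    # reference power: same rule, but judged against 100% (1.0) success
    if prob(ref_pwr) >= 1.0 - opts['dec_prob_tol']:
        ref_pwr -= opts['pwr_dec']
    elif prob(ref_pwr) < 1.0 - opts['inc_prob_tol']:
        ref_pwr += opts['pwr_inc']
    best_pwr = smpl_pwr + opts['opt_pwr_offset']
    return ref_pwr, smpl_pwr, best_pwr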

Rate and Power Sampling

The rate sampling process remains unchanged, as it continues to be carried out by the Minstrel-HT algorithm just as it was before the extension. However, there is a modification in that the power annotation for the sampled rates is now set to the reference power. Furthermore, the algorithm has been expanded to include power sampling, which cycles through the stages of the Multi-Rate Retry (MRR) chain in turn.

Selecting Best Rates for the MRR chain

In order to select the best rates, the algorithm defines a linear utility function that exposes the trade-off between the throughput (benefit) and the interference to other transmissions (cost) of each rate.

The benefit of a rate is defined based on its estimated throughput relative to the current maximal throughput.

The cost factor serves as a representation of the interference costs associated with achieving a specific throughput at a given power level. In essence, this interference cost depends on two key factors: the extent of interference coverage and the duration of the interference event. However, due to the inherent complexity of modeling such effects, the following definition opts for a simplified approach, approximating the interference area solely based on power(rate_i). Similarly, the duration of interference is approximated as 1/throughput(rate_i).
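
Putting the two parts together, a plausible form of the per-rate utility (my reconstruction following the Minstrel-Blues formulation, with w denoting the utility weight factor used in the experiments below) is:

\[
u(r_i) \;=\; \underbrace{\frac{tp(r_i)}{tp_{\max}}}_{\text{benefit}} \;-\; w \cdot \underbrace{\frac{power(r_i)}{tp(r_i)}}_{\text{cost}}
\]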

Using the utility function, the extended joint controller selects the best rates which have the highest utility, contrary to the Minstrel-HT algorithm which only considers the estimated throughput.

Analysing Performance of the Joint Controller

In this section, I will demonstrate the performance of the joint controller in comparison to the kernel Minstrel-HT and the Py-Minstrel-HT without power control extension.

Experiment Description

The experiments consisted of two variations based on the measurement tool:

Iperf3 and tcpdump

The traffic for the measurement was generated using iperf3, with the server hosted on the STA and the client on the AP. Consequently, the traffic flow traversed from the AP to the STA, aligning with the experiment’s objective of running resource controllers on the AP. Furthermore, the network traffic was monitored using tcpdump with a snapshot length of 150 bytes per packet to limit the amount of payload data captured. This data was then stored in a pcap file, which we subsequently parsed to extract the achieved throughput values.
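
For illustration, extracting per-second throughput from such a capture can be done along the following lines. This is only a sketch assuming scapy is available; the actual parsing script may differ:

# Compute per-second throughput (Mbit/s) from a tcpdump capture. Because the
# capture used a 150-byte snap length, we prefer the original on-wire length
# (wirelen) over the truncated capture length.
from collections import defaultdict
from scapy.all import rdpcap

def throughput_per_second(pcap_file):
    bytes_per_bin = defaultdict(int)
    for pkt in rdpcap(pcap_file):
        # wirelen holds the original packet length from the pcap record
        bytes_per_bin[int(pkt.time)] += getattr(pkt, "wirelen", len(pkt))
    return {t: n * 8 / 1e6 for t, n in sorted(bytes_per_bin.items())}

if __name__ == "__main__":
    for t, mbps in throughput_per_second("experiment.pcap").items():
        print(t, f"{mbps:.2f} Mbit/s")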

The experiment setup involved two TP-Link WDR4900 routers, both featuring ATH9K chips. One router operated as an Access Point (AP), while the other functioned as a Station (STA).

Flent

Flent is a tool that serves as a wrapper around netperf and similar utilities to execute predefined tests, aggregate the outcomes, and generate plots. It also retains the raw data, providing users the opportunity to analyze and visualize the data, alongside the automated plots. The measurement traffic was generated using flent, with the server hosted on the AP and the client on the STA. However, the traffic direction was inverted using the "--swap-up-down" option. Notably, Flent can measure packet latency, a capability unavailable with iperf3.

The experiment setup involved a TP-Link WDR4900 router as an Access Point (AP) with a Macbook Pro 13′ 2017 as the Station (STA).

Results

The results have revealed a rather surprising trend: the joint controller consistently delivers improved throughput, even in single-link experiments. When compared to the kernel Minstrel-HT and Py-Minstrel-HT without the power control extension, the joint controller demonstrates a noteworthy throughput gain in UDP experiments, consistently falling within the range of 10-25%. These results highlight the joint controller’s capability to improve aggregation and dynamically select power levels.

Additionally, the results showcase various runs of the joint controller, each with a utility weight factor set to 1, 10, and 100. It is worth noting that the experiment using Flent utilised TCP traffic.

Iperf3 and tcpdump
Flent

Conclusion and Outlook

The analysis of the joint controller shows immense promise, and I’m excited about the prospect of continuing to run and test it with multiple interfering routers even beyond the conclusion of GSoC ’23. Regrettably, newer WiFi chips are becoming increasingly closed-source, limiting access to information and functionality related to resource control for the general public. Fortunately, these newer chips still support power control and provide real-time estimates of throughput. As a result, our plan is to develop an independent power controller after completing Minstrel-Blues in the user space.

GSoC’23 (Final Report) : Qaul Matrix Bridge Tutorial

I am happy to share the completion of the Google Summer of Code program for 2023. In this post, you will find out how you can use my project to interconnect the Qaul and Matrix chat applications.

I would recommend reading about the project via my previous blog posts.

Requirements

You should have an account on Matrix that can act as the bridge bot.

For more secure communication, you can opt to run your own Matrix homeserver, but that is not necessary, since our bridge works well on the default Matrix server too.

You should have the bridge binary, either built from source or as a distributed package.

We are still working on packaging the binary for end users; once that is done, you can simply run our binary in place of cargo run --bin qaul-matrix-bridge.

Initialization and Configuration

As of now, we support the bridge only as a daemon process binary, without any control over it via a CLI or GUI. All the logic is integrated on top of matrix-sdk and ruma on the Matrix side, and qaul’s libqaul on the other.

On the server where you wish to run the bridge for a local qaul network, you need only one node running the binary; the rest will follow along the way.

After installing the qaul project, you can run:

cargo run --bin qaul-matrix-bridge {homeserver-url} {bot-matrix-id} {bot-account-password}

- homeserver-url : The URL of the Matrix homeserver. Default can be [https://matrix.org]
- bot-matrix-id : The user account of the bot on Matrix. E.g. for @qaul-bot:matrix.org, the id is [qaul-bot]
- bot-account-password : The password for your bot account on Matrix

Inviting the Bridge to Matrix Room

Once the bridge is up and running, go to your own Matrix account and create a new room. Please make sure to turn off any encryption. Now invite the bot to your Matrix room, and it will automatically join.

1. Create a matrix room and disable end to end encryption

2. Invite the bot account into the room

Navigating through Matrix Menu

We have multiple menu options available in our Matrix Menu. You can see the list of all the possible functionalities with !help command.

!help

!qaul

!users

Here, the users on the local qaul peer network are displayed. Qaul Matrix Bridge Bot is the default bridge user unless another user is chosen. The random strings you can see alongside each name are the PeerIDs of the users on the qaul network.

!invite

!group-info

!remove

Not just messages πŸŽ‰ but exchange files too

Closing Notes

I am glad to share the prototype of the bridge. It was a great experience learning from the different Matrix specifications and their protocol design. In the future, we are looking forward to incorporating the Qaul menu, distributing the binaries over various channels, styling the bot responses via emotes, and a lot more.

A special thanks to my mentor MathJud, who helped with serious problems whenever we faced them; from him I learned very important principles of designing a chat application. He also introduced me to the Matrix Camp at Chaos Communication Camp, where we presented the prototype (a bit buggy at that time) to people who are working on the Matrix protocol.

I would also encourage people to come and contribute to the Qaul project, because it serves an important use case: allowing communication during Internet blackouts. Developing a solution that stays hidden from governments will help sustain openness of speech and freedom.

GSoC’23: Implementation of Web Interface of Retroshare – Final Report

Hello again folks πŸ‘‹,
My journey in Google Summer of Code has come to an end, and I am excited to share all the things I have accomplished during this program and the progress done in the Implementation of Web Interface of Retroshare.

Project Description

About Retroshare

Retroshare provides a decentralized, encrypted connection with maximum security between nodes, where they can chat, share files, mail, etc. It uses GXS (Generic eXchange System), which provides asynchronous distribution, authentication, privacy, and security of generic data. It is designed to provide maximum security and anonymity to its users beyond direct friends. Likewise, it is entirely free and open-source software.

Retroshare is a C++ software program built around a headless library called libretroshare. This library is used to build a headless server (retroshare-service), a standalone app with a user interface built using Qt, an Android client, and more.

Project Goal

The main goal of my project was Implementation of Web Interface of Retroshare in which I had to improve the Web UI and add missing features from the Qt counterpart of Retroshare. The milestones which I had to achieve during GSoC are described below.

Milestones Achieved

File Section

  • Implemented the File Search feature using a JavaScript Proxy, which had been pending for a long time.
  • Implemented a feature to view which chunks are being downloaded in the progress bar and the state of each chunk. Also fixed the download action buttons and implemented a feature to manage the chunk strategy for a file being downloaded.
  • Implemented the Share Manager feature, which lets you add and manage shared directories and manage different levels of access and permissions for each directory. It is in progress and will be complete soon.

Config Section

  • Implemented the mail config panel to create and manage all the mail tags.
  • Implemented features to set Dynamic DNS and configure the Tor/I2P SOCKS proxy from the Web UI, and fixed NAT and some other existing options in the network config section.

Mail Section

  • Improved the mail composer and made it reusable, helping the user search for and select nodes (contacts), and improved the UI of the composer.
  • Implemented a feature to reply to any mail from the Web UI by reusing the mail composer.
  • Implemented a feature to view all attachments in one place and also to view the attachments sent in a mail.
  • Fixed the mail view and the order of mails in all mail sections.

Forums Section

  • Implemented caching in the Forums section to reduce numerous repeated API calls by reusing already fetched data, invalidating the cache, and refetching the data when needed. This is also a WIP.
  • Fixed the bug which was causing infinite calls to the same endpoint, freezing or crashing the Web UI.

I also improved the whole User Interface of the Web UI and made it more user-friendly. Furthermore, I refactored the directory structure, tidied up the repository and optimized the build.

You can view all of my contributions at once – GitHub Repo.

Other Contributions

Apart from the contributions to the Web UI, I also contributed to libretroshare, where I

  • Added code to generate the jsonapi endpoints for the file section and other sections of the Web UI.
  • Implemented the getChunkStrategy() endpoint in libretroshare in C++ and added more code to generate other API endpoints.
  • Fixed the MIME type for content-type in the HTTP header and added routes for the new directory structure.

What’s Next?

The Web Interface of Retroshare has improved a great deal since I undertook this project, and it’s almost ready for release in the next release cycle of Retroshare. Still, there is a lot of room for improvement in the Web UI, and I will keep adding more features to it.
I will complete all the features which are currently in progress, and even after GSoC I would love to keep contributing to and improving this truly wonderful open-source project created solely for the welfare of society.

Wrapping Up

My journey in GSoC with Retroshare has been an incredible learning experience for me. I got to learn so much, all thanks to my mentor Cyril Soler, M. Saud, and fellow community members. GSoC has bridged the gap between aspiring open source contributors and industry-level experts and has enabled folks like me to gain experience by working on enterprise-level projects under the guidance of wonderful people.

This summer has turned out to be a pivotal point in my career and has boosted my confidence and helped me to strengthen my arsenal by learning new skills, gaining hands-on experience and improving my programming skills.

As I keep moving forward in my coding journey after this project, I’m excited to utilize the knowledge and experience I’ve gained here in future projects. I plan to stay engaged in this community after GSoC, and my aim is to continue contributing actively to the project’s growth and achievements.

GSoC’23: Automation tools for LibreMesh firmware build and monitoring – final

Previous post: https://blog.freifunk.net/2023/07/08/gsoc23-automation-tools-for-libremesh-firmware-build-and-monitoring-part-2/

Project results

These are the repositories with the produced code; the first is the main one:

https://gitlab.com/a-gave/libremesh-ansible-playbooks

https://gitlab.com/a-gave/libremesh-ansible-collection

https://gitlab.com/a-gave/ansible_openwrt_buildroot

Playbooks and roles to build releases

In this final part, besides improving the code, I also used it extensively to prepare a list of firmware images for the new releases of LibreMesh: v2020.3, based on OpenWrt 19.07.10, and the upcoming release v2023.1-rc1, which supports the latest OpenWrt 22.03.5.

I extended and tested the automation tools to build for all devices of a defined target/subtarget, to produce the precompiled firmwares for the latest releases of LibreMesh. The list of targets/subtargets is based on advice from one of the recent LibreMesh meetings about the most used architectures, which largely coincide with those covering many low-cost devices.

Building for all devices meant encountering these kinds of issues:

  • the `default` set of lime-packages doesn’t fit in the factory and/or the sysupgrade image of a device, and this causes a build failure.
  • as above, but the device fails silently without interrupting the build
  • multicore/parallel compiling randomly fails

For the first problem, the OpenWrt Buildroot has no mechanism to predict the resulting firmware size, e.g. based on the list of selected packages. This seems related to the compression tools used to produce the firmware, which can change depending on the device, and leads to successful builds for one device with an image size of 7168k while another with the same image size fails. This means it is not possible to predict whether the produced LibreMesh firmware will fit in the device memory, and the devices that cause a build failure have to be identified individually.

I took note of the devices that cause a build failure for the selected list of targets/subtargets for the mentioned OpenWrt and LibreMesh releases using the default set of LibreMesh packages, firstly providing a mechanism to exclude the devices that throw an error. This doesn’t mean that the other devices are somehow ‘supported’, but simply that the build doesn’t fail. Since this work of mapping all devices is still in development, it is available in the dev branch of the collection of roles.

Since LibreMesh support for OpenWrt 22.03.5 is still in testing, and the amount of time and space needed to rebuild all the images for the selected architectures can be considerable, I put two mechanisms in place:

  • a list of `supported_devices`, which can be used to rebuild only a subset of devices, ideally those owned by people/communities who can test them.
  • a set of Docker images for different OpenWrt targets/subtargets that build with the default packages of LibreMesh, which I briefly explain below.

Dockerized buildroot for each target/subtarget

There is a set of Dockerfiles that I’m including in the set of Ansible playbooks/roles to speed up the build process and to save space across different LibreMesh releases based on the same OpenWrt release.

https://github.com/a-gave/libremesh_openwrt_buildroot_docker

It is designed to allow rebuilding different versions of LibreMesh in separate environments while avoiding rebuilding the same OpenWrt tools and toolchain multiple times. For instance, if building LibreMesh for all devices of the OpenWrt target ath79/generic takes 1 hour and 28.8GB of space, a subsequent build with minor changes will take around 30 minutes and increase the size of the produced docker image by only 15GB. Similarly, a docker image with pre-compiled tools and toolchain, a pre-extracted kernel, and pre-compiled kmods and packages, built without specifying a device but only the target/subtarget, takes 11.2GB of space but then allows building an image with the pre-selected set of LibreMesh suggested packages within 4 minutes. This time also depends on the number of additional packages selected and on the computing resources available to the builder machine.

Other contributions

In this period I also contributed to the LibreMesh project, providing metrics for this analysis of the effect of changing the default distance for long wireless links.

https://github.com/ilario/wifi-distance-setting-exploration

This is a kind of regression for outdoor devices, still unfixed due to two facts:

  • neither OpenWrt nor LibreMesh has an easy way to determine whether a device is manufactured for indoor use (typically a router) or outdoor use (an antenna).
  • to improve the performance of routers, LibreMesh chose to lower the default distance, which disadvantages the antennas: they are more likely to end up with a broken wireless link (as if a cable had been cut) if the device is accidentally reset, so they require a custom build with different defaults.

Conclusion

Thanks to Ilario and Stefca for having been my mentors, to the Freifunk and LibreMesh organizations that made this work possible, and to all the folks of OpenWrt and Gluon who contribute to free and open source networks.

GSoC ’23: Migrating luci-app-mjpg-streamer to JavaScript: A Comprehensive Guide

The latest OpenWrt versions introduced a new web interface system that eliminates the need for Lua. Instead, the client’s browser handles the rendering and computation, allowing routers to focus on their primary tasks. This change has the advantage of eliminating the Lua runtime, saving storage space, and making routers faster. In the previous CBI-based system, pages were rendered on the router and sent as HTML to the browser, increasing the load on the router; this inefficiency could result in performance problems. To aid in this transition, LuCI offers the LuCI-JavaScript API, which is now utilized for constructing web interfaces.

luci-app-mjpg-streamer

I have successfully migrated luci-app-mjpg-streamer to JavaScript, making it a valuable example for building or migrating LuCI apps. This tutorial covers the essential aspects of the process, providing a comprehensive guide.

Below is the tree view representation of the directory structure for the app:

.
├── Makefile
├── htdocs
│   └── luci-static
│       └── resources
│           └── view
│               └── mjpg-streamer
│                   └── mjpg-streamer.js
├── po
│   ├── ar
│   │   └── mjpg-streamer.po
│   ├── ...
└── root
    └── usr
        └── share
            ├── luci
            │   └── menu.d
            │       └── luci-app-mjpg-streamer.json
            └── rpcd
                └── acl.d
                    └── luci-app-mjpg-streamer.json

How to migrate your app:

ACLs

In the file root/usr/share/rpcd/acl.d/luci-app-mjpg-streamer.json, we provide all the necessary access permissions for our application to function properly.

{
	"luci-app-mjpg-streamer": {
		"description": "Grant UCI access for luci-app-mjpg-streamer",
		"read": {
			"uci": [
				"mjpg-streamer"
			]
		},
		"write": {
			"uci": [
				"mjpg-streamer"
			]
		}
	}
}

For example, in my previously migrated app: luci-app-olsr, when there is a need to grant public access to specific pages, we ensure that all essential access permissions are appropriately configured in root/usr/share/rpcd/acl.d/luci-app-olsr-unauthenticated.json.

These permissions are necessary for the operation of our application when a user is not yet authenticated:

{
	"unauthenticated": {
		"description": "Grant read access",
		"read": {
			"ubus": {
				"uci": ["get"],
				"luci-rpc": ["*"],
				"network.interface": ["dump"],
				"network": ["get_proto_handlers"],
				"olsrd": ["olsrd_jsoninfo"],
				"olsrd6": ["olsrd_jsoninfo"],
				"olsrinfo": ["getjsondata", "hasipip", "hosts"],
				"file": ["read"],
				"iwinfo": ["assoclist"]

			},
			"uci": ["luci_olsr", "olsrd", "olsrd6", "network", "network.interface"]
		}
	}
}

To learn more about how ACL (Access Control List) works, you can refer to this resource: OpenWRT’s docs. It is important to consider applying the principle of least privilege when configuring ACLs.

MENU

In the file root/usr/share/luci/menu.d/luci-app-mjpg-streamer.json, we define the location where our view will be displayed in the admin menu. This is utilized for admin-specific views.

{
	"admin/services/mjpg-streamer": {
		"title": "MJPG-streamer",
		"action": {
			"type": "view",
			"path": "mjpg-streamer/mjpg-streamer"
		},
		"depends": {
			"acl": [
				"luci-app-mjpg-streamer"
			],
			"uci": {
				"mjpg-streamer": true
			}
		}
	}
}

The path indicates the location of the JavaScript view to be rendered, relative to the htdocs/luci-static/resources/view directory.

FORMS

To explore the JavaScript APIs offered by LuCI, you can visit the following link: LuCI client side API documentation. A recommended starting point is the core luci.js class.

LuCI forms allow you to create UCI or JSON-backed configuration forms. To create a typical form, you start by creating an instance of either LuCI.form.Map or LuCI.form.JSONMap using new. Then, you can add sections and options to the form instance. Finally, invoking the render() method on the instance generates the HTML markup and inserts it into the Document Object Model (DOM). For a better understanding of how LuCI forms work, you can refer to the following: LuCI.form.

This is an example demonstrating the usage of LuCI.form within one of the admin’s views, using a small portion of the mjpg-streamer.js code. The full code for the file can be found here.

'use strict';
'require view';
'require form';
'require uci';
'require ui';
'require poll';

/* Copyright 2014 Roger D < rogerdammit@gmail.com>
Licensed to the public under the Apache License 2.0. */

return view.extend({
	load: function () {
		var self = this;
		poll.add(function () {
			self.render();
		}, 5);

		document
			.querySelector('head')
			.appendChild(
				E('style', { type: 'text/css' }, [
					'.img-preview {display: inline-block !important;height: auto;width: 640px;padding: 4px;line-height: 1.428571429;background-color: #fff;border: 1px solid #ddd;border-radius: 4px;-webkit-transition: all .2s ease-in-out;transition: all .2s ease-in-out;margin-bottom: 5px;display: none;}',
				]),
			);

		return Promise.all([uci.load('mjpg-streamer')]);
	},
	render: function () {
		var m, s, o;

		m = new form.Map('mjpg-streamer', 'MJPG-streamer', _('mjpg streamer is a streaming application for Linux-UVC compatible webcams'));

		//General settings

		var section_gen = m.section(form.TypedSection, 'mjpg-streamer', _('General'));
		section_gen.addremove = false;
		section_gen.anonymous = true;

		var enabled = section_gen.option(form.Flag, 'enabled', _('Enabled'), _('Enable MJPG-streamer'));

		var input = section_gen.option(form.ListValue, 'input', _('Input plugin'));
		input.depends('enabled', '1');
		input.value('uvc', 'UVC');
		// input: value("file", "File")
		input.optional = false;

		var output = section_gen.option(form.ListValue, 'output', _('Output plugin'));
		output.depends('enabled', '1');
		output.value('http', 'HTTP');
		output.value('file', 'File');
		output.optional = false;

		//Plugin settings

		s = m.section(form.TypedSection, 'mjpg-streamer', _('Plugin settings'));
		s.addremove = false;
		s.anonymous = true;

		s.tab('output_http', _('HTTP output'));
		s.tab('output_file', _('File output'));
		s.tab('input_uvc', _('UVC input'));
		// s: tab("input_file", _("File input"))

		// Input UVC settings

		var this_tab = 'input_uvc';

		var device = s.taboption(this_tab, form.Value, 'device', _('Device'));
		device.default = '/dev/video0';
		//device.datatype = "device"
		device.value('/dev/video0', '/dev/video0');
		device.value('/dev/video1', '/dev/video1');
		device.value('/dev/video2', '/dev/video2');
		device.optional = false;

                  //... This snippet represents only a small portion of the complete code.

		var ringbuffer = s.taboption(this_tab, form.Value, 'ringbuffer', _('Ring buffer size'), _('Max. number of pictures to hold'));
		ringbuffer.placeholder = '10';
		ringbuffer.datatype = 'uinteger';

		var exceed = s.taboption(this_tab, form.Value, 'exceed', _('Exceed'), _('Allow ringbuffer to exceed limit by this amount'));
		exceed.datatype = 'uinteger';

		var command = s.taboption(
			this_tab,
			form.Value,
			'command',
			_('Command to run'),
			_('Execute command after saving picture. Mjpg-streamer parses the filename as first parameter to your script.'),
		);

		var link = s.taboption(this_tab, form.Value, 'link', _('Link newest picture to fixed file name'), _('Link the last picture in ringbuffer to fixed named file provided.'));

		return m.render();
	},
});

Flexible Views

For enhanced flexibility in our pages, we have the option to manually define the HTML, which I have used in the status views. This approach allows us to have more control over the page structure and content, providing greater customization possibilities.

This is an example demonstrating the usage of flexible views within one of the status’s views, using a small portion of the topology.js code. The full code for the file can be found here.

'use strict';
'require uci';
'require view';
'require poll';
'require rpc';
'require ui';


return view.extend({
	callGetJsonStatus: rpc.declare({
		object: 'olsrinfo',
		method: 'getjsondata',
		params: ['otable', 'v4_port', 'v6_port'],
	}),

	fetch_jsoninfo: function (otable) {
		var jsonreq4 = '';
		var jsonreq6 = '';
		var v4_port = parseInt(uci.get('olsrd', 'olsrd_jsoninfo', 'port') || '') || 9090;
		var v6_port = parseInt(uci.get('olsrd6', 'olsrd_jsoninfo', 'port') || '') || 9090;
		var json;
		var self = this;
		return new Promise(function (resolve, reject) {
			L.resolveDefault(self.callGetJsonStatus(otable, v4_port, v6_port), {})
				.then(function (res) {
					json = res;

					jsonreq4 = JSON.parse(json.jsonreq4);
					jsonreq6 = json.jsonreq6 !== '' ? JSON.parse(json.jsonreq6) : [];
					var jsondata4 = {};
					var jsondata6 = {};
					var data4 = [];
					var data6 = [];
					var has_v4 = false;
					var has_v6 = false;

					if (jsonreq4 === '' && jsonreq6 === '') {
						window.location.href = 'error_olsr';
						reject([null, 0, 0, true]);
						return;
					}

					if (jsonreq4 !== '') {
						has_v4 = true;
						jsondata4 = jsonreq4 || {};
						if (otable === 'status') {
							data4 = jsondata4;
						} else {
							data4 = jsondata4[otable] || [];
						}

						for (var i = 0; i < data4.length; i++) {
							data4[i]['proto'] = '4';
						}
					}

					if (jsonreq6 !== '') {
						has_v6 = true;
						jsondata6 = jsonreq6 || {};
						if (otable === 'status') {
							data6 = jsondata6;
						} else {
							data6 = jsondata6[otable] || [];
						}

						for (var j = 0; j < data6.length; j++) {
							data6[j]['proto'] = '6';
						}
					}

					for (var k = 0; k < data6.length; k++) {
						data4.push(data6[k]);
					}

					resolve([data4, has_v4, has_v6, false]);
				})
				.catch(function (err) {
					console.error(err);
					reject([null, 0, 0, true]);
				});
		});
	},
	action_topology: function () {
		var self = this;
		return new Promise(function (resolve, reject) {
			self
				.fetch_jsoninfo('topology')
				.then(function ([data, has_v4, has_v6, error]) {
					if (error) {
						reject(error);
					}

					function compare(a, b) {
						if (a.proto === b.proto) {
							return a.tcEdgeCost < b.tcEdgeCost;
						} else {
							return a.proto < b.proto;
						}
					}

					data.sort(compare);

					var result = { routes: data, has_v4: has_v4, has_v6: has_v6 };
					resolve(result);
				})
				.catch(function (err) {
					reject(err);
				});
		});
	},
	load: function () {
		return Promise.all([uci.load('olsrd'), uci.load('luci_olsr')]);
	},
	render: function () {
		var routes_res;
		var has_v4;
		var has_v6;

		return this.action_topology()
			.then(function (result) {
				routes_res = result.routes;
				has_v4 = result.has_v4;
				has_v6 = result.has_v6;
				var table = E('div', { 'class': 'table cbi-section-table' }, [
					E('div', { 'class': 'tr cbi-section-table-titles' }, [
						E('div', { 'class': 'th cbi-section-table-cell' }, _('OLSR node')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('Last hop')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('LQ')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('NLQ')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('ETX')),
					]),
				]);
				var i = 1;

				for (var k = 0; k < routes_res.length; k++) {
					var route = routes_res[k];
					var cost = (parseInt(route.tcEdgeCost) || 0).toFixed(3);
					var color = etx_color(parseInt(cost));
					var lq = (parseInt(route.linkQuality) || 0).toFixed(3);
					var nlq = (parseInt(route.neighborLinkQuality) || 0).toFixed(3);

					var tr = E('div', { 'class': 'tr cbi-section-table-row cbi-rowstyle-' + i + ' proto-' + route.proto }, [
						route.proto === '6'
							? E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://[' + route.destinationIP + ']/cgi-bin-status.html' }, route.destinationIP)])
							: E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://' + route.destinationIP + '/cgi-bin-status.html' }, route.destinationIP)]),
						route.proto === '6'
							? E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://[' + route.lastHopIP + ']/cgi-bin-status.html' }, route.lastHopIP)])
							: E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://' + route.lastHopIP + '/cgi-bin-status.html' }, route.lastHopIP)]),
						E('div', { 'class': 'td cbi-section-table-cell left' }, lq),
						E('div', { 'class': 'td cbi-section-table-cell left' }, nlq),
						E('div', { 'class': 'td cbi-section-table-cell left', 'style': 'background-color:' + color }, cost),
					]);

					table.appendChild(tr);
					i = (i % 2) + 1;
				}

				var fieldset = E('fieldset', { 'class': 'cbi-section' }, [E('legend', {}, _('Overview of currently known OLSR nodes')), table]);

                //... This snippet represents only a small portion of the complete code.

				var result = E([], {}, [h2, divToggleButtons, fieldset, statusOlsrLegend, statusOlsrCommonJs]);

				return result;
			})
			.catch(function (error) {
				console.error(error);
			});
	},
	handleSaveApply: null,
	handleSave: null,
});

Feel free to reach out to me via email if you have any doubts or questions. I’m here to help! Stay tuned for more valuable content as I continue to share useful information and resources. Thank you for your support!

GSoC’23 Final Report : LuCI Migrate to JavaScript-Based Framework

Hello!

I’ve had a wonderful and enriching experience over the last 5 months while working on my Google Summer of Code Project, “LuCI Migration to JavaScript-Based Framework.” As Google Summer of Code 2023@Freifunk draws to a close, I am excited to announce the successful completion of my project, which involved migrating several LuCI apps to JavaScript. I wish to extend my sincere gratitude to my mentor, Andreas Bräu. His constant support and guidance have been extremely helpful.

Project Goals

LuCI is an open-source framework that is widely used to build web interfaces for embedded devices such as WiFi routers. In the CBI-based old system, pages were rendered on the router and delivered as HTML to the browser, which caused a higher load on the embedded devices. This makes the system less efficient and can lead to performance issues.

The latest OpenWRT versions introduced a new web interface system that eliminated the need for Lua. Instead, the client’s browser handles the rendering and computation, allowing routers to focus on their primary tasks. This change has the advantage of eliminating the Lua runtime, saving storage space, and having faster routers. In the previous CBI-based system, pages were rendered on the router and sent as HTML to the browser, increasing the load on the routers. This inefficiency can result in performance problems. To aid in this transition, LuCI offers the LuCI-JavaScript API, which is now utilized for constructing web interfaces.

Project Results

As part of the project, I accomplished the successful migration of the following apps to JavaScript:

  • luci-app-olsr (OLSR configuration and status module)
  • luci-app-uhttpd (uHTTPd web server configuration module)
  • luci-app-olsr-viz (OLSR Visualization)
  • luci-app-babeld (LuCI support for babeld)
  • luci-app-mjpg-streamer (MJPG-Streamer service configuration module)

The migration of luci-app-olsr stood out as an incredibly exciting and valuable learning experience. Notably, this application now boasts a performance improvement of four to five times compared to its previous version. I also created a comprehensive tutorial based on the migration process of luci-app-olsr, which can serve as a valuable reference for writing or migrating other LuCI apps.

The tutorial covers the essential aspects of the process, providing a comprehensive guide. This app is an extensive application that includes both status views and an admin backend.

Explore My Contributions

My work can be located within the following commits, and all reviewed applications have been merged. These will soon be accessible to users in the upcoming OpenWRT releases.

What Next

While the work has been carried out within the scope of GSoC, I am committed to continuing the migration of additional apps to JavaScript even after the program’s conclusion. I will maintain an active presence in the community and actively seek out intriguing projects to contribute to.

Wrapping Up

Following the migration of these commonly used apps in LuCI, a significant enhancement in performance has been achieved. Project objectives have been successfully met, leading to reduced router workload and an improved user experience, particularly for those with lower-specification routers. The new system also offers increased developer flexibility. Leveraging a client-side JavaScript framework provides developers with versatile options for future customization and extension of the LuCI web interface.

This shift establishes a standardized approach for developers to interact with router services, configure data retrieval, and support the development and maintenance of LuCI-based applications. Such advancements are particularly valuable for community networks reliant on lower-spec devices. The heightened performance and decreased device load simplify network management, bolstering the efficacy of LuCI-based tools. In summary, the migration of LuCI to JavaScript yielded substantial benefits for the OpenWrt community and users. These include improved performance, elevated developer adaptability, and potentially streamlined management of LuCI-based applications within community networks.

My engagement in this project was a source of enjoyment and knowledge; though it demanded significant effort, I enjoyed the process. I extend special appreciation to my mentor for his unwavering support and motivation. GSoC 2023 with Freifunk has proven to be an enriching experience. My path ahead involves contributing to additional open-source projects and further app migrations to JavaScript.

GSoC ’23: OpenWrt PPA Porting to GitHub and Rebuilding

Previous post: https://blog.freifunk.net/2023/07/02/gsoc-23-openwrt-ppa-part-2-gitlab-packaging/

GitHub Port and Feature Parity

Continuing from the successful CI build on GitLab, I started working on a GitHub port. Fortunately, only the CI-specific syntax had to be dealt with, as the actual build code needed the same parameters and configuration scripts. This meant a full port could be done without the need to create a build recipe from scratch: https://github.com/ndren/openwrtsdkbuild/blob/master/.github/workflows/docker-build.yml. Notice how similar the original script is: https://github.com/ndren/openwrtsdkbuild/blob/master/.github/workflows/docker-build.yml.

However, one major difference is where the CI artifacts are uploaded. As an actual OpenWrt router needs to consume the packages, they must be published in a specific folder-level layout. Fortunately, GitHub releases match up with this cleanly:

- uses: "marvinpinto/action-automatic-releases@latest"
  with:
    repo_token: "${{ secrets.GITHUB_TOKEN }}"
    automatic_release_tag: "latest"
    title: "Package release"
    files: |
        /home/runner/work/openwrtsdkbuild/openwrtsdkbuild/artifacts/packages/*/*/*.ipk
        /home/runner/work/Packages.gz


Notice the use of a GitHub token, specific to this repository. This is set up by GitHub automatically for every CI run; there is no need to manually create one.

This can, of course, be reproduced on a router running OpenWrt by following the shell script used by CI: https://github.com/ndren/openwrtsdkbuild/blob/master/test-gh-release.sh

UPLOAD_REPO="https://github.com/ndren/openwrtsdkbuild/releases/download/latest"
echo "src/gz myrepo ${UPLOAD_REPO}" >> /etc/opkg/customfeeds.conf
# Install package
opkg install "${PACK_NAME}"

To get feature parity with GitLab, we found a method to add a dropdown so that app users can build packages without the need to manually edit the source code in their fork. This also makes it clear which parameters are provided to the build environment without reading the configuration files:

Neater unauthenticated package downloads

With this in mind, I was really happy when I learned that all the builds can be downloaded without authentication. I discovered that GitLab started allowing guests to download from generic package repositories: https://gitlab.com/gitlab-org/gitlab/-/issues/299384. This makes it a lot easier to download packages on an actual router. Compare the following example configuration files:

Not only is this easier to type, it also does not require copying a PERSONAL_ACCESS_TOKEN to each router, so the router cannot leak any GitLab tokens (since it doesn’t need any); this is a real threat, since the router may not support HTTPS cleanly in its installation of opkg. It also makes more sense in a shared environment: I don’t need to tell you my personal access token; you can add https://gitlab.com/api/v4/projects/ndren%2Fopenwrtsdkbuild/packages/generic/armvirt_64/0.0.3/ to your list of repos and it will work.

What is nice is that the same main code can be used to install from a GitLab or GitHub repository:

# Add repository (any repository: GitLab, GitHub, etc.)
UPLOAD_REPO="https://gitlab.com/api/v4/projects/ndren%2Fopenwrtsdkbuild/packages/generic/armvirt_64/0.0.3/"
echo "src/gz myrepo ${UPLOAD_REPO}" >> /etc/opkg/customfeeds.conf
opkg install "${PACK_NAME}"

Updated SDK

I also took the time to set up newer SDK docker images on my fork of openwrtsdk: https://gitlab.com/ndren/openwrtsdk. I found that a surprising number of packages needed to be included in the newer SDK version, the latest release candidate 23.05.0-rc2. This also required an upgrade to Alpine 3.15, as argp-standalone did not exist before. This dropped compatibility with python2, but fortunately the OpenWrt developers had ported their SDK to python3: https://github.com/openwrt/packages/issues/8892

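# Old SDK image: Alpine 3.14 (python2-dev still included)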
FROM alpine:3.14
RUN apk add asciidoc bash bc binutils bzip2 cdrkit coreutils diffutils findutils flex g++ gawk gcc gettext git grep intltool libxslt linux-headers make ncurses-dev openssl-dev patch perl python2-dev python3-dev rsync tar unzip util-linux wget zlib-dev sudo xz lighttpd curl

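# Updated SDK image: Alpine 3.15, python3 only, plus argp-standalone and extra musl build dependencies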
FROM alpine:3.15
RUN apk add asciidoc bash bc binutils bzip2 cdrkit coreutils diffutils findutils flex g++ gawk gcc gettext git grep intltool libxslt linux-headers make ncurses-dev openssl-dev patch perl python3-dev rsync tar unzip util-linux wget zlib-dev sudo xz lighttpd curl alpine-sdk gzip build-base musl-dev musl-libintl musl-utils fts fts-dev musl-obstack musl-obstack-dev musl-nscd-dev musl-nscd argp-standalone

Once this was resolved for a single build, building everything was only a matter of CPU time. The builds are here: https://hub.docker.com/r/andreien/openwrtsdk/tags. (Yes, that is all the architectures included!)

Closing words

I invite you to try this out! All it takes is a GitHub or GitLab account and a bit of text editing, and you should be able to build any OpenWrt package in a full CI environment. Feel free to file an issue if there’s anything to improve; I’m happy to help.

Thanks to everyone at Freifunk for the encouragement, and in particular to my mentor Zoobab, who carried me through this journey with code examples and help with design decisions. I am really happy to have learned to use Docker and to write GitHub and GitLab CI scripts effectively. Thank you for having me on this adventure, I’ll see you later.

GSoC ’23: Midterm Report on Joint Power and Rate Control in Userspace

1) Introduction

If you haven’t already done so, I highly recommend reading the introductory blog article on the user space resource allocator for Linux-based OpenWRT access points. The blog gives a thorough introduction to rate and power regulation in IEEE 802.11 devices, which will be beneficial for understanding the work covered later. Happy reading!

Before extending the py-minstrel-ht package, it is crucial to ensure that the user space rate control mirrors the behavior of the kernel Minstrel-HT algorithm. This alignment will establish a reliable basis to evaluate the inclusion of power control in the Minstrel-HT algorithm through performance comparison with the kernel variant.

2) Passive Minstrel-HT

To conduct a precise evaluation of py-minstrel-ht alongside the kernel Minstrel-HT, a passive version of the user space Minstrel-HT was developed. The passive py-minstrel-ht runs alongside the kernel Minstrel-HT: the kernel algorithm performs rate control while py-minstrel-ht passively reports all of its rate selections. In order to compare the rate selection between the two Minstrel-HT algorithms, the rate statistics for both rate controls must be identical. As such, the py-minstrel-ht obtains the packet counts via the “stats” API lines from the kernel rate control.

In an ideal scenario, during each rate selection, the statistics on both algorithms would be the same, yielding identical rate settings. With the help of this passive variant, it has been easier to evaluate the behavior of the user space rate control and investigate implementation errors in py-minstrel-ht. The WPCA API, through minstrel-rcd, can send parsable traces to the user space, which can be saved. In the output traces, the WPCA also provides information on the rate selection via the “best_rates” line, which triggers the rate-setting process in the passive py-minstrel-ht. Similar to the API traces, the passive Minstrel-HT saves its own trace, which is later analyzed in conjunction with the kernel traces. The following code section shows an example of such an API trace:

wl1;174a4f945a7a9aa0;txs;a0:78:17:74:c2:5f;1;1;1;226;2;233;2;ffff;0;ffff;0
wl1;174a4f945bcf1660;txs;a0:78:17:74:c2:5f;1;1;1;265;2;273;2;ffff;0;ffff;0
wl1;174a4f945bcfaad0;stats;a0:78:17:74:c2:5f;136;0;0;0;7c;0;1b2
wl1;174a4f945bd00750;stats;a0:78:17:74:c2:5f;226;0;0;0;7c;b;436
wl1;174a4f945bd02c80;stats;a0:78:17:74:c2:5f;234;11e;344;31;31;31;53d
wl1;174a4f945bd041c0;stats;a0:78:17:74:c2:5f;233;1b0;385;7c;f8;17e1;3371
wl1;174a4f945bd05d90;stats;a0:78:17:74:c2:5f;265;0;0;0;2;290;7d6
wl1;174a4f945bd07eb0;stats;a0:78:17:74:c2:5f;273;272;592;90;11e;1e7a;3af0
wl1;174a4f945bd0bec0;best_rates;a0:78:17:74:c2:5f;273;1b3;233;231;1f2
wl1;174a4f945bd0e0d0;sample_rates;a0:78:17:74:c2:5f;274;1b7;1f6;226;234;237;266;1f7;227;238;176;1a9;1b5;0;0
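
To give an idea of how these traces are consumed, here is a minimal, illustrative parser for the semicolon-separated lines. The field interpretation is simplified; the real py-minstrel-ht parser handles more line types and fields:

# Parse one WPCA API trace line: <phy>;<hex timestamp>;<event>;<MAC>;...
def parse_trace_line(line):
    fields = line.strip().split(';')
    record = {'phy': fields[0], 'ts': int(fields[1], 16),
              'event': fields[2], 'mac': fields[3]}
    if record['event'] == 'stats':
        record['rate'] = fields[4]
        # the remaining fields are hexadecimal packet counters
        record['counters'] = [int(x, 16) for x in fields[5:]]
    elif record['event'] in ('best_rates', 'sample_rates'):
        record['rates'] = fields[4:]
    else:
        record['raw'] = fields[4:]
    return record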

The key factors that determine the rate selection are the estimated transmission success probability and the estimated throughput of each MCS rate. Hence, before comparing the rate selection, the averaging, success probability, and throughput calculations must match in both algorithms. As a side note, both Minstrel-HT implementations use a low-pass Butterworth filter to average success probabilities over time.
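
For intuition, a discrete second-order Butterworth-style low-pass filter can be sketched as below. Note that this is a generic textbook construction, not the exact fixed-point coefficients used by the kernel or py-minstrel-ht:

import math

def butterworth_coeffs(period=16):
    # matched-z second-order Butterworth low-pass for a given update period
    r = math.exp(-math.pi * math.sqrt(2) / period)   # pole radius
    b1 = 2 * r * math.cos(math.sqrt(2) * math.pi / period)
    b2 = -r * r
    b0 = 1 - b1 - b2                                 # unity gain at DC
    return b0, b1, b2

class ProbFilter:
    """Smooths per-interval success probabilities over time."""
    def __init__(self, period=16):
        self.b0, self.b1, self.b2 = butterworth_coeffs(period)
        self.y1 = self.y2 = None

    def update(self, prob):
        if self.y1 is None:
            self.y1 = self.y2 = prob     # seed with the first observation
            return prob
        y = self.b0 * prob + self.b1 * self.y1 + self.b2 * self.y2
        self.y1, self.y2 = y, self.y1
        return y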

The following sub-sections give an overview of the results obtained from several runs of the passive experiments, each running for 20 minutes.

2.1) Probability Estimation in User space vs Kernel space

The section shows the comparison of the transmission success probability between the kernel Minstrel-HT and py-minstrel-ht.

From the analysis, it is evident that the internals of the Butterworth filter in user space act exactly the same as in the kernel algorithm. However, due to floating point restrictions, the kernel Minstrel-HT scales all floating point numbers to integers, which can lose precision. This can be observed as a difference between the success probabilities of up to 0.2% in some cases.

2.2) Throughput Estimation in User space vs Kernel space

Similar to the success probability calculation and discounting, the estimated throughput for all rates in py-minstrel-ht matches the kernel Minstrel-HT almost exactly. However, as previously stated, there are negligible errors due to the difference in floating point precision between the two Minstrel-HT algorithms.

2.3) Rate Setting in User space vs Kernel space

Ultimately, the selection of transmission rates determines the comparability of py-minstrel-ht with the kernel Minstrel-HT. In order to compare the rate setting chosen by both algorithms, I’ve developed an analysis script that goes over the saved traces and compares the “best_rates” lines. The following sub-sections show the results of some experiments.
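
The core of that comparison can be pictured as follows. This is an illustrative sketch; the actual script also time-aligns the two traces before counting:

# Count per-stage (dis)agreements between time-aligned MRR chains from the
# kernel and user space traces, producing stats like the tables below
def compare_mrr(kernel_chains, user_chains, num_stages=5):
    stats = {i: {'correct_instances': 0, 'incorrect_instances': 0}
             for i in range(num_stages)}
    for k_chain, u_chain in zip(kernel_chains, user_chains):
        for i in range(num_stages):
            if k_chain[i] == u_chain[i]:
                stats[i]['correct_instances'] += 1
            else:
                stats[i]['incorrect_instances'] += 1
    for s in stats.values():
        total = s['correct_instances'] + s['incorrect_instances']
        s['percent_error'] = 100.0 * s['incorrect_instances'] / total
    return stats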

Experiment 1

The Minstrel-HT algorithm, for MT76 chips, utilizes 5 slots in the Multi-Rate Retry (MRR) chain. The results show the number of errors at each MRR index and the relative error percentage between the kernel and user space Minstrel-HT. The MRR indices from 0 to 3 are filled with the rates with the highest throughput estimates, and MRR index 4 is filled with the maximum probability rate for robustness.

MRR Setting info:
0 {'correct_instances': 15831, 'incorrect_instances': 1, 'percent_error': 0.0063163213744315315}
1 {'correct_instances': 15818, 'incorrect_instances': 14, 'percent_error': 0.08842849924204144}
2 {'correct_instances': 15810, 'incorrect_instances': 22, 'percent_error': 0.13895907023749368}
3 {'correct_instances': 15760, 'incorrect_instances': 72, 'percent_error': 0.4547751389590703}
4 {'correct_instances': 15783, 'incorrect_instances': 49, 'percent_error': 0.309499747347145}

Furthermore, the following plot shows the total number of errors in the MRR chain when compared with the kernel algorithm.

Experiment 2

MRR Setting info:
0 {'correct_instances': 12633, 'incorrect_instances': 13, 'percent_error': 0.10279930412778744}
1 {'correct_instances': 12626, 'incorrect_instances': 20, 'percent_error': 0.15815277558121146}
2 {'correct_instances': 12628, 'incorrect_instances': 18, 'percent_error': 0.1423374980230903}
3 {'correct_instances': 12623, 'incorrect_instances': 23, 'percent_error': 0.18187569191839315}
4 {'correct_instances': 12429, 'incorrect_instances': 217, 'percent_error': 1.7159576150561442}

2.4) Remarks

After running experiments with the passive py-minstrel-ht and the active kernel Minstrel-HT, a lot of minor errors and bugs in the user space algorithm were resolved. I’ve listed some of the issues that were solved after investigating the results from the passive experiments:

  • The probability calculation in py-minstrel-ht did not take into account the packet counts from the last update interval, unlike the kernel Minstrel-HT.
  • During the selection of the maximum probability rate, the Minstrel-HT algorithm calculates a threshold airtime value that a candidate maximum probability rate must satisfy. In the kernel algorithm, it is set 18% higher than the larger of the airtimes of the rates in MRR0 and MRR1. However, in py-minstrel-ht, this value was erroneously set 18% higher than the smaller of the airtimes of the top two throughput rates (see the sketch after this list).
  • Additionally, during the selection of the maximum probability rate, py-minstrel-ht mistakenly considered only rates that had at least 1 packet attempt in the historical counts. However, there are unused rates that derive their probability estimate from higher rates in the same minstrel rate group.
  • As mentioned earlier, unused rates also derive their probability estimate from higher rates in the same rate group. Investigating the high error rate in MRR4 revealed that py-minstrel-ht was only updating the estimated success probability of such rates and not their estimated throughput; this has now been fixed.
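
A minimal sketch of the corrected threshold computation (the function name and integer airtime units are my choices):

def max_prob_airtime_threshold(airtime_mrr0_ns, airtime_mrr1_ns):
    # 18% above the larger of the two airtimes, as the kernel does;
    # the earlier py-minstrel-ht bug amounted to using min() here.
    return max(airtime_mrr0_ns, airtime_mrr1_ns) * 118 // 100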

In summary, the behavior of py-minstrel-ht aligns with the kernel Minstrel-HT, with the observed errors likely stemming from minor precision limitations in kernel space and potential inconsistencies in the rate statistics at the start. For example, if MRR1 and MRR2 have very close expected throughput and their order is swapped in either the kernel or the user space Minstrel-HT, then it is very likely that the chosen maximum probability rate also differs, due to the resulting difference in the airtime threshold. Nevertheless, since the Butterworth filter discounts the estimated probability average over time, the relative error in the MRR chain is expected to decrease with longer experiment runs.

It’s important to note that despite these efforts, small errors and discrepancies may still exist, and continuous evaluation and refinement of the py-minstrel-ht algorithm will be necessary to enhance its performance and accuracy further, especially with the changing kernel Minstrel-HT code base.

3) Joint Power and Rate Control Algorithm

As the power control extension in the kernel has only been properly implemented for ath9k devices, I had to wait for the arrival of the ath9k routers, since I only have access to MediaTek devices. Due to unexpected delays and problems in shipping the routers, I have mostly been working on the passive py-minstrel-ht and its analysis on MT76 chips. As such, py-minstrel-ht has not yet been extended with power control; however, thanks to Arne, it has already been refactored to work with the new WPCA version and the updated Python-WiFi-Manager.

Nevertheless, I would like to introduce the concept of my proposed power control extension to the Minstrel-HT rate adaptation algorithm, in particular the “max_tp” mode. For the maximum throughput mode, the implementation needs to be well thought out in every part of the user space Minstrel-HT so as not to hamper the optimal throughput.

3.1) Initialisation

For the max throughput mode, py-minstrel-ht would initially set all rates to use either the maximum power level or the static power level that the kernel Minstrel-HT uses.

3.2) Best Rate Selection

The set of best rates for the MRR chain can be selected by py-minstrel-ht as is, since it already provides the sort_rates_by_tp function to find the best-performing MCS rates. However, the power annotation for each rate in the MRR, except the max probability rate, would be carried out by the power controller. As the max probability rate is used as the last fallback option, it would only use a static high power level.

3.3) Power Sampling and Fallback

Since Minstrel-HT already probes rates every 20ms, power sampling gets the most benefit if it is restricted to a small subset of the rates sorted by throughput. This way, the algorithm gathers more information about the effect of different power levels on the most-used rates, instead of less information on many MCS rates. For example, the algorithm could consider only the top 5 throughput rates for power level sampling; this could easily be exposed as a configurable parameter in the rc_opts structure.

For each MCS rate that is considered for power sampling, the algorithm will keep track of two power levels, namely “safe” and “optimal”. The power level which gives the best transmission probability is stored in the “safe” parameter for reference. The “optimal” parameter, on the other hand, stores the lowest power level which still provides performance comparable to the maximum throughput. In the beginning, both parameters start at the highest power level. The power level stored in these parameters depends on 𝛿 and 𝜎 in the following way:
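
𝑃𝑅(safe) ≥ 𝛿 · max_prob        (3.1)

𝑃𝑅(optimal) ≥ 𝜎 · max_prob        (3.2)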

where 𝛿, 𝜎 ∈ [0, 1], 𝑃𝑅(𝑋) is the success probability of rate 𝑅 at power level 𝑋, and max_prob is the highest success probability observed across the power levels. The 𝛿 constant specifies the minimum success probability that the “safe” power level needs to satisfy, while 𝜎 specifies how close the performance at the “optimal” level must stay to the best observed one.

In case equation 3.1 is satisfied, we explore a lower power level to check whether it can still satisfy the “safe” success probability. All of the power levels to be sampled with an MCS rate are stored in an 𝑆𝑇𝑋𝑃 set.

Similarly, if equation 3.2 is satisfied, we add the next lower power level below “optimal” to the 𝑆𝑇𝑋𝑃 set, to test whether the optimal level can be lowered further.

In case 3.1 is not satisfied, we increase the “safe” power level, taking the tolerance factor Ψ into account. We allow a tolerance around the 𝛿 limit within which the performance at the power level has not completely deteriorated, and in that case we simply increase the safe level by Δ𝐼. If 𝑃𝑅(safe) falls even beyond this tolerance factor, we switch the safe level to the maximum supported power level.

Similarly, in case 3.2 is no longer satisfied, we also increase the “optimal” power level considering the same tolerance factor Ψ.
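
To make this interplay concrete, below is a minimal sketch of the bookkeeping, assuming per-level probability estimates are available. The names (prob_at, step, MAX_POWER) and the exact update rules are illustrative choices of mine, not finished code.

from dataclasses import dataclass

MAX_POWER = 31  # assumed index of the highest supported power level

@dataclass
class RatePowerState:
    prob_at: dict            # power level -> estimated success probability
    safe: int = MAX_POWER
    optimal: int = MAX_POWER

def update_power_state(rate, delta, sigma, psi, step=1):
    # One evaluation of the "safe"/"optimal" levels for a single MCS rate;
    # returns the STXP set of power levels to sample next.
    if not rate.prob_at:
        return set()
    max_prob = max(rate.prob_at.values())
    stxp = set()

    p_safe = rate.prob_at.get(rate.safe, 0.0)
    if p_safe >= delta * max_prob:            # condition (3.1) holds:
        stxp.add(max(rate.safe - step, 0))    # probe one level lower
    elif p_safe >= (delta - psi) * max_prob:  # within the tolerance psi:
        rate.safe = min(rate.safe + step, MAX_POWER)  # increase gently (Δ𝐼)
    else:                                     # deteriorated beyond tolerance:
        rate.safe = MAX_POWER                 # fall back to maximum power

    p_opt = rate.prob_at.get(rate.optimal, 0.0)
    if p_opt >= sigma * max_prob:             # condition (3.2) holds:
        stxp.add(max(rate.optimal - step, 0)) # probe one level lower
    else:
        rate.optimal = min(rate.optimal + step, MAX_POWER)

    return stxp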

3.4) Updating Rate Statistics

The rate statistics used by Minstrel-HT will be updated at every update interval, i.e. every 50ms, but we could update the per power level statistics independently. In order not to influence Minstrel-HT too much, the rate control algorithm could consider only the packet counts at the optimal and safe levels for its statistics.

As a single probe API command may not generate enough packet counts to judge the actual performance of an MCS rate at a certain power level, the per power level statistics could be updated once curr_attempts reaches a certain update threshold. Alternatively, if we fix a power sampling frequency that builds up enough packet counts, we can also use a time-based per power level update interval.
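
As a sketch, with UPDATE_THRESHOLD, the counter names, and a plain EWMA standing in for the Butterworth discounting (all assumptions of mine):

UPDATE_THRESHOLD = 32  # assumed minimum attempts before judging a power level

def ewma(avg, sample, weight=0.25):
    return (1 - weight) * avg + weight * sample

def maybe_update_power_stats(stats):
    # stats: counters for one (rate, power level) pair
    if stats["curr_attempts"] < UPDATE_THRESHOLD:
        return  # not enough packets yet to judge this power level
    prob = stats["curr_success"] / stats["curr_attempts"]
    stats["prob_avg"] = ewma(stats["prob_avg"], prob)
    stats["curr_attempts"] = stats["curr_success"] = 0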

3.5) Power Sampling Frequency and Sequence

Since the joint controller would mostly be working on the subset of the best throughput rates when probing power levels, a higher power sampling frequency compared to rate probing may not adversely affect throughput. For instance, we could set the power sampling frequency as high as once per millisecond for the time-based per power level update. However, while power sampling, the algorithm should quickly identify power levels that do not work and reduce the sampling of such levels so as not to decrease the link throughput.

I have formulated a sequence for the power sampling such that priority is set in the following order: best_rate1 > best_rate2 > remaining rates. At any point in time, for a single MCS rate, the 𝑆𝑇𝑋𝑃 set will only consist of two sample power levels: one derived from safe and the other derived from optimal. Hence, during the power sampling of a rate, both power levels will be sent for sampling together.

3.6) Rate Probing

Judging from the transmit power vs. throughput graph in the introductory blog, testing whether an MCS rate works at all is best done at high power levels. As Minstrel-HT already has a robust probing mechanism with three distinct sampling types, I propose not to change it in any way. However, a feature can be added where the user specifies how high the static power level of sample rates should be. By default, it could be set to the power level that the kernel Minstrel-HT uses for sampling packets, or to the “safe” power level.

3.7) Configurable Parameters

Summarizing the tunables already mentioned above, the joint controller could expose at least the following through the rc_opts structure: the size of the throughput-sorted rate subset considered for power sampling, the thresholds 𝛿 and 𝜎, the tolerance factor Ψ and the increment Δ𝐼, the per power level update threshold or interval, the power sampling frequency, and the static power level used for rate probing.

4) Concluding Thoughts

The work following the GSoC ’23 coding period will mostly be testing and extending the py-minstrel-ht with power tuning. The foundations of the py-minstrel-ht rate control have been extensively tested, and the passive experiments show that the kernel and user space algorithms are comparable. If you want to know more about my project, please feel free to reach out. Thanks for reading!

GSoC: Qaul Matrix Bridge relay bot implementation

Hello there! I have already published a blog post where you can learn what Qaul is and why it currently needs a Matrix bridge. You can read about it here: GSoC Project blog Qaul Matrix bridge

Getting Started

Initially we thought that we would run the Matrix bridge as a daemon process and use Go to create it. But my mentor has a friend working at Element, a Matrix client, who suggested that since the ecosystem is moving toward Rust, the Matrix SDKs are actively written in Rust, and there is something called RuMa, which stands for Rust Matrix. RuMa is an amazing piece of work, and so we decided to build the bridge in Rust, since Qaul's entire backend is in Rust.

Knowing the toolkits

I read and researched the RuMa and Matrix-SDK-Rust projects, since these are what we will be using for the project instead of GoLang.

My mentor suggested that I duplicate the qaul-cli binary and tweak it in whatever way I need in order to work on it. Here are the main reasons why we chose qaul-cli specifically for implementing the bridge concept:

  • It already has two workers in place which check for any activity on the entire qaul network every 10ms.
  • We have access to the CLI, which we use to interact with the RPC protocol and protobuf messaging.

I was reading through the documentation of the matrix-sdk crate, and what I found was beautifully commented code for creating a bot inside the examples/ directory. It was really helpful for getting the coding part started.

Planning the bridge

Version 0

[On Matrix]

  1. Create a bot account for Qaul and specify a server to work on.
  2. Invite the bot to the testing matrix room.

[On Qaul]

  1. Create a binary copy of qaul-cli
  2. Code the logic to log our bot into the Matrix room as soon as the qaul-cli binary is running.
  3. Also code a basic testing functionality (e.g. a !ping command).

[On Matrix]

  1. Log in with our personal account [@harshil1] and send a message with !ping in the room.
  2. In response, we should receive all the nodes connected to the network.

Version 1

Instead of just an echo as a response, we should pick up the messages from both ends. Send “Hi” from qaul, and the bridge should first detect in Matrix, without any human activity on our side, that some event was triggered in qaul. Once the event is detected, the message should show up in the Matrix room.

Next we can mirror the above feature in the opposite direction and do the same in qaul by sending a message in the Matrix room.

Version 2

Functionality-wise this just follows Version 1, but it should be implemented for 1-on-1 direct messages. In both Matrix and qaul, private DMs are nothing but a group with only two members. We need to cover the use case where we can send messages in a group by inviting the bot, and the bot invites the user on the other application; the rest remains the same.

Version 2+

Once the above is complete, we can think of double puppeting the bot, so that our bot is not just qaul-bridge but a real username from the qaul node.

Progress till Mid Evaluation

We have built the end-to-end Matrix-to-Qaul bridge, working as expected for Version 1, and will achieve Version 2 within the next week. Speaking in depth about Version 2, we already have the functionality to check whether a new group needs to connect with a Matrix user, whereupon the Qaul bot opens a Matrix room and invites the wanted Matrix user. The Matrix user is then able to send messages into the qaul group. What remains is the part where a message goes from Qaul into Matrix.

Resources

If you are interested in the code for the bridge, I am writing a book in which I explain the approach to integrating the bridge into the qaul world, with organized chapters and snippets. You can also refer to my raised Pull Request, which gives clearer insight.

Link to Book : GSoC 2.0 Journey Book – By Harshil Jani

Link to Pull Request : qaul/qaul.net/pull/563

GSoC’23 : Automation tools for LibreMesh firmware build and monitoring – part 2

Hi all!

Previous post: https://blog.freifunk.net/2023/05/27/gsoc23-automation-tools-for-libremesh-firmware-build-and-monitoring/

During this period I’ve been reading up on these projects:

OpenWrt Buildroot

The main OpenWrt project, offering the greatest level of customization of configurations and packages. It compiles directly from source and, among its main features, builds firmware images for all the devices (called ‘profiles’) of a given target/subtarget, or for a sub-list of these profiles. It is also the slowest way to build firmware images, compared to the OpenWrt ImageBuilder. It can additionally build an OpenWrt ImageBuilder and an OpenWrt SDK.

https://openwrt.org/docs/techref/buildroot

https://openwrt.org/docs/guide-developer/start#using_the_toolchain

https://openwrt.org/docs/guide-user/virtualization/obtain.firmware.docker

OpenWrt ImageBuilder (docker)

This tool allows building firmware images from precompiled packages and is also packaged as a docker image.

https://openwrt.org/docs/guide-user/additional-software/imagebuilder

https://github.com/openwrt/docker

https://hub.docker.com/u/openwrt/

OpenWrt SDK (docker)

This tool allows individual packages to be built from source and is also packaged as a docker image.

https://openwrt.org/docs/guide-developer/obtain.firmware.sdk

https://github.com/openwrt/docker

https://hub.docker.com/u/openwrt/

OpenWrt Firmware Selector

This is a GUI for selecting OpenWrt firmware images from the official repository https://downloads.openwrt.org/, and it also acts as a client of the Attendedsysupgrade Server for building custom firmware images.

It works by scanning all device profiles and building overview JSON files for each version; the selection interface is then constructed from this information.

https://gitlab.com/mwarning/firmware-selector-openwrt-org

Attendedsysupgrade Server

This is a server that accepts requests from different clients to build firmware images with custom lists of packages and files. These requests are delivered to an ImageBuilder that performs the build, and the server then returns the produced sysupgrade image.

It is packaged as a Python project and as a docker image.

https://openwrt.org/docs/guide-user/installation/attended.sysupgrade?s[]=asu&s[]=attendedsysupgrade

https://github.com/openwrt/asu

Ansible roles to build LibreMesh

For each of these components I wrote an ansible role:

  • openwrt_buildroot
  • openwrt_imagebuilder(_docker)
  • openwrt_sdk(_docker)
  • openwrt_firmware_selector
  • openwrt_asu

Each role basically goes over the steps necessary to:

  • prepare the system
  • install the component
  • configure it
  • use it

These roles are based on a file called ‘recipe’ that defines the devices for which to build.

This file is provided to the ansible playbook as a variable and depends, by default, on which version of libremesh you intend to use and which version of openwrt.

These ‘mandatory’ variables are used to expand the list of known configurations: the roles go through the list of supported devices and the known changes needed to install libremesh on them. This information is written to the ‘target’ folders of the collection repository, which are in turn selected based on the chosen versions of libremesh and openwrt.

The mechanism is the inclusion of additional variable files containing partial information. The provided recipe file has the final say, since it allows any information gathered from the other files to be overwritten; a rough sketch of this layering follows. This part of the code, which collects the necessary information for each device, still needs to be improved.
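
A rough Python analogue of this layering (not the actual ansible mechanism; the variable names, versions, and package names are purely illustrative):

def merge_vars(*layers):
    merged = {}
    for layer in layers:
        merged.update(layer)  # later layers override earlier ones
    return merged

target_defaults = {"openwrt_version": "22.03.5", "packages": ["lime-full"]}
device_overrides = {"profile": "tplink_tl-wdr4900-v1"}
recipe = {"packages": ["lime-full", "prometheus-node-exporter-lua"]}

build_vars = merge_vars(target_defaults, device_overrides, recipe)
# build_vars["packages"] now comes from the recipe, which has the final say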

Creating device specific configurations

The creation of device-specific configurations goes over these steps:

  • creation of devices as ansible hosts
  • creation of a definition file for each host
  • from the latter, generation of a libremesh configuration file `lime-<macaddress>`, which in the way libremesh is set up ranks second in the hierarchy of the main libremesh configuration files found in `/etc/config`:
    • 0. lime-autogen: configurations actually applied, not editable
    • 1. lime-node: manual configurations at the device
    • 2. lime-<macaddress>: provisioned configurations
    • 3. lime-community: community configurations
    • 4. lime-defaults: defaults suggested by libremesh
  • vpn server upgrade

Adding devices to monitoring

The setup of the monitoring system goes over these steps:

  • customizing configurations
  • customizing scrape_options
  • definition of labels
  • definition of monitoring targets
  • definition of probing targets
  • definition of contact channels: email, telegram
  • definition of datasources and dashboards
  • generation of target lists (a sketch follows this list)
  • installation of components
  • placement of components in a webserver under dns names
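
As an example of the ‘generation of target lists’ step, here is a hedged sketch emitting a prometheus file_sd file; the hostnames, exporter port, and label names are purely illustrative:

import json

devices = [
    {"host": "node-a1.mesh.local", "community": "example-community"},
    {"host": "node-b2.mesh.local", "community": "example-community"},
]

file_sd = [
    {
        "targets": [f"{d['host']}:9100"],  # assumed exporter port
        "labels": {"community": d["community"]},
    }
    for d in devices
]

with open("lime_targets.json", "w") as fh:
    json.dump(file_sd, fh, indent=2)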

I have added a minimal amount of documentation and published a template for using the project at:

https://gitlab.com/a-gave/libremesh-ansible-playbooks/

The code for the entire roles collection is available at:

https://gitlab.com/a-gave/libremesh-ansible-collection/

All the roles defined in this collection, to build LibreMesh and to set up the monitoring system, have been written and tested on Debian 11 and 12.

The code for the single role corresponding to the OpenWrt Buildroot is available at:

https://gitlab.com/a-gave/ansible_openwrt_buildroot

Example diagrams

Here is a summary of three main playbooks/roles:

Here are some scenarios of possible use cases:

Example 1: simple build of libremesh for a list of devices grouped by targets

Example 2: build of libremesh, and build of firmware images to meet specific device needs

Example 3: like the previous, but with the generation of a list of targets to monitor with prometheus

Example 4: introduce a vpn (wireguard) to also monitor devices that aren’t reachable in the same lan as the hosts that do the monitoring