GSoC’23: Implementation of Web Interface of Retroshare – Part 2

Hello again folks 👋
If this is your first time here, then consider reading Part 1 of this blog, GSoC’23: Implementation of Web Interface of Retroshare.

Now, let’s continue.
The first phase of GSoC has been amazing up until now. I was able to implement a lot of features, fix bugs and issues, face difficulties, get stuck and learn a lot. I am going to discuss my journey so far, so fasten your seat belts.

Progress on the WebUI

I have been able to improve the Web Interface of Retroshare a lot, both in usability and in looks. All thanks to my mentors and the community members.

Here is what the homepage looks like right now.

File Search feature

When the Coding Period started, I was already working on the File Search feature, so naturally it was completed in the first week of GSoC itself. This was a difficult feature to implement, as there were only a few viable ways to do it correctly.

Previously, the file search results were sent as the response to the request made to the endpoint /rsFiles/turtleSearchRequest. So, I thought of using EventSource, as it creates a persistent connection over which the data arrives as a stream. But it needs its own auth headers to be used properly. There was also an issue with the format of the response coming from the backend, which was causing the WebUI to crash. I discussed this with the mentors in the issue I created on GitHub.
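
For context, this is roughly what the EventSource approach would have looked like (a minimal sketch, with the endpoint shown only for illustration). The browser API keeps a persistent connection open and streams server-sent events, but its constructor accepts no custom request headers, which is exactly why the auth headers were a problem:

// Minimal EventSource sketch; the endpoint is illustrative only.
// new EventSource(url) accepts no custom headers, so the backend's
// auth headers cannot be attached this way.
const source = new EventSource('/rsFiles/turtleSearchRequest');
source.onmessage = (event) => {
  console.log(JSON.parse(event.data)); // one result per event
};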

Later, the mentors discussed and decided that it would be better to deliver the file search results as a stream of events, which would then be captured on the frontend side and displayed. As there was already a way to handle events, that part was not much of a problem. The real issue was updating the UI after capturing them, as I was not sure how to do that in Mithril.js, so this was a perfect opportunity to learn something new.

After some research, I found out that a JavaScript Proxy can be used to detect any change to the objects created by the handler defined in eventQueue. I could then manually trigger a re-render with `m.redraw()` in Mithril.js to update the results in the UI.

// Custom Proxy code
const createArrayProxy = (arr, onChange) => {
  return new Proxy(arr, {
    // intercept writes (including push) and notify after a successful set
    set: (target, property, value, receiver) => {
      const success = Reflect.set(target, property, value, receiver);
      if (success && onChange) {
        onChange();
      }
      return success;
    },
  });
};

const createProxy = (obj, onChange) => {
  return new Proxy(obj, {
    // wrap nested objects and arrays on access so deep changes are caught too
    get: (target, property, receiver) => {
      const value = Reflect.get(target, property, receiver);
      return typeof value === 'object' && value !== null
        ? Array.isArray(value)
          ? createArrayProxy(value, onChange)
          : createProxy(value, onChange)
        : value;
    },
    set: (target, property, value, receiver) => {
      const success = Reflect.set(target, property, value, receiver);
      if (success && onChange) {
        onChange();
      }
      return success;
    },
  });
};

const proxyObj = createProxy({}, () => {
  m.redraw();
});
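
Hooked into the event handling, the usage then looks roughly like this (the property and field names below are only illustrative):

// Hypothetical wiring: any write to the proxied object schedules a redraw.
proxyObj.results = [];

// handler registered for incoming search result events (names illustrative)
function onSearchResult(event) {
  proxyObj.results.push(event.fileHit); // triggers m.redraw() via the set trap
}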

So, after some head-scratching and googling, I was able to implement this feature, and I ironed out the smaller issues in later iterations. You can see the merged PR here.

This is how File Search looks currently.

Config Mail

After that, I started working on the Config section, where I implemented the feature to create and manage custom tags for mails in the mail config. The PR I raised is here.

Config Network

Then, I implemented some missing features in existing panels of the Config section. In the Network config, some options weren’t working. So, after discussing with my mentor, I implemented the Hidden Network Configuration for the Tor/I2P SOCKS proxy here.

Config Files

I implemented the missing options and fixed the broken ones in the Files config section as well. For this, I also implemented the getQueueSize() function in libretroshare.

By this time, I was also getting the hang of how things work internally in libretroshare. My mentor Cyril Soler motivated and supported me to start writing code in libretroshare, from which Doxygen generates the JSON API for the required endpoints. I raised three PRs in total, in which I also implemented an endpoint in the actual C++ code.

Honestly, this was even more satisfying than writing code for the WebUI. You can view all of my libretroshare PRs here.

Fix UI of whole Interface

This was a big task but didn’t take that long, since I did it gradually in small bits. I improved the UI of the whole web interface, and it looks much better now. See all the changes made in this PR here.
I won’t include screenshots here, as there would be too many.

Fix working and UI of mail Composer

Of all the features I have implemented so far, none was as confusing as this one; I was genuinely fighting with the code. I was stuck trying on my own for two days, since the code I wrote was fine on its own, but something else was causing an issue and I could not figure out what it was. So finally, I asked for help from a community member, M. Saud. He is also the code reviewer of all of my PRs, since, as far as I know, he is the only JS dev in the RSWebUI team.

He found the issue and suggested some ways it could be fixed. Those seemed too complex to me, so I read the Mithril.js docs and tried to figure out other approaches. In the end, I found a very easy way to do it. You can read the discussion here on GitHub.

This is how the mail composer looks.

This PR also covers the mail reply feature, so it is still a draft, and I am currently working on implementing the remaining features. You can see it here. Other than this, I have been doing some minor fixes and refactoring across the whole WebUI along with completing the major goals in the timeline.

So, this is the progress made on the WebUI until now. I have learned, and am still learning, many new things while working on it.

What’s Next?

First, I will finish PR #80, where I have to implement the Reply feature in the Mail section, and then I will work on the next set of goals as mentioned in the timeline, which are:

  • Forums Section
    • Implement the remaining features from the Qt app in the forum section.
    • Implement a better layout of the forum and posts view, comment and reply feature etc.
  • Boards and Channel Section
    • Implement the remaining features and make them more user-friendly and easy to use.
  • Implement a feature for Configuring and Visualizing own shared files with features such as managing shared directories, user permissions etc.

Apart from these goals, I have also discussed with my mentor working on more important issues and the features necessary for the upcoming release of the WebUI.

Now, I will see you in the next and final blog of this GSoC series.

Thanks for reading and have a great day 🙂

GSoC ’23 OpenWrt PPA Part 2: GitLab packaging

Previous post: https://blog.freifunk.net/2023/05/13/gsoc-23-documenting-the-openwrt-compilation-process-to-set-up-a-ppa

While working on this project we encountered networking trouble: there is no network access during OpenBuildService builds. Working around this would require packaging everything into tarballs and copying these over to the source files before building, which would not be user-friendly at all. (The purpose of this project is to make packaging easier, not harder!) This seemed impractical.

GitLab’s packaging system solves this issue cleanly. It provides a full CI interface and allows networking as well. It works cross-architecture too! The key thing to resolve is getting the OpenWrt tools in: they do not enjoy official status the way Debian or Ubuntu tooling does, so they need to be integrated in a way that follows GitLab conventions.

Zoobab helped a lot by producing an automated build system: https://gitlab.com/zoobab/openwrtsdkbuild. It takes in a git repository and exports binaries as artifacts. This is really useful, as it is essentially the input and output of this process: some code to run on OpenWrt goes in, and an architecture-specific binary comes out.

I have experimented with different ways to store these binaries (OpenWrt’s opkg expects a very specific file structure, like most other package managers). Over time, I was able to port this into a repository structure: https://gitlab.com/ndren/openwrtsdkbuild/-/blob/7d92349d53befe8e2cc5ce1a89919b68050b9ce2/.gitlab-ci.yml#L50. This uses GitLab’s generic package repositories, an internal package registry that is read and written through plain HTTP requests. In this case it was very useful, since it provides long-term storage for the package binaries.
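
To illustrate the HTTP interface (a minimal Node.js sketch; the CI itself does this with shell tooling, and the project ID, file names and token here are all placeholders), uploading a built package to a generic package repository is a single authenticated PUT request:

// Sketch: upload a built .ipk to GitLab's generic package registry.
// Endpoint shape per GitLab's generic packages API; all values are placeholders.
const fs = require('fs');

const url =
  'https://gitlab.com/api/v4/projects/12345/packages/generic/helloworld/1.0.0/helloworld_1.0.0_aarch64.ipk';

fetch(url, {
  method: 'PUT',
  headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN },
  body: fs.readFileSync('helloworld_1.0.0_aarch64.ipk'),
}).then((res) => console.log('upload status:', res.status));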

It would be very useful if the CI system could verify that the binaries actually work! I looked into OpenWrt’s Docker repository and found that they provide a root filesystem (rootfs) made exactly for CI testing on different architectures: https://gitlab.com/ndren/openwrtsdkbuild/-/blob/7d92349d53befe8e2cc5ce1a89919b68050b9ce2/.gitlab-ci.yml#L72. With a little tweaking to prepare the SDK, it worked great, and it means we can keep up with any upstream updates to the SDK or demo rootfs.

We now have a dropdown menu where we can choose the SDK and the git repository. We still have to check whether a GitLab Page could provide a better UI (with or without a login page in front).

Future plans: because the main CI system is set up this way, it easily allows for improvements. For example, adding support for a new architecture takes editing one line of YAML and testing. From here I can start looking at self-hosted GitLab and at testing more architectures.

TODO list:

1. Try GitHub Actions (see if it is portable)

2. Problems with GitLab package repository URLs (they require authenticated downloads, which is not ideal)

3. Problem with the SDK outputting lots of warnings about missing packages (can this be fixed without breaking the build?)

4. Check whether other OpenWrt SDK images are usable, so the build can be done in different environments.

Finally, a message from our hello world package, running in CI in Docker in Docker in aarch64 in amd64:

Hello world!
MyClass::MyClass()
MyClass::printMessage()
This is my message to print

GSoC ’23: Migrating LuCI Apps to JavaScript: A Comprehensive Guide

The latest OpenWrt versions introduce a new web interface system that eliminates the need for Lua. Instead, the client’s browser handles the rendering and computation, allowing routers to focus on their primary tasks. This change eliminates the Lua runtime, saves storage space, and makes routers faster. In the previous CBI-based system, pages were rendered on the router and sent as HTML to the browser, which increased the load on the router and could cause performance problems. To aid in this transition, LuCI offers the LuCI JavaScript API, which is now utilized for constructing web interfaces.

luci-app-olsr

I have successfully migrated luci-app-olsr to JavaScript, making it a valuable example for building or migrating LuCI apps. First and foremost, I would like to express my heartfelt thanks to my mentor Andreas Bräu. Without his unwavering support, I would not have been able to migrate this huge application.

This tutorial covers the essential aspects of the process, providing a comprehensive guide. The app is extensive, including both status views and an admin backend.

Below is the tree view representation of the directory structure for the app:

How to migrate your app:

ACLs

In the file root/usr/share/rpcd/acl.d/luci-app-olsr.json, we provide all the necessary access permissions for our application to function properly.

{
	"luci-app-olsr": {
		"description": "Grant UCI access for luci-app-olsr",
		"read": {
			"ubus": {
				"luci-rpc": [
					"*"
				],
				"olsrinfo": [
					"getjsondata",
					"hasipip"
				]
			},
			"file": {
				"/etc/modules.d": [
					"list",
					"read"
				],
				"/usr/lib": [ "list" ]
			},
			"uci": [
				"luci_olsr",
				"olsrd",
				"olsrd6"
			]
		},
		"write": {
			"uci": [
				"luci_olsr",
				"olsrd",
				"olsrd6"
			]
		}
	}
}

Similarly, in root/usr/share/rpcd/acl.d/luci-app-olsr-unauthenticated.json, we grant the required access permissions for our application when the user is not authenticated. This is used for status views.

To learn more about how ACLs (Access Control Lists) work, you can refer to this resource: OpenWrt’s docs. It is important to apply the principle of least privilege when configuring ACLs.

MENU

In the file root/usr/share/luci/menu.d/luci-app-olsr-backend.json, we define where our views will appear in the admin menu. This is used for admin-specific views.

{
	"admin/services/olsrd": {
		"title": "OLSR IPv4",
		"order": 5,
		"depends": {
			"acl": ["luci-app-olsr"]
		},
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrd"
		}
	},
	"admin/services/olsrd/display": {
		"title": "Display",
		"order": 10,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrddisplay"
		}
	},
	"admin/services/olsrd/iface": {
		"order": 10,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrdiface"
		}
	},
	"admin/services/olsrd/hna": {
		"title": "HNA Announcements",
		"order": 15,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrdhna"
		}
	},
	"admin/services/olsrd/plugins": {
		"title": "Plugins",
		"order": 20,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrdplugins"
		}
	},
	"admin/services/olsrd6": {
		"title": "OLSR IPv6",
		"order": 5,
		"depends": {
			"acl": ["luci-app-olsr"]
		},
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrd6"
		}
	},
	"admin/services/olsrd6/display": {
		"title": "Display",
		"order": 10,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrddisplay"
		}
	},
	"admin/services/olsrd6/iface": {
		"order": 10,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrdiface6"
		}
	},
	"admin/services/olsrd6/hna": {
		"title": "HNA Announcements",
		"order": 15,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrdhna6"
		}
	},
	"admin/services/olsrd6/plugins": {
		"title": "Plugins",
		"order": 20,
		"action": {
			"type": "view",
			"path": "olsr/frontend/olsrdplugins6"
		}
	}
}

On the other hand, in root/usr/share/luci/menu.d/luci-app-olsr-frontend.json, we specify the location where our view will be displayed in the menu. This is used for the status views of our application.

The path indicates where the JavaScript view to be rendered is located, relative to the htdocs/luci-static/resources/view directory.
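
For example (a minimal sketch to illustrate the mapping, not code from the actual app), the menu path "olsr/frontend/olsrd" resolves to a view module at the following location:

// htdocs/luci-static/resources/view/olsr/frontend/olsrd.js
'use strict';
'require view';

return view.extend({
	render: function () {
		// E() builds DOM nodes, _() marks strings for translation
		return E('h2', {}, _('OLSR IPv4'));
	},
});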

Forms and Flexible Views

By utilizing root/etc/uci-defaults/40_luci-olsr, we ensure that a straightforward configuration file for our application is created upon installation.

To explore the JavaScript APIs offered by LuCI, you can visit the following link: LuCI client side API documentation. A recommended starting point is the core luci.js class.

FORMS

LuCI forms allow you to create UCI- or JSON-backed configuration forms. To create a typical form, you start by creating an instance of either LuCI.form.Map or LuCI.form.JSONMap using new. Then, you can add sections and options to the form instance. Finally, invoking the render() method on the instance generates the HTML markup and inserts it into the Document Object Model (DOM). For a better understanding of how LuCI forms work, you can refer to the following: LuCI.form.

This is an example demonstrating the usage of LuCI.form within one of the admin views, using a small portion of the olsrd.js code. The full code for the file can be found here.

'use strict';
'require view';
'require form';
'require fs';
'require uci';
'require ui';
'require rpc';

return view.extend({
	callHasIpIp: rpc.declare({
		object: 'olsrinfo',
		method: 'hasipip',
	}),
	load: function () {
		return Promise.all([uci.load('olsrd').then(() => {
			var hasDefaults = false;

			uci.sections('olsrd', 'InterfaceDefaults', function (s) {
				hasDefaults = true;
				return false;
			});

			if (!hasDefaults) {
				uci.add('olsrd', 'InterfaceDefaults');
			}
		})]);
	},
	render: function () {
		var m, s, o;

		var has_ipip;

		m = new form.Map(
			'olsrd',
			_('OLSR Daemon'),
			_(
				'The OLSR daemon is an implementation of the Optimized Link State Routing protocol. ' +
					'As such it allows mesh routing for any network equipment. ' +
					'It runs on any wifi card that supports ad-hoc mode and of course on any ethernet device. ' +
					'Visit <a href="http://www.olsr.org">olsrd.org</a> for help and documentation.'
			)
		);


		s = m.section(form.TypedSection, 'olsrd', _('General settings'));
		s.anonymous = true;

		s.tab('general', _('General Settings'));
		s.tab('lquality', _('Link Quality Settings'));
		this.callHasIpIp()
		.then(function (res) {
			var output = res.result;
			has_ipip = output.trim().length > 0;
		})
		.catch(function (err) {
			console.error(err);
		})
		.finally(function () {
               
            //... This snippet represents only a small portion of the complete code.
	
		});

		s.tab('advanced', _('Advanced Settings'));

		var ipv = s.taboption('general', form.ListValue, 'IpVersion', _('Internet protocol'), _('IP-version to use. If 6and4 is selected then one olsrd instance is started for each protocol.'));
		ipv.value('4', 'IPv4');
		ipv.value('6and4', '6and4');

		var poll = s.taboption('advanced', form.Value, 'Pollrate', _('Pollrate'), _('Polling rate for OLSR sockets in seconds. Default is 0.05.'));
		poll.optional = true;
		poll.datatype = 'ufloat';
		poll.placeholder = '0.05';

		var nicc = s.taboption('advanced', form.Value, 'NicChgsPollInt', _('Nic changes poll interval'), _('Interval to poll network interfaces for configuration changes (in seconds). Default is "2.5".'));
		nicc.optional = true;
		nicc.datatype = 'ufloat';
		nicc.placeholder = '2.5';

		var tos = s.taboption('advanced', form.Value, 'TosValue', _('TOS value'), _('Type of service value for the IP header of control traffic. Default is "16".'));
		tos.optional = true;
		tos.datatype = 'uinteger';
		tos.placeholder = '16';

        //... This snippet represents only a small portion of the complete code.

		return m.render();
	},
});

Flexible Views

For enhanced flexibility in our pages, we can define the HTML manually, which I have done in the status views. This approach gives us more control over the page structure and content, allowing greater customization.

This is an example demonstrating the usage of flexible views within one of the status views, using a small portion of the topology.js code. The full code for the file can be found here.

'use strict';
'require uci';
'require view';
'require poll';
'require rpc';
'require ui';


return view.extend({
	callGetJsonStatus: rpc.declare({
		object: 'olsrinfo',
		method: 'getjsondata',
		params: ['otable', 'v4_port', 'v6_port'],
	}),

	fetch_jsoninfo: function (otable) {
		var jsonreq4 = '';
		var jsonreq6 = '';
		var v4_port = parseInt(uci.get('olsrd', 'olsrd_jsoninfo', 'port') || '') || 9090;
		var v6_port = parseInt(uci.get('olsrd6', 'olsrd_jsoninfo', 'port') || '') || 9090;
		var json;
		var self = this;
		return new Promise(function (resolve, reject) {
			L.resolveDefault(self.callGetJsonStatus(otable, v4_port, v6_port), {})
				.then(function (res) {
					json = res;

					jsonreq4 = JSON.parse(json.jsonreq4);
					jsonreq6 = json.jsonreq6 !== '' ? JSON.parse(json.jsonreq6) : [];
					var jsondata4 = {};
					var jsondata6 = {};
					var data4 = [];
					var data6 = [];
					var has_v4 = false;
					var has_v6 = false;

					if (jsonreq4 === '' && jsonreq6 === '') {
						window.location.href = 'error_olsr';
						reject([null, 0, 0, true]);
						return;
					}

					if (jsonreq4 !== '') {
						has_v4 = true;
						jsondata4 = jsonreq4 || {};
						if (otable === 'status') {
							data4 = jsondata4;
						} else {
							data4 = jsondata4[otable] || [];
						}

						for (var i = 0; i < data4.length; i++) {
							data4[i]['proto'] = '4';
						}
					}

					if (jsonreq6 !== '') {
						has_v6 = true;
						jsondata6 = jsonreq6 || {};
						if (otable === 'status') {
							data6 = jsondata6;
						} else {
							data6 = jsondata6[otable] || [];
						}

						for (var j = 0; j < data6.length; j++) {
							data6[j]['proto'] = '6';
						}
					}

					for (var k = 0; k < data6.length; k++) {
						data4.push(data6[k]);
					}

					resolve([data4, has_v4, has_v6, false]);
				})
				.catch(function (err) {
					console.error(err);
					reject([null, 0, 0, true]);
				});
		});
	},
	action_topology: function () {
		var self = this;
		return new Promise(function (resolve, reject) {
			self
				.fetch_jsoninfo('topology')
				.then(function ([data, has_v4, has_v6, error]) {
					if (error) {
						reject(error);
					}

					function compare(a, b) {
						// a sort comparator must return a number (negative, zero or positive)
						if (a.proto === b.proto) {
							return a.tcEdgeCost - b.tcEdgeCost;
						} else {
							return a.proto < b.proto ? -1 : 1;
						}
					}

					data.sort(compare);

					var result = { routes: data, has_v4: has_v4, has_v6: has_v6 };
					resolve(result);
				})
				.catch(function (err) {
					reject(err);
				});
		});
	},
	load: function () {
		return Promise.all([uci.load('olsrd'), uci.load('luci_olsr')]);
	},
	render: function () {
		var routes_res;
		var has_v4;
		var has_v6;

		return this.action_topology()
			.then(function (result) {
				routes_res = result.routes;
				has_v4 = result.has_v4;
				has_v6 = result.has_v6;
				var table = E('div', { 'class': 'table cbi-section-table' }, [
					E('div', { 'class': 'tr cbi-section-table-titles' }, [
						E('div', { 'class': 'th cbi-section-table-cell' }, _('OLSR node')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('Last hop')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('LQ')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('NLQ')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('ETX')),
					]),
				]);
				var i = 1;

				for (var k = 0; k < routes_res.length; k++) {
					var route = routes_res[k];
					// parseFloat, not parseInt: these metrics are fractional values
					var cost = (parseFloat(route.tcEdgeCost) || 0).toFixed(3);
					var color = etx_color(parseFloat(cost));
					var lq = (parseFloat(route.linkQuality) || 0).toFixed(3);
					var nlq = (parseFloat(route.neighborLinkQuality) || 0).toFixed(3);

					var tr = E('div', { 'class': 'tr cbi-section-table-row cbi-rowstyle-' + i + ' proto-' + route.proto }, [
						route.proto === '6'
							? E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://[' + route.destinationIP + ']/cgi-bin-status.html' }, route.destinationIP)])
							: E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://' + route.destinationIP + '/cgi-bin-status.html' }, route.destinationIP)]),
						route.proto === '6'
							? E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://[' + route.lastHopIP + ']/cgi-bin-status.html' }, route.lastHopIP)])
							: E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://' + route.lastHopIP + '/cgi-bin-status.html' }, route.lastHopIP)]),
						E('div', { 'class': 'td cbi-section-table-cell left' }, lq),
						E('div', { 'class': 'td cbi-section-table-cell left' }, nlq),
						E('div', { 'class': 'td cbi-section-table-cell left', 'style': 'background-color:' + color }, cost),
					]);

					table.appendChild(tr);
					i = (i % 2) + 1;
				}

				var fieldset = E('fieldset', { 'class': 'cbi-section' }, [E('legend', {}, _('Overview of currently known OLSR nodes')), table]);

                //... This snippet represents only a small portion of the complete code.

				var result = E([], {}, [h2, divToggleButtons, fieldset, statusOlsrLegend, statusOlsrCommonJs]);

				return result;
			})
			.catch(function (error) {
				console.error(error);
			});
	},
	handleSaveApply: null,
	handleSave: null,
});

RPCD: OpenWrt ubus RPC daemon

rpcd is the OpenWrt ubus RPC daemon responsible for the backend server. To expose shell script functionality via ubus, the rpcd plugin executes files located in the /usr/libexec/rpcd/ directory. When rpcd is triggered, it runs these executables, allowing various methods to be called. For instance, consider the file root/usr/libexec/rpcd/olsrinfo.sh. Here we create two new ubus methods, getjsondata and hasipip, on the object olsrinfo.

#!/bin/sh
. /usr/share/libubox/jshn.sh
. /lib/functions.sh

case "$1" in
  list)
    json_init
    json_add_object "getjsondata"
    json_add_string 'otable' 'String'
    json_add_int 'v4_port' 'Integer'
    json_add_int 'v6_port' 'Integer'
    json_close_object
    json_add_object "hasipip"
    json_close_object
    json_dump
    ;;
  call)
    case "$2" in
      getjsondata)
        json_init
        json_load "$(cat)"
        json_get_var otable otable
        json_get_var v4_port v4_port
        json_get_var v6_port v6_port

        jsonreq4=$(echo "/${otable}" | nc 127.0.0.1 "${v4_port}" | sed -n '/^[}{ ]/p' 2>/dev/null)
        jsonreq6=$(echo "/${otable}" | nc ::1 "${v6_port}" | sed -n '/^[}{ ]/p' 2>/dev/null)

        json_init
        json_add_string "jsonreq4" "$jsonreq4"
        json_add_string "jsonreq6" "$jsonreq6"
        json_dump
        ;;
      hasipip)
        result=$(ls /etc/modules.d/ | grep -E "[0-9]*-ipip")
        json_init
        json_add_string "result" "$result"
        json_dump
        ;;
    esac
    ;;
esac

We use these methods by declaring an RPC as follows and then calling it, as shown above in the topology.js code.

callGetJsonStatus: rpc.declare({
	object: 'olsrinfo',
	method: 'getjsondata',
	params: ['otable', 'v4_port', 'v6_port'],
})

Feel free to reach out to me via email if you have any doubts or questions. I’m here to help! Stay tuned for more valuable content as I continue to share useful information and resources. Thank you for your support!

GSoC ’23: Joint Power and Rate Control in Userspace for Freifunk OpenWrt Mesh & Access Networks

Introduction

Hello everyone!

I’m Prashiddha, a former GSoC contributor with Freifunk in 2022, during which I extended the Py-Minstrel-HT rate control to make it further comparable with its kernel counterpart, allowing for better experimentation between the rate controls in user space and kernel space. If you would like to know more about WiFi rate control and my previous project, please feel free to start with the introduction blog from 2022.

For GSoC ’23, I’ll be working on the research and development of a resource allocation algorithm that selects the optimal transmission rates in conjunction with the optimal power level. The joint power and rate control algorithm is intended to work on OpenWrt routers, making resource allocation decisions for each station connected to them.

Overview of Joint Power and Rate Control

A rate control algorithm, such as Minstrel-HT, determines the transmission rates that promise the maximum throughput for the given link condition. These algorithms usually assign a high static power level, which can cause interference, especially in a highly dense network. It is already evident that, for a given transmission rate, a higher transmission power implies a higher signal-to-noise ratio (SNR) but not necessarily higher throughput. Hence, it could be best to use the lowest transmission power level that still provides the optimum throughput. This would allow for better management of interference along with an increase in spatial reuse.

A graph presented in the dissertation of Prof. Thomas Hühn shows the relation between power level and measured throughput, where the throughput stops increasing after a certain power level.

WiFi Resource Allocation in Userspace

As part of the SupraCoNeX research, the development of the Open-source Resource Control API (ORCA) for OpenWrt access points has enabled WiFi resource allocation from user space. The API exposes relevant information from the mac80211 kernel subsystem, such as supported Modulation and Coding Scheme (MCS) rates and packet counts (ACKs), that resource allocation algorithms require to make decisions. Previously, the ORCA API could only set the MCS rates for wireless transmission; with a recent extension, it allows MCS rates to be set in conjunction with power levels. Consequently, it is now possible to develop a joint rate and power controller in user space.

To further facilitate resource allocation, a Python-based package called “Rateman” has also been developed, which uses minstrel-rcd to operate concurrently on multiple access points and parses the kernel information exposed by the API. Resource allocation algorithms can be executed through the package, which provides them with the parsed kernel information for decision-making.

Extending Py-Minstrel-HT with power control

Since a rate control algorithm in user space already exists, namely “py-minstrel-ht”, I plan to extend the user space Minstrel-HT algorithm with the capability for transmit power tuning, which also makes it convenient to test the effects of power tuning on a rate control algorithm. The main idea behind the joint controller is to let Minstrel-HT decide the set of best rates while a power tuning module tweaks the power levels to an optimal value. With the addition of power control, user space Minstrel-HT can be executed with different power settings to achieve various goals. For instance, three different power modes could be realized: fixed power, maximum throughput, and power ceiling.

The fixed power and power ceiling modes are straightforward to understand and implement. The fixed power mode, as the name suggests, sets the power level of all transmission rates to the specified value. Similarly, the power ceiling mode specifies the maximum power level that may be used for wireless transmission. The maximum throughput mode, however, is more complicated, as the wireless channel is highly dynamic and the controller needs to assess the quality of the link accurately in real time. Hence, the implementation needs to be well thought out in every part of the user space Minstrel-HT so as not to hamper the optimal throughput. Since power control adds another dimension to the sampling space, the set of possible sampling candidates grows tremendously. However, as Minstrel-HT already probes every 20 ms (a frequency of 50 Hz), sampling too much can greatly degrade the overall performance of the link.

Deliverables

  • Extension of py-minstrel-ht with a power controller, with complete documentation and an execution guide.
  • Ready-to-run demo scripts to showcase the potential of the joint rate and power control.
  • Evaluation of the joint controller, comparing its different modes and comparing it against other rate controls.

What’s Next?

At the beginning of the GSoC ’23 coding period, I’ll start by modifying the Rateman package so that the rate statistics dictionary is structured to relay success and attempt statistics per power level per rate. I will then modify Py-Minstrel-HT to accommodate the new rate statistics structure. This will allow algorithms to better assess the performance of an MCS rate at different power levels. Furthermore, I will extend the rate setting and probing functions of Py-Minstrel-HT to enable power annotation for a desired rate.

Initially, the power ceiling and fixed power modes will be implemented to make testing the power tuning easier. For this, Py-Minstrel-HT will also be extended to parse the power setting specified by the user in the rc_opts dictionary. If possible, the following questions could also be investigated before the implementation of the max throughput mode:

  • Is the power setting completely static with kernel Minstrel-HT? Does the driver play any role in independent power adjustment?
  • In general, is the throughput vs tx-power graph strictly non-decreasing? Is it possible that an MCS rate works at power level 𝑇𝑋𝑃1 but not at 𝑇𝑋𝑃2 where 𝑇𝑋𝑃1 < 𝑇𝑋𝑃2?
  • In a Minstrel-HT rate group, let 𝑅1 and 𝑅2 be two rates where 𝑅2 is a higher rate than 𝑅1. If 𝑅2 works at 𝑇𝑋𝑃1, does it imply that 𝑅1 also works at 𝑇𝑋𝑃1?

With this, I’d like to conclude the first blog on the joint power and rate controller in user space. Thanks for reading! Please feel free to reach out and connect with me 🙂

GSoC’23 : Automation tools for LibreMesh firmware build and monitoring

Introduction

Hi everyone! I’m samlo, a fullstack web developer who lives in a rural area in Italy and dedicates part of his time to building and maintaining a self-managed community network based on LibreMesh: https://antennine.noblogs.org/.

For GSoC’23, I’ll be working on a set of Ansible playbooks and roles for common network administration tasks, useful for the tech team of a community network based on LibreMesh.

This first blog post covers the background necessary to understand the project and its implementation.

What

As stated on the site https://libremesh.org/, LibreMesh is a modular framework for creating OpenWrt-based firmwares for wireless mesh nodes.

It is a set of packages (lime-packages) that support various mesh protocols, install and configure them properly, and offer the end user a dedicated web interface (lime-app).

It potentially supports every OpenWrt-supported device; following the documentation, you will find all the information needed to build the firmware, configure the main files and start using it.

It also has a list of network configurations used by different communities (network-profiles) that describe how to configure your firmware (installing packages), your network (e.g. editing the main configuration files), or both, to join the community mesh.

Motivations

LibreMesh is a set of packages you can include as feeds, via sources or precompiled packages, in an OpenWrt build system and then select those of your choice. However, it is only possible to overwrite the OpenWrt default configs manually, and backups of the produced configuration files have to be made per build.

In this scenario it is necessary to save configurations and have a way to reproduce the same firmware image.

Instead of writing a pile of bash scripts to handle just our community’s needs, I’m interested in exploring a configuration management and automation tool such as Ansible: https://docs.ansible.com.

This would simplify several common needs, in particular:

– automate the build of firmware images for groups of devices with specific configurations, packages, and LibreMesh and OpenWrt versions

– build test firmware images, versioning the experiments

– build on localhost or on a remote machine

– manage configurations (monitoring, VPN, SSL certificates) in the same system that also builds firmware images

– automate the insertion of information that may be synced between networking devices and servers

– share configurations in a meaningful and reproducible form with people inside and outside the local community network.

An issue

LibreMesh doesn’t provide a system to build consistently for every supported device, or to patch OpenWrt to meet the needs of particular targets or devices.

So every community has to work out how to build for the devices it uses.

One inspiration for this came from the Gluon project, https://gluon.readthedocs.io/, which includes a system to keep track of specific packages related to OpenWrt targets, subtargets and devices.

https://github.com/freifunk-gluon/gluon/blob/master/targets

Deliverables of the project

– Have a set of Ansible playbooks and roles to build OpenWrt firmware images

– Have a set of Ansible playbooks and roles to build LibreMesh firmware images

– Have an Ansible role to build LibreMesh firmware images depending on the LibreMesh version, the OpenWrt version, LibreMesh default packages, target- or device-specific packages, community packages, and community package sets linked to specific lists of devices

– Have a set of Ansible playbooks and roles to set up a monitoring/probing/alerting/metric-visualization system

Concluding Thoughts

In this design and blueprint phase of the project, before starting to code, I thought a lot about use cases, about who might want to use it, and about how to make contributing simple so the code stays updated in the future.

I look forward to publishing two collections of roles (openwrt and libremesh) on https://galaxy.ansible.com/ and to making the set of playbooks that use those roles available via a git repository.

I’ll update you in July.

GSoC’23 : LuCI Migrate to JavaScript-Based Framework

Project Details

LuCI is an open-source framework that is widely used to build web interfaces for embedded devices such as WiFi routers. In the old CBI-based system, pages were rendered on the router and delivered as HTML to the browser, which puts a higher load on the embedded device. This makes the system less efficient and can lead to performance issues.

To facilitate this migration, LuCI provides the LuCI JavaScript API, which is used to build web interfaces that render in the browser. Data is provided via RPCD and ubus. The project involves writing new RPCD services to provide data to the client side that was formerly gathered directly on the router.

Project Goals

The migration of LuCI to a JavaScript-based framework will bring numerous advantages to the OpenWrt community and other users of OpenWrt-based devices. One of the primary benefits is enhanced performance and reduced load on embedded devices, such as WiFi routers. By shifting the rendering of pages to the client-side using JavaScript, instead of on the router, the workload on the router will be decreased, resulting in a better user experience, particularly for users with lower-specification routers.

Another benefit of the new system is increased flexibility for developers. The utilization of a client-side JavaScript framework provides developers with more options for customization and extension of the LuCI web interface in the future. It also establishes a standardized approach for developers to interact with the router’s services, retrieve or set configuration data, and facilitate the development and maintenance of LuCI-based applications.

Community networks, which often rely on lower-specification devices, can greatly benefit from these improvements. The improved performance and reduced load on devices will make it easier for community networks to manage and maintain their networks using LuCI-based tools.

In summary, the migration of LuCI to a JavaScript-based framework will bring significant benefits to the community and users of OpenWrt-based devices. These benefits include improved performance, increased flexibility for developers, and potentially easier management of LuCI-based applications for community networks.

Project Progress

I successfully migrated luci-app-uhttpd to JavaScript, gaining valuable experience and insights that will help me migrate more advanced applications. The migration improved performance, enhanced the user experience, and gave me greater flexibility as a developer. I’m excited to continue contributing to the growth of LuCI and further advancing OpenWrt.

I am currently working on the migration of luci-app-olsr to a JavaScript-based framework. It has been an engaging and exciting experience so far. By leveraging JavaScript, I aim to enhance the performance, usability, and customization options of the web interface for olsrd. I am excited to contribute to the improvement of this essential tool for mesh routing and network management on OpenWrt-based devices.

Community Bonding Period

During the GSoC community bonding period, I have had an incredible learning experience. I have been fortunate to have constant communication and guidance from my mentor, Andreas Bräu, who has been exceptionally supportive throughout. Whenever I faced challenges or got stuck, my mentor was there to provide valuable insights and assistance. Additionally, this journey has made me more familiar with the OpenWrt and Freifunk communities, giving me a broader understanding of the ecosystem and related technologies. The community bonding period has been instrumental in preparing me for the migration of further applications and has fostered valuable connections within the community.

GSoC’23 : Implementation of Web Interface of Retroshare

Hello folks 👋
I am Sumit Kumar Soni, a frontend developer who loves Linux, design and contributing to open source. Being a part of GSoC fills me with excitement, and I hope to learn, expand my knowledge and make some impactful contributions to the implementation of the Retroshare WebUI throughout this summer.

Project Context

RetroShare provides a decentralized, encrypted connection with maximum security between nodes, where they can chat, share files, mail, etc. It uses GXS (Generic eXchange System), which provides asynchronous distribution, authentication, privacy and security of generic data. RetroShare is a C++ program built around a headless library called libretroshare. This library powers a headless server (retroshare-service), a standalone app with a user interface built using Qt, an Android client and more.

Moreover, a web interface is being developed that allows users to control the headless server from their web browsers. The web interface uses an automatically generated JSON API, which includes all the functions necessary to send data to and receive data from the software, communicating with libretroshare.
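
As a rough illustration of how a client talks to this API (a sketch only: the port is the usual retroshare-service default, while the credentials and search string are placeholders), a request to the turtle search endpoint used by the file search feature looks like this:

// Sketch: calling the auto-generated JSON API of retroshare-service.
// Port, credentials and body values are placeholders.
fetch('http://127.0.0.1:9092/rsFiles/turtleSearchRequest', {
  method: 'POST',
  headers: {
    Authorization: 'Basic ' + btoa('username:password'),
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ matchString: 'ubuntu' }),
})
  .then((res) => res.json())
  .then((data) => console.log(data)); // a request id; results arrive later as events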

Goals and Deliverables

This is the homepage of Retroshare’s web interface.

The previous GSoC contributors have accomplished astonishing work on the WebUI, yet there remain many features to implement and bugs to fix. This summer, I intend to implement some of the most important features and make the design more appealing and user-friendly as well.

The main deliverables during the GSoC period will be:

  • Config Section
    • Implement the panels which have not yet been implemented from the Qt application.
    • Enhance the already existing panels and fix the existing inconsistencies.
  • Mail Section
    • Implement the Reply, Reply All and Forward feature in the mail view.
    • Fix the Compose Mail popup and make it usable.
  • Forums Section
    • Implement the remaining features from the Qt app in the forum section.
    • Implement a better layout of the forum and posts view, comment and reply feature etc.
  • Boards and Channel Section
    • Implement the remaining features and make them more user-friendly and easy to use.
  • Implement a feature for Configuring and Visualizing own shared files with features such as managing shared directories, user permissions etc.

Previous Contributions

During and before the community bonding period, I contributed features to the implementation of the Retroshare WebUI. The most recent contribution was the implementation of file search in the Files section, which wouldn’t have been possible without the help of my mentor Cyril Soler.

I have also implemented various other features, such as an attachment view for the Mail section to see all mail attachments in one place, improvements to the sidebar, etc. Likewise, I have tried to reduce the overall size of the WebUI by minifying the files. The PRs I have raised so far can be seen here.

What’s Next?

Currently, the community bonding period is going on, and I have familiarized myself with my mentors and some fellow members. The mentors are really supportive, and I couldn’t have made it to GSoC without my mentor Cyril Soler and one more amazing person, Defnax. In addition, I actively participate in weekly meetings where I report my progress and discuss different approaches.

So, for the first phase, which lasts until the midterm evaluation, I plan to work on the first two features listed in the Goals and Deliverables section of this blog. It has been an amazing learning experience, and I am looking forward to achieving more amazing things this summer with Freifunk and Retroshare!

Thanks for reading and have a great day 😃

GSoC’23 Qaul : An Internet Independent communication application

Preface

In this blog post, we will discuss the P2P chat application qaul.net and look at the importance of truly protecting our communication. I will also shed some light on what we are planning as a Google Summer of Code 2023 project for this application, to make it more accessible, robust and independent of any external services.

Qaul.net is a completely internet-independent, peer-to-peer chat application: within a local network, devices connect directly and communication takes place between them. There is no chance of being wiretapped, since it does not work over the internet. And if you get caught up in a situation where internet services are down, deliberately or not, you can still become part of the network using qaul. All you need is a working device and the application itself.

Use case

There are multiple examples of places where governments cut communication links over a particular region due to riots, suspicious activities or, sometimes, political games. With qaul, the aim is to provide local communication links so no one is left behind. For example, at the India–Pakistan border there is constant tension between the nations over the Kashmir region. Because of this, internet services are blocked and the communication infrastructure is damaged at times. Here, qaul can address the issue, and people can keep communicating, which falls under the very basic rights of human beings. Another use case arises in huge crowd events. Recently, my university organized its techno-cultural fest, with over 7,000 students gathered on one ground. With so many people in a single place, we were hardly able to use everyday communication services like texting on WhatsApp or calling; the base stations struggled to handle such a huge amount of traffic in one cell. So what if we students had used qaul to create a network of our own devices? It could have helped us text and find lost friends.

Enough of use cases. But you might wonder: how does it work? It uses a mix and match of various communication protocols and cryptographic techniques, which make the application decentralized and internet independent.

Implementation

Each device is called a node and has a cryptographic ID called the qaul-id. To discover peer devices, qaul uses mDNS (Multicast DNS), which resolves the IP addresses of peers without contacting any root DNS servers (for .com, .edu, .uk, etc.), so no data is transferred to anyone outside the local network. Messages stay end-to-end encrypted: they are signed with cryptographic keys on sending and verified on the receiving end. For routing, a distance vector routing protocol is used, based on the round-trip time (RTT) per connection. Under the hood, every 5 seconds each device pings its neighbour nodes to measure the RTT, and every 10 seconds each node sends its routing information to all neighbouring nodes. Any transport can carry a route, be it LAN, internet or even BLE (in progress); the protocol chooses the best route and sends the message over it. The qaul-id persists even while a device is disconnected from the network, until the application is uninstalled, so you can go offline, come back, and reconnect to the same local network.
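
As a toy illustration of the routing idea (this is not qaul’s actual code, which is written in Rust; the names and numbers here are made up), a node simply prefers the neighbour route with the lowest measured RTT:

// Toy sketch: pick the route with the lowest round-trip time.
function bestRoute(routes) {
  return routes.reduce((best, r) => (r.rttMs < best.rttMs ? r : best));
}

const route = bestRoute([
  { via: 'LAN', rttMs: 12 },
  { via: 'Internet', rttMs: 85 },
]);
console.log(route.via); // 'LAN', the faster path wins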

Project Details

We are going to implement a Matrix bridge for qaul.net this summer. Why would we need that in the first place? Because it will help us broadcast messages over the many communication media supported by Matrix. Messages could then be shared from the local network to other networks and stored, based on user consent. Let’s say my government forbids keeping an application like qaul, but luckily doesn’t spy on my Slack and allows me to keep that. I could simply transfer my messages from qaul to Slack through the Matrix server. As I said earlier, you can reconnect to the same local network using your qaul-id, which is lost if you uninstall the application; with a bridge, you would still receive messages in another communication medium, and you could communicate from Slack as well. A relay bridge is the appropriate name for this. But a relay bridge alone is not an effective solution, because we use cryptographic encryption and decryption per node, so how would anyone know your real identity? That is where puppeting comes in: you can appear as your existing qaul user on Slack or Telegram instead of as a ghost username, enabling full two-way communication. So, in total, we need a relay bridge and double puppeting (on both ends) with Matrix. Bridges from Matrix to other applications are already implemented and won’t be an issue.

There is recent news that the Indian government is banning 14 messaging applications. Among them is Element, a Matrix client that I use for all my Matrix chats. Due to misconceptions about the technology, the government doesn’t understand what it means to ban a decentralized application.

I hope we can bring qaul to a wider audience and get it working everywhere. Even if a government bans qaul from the Play Store or other download channels, it can still be passed from one device to another over open networks or by application file transfer. And a government can never block it from functioning, since it is completely internet independent. It simply spreads like a virus and can be used wherever needed.

I would like to thank my mentor Mathias Jud for helping so much in explaining and clarifying the concepts behind the internal workings of the application, reviewing the proposal, and helping get IPv6 address support launched in one of the new beta versions. I am looking forward to working on this project with lots of enthusiasm.

GSoC ’23: Documenting the OpenWrt compilation process to set up a PPA

Getting the OpenWrt PPA set up will require understanding and documenting the current approaches for portable compilation of OpenWrt packages. This is my first impression of this task.

After verifying that the “hello world” program provided by Zoobab compiled correctly, I looked into how exactly this process works. The Docker build script depends on the toolchain compilation script. Both require an internet connection, and I will have to keep this in mind when porting this to an OpenBuildService (OBS) package: “Mentioning repositories directly is not allowed (using obsrepositories:/ is ok)“.

Interesting technical details I found while researching how to transfer the current Docker approach to OBS:
– At first I did not know what the flag V in `make V=s` was. It turns out it enables verbose compilation output on the console in the OpenWrt build system. (I was surprised to see this is undocumented in the new version of the guide.)
– The package manager for OpenWrt uses ipk files. The helloworld package (compiled under the OpenWrt SDK) has been successfully installed inside an OpenWrt rootfs and runs great!
– The current system uses cascading Docker images (multi-stage builds): the OpenWrt SDK container image is produced first, and then a new Docker image compiles the actual package. This is useful because it keeps the package being compiled separate from the SDK itself.
– It turns out that obs-build from OpenBuildService assumes a working chroot, so one must be created manually. The actual container must be merged separately into the upstream project, outside of obs-build.

A message from our working hello world package:

root@localhost:/# helloworld
Hello world!
MyClass::MyClass()
MyClass::printMessage()
This is my message to print


I can’t wait to learn more and get up to speed on how to approach this project. I’ll see you later!

Six exciting projects at Google Summer of Code

Google Summer of Code Logo

Freifunk, as an umbrella organization, unites wireless communities like Ninux, qaul.net, Guifi.net, and Evernet e.G. Our communities rely extensively on OpenWrt Linux, OLSR, BATMAN, LibreMesh, and Retroshare.

We are proud to announce our participation in the Google Summer of Code (GSoC) program. This year, Freifunk has six exciting projects that will contribute to the development of the Freifunk firmware, mesh networking protocols, and user-friendly tools.

The six projects and their respective mentors are:

Project Title – Contributor – Mentors

  • Automation tools for LibreMesh firmware build and monitoring – samlo – Ilario Gelmetti, stefca
  • Joint Power and Rate Control in User space for Freifunk OpenWrt Mesh & Access Networks – Prashiddha – Thomas Hühn, Julius Schulz-Zander
  • LuCI Migration to JavaScript based Framework: Improved UX and Performance on OpenWrt-based Devices – Ayushman – Paul Spooren, Andi Bräu
  • Implementation of Web Interface of Retroshare – Sumit Kumar Soni – Cyril Soler, G10h4ck
  • Qaul Matrix Bridge – Harshil Jani – Mathias Jud
  • OpenWRT PPA Part 2 – Mr. Andrei – Zoobab
Our table of projects

These projects are all aligned with Freifunk’s mission to build a decentralized, community-owned network that is free from corporate control and censorship. By participating in GSoC, Freifunk is able to tap into the talent and creativity of the wider open source community and accelerate its development efforts.

We are excited to see what our GSoC contributors will achieve this summer and we look forward to sharing their progress with the wider Freifunk community. Stay tuned for updates on our blog and social media channels!

In conclusion, Freifunk’s participation in Google Summer of Code is a great opportunity to advance the development of its mesh networking technology and tools. We are excited to see the impact these projects will have on our communities.

Our history of 11 successful summers of code during the last 15 years can be found in this blog.