qaul.net – Choosing a Web Framework

These days, Rust has quite the collection of web frameworks to choose from. The Are we web yet? list includes some 17 frameworks, and the number is growing constantly. One of my activities since the last update has been selecting one of these frameworks to base qaul.net’s HTTP API on.

Requirements

For qaul.net we are doing our best to keep the binary size as low as possible. The qaul.net binary should be simple to move around, and the larger the program gets, the harder that becomes. Aside from size, we also want to pick a framework that’s easy to use and likely to be maintained for a while (it’d be a shame for the framework to be abandoned after we complete the project).

Results

From this initial list of 17 frameworks, 6 were cut because they were either abandoned or I couldn’t get any of their examples running.

We evaluated a series of test programs for various web frameworks and found that, for the most part, everything came within a couple hundred kilobytes of everything else. actix-web was the largest in our testing, coming in at around 2.8 MB for a fairly simple example program. This makes sense: actix-web pulls in the actix actor framework, which probably comes with a fair bit of code the other frameworks don’t bother with. rouille and Thruster were the smallest by a decent margin, coming in at 1.2 MB for a hello world. This also makes sense, as rouille is based on tiny-http and forgoes any sort of async code, and Thruster by default uses its own backend. Most of the remaining frameworks came in between 1.7 and 2.3 MB, and this too makes sense: most of them are based on hyper, and even using hyper directly gets you an executable of around 1.6 MB.
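For context on what was measured, the test programs were all on the order of a hello world. The exact programs aren’t reproduced in this post, but a rouille version is only a few lines; the binary is then built with `cargo build --release` before checking its size.

```rust
// Minimal rouille "hello world", shown only to illustrate the kind of program
// the size comparison was run on (not the exact test program from the post).
// Build with `cargo build --release` and measure target/release/<binary>.
use rouille::Response;

fn main() {
    rouille::start_server("0.0.0.0:8000", move |_request| {
        // Every request gets the same plain-text response.
        Response::text("hello world")
    });
}
```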

There were a few frameworks I was quite interested in that I decided against because of lack of documentation or non-functional examples.

Conclusion

In the end I decided to go with iron, as I find its middleware model to be quite pleasant and extensible, and it’s fairly well documented. iron came in squarely in the middle of the pack in terms of binary size, but I had strong maintainability and usability concerns about the very smallest frameworks, so this choice only cost a couple hundred kilobytes.
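To give a sense of that middleware model, here is a small sketch (not actual qaul.net code): a BeforeMiddleware that logs each request, linked into a Chain ahead of a plain handler.

```rust
// Illustrative iron example: a BeforeMiddleware runs before the handler for
// every request; AfterMiddleware and AroundMiddleware work similarly.
use iron::prelude::*;
use iron::{status, BeforeMiddleware};

struct RequestLogger;

impl BeforeMiddleware for RequestLogger {
    fn before(&self, req: &mut Request) -> IronResult<()> {
        // Log the method and URL, then let the chain continue.
        println!("{} {}", req.method, req.url);
        Ok(())
    }
}

fn hello(_: &mut Request) -> IronResult<Response> {
    Ok(Response::with((status::Ok, "Hello, world!")))
}

fn main() {
    let mut chain = Chain::new(hello);
    chain.link_before(RequestLogger);
    Iron::new(chain).http("localhost:3000").unwrap();
}
```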

Next Steps

Presently I’m working on an authentication system for the API. Hopefully next month I’ll be able to talk about that, and a bit more about the modular design of the API.

Web Interface for Retroshare – Update 1

Since the first post, there has been quite a lot of progress on development of the new Web Interface for Retroshare.

As the build process does not use any JavaScript-specific tools, I spent a lot of time making the development process as streamlined as possible. All the components are logically isolated, and functionality has been moved to its relevant place. Using Mithril also helped a lot: it has the concept of components, a mechanism for encapsulating different parts of views, which massively helps with project organization and code reuse.

One more important feature that was completed is automatic data refresh and redrawing of views. I decided to use a combination of Mithril’s lifecycle methods and the browser’s setTimeout method to create background tasks which, when attached to their respective components, periodically fetch and refresh data. The background task is activated whenever a component’s view is rendered and is killed when the component goes off display. Mithril also has its own auto-redraw system, which refreshes views when a component’s event handlers are called, but it does not refresh when component attributes are updated or when raw promises resolve.

Here are a few screenshots displaying the new UI style:

Next Steps

Along with the goals displayed in the first post, these will be given a higher priority for completion during the coming phase leading up to the next evaluation:

  • Node panel in the config tab for handling most (if not all) network settings, like node information, Net mode, NAT, download limits, ports, etc.
  • Shares panel showing shared files, allowing the user to edit them and set view permissions, among other options.
  • Peers tab to display connected peers and set their reputations.
  • Modify the main Retroshare client to enable/disable the WebUI.

GSoC 2019 – Simulating eventual consistency for qaul.net

As I wrote in my prior blog post, I am working on qaul.net for this GSoC. Specifically, I’m working on simulating various components of the system so that other parts can be tested and developed independently, reducing the overall coupling between various elements.

There is, however, some inevitable coupling. My original design called for three phases of implementation: quickly simulating a network of hosts, accurately portraying a qaul.net network, and providing an HTTP API mirroring the real qaul.net service, in that order. As it turns out, the API is not yet well enough defined to work on the simulator from the ground up, so I’ve focused my efforts on the API itself and on the second step of the process.

visn

visn, meaning “knowledge” in Yiddish, is the testing framework I’ve created to bypass some of the initial effort of building a real simulation of the eventually-consistent real world network. Instead, visn uses synthetic event streaming to simulate incoming messages, which are translated into actual mutations of the state of a node through a resolution function.

(Diagram: synthetic events flow through a resolution function to become real calls on the system under test, changing its state.)

For example, we could define some synthetic events such as MessageReceived and FileUpdate. Our node under test, A, receives two FileUpdate events for some file (say, hello.txt), one creating the file and one changing its contents, and a MessageReceived whose message refers to hello.txt.

It’s desirable to have the final displayed message show the updated contents of the file. To test this, we define the mapping from synthetic events to real state changes, and then write tests for each possible arrival order case.
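As a concrete sketch of the idea (the types and functions below are illustrative assumptions, not visn’s or libqaul’s real API), the resolution function maps synthetic events onto state changes, and a test replays one particular arrival order:

```rust
// Hypothetical sketch of the approach behind visn; names and API are invented
// for illustration and are not the real visn or libqaul interfaces.
use std::collections::HashMap;

#[derive(Clone)]
enum SyntheticEvent {
    FileUpdate { name: String, contents: String },
    MessageReceived { refers_to: String },
}

#[derive(Default)]
struct Node {
    files: HashMap<String, String>,
    messages: Vec<String>,
}

/// The resolution function: translate a synthetic event into a real state change.
fn resolve(node: &mut Node, event: SyntheticEvent) {
    match event {
        SyntheticEvent::FileUpdate { name, contents } => {
            node.files.insert(name, contents);
        }
        SyntheticEvent::MessageReceived { refers_to } => {
            let body = node.files.get(&refers_to).cloned().unwrap_or_default();
            node.messages.push(body);
        }
    }
}

#[test]
fn message_sees_latest_file_contents() {
    // One possible arrival order; other orders get their own tests.
    let events = vec![
        SyntheticEvent::FileUpdate { name: "hello.txt".into(), contents: "v1".into() },
        SyntheticEvent::FileUpdate { name: "hello.txt".into(), contents: "v2".into() },
        SyntheticEvent::MessageReceived { refers_to: "hello.txt".into() },
    ];
    let mut node = Node::default();
    for e in events {
        resolve(&mut node, e);
    }
    assert_eq!(node.messages[0], "v2");
}
```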

Without visn, managing these changes becomes rather challenging. With a framework in place to manage the state and order of arrival, however, all synthetic event translation can be done in a single function, and each test is decoupled from the rest of the software, so true unit tests can be used.

The Big Picture

Now that visn is in a working state, I can use it to help build out the libqaul API and start testing its components. This will help make future development faster, as well as providing a target for more detailed network simulation in the future.

GSoC 2019 – qaul.net HTTP API

qaul.net allows spontaneous, ad-hoc networks to form between any wireless-enabled devices over whatever medium is available. Currently, the project is undergoing a Rust rewrite, which will provide enhanced security, modularity, and performance.

Project Overview

To make networking with qaul.net as easy as possible, an HTTP API layer is being developed. This should enable applications using qaul.net to be written in any language with a decent HTTP implementation, lowering the barrier to entry substantially.

In the next couple of weeks the majority of my focus will be on making sure that we’re building from a strong base. As with all projects, we want to make sure we’re not going to end up with a hacky mess in a month or so, and with qaul.net we want to keep binary size to a minimum. Rust is a language with many, many web frameworks, so I’ll be spending a lot of time evaluating them, looking mostly at ease of use and binary size.

Presently my top two contenders are `actix-web` and `gotham` but in the coming weeks I’ll be evaluating as many as I can so stay tuned for the results.

About Me

Hi, I’m jess, a Computer Engineering student in the Boston area.

GSoC 2019 – Building a Network Simulator for qaul.net

qaul.net allows spontaneous, ad-hoc networks to form between any wireless-enabled devices over whatever medium is available. Currently, the project is undergoing a Rust rewrite, which will provide enhanced security, modularity, and performance.

As with any rewrite, testing is a major component of the process. In the case of qaul.net, that requires a fairly sophisticated model of the underlying network protocols in use and even some of the physics behind them (like jitter, also known as “lag spikes”, which has the potential to foul up any routing protocol).

Project Overview

The problem of building a simulator is a common one, but in this case it is somewhat complicated by its dual use here, in both automated testing and as a way to quickly provide (fake) data to new applications, UIs, and other software components.

The simulator needs to be able to quickly simulate a network of hosts communicating over various media, accurately portray a qaul.net network complete with cryptographic identity management, and finally provide an HTTP API mirroring that of the real qaul.net service, so applications can develop against a known-good testing state.

Quickly simulating a network of hosts is relatively easy. Using an existing graph data structure library (petgraph), along with ancillary data structures, the simulator will build a world state model representing hosts as vertices and connections (over Wi-Fi, Bluetooth, and even Internet overlay connections) as edges. The simulator will then provide a “tick”-based simulation model in which messages are passed from host to host via each medium in a simulated timestep.
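A rough sketch of how such a world model might look with petgraph follows; the Host and Medium types and the single-step delivery logic are illustrative assumptions, not the simulator’s actual code.

```rust
// Illustrative sketch only: hosts as vertices, links as edges, one "tick" of
// message delivery. The types here are assumptions, not the real simulator.
use petgraph::graph::{NodeIndex, UnGraph};

struct Host {
    name: String,
    inbox: Vec<String>,
}

#[allow(dead_code)]
#[derive(Clone, Copy)]
enum Medium {
    WiFi,
    Bluetooth,
    InternetOverlay,
}

fn main() {
    let mut world: UnGraph<Host, Medium> = UnGraph::new_undirected();
    let a = world.add_node(Host { name: "A".into(), inbox: vec![] });
    let b = world.add_node(Host { name: "B".into(), inbox: vec![] });
    world.add_edge(a, b, Medium::WiFi);

    // One simulated timestep: deliver a message from A to each of its neighbours.
    let neighbours: Vec<NodeIndex> = world.neighbors(a).collect();
    for n in neighbours {
        world[n].inbox.push("hello from A".to_string());
        println!("{} received {} message(s)", world[n].name, world[n].inbox.len());
    }
}
```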

Accurately portraying a qaul.net network is a little harder, but not much more so. With the network simulator built up, each host needs only to be given a small amount of behavior and a cryptographic identity to begin to act like a real qaul.net network. This behavior will be parametric, so testing all kinds of scenarios (including adversarial ones) will be possible.

Providing an HTTP API mirroring the real qaul.net service is the final step, and will probably mean wrapping the simulator in an asynchronous shell so that a Futures-based HTTP library can run alongside it. Once this is complete, the web server can be spun up and applications can be pointed at it as though it were a real network!

Progress So Far

Up to now, I’ve spent my time getting comfortable in the qaul.net Zulip chat and evaluating the fundamental abstractions that I’ll use to build the simulator core. I currently have about a hundred lines of code written, in addition to a few “spike” components that I wrote just to test out various concepts. Starting today, though, the real code is happening!

About Me

I’m Leonora, a computer science student at Beloit College. I’ve written a lot of Rust and really like the language, but I’ve spent most of the last two years working on data analysis applications like the Open Energy Dashboard and CancerIQ. I’m super excited to be working on the next generation of qaul.net and the open, decentralized internet!

GSoC 2019 – OpenWrt Firmware Wizard

The OpenWrt project is a Linux operating system targeting wireless routers. Instead of trying to create a single, static firmware, OpenWrt provides a fully writable filesystem with package management. This frees you from the application selection and configuration provided by the vendor and allows you to customize the device through the use of packages to suit any application. For developers, OpenWrt is the framework to build an application without having to build a complete firmware around it; for users this means the ability for full customization, to use the device in ways never envisioned.

Project Overview

Currently, in order to install OpenWrt on a device, the user has to go through a very confusing process, and upgrading to a newer version of OpenWrt later is another hassle. Often the end users are not technically literate enough to carry out the process smoothly.
The aim of this project is to simplify this whole process through a number of changes to the OpenWrt buildroot, a web-based firmware wizard, and a plugin for upgrading the system automatically.

The project has three sub-parts (in order of implementation):

  1. Creating meta-data for each target
    To get the data for the sub-parts that follow, the following are required:
    1. Modification of buildroot makefiles to output JSON file for each target device
    2. Add additional metadata of target devices to makefiles
    3. Writing a script to consolidate the JSONs for the entire build into a single file to be read by the Firmware Selector (see the sketch after this list)
  2. Initial Firmware Retrieval
    A web app that helps users select the correct image and explains how to apply it will help the adoption of OpenWrt.
    Writing a webapp with the following set of features:
    1. Display device manufacturer / model name / hardware version / OpenWrt release / link to images
    2. Display help on how to apply the image depending on its type (factory/sysupgrade image, rootfs/kernel image)
    3. Select model/images by typing in part of the device model name
    4. Create images with custom package selection
  3. Firmware Upgrades
    Creating a router web interface package for LuCI to check and apply new images.
    Required Features:
    1. Search for updates
    2. Apply update

Progress made till now

So far, a variety of things have been accomplished, such as:

  • Gained know-how of the OpenWrt build system
  • Gained know-how of LuCI plugin development
  • Finalization of JSON schema
  • Project road map for the next phase

Next Phase

As we will be moving toward the next phase of the program, new progress will be made.

The deliverable at the end of the next phase will be:

Generation of JSON that can be accessed as a data source:

1. Generation of JSON for each target device along with images
2. Addition of device specific metadata for a sample set
3. Creation of a consolidated JSON file from numerous target JSON files.

About Me

I’m a 20-year-old undergrad student from Delhi Technological University, New Delhi, India, majoring in Computer Engineering. I have interests in computer programming and design.
I was a GSoC ‘17 student with Drupal, where my project was to port the Vote Up/Down module to the latest version of Drupal’s core.
I am really excited to be working with my awesome mentors Moritz Warning and Paul Spooren for the summer. Upon finishing this project, I plan to be a consistent member of the community.

GSoC 2019 – Upgrading the Meshenger App

Meshenger App Logo

Overview

Meshenger, also known as the Local Phone app, is an Android app that allows voice and video communication without any server or Internet access, working over a local network. Last year, a successful technical demo of the app was built under GSoC and published on F-Droid. This year’s GSoC target is to make the app stable and versatile and to expand its usability and user base, which will directly benefit community networks, since the app primarily depends on them and communication over local networks will always be its foremost priority. The app will be revamped and new features will be added, like allowing calls over the Internet, securing authentication at the initial handshake, and fixing bugs and issues, which will improve its performance and give a great overall user experience. Here is a link to the GitHub page of the Meshenger project (https://github.com/meshenger-app/), which you are all welcome to explore and contribute to.

About Me

My name is Vasu Harshvardhan and I am a second-year student pursuing a Bachelor of Technology (B.Tech) in Electronics and Communication Engineering at Jamia Millia Islamia, New Delhi, India.

I have a special interest in open source software and have always aspired to be part of something that could help make everyone’s life better through technology, and contributing to open source is clearly the best way to do so. This is the first time I am participating in GSoC as well as contributing to the Freifunk community, and through this project I plan to become a bona fide member and a long-term contributor to the Freifunk organization.

Project Goals

The three main goals of this project are:

  • Allowing audio and video communication over the Internet. As the app uses the WebRTC library, a special signalling mechanism needs to be implemented for the SDP handshake between the two peers. A STUN server will also be needed to obtain the public IP addresses of both clients. WebRTC will then establish the P2P connection once both peers have exchanged the signalling data, and they will be able to communicate with each other over the Internet.
  • Establishing secure authentication at the initial handshake between the two devices. For this, I have decided to use the libsodium library to encrypt the SDP offer/signalling blob before it is sent to the other device and fed to WebRTC.
  • Polishing the app by improving its UI/UX, fixing issues/bugs etc. in order to make the app stable and boost its performance.

Next Steps

During the Community Bonding Period of GSoC ’19, I researched implementations of the proposed features, familiarized myself with the app’s codebase, and had productive discussions with my wonderful GSoC mentor on this project, Moritz Warning. Following his suggestions, I have decided to work on the security feature of the app first, as it will help me get into the flow of things and lead to progressive, productive development of the app.

BMX7 WireGuard Tunneling

BMX7 is an IPv6 native dynamic routing protocol for mesh networks which offers very advanced features and a small network overhead – thanks to the distance-vector strategy and a set of optimizations – and a great way to extend its functionality via plugins. BMX7 is also referenced as SEMTOR (Securely-Entrusted Multi-Topology Routing for Community Networks).

(For Axel Neumann’s presentation of BMX7 on BattleMesh, as well as the SEMTOR Whitepapers refer to the Further Info section).

Project Abstract

BMX7 offers plugins that are used for distributing small files, setting up tunnels, or offering stats on the network structure. Currently the connection between a client node and the gateway is established via IPIP (IPv4/6 over IPv6), which is unencrypted and therefore possibly readable by attackers. As mesh networks usually operate over unencrypted wireless connections, the attack surface is considerable.

Our proposed solution is to combine the current cryptographic stack of BMX7 with the one used by WireGuard. The process will be iterative: first, binary calls from bmx7 to the userland wg tool will be introduced; afterwards, efforts will center on creating a new plugin that implements WireGuard routing using part of the existing cryptographic primitives; and finally the existing tunnel plugin will be combined with the wg one. Throughout, one rule is set: do not implement cryptography yourself; mistakes will be made.
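To give a flavour of that first iteration, the userland calls might look like the following. This is an illustrative sketch in Rust rather than bmx7 code (bmx7 itself is written in C), and the interface name, key path, addresses, and peer key are made-up examples:

```rust
// Illustrative only: the kind of calls to userland `ip` and `wg` the first
// iteration would make. Interface name, key path, addresses and the peer key
// are placeholders, not real configuration.
use std::process::Command;

fn run(cmd: &str, args: &[&str]) {
    let status = Command::new(cmd)
        .args(args)
        .status()
        .expect("failed to spawn command");
    assert!(status.success(), "{} {:?} failed", cmd, args);
}

fn main() {
    // Create the WireGuard interface and give it a private key and listen port.
    run("ip", &["link", "add", "dev", "wg0", "type", "wireguard"]);
    run("wg", &["set", "wg0", "private-key", "/etc/wireguard/wg0.key",
                "listen-port", "51820"]);

    // Add one peer: public key, allowed IPs and (optionally) an endpoint.
    run("wg", &["set", "wg0",
                "peer", "PEER_PUBLIC_KEY_BASE64",
                "allowed-ips", "10.0.0.2/32",
                "endpoint", "192.0.2.1:51820"]);

    // Assign an address and bring the interface up.
    run("ip", &["addr", "add", "10.0.0.1/24", "dev", "wg0"]);
    run("ip", &["link", "set", "up", "dev", "wg0"]);
}
```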

The detail that brings the difficulty of our approach down from hard to medium is the cryptographic keys: it’s simpler to announce new public keys for WireGuard from a separate plugin than to replace the existing BMX7 keys so that they handle both the signing of descriptive updates and the encryption of traffic.

WireGuard Overview

To portray some of the perks of the implementation, the following passages are quoted from the WireGuard whitepaper.

WireGuard is a secure network tunnel, operating at layer 3, implemented as a kernel virtual network interface for Linux. (..) The virtual tunnel interface is based on a proposed fundamental principle of secure tunnels: an association between a peer public key and a tunnel source IP address. It uses a single round trip key exchange, based on NoiseIK, and handles all session creation transparently to the user using a novel timer state machine mechanism. Short pre-shared static keys–Curve25519 points–are used for mutual authentication. (..) Peers are identified strictly by their public key, a 32-byte Curve25519 point. (..) The protocol provides strong perfect forward secrecy in addition to a high degree of identity hiding. (..) An improved take on IP-binding cookies is used for mitigating denial of service attacks, to add encryption and authentication. (..) Two WireGuard peers exchange their public keys through some unspecified mechanism, and afterwards they are able to communicate. (..) It intentionally lacks cipher and protocol agility (H: opinionated protocol).

From the official WireGuard Technical Paper, Sections: Intro, 1 and 2

For further, technical details refer to Section 5 of the WG paper.

Deliverables

  • Implementation of wrapper functions to native WireGuard API for the establishment of the WireGuard tunnel.
  • Creation of a new WG tunnel plugin, based on a simplified version of the existing tun plugin, using the aforementioned wrapper functions.
  • The ultimate goal is to combine the current tunnel plugin of BMX7, which provides IPv4/6 over IPv6 tunnels, with the cryptographic stack of WireGuard, letting each node administrator choose whether or not to encrypt via a command-line parameter (tun + wg == crypt-tun).
  • Stretch goal: applying the optional 32-byte pre-shared key that WireGuard can mix into its public-key cryptography, adding a layer of symmetric-key crypto for resistance against potential post-quantum attacks on Curve25519.

Objectives

  • Phase 1:
    • Understand BMX7 functionality, descriptions (descriptive updates) and plugin system.
    • Understand the WireGuard cryptographic stack and tunneling.
    • Establish the following setups for testing and, later on, continuous integration:
      • LXC running OpenWrt bridged mesh.
      • Qemu OpenWrt bridged mesh networks.
    • Implement the initial wrapper functions to WG.
  • Phase 2:
    • Release a Debian package for BMX7 (as described in the issue referenced in Further Info section)
    • Polish and test the wrapper function for the establishment of the WireGuard tunnel
    • Create the WG tunnel plugin, based on the wrapper functions.
  • Phase 3:
    • Test extensively on the CI environments and troubleshoot.
    • Try to combine/reuse the cryptographic primitives of BMX7 and WG.
    • Refresh the user and developer documentation of BMX7.

About Me

I’m an open-source advocate, a supporter of FOSS communities, and a student currently living in Athens. My interests center on empowering pro-consensus community networks (both technical and human) that are distributed and censorship-resistant, with a focus on communities in the lower Balkan area, where adversaries of freedom on the mainstream Internet have begun to thrive.

A milestone for myself would be to meet the people that have created the software that our communities use and be able to interact and hack together.

For this reason my best efforts will go into making it to meetings such as Battlemesh in Paris, CC Camp and Bornhack happening this summer.

Further Info:

  1. Debian Packaging for BMX7 GH Issue
  2. WireGuard Network Namespaces and Routing
  3. BattleMesh 2016 BMX7 Presentation
  4. WireGuard Key Exchange
  5. WireGuard mailing list on roaming between IPv4 and IPv6
  6. WireGuard mailing list on HW-related Timestamps
  7. SEMTOR Overview
  8. SEMTOR in detail

GSoC 2019 – Load-correlated distributed bandwidth analysis for LibreMesh networks

Introduction

Performance tests are key for identifying bottlenecks and optimizing the network topology.
The main indicator is bandwidth, but other values can also be useful, like latency, the number of active users on each node, and the load average and RAM consumption of each node.
These quantities can vary greatly between peak time and night time, so some of the measurements should be carried out at both of these times.
Some other measurements, which can affect the user experience, could instead be run only at night.
To identify the night time we can’t rely on the router’s internal clock, which could be years away from the actual time.
So a method for determining the network-wide peak time will be sought.
Each router in the network should run these tests separately, and to avoid influencing each other’s results they should run at different times.
This synchronization should be possible by taking advantage of the LibreMesh architecture and the shared-state service.
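As a toy illustration of that staggering idea (the real implementation will be written in Lua and coordinate through LibreMesh’s shared-state service; the window and slot lengths below are invented numbers), each node could derive its own test slot deterministically from its identifier:

```rust
// Purely illustrative: derive a per-node offset inside the night-time window
// so that heavy tests on different routers tend not to overlap. Collisions are
// still possible; the real mechanism can resolve them via shared-state.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Pick a start offset (in seconds) inside the night-time window for this node,
/// derived from its unique identifier so different nodes pick different slots.
fn test_slot_offset(node_id: &str, window_secs: u64, slot_secs: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    node_id.hash(&mut hasher);
    let slots = window_secs / slot_secs;
    (hasher.finish() % slots) * slot_secs
}

fn main() {
    // e.g. a 4-hour night window split into 10-minute slots
    for id in ["node-alpha", "node-beta", "node-gamma"] {
        println!("{} runs its bandwidth test {} s after the window opens",
                 id, test_slot_offset(id, 4 * 3600, 600));
    }
}
```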

About me

Here’s Ilario, I studied organic chemistry in Pisa, Italy and I’m currently in a PhD on perovskite solar cells in Tarragona, Catalonia, Spain.
During university I contributed to the mesh network eigenNet, part of the Italian community network consortium Ninux.
I started the (nowadays stalled) NinuxVerona community and once in Spain I started actively contributing to GuifiCamp and LibreMesh.

Setup of develop environment and initial interactions with LibreMesh community

After proposing a fix, I managed to build the LibreMesh firmware at its current stable release (17.06) using lime-sdk.
Then I built the latest LibreMesh code on top of the forked OpenWrt 18.06 buildroot, as suggested by the mentors; at first this was not possible on Arch Linux, but after I contacted the community they updated the forked OpenWrt repository and it worked, thanks!
Finally, in order to have the most up-to-date OpenWrt code available, I compiled the latest LibreMesh code on top of the trunk (master branch, the unstable version) of the OpenWrt buildroot; this was possible after adapting some configuration to the latest OpenWrt.
For pushing my code I forked the lime-packages repository and created a gsoc2019 branch, which can be accessed here.
Additionally, in case modifications to the OpenWrt 18.06 core are needed, I will push them here.
All the buildroot-based compilation methods are already set up with the new branch as a feed, while the possibility of a backport to the stable LibreMesh 17.06 release will be evaluated once the project is completed.

Objectives

  • Flash LibreMesh onto 4+ routers (preferably different models with different performance; buy some if needed) and set up a test network;
  • define a set of information to collect, divide it into network-safe (e.g. number of clients) and network-intensive (e.g. bandwidth test) categories, and understand how to collect this data;
  • understand how a Prometheus exporter works and develop one in Lua for the “network safe” quantities;
  • choose a reasonable “network safe” quantity for identifying the usage peak of the whole network (e.g. number of clients);
  • develop a script that locally identifies the peak and the night time;
  • develop the scripts for the network-intensive tests; these should also store their results in flash memory;
  • discuss with the mentors if the previous logs can be overwritten or if they should accumulate on the router for a certain period of time, in the latter case implement it;
  • implement a strategy for avoiding network intensive tests on different routers to happen at the same time;
  • if achieving this last point makes synchronizing the routers’ clocks unavoidable, find a convergent way of doing so or an existing tool that does not require Internet access (no NTP);
  • write a small Prometheus exporter for serving the last peak and off-peak network intensive tests results;
  • write the init service;
  • create a Makefile for the package;
  • test in a real-world community;
  • adapt the code written for LibreMesh trunk version to run also on LibreMesh 17.06 release;
  • adapt the code to plain OpenWrt, evaluate the needed dependencies, and if possible push the created package to the upstream repository.

GSoC 2019 – Log Monitoring on LibreMesh

Log analysis is the way to find and recover from problems (known or not) in hardware, software, or “rare” traffic on the network. In addition to the technical problem of unifying the logs of various routers and analyzing them, in community networks we frequently encounter the following additional problems:

  • Temporary disconnections. Due to geographical and/or atmospheric conditions, some equipment gets temporarily disconnected from the network, so we have to design a system that allows us to “transfer” those logs.
  • Several of the routers used in community networks have limited resources to store the logs.
  • Several community networks do not have a sys admin within the network or they may not have Internet access (total or only temporary) to receive external help.

The idea of this GSoC is to develop a system that allows us to unify the logs of the routers on the network, filter them to keep only the ones relevant for analysis, and generate automatic analyses (for example, log correlation analysis) to find possible problems and report them to the community.
Also, we want to develop a general dashboard of the state of the community network.

About me

I’m Franco Bellomo, a student of Exact Sciences at the University of Buenos Aires. My area of study is mathematical analysis and computational modeling. My previous free software projects were related to academic problems, so I am very happy with this new challenge.

I am an activist for free knowledge and I am very motivated to contribute to community networks.

Goals

  • Normalize and decrease the size of the logs. For this I want to compare developing and training our own model (starting from a Huffman tree) against using liblognorm (https://www.liblognorm.com/).
  • Centralize. Within the network there will be an RPi that will help us collect the normalized logs. This step will not consider devices that are temporarily offline.
  • General dashboard. Visualization of the network topology and the status of each device.
  • Analysis of traffic and outliers in the network.
  • The records within the log are labeled (debug, info, warning, etc.). We want to build a system that automatically tags groups of records, that is, identifies which combinations of log entries are potentially dangerous. For this we are going to use classification algorithms.
  • Extract features from the logs to build an unsupervised anomaly detection model.
  • Obtain the logs of the connected routers. For this we are going to use the community’s phones as bridges.
  • Generate good documentation!