GSoC 2019 – Simulating eventual consistency for qaul.net

As I wrote in my prior blog post, I am working on qaul.net for this GSoC. Specifically, I’m working on simulating various components of the system so that other parts can be tested and developed independently, reducing the overall coupling between various elements.

There is, however, some inevitable coupling. My original design called for three phases of implementation, in order: quickly simulating a network of hosts, accurately portraying a qaul.net network, and providing an HTTP API mirroring the real qaul.net service. As it turns out, the API is not yet well enough defined to build the simulator up from it, so I’ve focused my efforts on the API itself and on the second step of the process.

visn

visn, meaning “knowledge” in Yiddish, is the testing framework I’ve created to bypass some of the initial effort of building a real simulation of the eventually consistent real-world network. Instead, visn uses synthetic event streaming to simulate incoming messages, which are translated into actual mutations of a node’s state through a resolution function.

[Diagram: synthetic events flow through a resolution function to become real calls on the system under test, changing its state.]

For example, we could define some synthetic events such as MessageReceived and FileUpdate. Our node under test, A, receives two FileUpdate events for some file (say, hello.txt), one creating the file and then one changing its contents, and a MessageReceived whose message refers to hello.txt.

It’s desirable to have the final displayed message show the updated contents of the file. To test this, we define the mapping from synthetic events to real state changes, and then write tests for each possible arrival order case.
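
To make the mechanism concrete, here is a small sketch of a resolution function and one arrival-order test, written in Lua for brevity (visn itself is Rust, and every name below is illustrative rather than visn’s actual API):

-- Sketch of the visn idea (all names are illustrative). A resolution
-- function turns synthetic events into real mutations of the node
-- under test.
local function resolve(event, node)
  if event.kind == "FileUpdate" then
    node.files[event.name] = event.contents
  elseif event.kind == "MessageReceived" then
    table.insert(node.messages, event.message)
  end
end

-- One test per possible arrival order; this is the "create, update,
-- then message" ordering.
local node = { files = {}, messages = {} }
local events = {
  { kind = "FileUpdate", name = "hello.txt", contents = "created" },
  { kind = "FileUpdate", name = "hello.txt", contents = "updated" },
  { kind = "MessageReceived", message = { refers_to = "hello.txt" } },
}
for _, e in ipairs(events) do resolve(e, node) end

-- The displayed message must show the updated file contents.
assert(node.files[node.messages[1].refers_to] == "updated")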

Without visn, managing these changes becomes rather challenging. With a framework in place to manage the state and order of arrival, however, all synthetic event translation can be done in a single function, and each test is decoupled from the rest of the software, so true unit tests can be used.

The Big Picture

Now that visn is in a working state, I can use it to help build out the libqaul API and start testing its components. This will help make future development faster, as well as providing a target for more detailed network simulation in the future.

GSoC 2019 – Evaluating options to do unit and integration tests in LibreMesh (and a first working example)

Prior experience

Some people have written about unit testing in Lua, and about Lua on embedded systems:

  • http://lua-users.org/wiki/UnitTesting
  • https://blog.freifunk.net/2019/05/26/gsoc-2019-unit-testing-libremesh/

Requirements

The requirements for the LibreMesh project with regard to testing are the following:

  • Must support Lua 5.1, as it is the version packaged in OpenWrt.
  • Must provide helpful assert functions, like table diffs or output formatters, so differences are easy to understand.
  • Must provide mock functionality, because a lot of functions are hardware related and it may not be possible to test them on hardware all the time.
  • It is desirable for the library to be a single-file import, so we can use it on the routers and in continuous integration the same way we do on the desktop.

Options

We need to consider unit testing, mocking and coverage tests.

Unit Testing

There is a list of unit testing libraries in the Lua package manager, LuaRocks:
https://luarocks.org/labels/test?non_root=on

These are the ones analyzed:

LuaUnit

URL: https://github.com/bluebird75/luaunit

Upsides:

  • No external dependencies (single file)
  • Well maintained
  • Popular (200k downloads on LuaRocks, 200 stars on GitHub)
  • Supports multiple versions of Lua
  • Has TAP support (useful for CI)
  • Already used in other OpenWrt-based images like OpenWISP: https://github.com/openwisp/openwisp-config/blob/master/openwisp-config/tests/test_utils.lua

Downsides:

  • It doesn’t have mocking helpers. It could be combined with a single-file mocking library such as mockagne (see the sketch below).
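
For illustration, a minimal sketch of how the two could be combined, following the documented LuaUnit and mockagne APIs (the module under test and all names are made up):

-- Minimal sketch: LuaUnit assertions plus a mockagne mock standing in
-- for a hardware-related dependency (all names are illustrative).
local lu = require('luaunit')
local mockagne = require('mockagne')

TestWireless = {}

function TestWireless:test_radio_is_queried()
  local radio = mockagne.getMock()
  -- Teach the mock what to answer when the code under test calls it.
  mockagne.when(radio.get_channel()).thenAnswer(11)

  lu.assertEquals(radio.get_channel(), 11)
  mockagne.verify(radio.get_channel())
end

os.exit(lu.LuaUnit.run())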

Telescope

They describe it as “a highly customizable test library for Lua that allows declarative tests with nested contexts”.
URL: https://github.com/norman/telescope/

Its last release was in 2013. Lua 5.1’s final release was in 2012, so that alone is not a big deal… but the library has not received any updates since, so it may not have evolved with the ecosystem.

Busted

URL:

  • https://github.com/Olivine-Labs/busted
  • http://olivinelabs.com/busted/

Upsides:

  • Very well maintained by Olivine Labs and contributors
  • Very popular (900k downloads on LuaRocks, 800 GitHub stars)
  • Has setup/teardown support as well as mocks, spies, and matchers
  • Has TAP support
  • Has good documentation
  • Integrates with luacov for test coverage

Downsides:

  • Must be installed using LuaRocks (it pulls in a lot of files). A question has been posted to LuaRocks to explore the possibility of creating bundles for a package (one file with all dependencies), which would simplify its use: https://github.com/luarocks/luarocks/issues/1023

Mocking libraries

lua-mock

URL: https://github.com/henry4k/lua-mock

Mach

URL: https://github.com/ryanplusplus/mach.lua/

More or less well maintained, though not very popular.

Coverage reports

luacov

URL: https://github.com/keplerproject/luacov

Upsides:

  • Well maintained

Unit testing Architecture for LibreMesh (only for LibreMesh?)

The idea is to allow unit testing of individual packages and also of the integration between them, as some packages depend on other packages.

Context:

  • LibreMesh enables functionality by selecting which packages are installed and by enabling/disabling the exposed features in configuration files.
  • In some packages all the code lives inside the executable Lua file (not structured as a library).
  • Some packages are independent and provide functionality without depending on lime-system. These packages live in lime-packages for convenience.
  • Packages could (should) be migrated into the OpenWrt repositories. This migration may happen in steps, and once migrated the code may live in an independent repository for the package itself.
  • Many packages are wrappers around shell code, which complicates testing because a running system is needed.

This context is not an easy one to test, as it involves a lot of trade-offs!

Options

Single and global tests directory

The easiest architecture is a global tests directory plus some utility functions that allow “installing” a certain module for testing.

Directory structure:

lime-packages/package/package1/
lime-packages/package/package1/...
lime-packages/package/package2
lime-packages/package/package2/...
lime-packages/tests/utils.lua
lime-packages/tests/fake_modules/nixio.fs
lime-packages/tests/test_package_1.lua
lime-packages/tests/test_package_2.lua
lime-packages/tests/test_package_1_and_2_integration.lua
lime-packages/run_tests.sh

Example of an (integration) test that uses libraries and fake modules,
test_lime_proto_anygw.lua:

utils = require("tests.utils")
-- installs required modules in the lua path
utils.install_limesystem_module() -- to allow access to lime.network, etc
utils.install_module("packages/lime-proto-anygw/src/anygw.lua", "lime.proto.anygw")
utils.install_module("tests/fake_modules/nixio.fs", "nixio.fs")

-- now we can load the modules
anygw = require("lime.proto.anygw")

function test_foo()
  assert(anygw.foo() == 'bar')
end
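
The “install” helpers don’t need to be fancy; a sketch of what tests/utils.lua could do (not the actual implementation) is to register a loader that maps a file path to a module name:

-- Sketch of tests/utils.lua: make require(module_name) load an
-- arbitrary file by registering a loader in package.preload.
local utils = {}

function utils.install_module(file_path, module_name)
  package.preload[module_name] = function()
    return dofile(file_path)
  end
end

return utils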

Pros:

  • Easy to start with and to understand

Cons:

  • As all tests live together, it is not easy to move a package to another repository, or even to its own repository.

Tests inside each module and a shared tests directory to test integrations

Directory structure:

lime-packages/package/package1/
lime-packages/package/package1/tests/test_foo.lua
lime-packages/package/package2
lime-packages/package/package2/tests/test_bar.lua
lime-packages/tests/utils.lua
lime-packages/tests/fake_modules/nixio.fs
lime-packages/tests/test_package_1_and_2_integration.lua
lime-packages/run_tests.sh

Pros:

  • Each package has more independence

Cons:

  • ?

Testing in a fully working image with all packages and libraries installed

Tests can be run after installing all the files of the packages (by some script that parses the Makefiles, or “by hand” in a helper script).

Pros:

  • It requires less boilerplate to test package integration
  • Libraries of the target system can be used directly
  • Other packages may be installed

Cons:

  • Less control over what is really happening
  • Slower than the other options, as it needs to load a full system

Fake modules as library

Testing executable modules

Executable Lua modules can be tested with a simple modification to the file: wrap the top-level code in a main() function and then use something like:

function main()
  --- the main code in here
end

-- detect if this module is run as a library or as a script
if pcall(debug.getlocal, 4, 1) then
  -- Library mode, do nothing
else
  -- Main script mode
  main()
end

Then, from a test file, it can be loaded like any normal module and all the functions can be accessed without executing main().
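
To make the functions reachable, the library branch can also return a table with the exports, as is done later for safe-upgrade; a sketch with a hypothetical module:

-- Hypothetical executable module ("mymodule") exporting its functions
-- in library mode.
function helper(x)
  return x * 2
end

function main()
  print(helper(21))
end

-- detect if this module is run as a library or as a script
if pcall(debug.getlocal, 4, 1) then
  -- Library mode: export the functions under test, don't run main()
  return { helper = helper }
else
  main()
end

A test file can then do local m = require("mymodule") and call m.helper(2) directly.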

Testing environment

A Docker environment (or multiple, even using qemu-user under Docker) with the testing libraries and the target Lua version is loaded by the run_tests.sh executable.
This environment can provide some useful libraries for testing (coverage reporting, etc.).

Direct import and testing can be done for unit tests where functions do not use system libraries, or when these are simple enough to be mocked (e.g. mocking shell() calls), as in the sketch below.
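
As an illustration of the latter, a test can simply swap the shell helper for a fake before calling the function under test (all names below are made up):

-- Sketch: monkey-patching a shell() helper inside a unit test.
local mod = {}

function mod.shell(cmd)
  return io.popen(cmd):read("*a")
end

function mod.uptime()
  return mod.shell("cat /proc/uptime")
end

-- In the test: replace shell() with a fake that records the call.
local called_with
mod.shell = function(cmd)
  called_with = cmd
  return "123.45 67.89\n"
end

assert(mod.uptime() == "123.45 67.89\n")
assert(called_with == "cat /proc/uptime")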

luarocks router-local environment

This approach was described by this guy, thanks!

We did a trial to run busted inside a router… that would have been useful for in-router tests and also for tests inside a virtual environment.

The strategy was to install the LuaRocks dependencies into a separate directory and copy them to the router.

The steps are pretty straightforward:

$ sudo apt install luarocks
$ luarocks install --tree lua_modules busted
$ cat > test.lua << 'EOF'
require 'busted.runner'()

describe('Busted unit testing framework', function()
  describe('should be awesome', function()
    it('should be easy to use', function()
      assert.truthy('Yup.')
    end)

    it('should have lots of features', function()
      -- deep check comparisons!
      assert.same({ table = 'great'}, { table = 'great' })

      -- or check by reference!
      assert.is_not.equals({ table = 'great'}, { table = 'great'})

      assert.falsy(nil)
      assert.error(function() error('Wat') end)
    end)

    it('should provide some shortcuts to common functions', function()
      assert.unique({{ thing = 1 }, { thing = 2 }, { thing = 3 }})
    end)

    it('should have mocks and spies for functional tests', function()
      local thing = require('thing_module')
      spy.spy_on(thing, 'greet')
      thing.greet('Hi!')

      assert.spy(thing.greet).was.called()
      assert.spy(thing.greet).was.called_with('Hi!')
    end)
  end)
end)

EOF
$ cat > set_paths.lua << 'EOF'
-- set_paths.lua
local version = _VERSION:match("%d+%.%d+")
package.path = 'lua_modules/share/lua/' .. version .. '/?.lua;lua_modules/share/lua/' .. version .. '/?/init.lua;' .. package.path
package.cpath = 'lua_modules/lib/lua/' .. version .. '/?.so;' .. package.cpath
EOF
$ scp -r test.lua set_paths.lua lua_modules root@thisnode.info:~
$ ssh root@thisnode.info 'lua -l set_paths test.lua'

The output of this command though was not what we expected:

$ ssh root@thisnode.info 'lua -l set_paths test.lua'
lua: lua_modules/share/lua/5.1/pl/path.lua:28: pl.path requires LuaFileSystem
stack traceback:
    [C]: in function 'error'
    lua_modules/share/lua/5.1/pl/path.lua:28: in main chunk
    [C]: in function 'require'
    lua_modules/share/lua/5.1/busted/runner.lua:3: in main chunk
    [C]: in function 'require'
    test.lua:1: in main chunk
    [C]: ?

Deeper inspection showed that the library’s dependencies had C bindings compiled for a different architecture, so that strategy was not feasible for routers:

find . -name \*.so
./lua_modules/lib/lua/5.1/lfs.so
./lua_modules/lib/lua/5.1/term/core.so
./lua_modules/lib/lua/5.1/system/core.so

where term/core.so and system/core.so are system libs, but lfs.so is from LuaFileSystem.

There is a Lua-only implementation of LuaFileSystem: https://github.com/sonoro1234/luafilesystem , but as it doesn’t support LuaRocks, a deeper understanding of the platform is needed to attempt to replace the only binary binding with this implementation.

We found a sister library of that one on LuaRocks: https://luarocks.org/modules/3scale/luafilesystem-ffi based on this repo: https://github.com/spacewander/luafilesystem. So:

$ luarocks install --tree lua_modules luafilesystem-ffi

installed it and then looked at where the Penlight library imports lfs in busted:

$ grep -r require.\*lfs *
path.lua:local res,lfs = _G.pcall(_G.require,'lfs')
$ pwd
/home/nico/tmp/lua_local_test/lua_modules/share/lua/5.1/pl

but this library depends on ffi (a LuaJIT module), so it relies on a C extension too.

Also, luafilesystem exists as a native package in OpenWrt, so it could be installed just for the sake of the exercise: https://openwrt.org/packages/pkgdata/luafilesystem … but not this time.

Docker with LuaRocks and Lua 5.1

Some docker images already exist:

  • https://github.com/akornatskyy/docker-library/
  • https://hub.docker.com/r/abaez/luarocks/
  • https://hub.docker.com/r/abaez/lua
  • https://github.com/martijnrondeel/docker-luarocks

A simple Dockerfile would be:

FROM abaez/luarocks:lua5.1

WORKDIR /root

RUN luarocks install luacov; \
    luarocks install busted

Excellent blog post on handling Lua paths: http://www.thijsschreijer.nl/blog/?p=1025

Example of a LUA_PATH that loads executables (not ending in .lua): LUA_PATH="packages/safe-upgrade/files/usr/sbin/?;;". The double ;; at the end means “append the default paths”.

First attempt running tests

I selected the safe-upgrade LibreMesh module to start writing unit tests, because I know the module (I wrote it), so I already know which code would gain value from being tested. I am also confident refactoring the module if needed.

First I started with the busted unit testing library and a simple test of the function get_current_partition(), which must return the number of the partition that is currently running. As this is determined by reading /proc/mtd, I refactored the function so the expected content can be passed in from outside.

Content of lime-packages/safe-upgrade/tests/test_safe_upgrade.lua:

local su = require "safe-upgrade"

describe("safe-upgrade tests", function()

    it("test get current partition", function()

        proc_mtd = [[#!
        dev:    size   erasesize  name
        mtd0: 00020000 00010000 "factory-uboot"
        mtd1: 00020000 00010000 "u-boot"
        mtd2: 00180000 00010000 "kernel"
        mtd3: 00d40000 00010000 "rootfs"
        mtd4: 00b10000 00010000 "rootfs_data"
        mtd5: 000f0000 00010000 "config"
        mtd6: 00010000 00010000 "firmware"
        mtd7: 00ec0000 00010000 "fw2"
        mtd8: 00ec0000 00010000 "ART"
        ]]
        assert.is.equal(su.get_current_partition(proc_mtd), 1)

        proc_mtd = [[#!
        dev:    size   erasesize  name
        mtd0: 00020000 00010000 "factory-uboot"
        mtd1: 00020000 00010000 "u-boot"
        mtd2: 00180000 00010000 "kernel"
        mtd3: 00d40000 00010000 "rootfs"
        mtd4: 00b10000 00010000 "rootfs_data"
        mtd5: 000f0000 00010000 "config"
        mtd6: 00010000 00010000 "fw1"
        mtd7: 00ec0000 00010000 "firmware"
        mtd8: 00ec0000 00010000 "ART"
        ]]
        assert.is.equal(su.get_current_partition(proc_mtd), 2)

    end)
end)

The modifications I had to make to the safe-upgrade module are:

  • refactor get_current_partition() into get_proc_mtd() and get_current_partition(proc_mtd). This way we can inject different /proc/mtd values for testing.
  • return a table containing the module exported functions when running in library mode. In this case we are exporting get_current_partition.
  • move argparse module loading to the parse_args function that only gets executed when the module is run in script mode (not library mode)

These changes may not be the best way of handling testing, but for now they allow us to move forward without digging too deep a hole:

[san@jones lime-packages]$ git diff
diff --git a/packages/safe-upgrade/files/usr/sbin/safe-upgrade b/packages/safe-upgrade/files/usr/sbin/safe-upgrade
index 8aeece4..fc6d467 100755
--- a/packages/safe-upgrade/files/usr/sbin/safe-upgrade
+++ b/packages/safe-upgrade/files/usr/sbin/safe-upgrade
@@ -17,7 +17,6 @@
 ]]--

 local io = require "io"
-local argparse = require 'argparse'

 local version = '1.0'
 local firmware_size_bytes = 7936*1024
@@ -114,10 +113,15 @@ function get_current_cmdline()
     return data
 end

-function get_current_partition()
+function get_proc_mtd()
     local handle = io.open('/proc/mtd', 'r')
     local data = handle:read("*all")
     handle:close()
+    return data
+end
+
+function get_current_partition(proc_mtd)
+    local data = proc_mtd or get_proc_mtd()
     if data:find("fw2") == nil then
         return 2
     else
@@ -289,6 +293,7 @@ end


 function parse_args()
+    local argparse = require 'argparse'
     local parser = argparse('safe-upgrade', 'Safe upgrade mechanism for dual-boot systems')
     parser:command_target('command')
     local show = parser:command('show', 'Show the status of the system partitions.')
@@ -338,6 +343,9 @@ end
 -- detect if this module is run as a library or as a script
 if pcall(debug.getlocal, 4, 1) then
     -- Library mode
+    local safe_upgrade = {}
+    safe_upgrade.get_current_partition = get_current_partition
+    return safe_upgrade
 else
     -- Main script mode

To run the test inside the Docker container we have to add the module under test to the LUA_PATH (note that it is an executable module that does not end with .lua, so the expression is ? instead of ?.lua):

(docker) [san@jones lime-packages]$ LUA_PATH="packages/safe-upgrade/files/usr/sbin/?;;" busted packages/safe-upgrade/tests/test_safe_upgrade.lua
●
1 success / 0 failures / 0 errors / 0 pending : 0.001127 seconds

Sum up

  • We evaluated testing libraries and narrowed our selection down to busted or LuaUnit. We are selecting busted as it has more pros than LuaUnit, mainly integrated mocking and coverage.
  • Some architectural options were proposed as a starting point. We discussed them with @nicopace and we will be moving forward iterating on the “Tests inside each module and a shared tests directory to test integrations” idea.
  • Running tests on a local OpenWrt-based device was investigated (thanks @nicopace!).
  • A working Dockerfile is proposed.
  • I did a real-world example of unit testing a single function of a simple module. Little, but working 🙂

GSoC 2019 – qaul.net HTTP API

qaul.net allows spontaneous, ad-hoc networks to form between any wireless-enabled devices over whatever medium is available. Currently, the project is undergoing a Rust rewrite, which will provide enhanced security, modularity, and performance.

Project Overview

To make networking with qaul.net as easy as possible, an HTTP API layer is being developed. This should enable applications using qaul.net to be written in any language with a decent HTTP implementation, lowering the barrier to entry substantially.

In the next couple of weeks the majority of my focus will be on making sure that we’re building from a strong base. As with all projects, we want to make sure that we’re not going to end up with a hacky mess in a month or so, and with qaul.net we also want to keep binary size to a minimum. Rust is a language with many, many web frameworks, so I’ll be spending a lot of time evaluating them, looking mostly at ease of use and binary size.

Presently my top two contenders are `actix-web` and `gotham`, but in the coming weeks I’ll be evaluating as many as I can, so stay tuned for the results.

About Me

Hi, I’m jess, a Computer Engineering student in the Boston area.

GSoC 2019 – Building a Network Simulator for qaul.net

qaul.net allows spontaneous, ad-hoc networks to form between any wireless-enabled devices over whatever medium is available. Currently, the project is undergoing a Rust rewrite, which will provide enhanced security, modularity, and performance.

As with any rewrite, testing is a major component of the process. In the case of qaul.net, that requires a fairly sophisticated model of the underlying network protocols in use and even some of the physics behind them (like jitter, also known as “lag spikes”, which has the potential to foul up any routing protocol).

Project Overview

The problem of building a simulator is a common one, but in this case it is somewhat complicated by its dual use: both in automated testing and as a way to quickly provide (fake) data to new applications, UIs, and other software components.

The simulator needs to be able to quickly simulate a network of hosts communicating over various media, accurately portray a qaul.net network complete with cryptographic identity management, and finally provide an HTTP API mirroring that of the real qaul.net service, so applications can develop against a known-good testing state.

Quickly simulating a network of hosts is relatively easy. Using an existing graph data structure library (petgraph), along with ancillary data structures, the simulator will build a world-state model representing hosts as vertices and connections (over Wi-Fi, Bluetooth, and even Internet overlay connections) as edges. Then the simulator will provide a “tick”-based simulation model in which messages are passed from host to host via each medium in a simulated timestep.

Accurately portraying a qaul.net network is a little harder, but not much more so. With the network simulator built up, each host needs only to be given a small amount of behavior and a cryptographic identity to begin to act like a real qaul.net network. This behavior will be parametric, so testing all kinds of scenarios (including adversarial ones) will be possible.

Providing an HTTP API mirroring the real qaul.net service is the final step, and will probably mean wrapping the simulator in an asynchronous shell so that a Futures-based HTTP library can run alongside it. Once this is complete, the web server can be spun up and applications can be pointed at it as though it were a real network!

Progress So Far

Up to now, I’ve spent my time getting comfortable in the qaul.net Zulip chat and evaluating the fundamental abstractions that I’ll use to build the simulator core. I currently have about a hundred lines of code written, in addition to a few “spike” components that I wrote just to test out various concepts. Starting today, though, the real code is happening!

About Me

I’m Leonora, a computer science student at Beloit College. I’ve written a lot of Rust and really like the language, but I’ve spent most of the last two years working on data analysis applications like the Open Energy Dashboard and CancerIQ. I’m super excited to be working on the next generation of qaul.net and the open, decentralized internet!

GSoC 2019 – OpenWrt Firmware Wizard

The OpenWrt project is a Linux operating system targeting wireless routers. Instead of trying to create a single, static firmware, OpenWrt provides a fully writable filesystem with package management. This frees you from the application selection and configuration provided by the vendor and allows you to customize the device through the use of packages to suit any application. For developers, OpenWrt is the framework to build an application without having to build a complete firmware around it; for users this means the ability for full customization, to use the device in ways never envisioned.

Project Overview

Currently, in order to install OpenWrt on a device, the user has to go through a very confusing process. Later, upgrading to a newer version of OpenWrt is another hassle, and end users are often not technically literate enough to carry out the process smoothly.
The aim of this project is to simplify this whole process through a number of changes to the OpenWrt buildroot, a web-based firmware wizard, and a plugin for upgrading the system automatically.

The project has 3 sub-parts (in order of implementation), described below:

  1. Creating meta-data for each target
    To get the data for the sub-parts that follow, the following are required:
    1. Modification of buildroot makefiles to output JSON file for each target device
    2. Add additional metadata of target devices to makefiles
    3. Writing a script to consolidate the JSONs for the entire build into a single file to be read by the Firmware Selector
  2. Initial Firmware Retrieval
    A web app that helps users select the correct image and explains how to apply it will help the adoption of OpenWrt.
    Writing a webapp with the following set of features:
    1. Display device manufacturer / model name / hardware version / OpenWrt release / link to images
    2. Display a help for how to apply the image depending on its type (factory/sysupgrade image, rootfs/kernel image)
    3. Select model/images by typing in part of the device model name
    4. Create images with custom package selection
  3. Firmware Upgrades
    Creating a router web interface package for LuCI to check and apply new images.
    Required Features:
    1. Search for updates
    2. Apply update

Progress made till now

So far, a variety of things have been accomplished, such as:

  • Know-how of the OpenWrt build system
  • Know-how of the LuCI plugin development
  • Finalization of JSON schema
  • Project road map for the next phase

Next Phase

As we will be moving toward the next phase of the program, new progress will be made.

The deliverable at the end of the next phase will be:

Generation of JSON that can be accessed as a data source

1. Generation of JSON for each target device along with images
2. Addition of device specific metadata for a sample set
3. Creation of a consolidated JSON file from numerous target JSON files.

About Me

I’m a 20-year-old undergrad student at Delhi Technological University, New Delhi, India, majoring in Computer Engineering. I have interests in computer programming and design.
I was a GSoC ’17 student with Drupal, where my project was to port the Vote Up/Down module to the latest version of Drupal core.
I am really excited and overwhelmed to be working with my awesome mentors Moritz Warning and Paul Spooren for the summer. Upon finishing this project, I plan to be a consistent member of the community.

GSoC 2019 – Upgrading the Meshenger App

[Meshenger app logo]

Overview

The Meshenger app, also known as Local Phone, is an Android app which allows voice and video communication without any server or Internet access, working in a local network. Last year a successful technical demo of the app was made during GSoC and was also published on F-Droid. This year’s GSoC target is to make the app stable and versatile, and to expand its usability and user base. This will directly benefit community networks, as the app primarily depends on them, and communication over local networks will always be its foremost priority. The app will be revamped and new features will be added, like allowing calls over the Internet, securing authentication at the initial handshake, and fixing bugs/issues, which will improve its performance and give a great overall user experience. Here is a link to the GitHub page of the Meshenger app project (https://github.com/meshenger-app/), which you are all welcome to explore and contribute to.

About Me

My name is Vasu Harshvardhan and I am a second-year student pursuing a Bachelor of Technology (B.Tech) in Electronics and Communication Engineering at Jamia Millia Islamia, New Delhi, India.

I have a special interest in open source software and have always aspired to be a part of something that could help make everyone’s life better through technology, and contributing to open source is clearly the best way to do so. This is the first time I am participating in GSoC as well as contributing to the Freifunk community, and through this project I plan to become a bona fide member and a long-term contributor to the Freifunk organization.

Project Goals

The three main goals of this project are:

  • Allowing audio and video communication over the Internet. As the app uses the WebRTC library, a special signalling mechanism needs to be implemented for the SDP handshake between the two peers. A STUN server will also have to be enabled for obtaining the public IP addresses of both clients. WebRTC will then establish the P2P connection once both peers have exchanged the signalling data, and they will be able to communicate with each other over the Internet.
  • Establishing secure authentication at the initial handshake between the two devices. For this, I have decided to use the libsodium library to encrypt the SDP offer/signalling blob that is exchanged, before it is sent to the other app and fed to WebRTC.
  • Polishing the app by improving its UI/UX, fixing issues/bugs, etc. in order to make the app stable and boost its performance.

Next Steps

During the Community Bonding Period of GSoC ’19, I researched implementations of the proposed features, got familiar with the codebase of the app, and had productive discussions with my wonderful GSoC mentor on this project, Moritz Warning. Following his suggestions, I have decided to work first on the security feature of the app, as it will help me get into the flow of things and will lead to progressive and productive development of the app.

BMX7 WireGuard Tunneling

BMX7 is an IPv6-native dynamic routing protocol for mesh networks which offers very advanced features and small network overhead (thanks to its distance-vector strategy and a set of optimizations), plus a great way to extend its functionality via plugins. BMX7 is also referred to as SEMTOR (Securely-Entrusted Multi-Topology Routing for Community Networks).

(For Axel Neumann’s presentation of BMX7 at BattleMesh, as well as the SEMTOR whitepapers, refer to the Further Info section.)

Project Abstract

BMX7 offers plugins which are used for distributing small files, setting up tunnels, or offering stats on the network structure. Currently the connection between a client node and the gateway is established via IPIP (IPv4/6 over IPv6), which is unencrypted and therefore possibly readable by attackers. As mesh networks usually operate over unencrypted wireless connections, the attack surface is considerable.

Our proposed solution is to combine the current cryptographic stack of BMX7 with the one used by WireGuard. The process will be iterative: first, binary calls from bmx7 to userland WireGuard will be introduced; afterwards, efforts will center on creating a new plugin implementing WireGuard routing using part of the existing cryptographic primitives; and finally, the tunnel plugin will be combined with the WireGuard one. In all of this, one rule must be set: do not implement cryptography yourself; mistakes will be made.

The detail that brings our approach’s difficulty down from hard to medium is the handling of cryptographic keys. It is simpler to announce new public keys for WireGuard and keep a separate plugin than to replace the existing BMX7 keys to allow both signing of descriptive updates and encryption of traffic.

WireGuard Overview

To portray some of the perks of the implementation, the following passages are quoted from the WireGuard whitepaper.

WireGuard is a secure network tunnel, operating at layer 3, implemented as a kernel virtual network interface for Linux. (..) The virtual tunnel interface is based on a proposed fundamental principle of secure tunnels: an association between a peer public key and a tunnel source IP address. It uses a single round trip key exchange, based on NoiseIK, and handles all session creation transparently to the user using a novel timer state machine mechanism. Short pre-shared static keys – Curve25519 points – are used for mutual authentication. (..) Peers are identified strictly by their public key, a 32-byte Curve25519 point. (..) The protocol provides strong perfect forward secrecy in addition to a high degree of identity hiding. (..) An improved take on IP-binding cookies is used for mitigating denial of service attacks, to add encryption and authentication. (..) Two WireGuard peers exchange their public keys through some unspecified mechanism, and afterwards they are able to communicate. (..) It intentionally lacks cipher and protocol agility (H: opinionated protocol).

From the official WireGuard Technical Paper, Sections: Intro, 1 and 2

For further, technical details refer to Section 5 of the WG paper.

Deliverables

  • Implementation of wrapper functions to the native WireGuard API for the establishment of the WireGuard tunnel.
  • Creation of a new WG tunnel plugin based on a simplified version of the existing tun plugin, using the aforementioned wrapper functions.
  • The ultimate goal is the combination of the current tunnel plugin of BMX7, which incorporates IPv4/6-over-IPv6 tunnels, with the cryptographic stack of WireGuard, letting each node’s administrator choose whether or not to encrypt via a command-line parameter (tun + wg == crypt-tun).
  • Stretch goal: applying the optional 32-byte pre-shared key that is mixed into WireGuard’s public-key cryptography for an additional layer of symmetric-key crypto, as resistance against post-quantum attacks on Curve25519.

Objectives

  • Phase 1:
    • Understand BMX7 functionality, descriptions (descriptive updates) and plugin system.
    • Understand the WireGuard cryptographic stack and tunneling.
    • Establish the following testing setups, for testing and later for continuous integration:
      • LXC running OpenWrt bridged mesh.
      • Qemu OpenWrt bridged mesh networks.
    • Implement the initial wrapper functions to WG.
  • Phase 2:
    • Release a Debian package for BMX7 (as described in the issue referenced in the Further Info section)
    • Polish and test the wrapper function for the establishment of the WireGuard tunnel
    • Create the WG tunnel plugin, based on the wrapper functions.
  • Phase 3:
    • Test extensively on the CI environments and troubleshoot.
    • Try to combine/reuse the cryptographic primitives of BMX7 and WG.
    • Refresh the user and developer documentation of BMX7.

About Me

I’m an open-source advocate, supporter of FOSS communities, and a student currently living in Athens. My interests center on empowering pro-consensus community networks (both technical and human) that are distributed and censorship resistant, focusing on the communities of the lower Balkan area, where adversaries of freedom on the mainstream Internet have begun to thrive.

A milestone for myself would be to meet the people that have created the software that our communities use and be able to interact and hack together.

For this reason my best efforts will go into making it to meetings such as Battlemesh in Paris, CC Camp and Bornhack happening this summer.

Further Info:

  1. Debian Packaging for BMX7 GH Issue
  2. WireGuard Network Namespaces and Routing
  3. BattleMesh 2016 BMX7 Presentation
  4. WireGuard Key Exchange
  5. WireGuard mailing list on roaming between IPv4 and IPv6
  6. WireGuard mailing list on HW-related Timestamps
  7. SEMTOR Overview
  8. SEMTOR in detail

GSoC 2019 – Load-correlated distributed bandwidth analysis for LibreMesh networks

Introduction

Performance tests are key to identifying bottlenecks and optimizing the network topology. The main indicator is bandwidth, but other values can be useful too, like latency, the number of active users on each node, and each node’s load average and RAM consumption. These quantities can vary greatly between peak time and night time; for this reason some of the measurements should be carried out at both of these times. Some other measurements, which could affect the user experience, should instead run only at night. To identify the night time we can’t rely on the router’s internal clock, which could be years away from the actual time, so a method for determining the network-wide peak time will be sought. Each router in the network should run these tests separately, and to avoid influencing each other’s results they should run at different times. This synchronization should be possible by taking advantage of the LibreMesh architecture and the shared-state service.
About me

I’m Ilario: I studied organic chemistry in Pisa, Italy, and I’m currently doing a PhD on perovskite solar cells in Tarragona, Catalonia, Spain.
During university I contributed to the mesh network eigenNet, part of the Italian community network consortium Ninux.
I started the (nowadays stalled) NinuxVerona community, and once in Spain I started actively contributing to GuifiCamp and LibreMesh.

Setup of develop environment and initial interactions with LibreMesh community

After proposing a fix, I managed to build the LibreMesh firmware at its current stable release (17.06) using lime-sdk.
Then I built the latest LibreMesh code on top of the forked OpenWrt 18.06 buildroot, as suggested by the mentors; at first this was not possible on Arch Linux, but after contacting the community they updated the forked OpenWrt repository and it worked, thanks!
Finally, in order to have the most updated OpenWrt code available, I compiled the latest LibreMesh code on top of the trunk (master branch, the unstable version) of the OpenWrt buildroot; this was possible after adapting some configuration to the latest OpenWrt.
For pushing my code I forked the lime-packages repository and created a gsoc2019 branch, which can be accessed here.
Additionally, in case modifications to the OpenWrt 18.06 core are needed, I will push them here.
All the buildroot-based compilation methods are already set up with the new branch as a feed, while the possibility of a backport to the stable LibreMesh 17.06 release will be evaluated once the project is completed.

Objectives

  • Flash 4+ routers with LibreMesh (preferably different models with different performance; if needed, buy some) and set up a test network;
  • define a set of information to collect, divide it into network-safe (e.g. number of clients) and network-intensive (e.g. bandwidth test) categories, and understand how to collect this data;
  • understand how a Prometheus exporter works and develop one in Lua for the “network safe” quantities (a minimal sketch follows this list);
  • choose a reasonable “network safe” quantity for identifying the usage peak of the whole network (e.g. number of clients);
  • develop a script that locally identifies the peak and the night time;
  • develop the scripts for the network-intensive tests, which should also store the results on the flash memory;
  • discuss with the mentors whether the previous logs can be overwritten or should accumulate on the router for a certain period of time; in the latter case, implement it;
  • implement a strategy to avoid network-intensive tests on different routers happening at the same time;
  • if this last point unavoidably requires synchronizing the routers’ clocks, find a converging way of doing so, or an available tool which does not require Internet access (no NTP);
  • write a small Prometheus exporter for serving the latest peak and off-peak network-intensive test results;
  • write the init service;
  • create a Makefile for the package;
  • test in a real-world community;
  • adapt the code written for the LibreMesh trunk version to also run on the LibreMesh 17.06 release;
  • adapt the code to plain OpenWrt, evaluate needed dependencies, and if possible push the created package to the upstream repository.
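
As a starting point for the exporter objectives above, here is a minimal sketch of the Prometheus text exposition format produced by a CGI script (assuming something like uhttpd serves it; the metric names and data sources are illustrative):

#!/usr/bin/lua
-- Minimal sketch of a "network safe" Prometheus exporter as a CGI
-- script; metric names and data sources are illustrative.

local function count_clients()
  -- Illustrative proxy for active clients: count DHCP leases.
  local n = 0
  local f = io.open("/tmp/dhcp.leases", "r")
  if f then
    for _ in f:lines() do n = n + 1 end
    f:close()
  end
  return n
end

local function load_average()
  local f = io.open("/proc/loadavg", "r")
  local load = f:read("*n") -- first field: 1-minute load average
  f:close()
  return load
end

-- CGI header, then one "name value" line per metric (Prometheus
-- text exposition format).
print("Content-Type: text/plain; version=0.0.4\n")
print(string.format("node_clients_total %d", count_clients()))
print(string.format("node_load1 %.2f", load_average()))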

GSoC 2019 – Log Monitoring on LibreMesh

Log analysis is the way to find and recover from problems (known or otherwise) in hardware, software, or “rare” traffic on the network. Besides the technical problem of unifying the logs of various routers and analyzing them, in community networks we frequently encounter the following additional problems:

  • Temporary disconnections. Due to geographical and/or atmospheric conditions, some equipment gets temporarily disconnected from the network, so we have to design a system that allows us to “transfer” those logs.
  • Several of the routers used in community networks have limited resources to store the logs.
  • Several community networks do not have a sysadmin within the network, or may lack Internet access (entirely or temporarily) to receive external help.

The idea of this GSoC project is to develop a system that allows us to unify the logs of the routers on the network, filter them to keep only the ones relevant to analyze, and generate automatic analyses (for example log-correlation analysis) to find possible problems and report them to the community.
We also want to develop a general dashboard of the state of the community network.

About me

I’m Franco Bellomo, a student of Exact Sciences at the University of Buenos Aires. My area of study is mathematical analysis and computational modeling. My previous free software projects were related to academic problems, so I am very happy with this new challenge.

I am an activist for free knowledge and I am very motivated to contribute to community networks.

Goals

  • Normalize and decrease the size of the logs. For this I want to compare developing and training our own model (starting from a Huffman tree) with using liblognorm (https://www.liblognorm.com/).
  • Centralize. Within the network there will be a Raspberry Pi which will help us join the normalized logs. This step does not yet consider the devices that are temporarily offline.
  • General dashboard. Visualization of the topology of the network and the status of each device.
  • Analysis of traffic and outliers in the network.
  • The records within the log are labeled (debug, info, warning, etc.). We want to build a system for auto-tagging groups of records, that is, detecting which combinations of logs are potentially dangerous. For this we are going to use classification algorithms.
  • Extract features from the logs to build an unsupervised anomaly-detection model.
  • Obtain the logs of the connected routers. For this we are going to use the community’s phones as bridges.
  • Write good documentation!

Library to export/import public datasets to Retroshare network

Hi all!
My name is Joan Pascual and I’m going to develop a library to export/import public datasets to the RetroShare network.
RetroShare is a distributed F2F network that I use daily to share content and chat with friends. I like the project a lot, and after the creation of the RetroShare JSON API I started to develop the RetroShare Web Bridges (https://gitlab.com/r3sistance/retroshare-web-bridges). Now, with GSoC, I would like to participate a little more in the RetroShare project and its community.
The idea is to import public datasets such as Wikipedia, WordPress, or other content into the RetroShare network, in order to populate it with this data, creating a distributed repository of all this information.
Following on from this, I’ll create a series of scripts that will help import and update this information, so anyone can publish on the RetroShare network and on a centralized service at the same time, creating a kind of bridge.
I’m going to investigate Doxygen, the part of the framework that compiles the JSON API endpoints for RetroShare, and I’ll make the needed pull requests to expose new parts of the API that are not currently supported and could be needed by the library. It may also be possible to use Doxygen to generate the Python library that will interact with the API endpoints.
The whole project will be written in Python, except when a new API endpoint has to be exposed; that will be done in C++ and Doxygen (the languages used by libretroshare), but it will mostly consist of adding inline documentation with the special @jsonapi{development} annotation.
In the end, it will be a reference library for interacting with the RetroShare JSON API that will encourage people to create apps on the RetroShare network. Also, taking advantage of this library, some public datasets will be imported to the network.
I’m very happy to have been selected this year as a GSoC student for RetroShare, software that is part of my daily life. Thanks to the GSoC team, Freifunk, the RetroShare developers, and of course my mentors! It will be an awesome experience!

