GSoC 2019 – Monitoring of a community network, first results

1 Intro

Like in any network, in a community network it is important to know the status of each of the devices that compose it, to track these over time and to identify possible problems.
To monitor the routers of the network we need to collect the data from all the equipment, then centralize, analyze and visualize it. The metrics that we collect can be divided into two large groups:

  • Numbers such as uptime, sent packets, signal strength, etc.
  • Those that are text, like the logs.

In this first stage we are going to concentrate on those of the first type.

2 Collecting the metrics

Prometheus is one of the most widely used free software tools for event monitoring and alerting. The clients (the routers in this case) expose an HTTP server which Prometheus scrapes at a fixed interval (HTTP pull model), saving the data in a time-series database. Prometheus defines four metric types (counter, gauge, histogram and summary) that are used to build each instrument (the thing we are going to measure).
Grafana allows us to connect to Prometheus, generate different personalized dashboards and extend the existing graphics to our needs. It also allows us to set up a system of alerts.

2.1 The setup for the first metrics

In our experimental setup we will create a mesh network with a router running LibreMesh. In addition, we will have a Raspberry Pi on the network with Prometheus and Grafana installed.

2.2 Prometheus client

As mentioned previously, each router will run a Prometheus client which will serve, via HTTP, a plain-text page with the client's metrics.
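
For reference, the exposition format is line-oriented plain text; a scrape could return something like this (the metric names here are only illustrative):

# HELP node_uptime_seconds Seconds since the node booted
# TYPE node_uptime_seconds counter
node_uptime_seconds 123456
# HELP wifi_signal_dbm Signal strength per station
# TYPE wifi_signal_dbm gauge
wifi_signal_dbm{interface="wlan0"} -65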

2.2.1 Python vs Lua

Prometheus offers a Python library to implement the client. The problem is that, when working on routers, the space available to install applications is very limited: the Python interpreter weighs 50 MB, while the Lua interpreter weighs only 4 kB.

2.2.2 A new client

OpenWRT has a Prometheus client implementation in Lua, but we found two “problems” with it:

  • It uses lua-socket to create the server, which requires installing an external package. We are going to use uhttpd instead, a server already installed on the router that allows us to execute a Lua script at a custom URL.
  • Each instrument is written from scratch. We want to implement the 4 basic objects (the possible metric types) so that it is simple to extend to new instruments; a sketch of the idea follows this list.
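
As a rough sketch of that idea (hypothetical names, not the final client), a gauge object and its HTTP handler could look something like this, assuming the uhttpd-mod-lua plugin, which dispatches requests to a handle_request(env) function:

-- A minimal gauge object rendering the Prometheus plain-text format
local Gauge = {}
Gauge.__index = Gauge

function Gauge.new(name, help)
    return setmetatable({ name = name, help = help, value = 0 }, Gauge)
end

function Gauge:set(value)
    self.value = value
end

function Gauge:render()
    return string.format("# HELP %s %s\n# TYPE %s gauge\n%s %s\n",
                         self.name, self.help, self.name, self.name,
                         tostring(self.value))
end

-- Entry point called by uhttpd-mod-lua for the configured URL
function handle_request(env)
    local uptime = Gauge.new("node_uptime_seconds", "Seconds since boot")
    local f = io.open("/proc/uptime", "r")
    uptime:set(f:read("*n")) -- first field of /proc/uptime is the uptime
    f:close()

    uhttpd.send("Status: 200 OK\r\n")
    uhttpd.send("Content-Type: text/plain\r\n\r\n")
    uhttpd.send(uptime:render())
end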

2.2.3 Initial Metrics

Initially we will measure:

  • Uptime
  • Load avg
  • Mem info
  • Packets per interface
  • iwinfo
  • Channel occupation

3 The first results

Some metrics collected by Prometheus, visualized in a Grafana dashboard:

4 Next steps

In the next week we will be working on:

  • Add the missing instruments
  • Package the client correctly
  • Test in a real mesh network
  • Document these steps

GSoC 2019 – Import public datasets to Retroshare network – Update 1

After three weeks of code, here we have the first evaluation!
During the first week I started to talk with my mentors about how to orient the project. I started a [repo](https://gitlab.com/jpascualsana/retroshare-python-bot) to code a “bot” that will wrap the Retroshare JSON API for better interaction. But I didn't continue this job because we are looking for a way to wrap the API using Doxygen generation (looking at [@sehraf's work](https://gist.github.com/sehraf/23cbc8ba076b63634fee0235d74cff4b)).


So I got a list of different projects, provided by my mentors, and I started to [write different scripts](https://gitlab.com/jpascualsana/public-datasets-import) to import the data onto the Retroshare network. Some of these projects are:

  • Wikimedia based projects
  • WordPress blogs
  • Gutenberg project
  • ActivityPub
  • RSS
  • Radio onda Rossa
  • XRCB.cat
  • RadioTeca

These scripts parse these sites in different ways and get their information as a previous step to publishing it onto the RetroShare network, categorized as channels. These scripts are able to:

  • Parse the site/project, getting all “pages” of interest with different strategies.
  • Get updates (the pages that have changed since the last time the info was retrieved).
  • Run from the command line with argument parsing (see the -h option for the supported options).

So the next step is to use these scripts to import this information into the Retroshare network using a wrapper dynamically generated by Doxygen.

In the next screenshot we can see the help for the script that imports from ActivityPub:

GSoC 2019 – Unit testing LibreMesh – Update 1

In the last weeks I have been getting more deeply involved in the LibreMesh development team.

During that process, I worked together with NicoPace in writing this blogpost where we build a solid ground for unit testing: https://blog.freifunk.net/2019/06/03/gsoc-2019-evaluating-options-to-do-unit-and-integration-tests-in-libremesh-and-a-first-working-example/

Not covered by the last blog post is the work I did on a fake/mock implementation of the libuci library in Lua. Since libuci is the most used library in the LibreMesh codebase, it is the one where a mock makes the most sense, and having it allows writing a lot of tests. The implementation is very small but covers the most used functionality of libuci: cursor(), get(), set(), save(), delete() and foreach(). It was implemented doing TDD with the support of the unit testing framework.
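
To give an idea of the approach, a heavily simplified sketch of such a fake cursor could look like this (illustrative only, not the actual code in the branch):

-- fake_uci.lua: an in-memory stand-in for libuci's cursor API
local FakeUci = {}
FakeUci.__index = FakeUci

function FakeUci.cursor()
    return setmetatable({ configs = {} }, FakeUci)
end

function FakeUci:set(config, section, option, value)
    self.configs[config] = self.configs[config] or {}
    self.configs[config][section] = self.configs[config][section] or {}
    self.configs[config][section][option] = value
end

function FakeUci:get(config, section, option)
    local c = self.configs[config]
    local s = c and c[section]
    return s and s[option]
end

return FakeUci

A test can then make require("uci") resolve to this module, so the code under test runs against in-memory state instead of the real system configuration.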

All this work is being done in the following branch of my lime-packages fork: https://github.com/spiccinini/lime-packages/commits/unittest_docker

During the upcoming weeks all this work will be properly released as a PR to the lime-packages repo accompanied by the Travis CI integration in a Docker container to do the tests in a contained environment, and more tests are going to follow 🙂

Retroshare for Android – Update 1

It’s been a while since my last post. As a reminder, my job is to create a functional Retroshare application for Android. As in May, during this period I focused on the visual aspect of the application, but this time from the technical side.

It started with choosing the technology in which the application was to be written. The selected framework should provide an easy way to combine the frontend with libretroshare, have an active development community, and be cross-platform so that in the future it can be used to easily port the application to iOS as well. On the basis of these criteria, Flutter was chosen.

So far the main screens have been implemented and can be seen below:

There is still a lot of work to be done on the visual side and I will be working on it in the near future. In addition, I will explore the addition of asynchronous messaging and storage of message history to libretroshare.

qaul.net – Choosing a Web Framework

These days, Rust has quite the collection of web frameworks to choose from. The Are we web yet? framework list counts some 17 frameworks, and this number is growing constantly. One of my activities since the last update has been selecting one of these frameworks to base qaul.net’s HTTP API on.

Requirements

For qaul.net we are doing our best to keep the binary size as low as possible. The qaul.net binary should be simple to move around, and the larger the program gets, the harder that becomes. Aside from size, we also want to pick a framework that’s easy to use and likely to be maintained for a while (it’d be a shame to have the framework be abandoned right after we complete the project).

Results

From this initial list of 17 frameworks, 6 were cut because they were either abandoned or I couldn’t get any of their examples running.

We evaluated a series of test programs for various web frameworks and found that, for the most part, everything is within a couple hundred kilobytes of everything else in size. actix-web was the largest in our testing, coming in at around 2.8 MB for a fairly simple example program. This makes sense: actix-web pulls in the actix actor framework, which probably comes with a fair bit of code the other frameworks don’t bother with. rouille and Thruster were the smallest by a decent margin, coming in at 1.2 MB for a hello world. This also makes sense, as rouille is based on tiny-http and foregoes any sort of async code, and Thruster by default uses its own backend. Most of the remaining frameworks came in between 1.7 and 2.3 MB, and this too makes sense: most of them are based on hyper, and even using hyper directly gets you an executable of around 1.6 MB.

There were a few frameworks I was quite interested in that I decided against because of lack of documentation or non-functional examples.

Conclusion

In the end I decided to go with iron, as I find its middleware model quite pleasant and extensible, and it’s fairly well documented. Iron came in squarely in the middle of the pack in terms of binary size, but I had strong maintainability and usability concerns about the very smallest frameworks, making this choice a matter of a couple hundred kilobytes.

Next Steps

Presently I’m working on an authentication system for the API. Hopefully next month I’ll be able to talk about that, and a bit more about the modular design of the API.

Web Interface for Retroshare – Update 1

Since the first post, there has been quite a lot of progress on development of the new Web Interface for Retroshare.

As the build process does not use any JavaScript-specific tools, I spent a lot of time making sure that the development process is as streamlined as possible. All the components are logically isolated and functionality has been moved to its relevant place. Using mithril also helped a lot: it has the concept of components, a mechanism to encapsulate different parts of views, which massively helps with project organization and code reuse.

One more important feature which was completed is automatic data refresh and redrawing of views. I decided to use a combination of mithril’s lifecycle methods and the browser’s setTimeout method to create background tasks which, when attached to their respective components, periodically fetch and refresh data. The background task gets activated whenever a component’s view is rendered and gets killed when a component goes off display. Mithril also has its own auto-redraw system, which refreshes views when a component’s event handlers are called, but it does not refresh when component attributes are updated or when raw promises resolve.

Here are a few screenshots displaying the new UI style:

Next Steps

Along with the goals displayed in the first post, these will be given a higher priority for completion during the coming phase leading up to the next evaluation:

  • Node panel in the config tab for handling most (if not all) network settings, like node information, net mode, NAT, download limits, ports, etc.
  • Shares panel showing shared files, allowing to edit them, set view permissions, among other options.
  • Peers tab to display connected peers and set their reputations.
  • Modify the main Retroshare client to enable/disable the WebUI.

GSoC 2019 – Simulating eventual consistency for qaul.net

As I wrote in my prior blog post, I am working on qaul.net for this GSoC. Specifically, I’m working on simulating various components of the system so that other parts can be tested and developed independently, reducing the overall coupling between various elements.

There is, however, some inevitable coupling. My original design called for three phases of implementation: quickly simulating a network of hosts, accurately portraying a qaul.net network, and providing an HTTP API mirroring the real qaul.net service, in that order. As it turns out, the API is not well enough defined at the moment to work on the simulator from the ground up, so I’ve focused my efforts on the API itself and the second step of the process.

visn

visn, meaning “knowledge” in Yiddish, is the testing framework I’ve created to bypass some of the initial effort of building a real simulation of the eventually consistent real-world network. Instead, visn uses synthetic event streaming to simulate incoming messages, which are translated into actual mutations of a node’s state through a resolution function.

A diagram showing synthetic events flowing through a resolution function to become real calls on the system under test, changing its state.

For example, we could define some synthetic events such as MessageReceived and FileUpdate. Our node under test, A, receives two FileUpdate events for some file (say, hello.txt), one creating the file and one changing its contents, and a MessageReceived whose message refers to hello.txt.

It’s desirable to have the final displayed message show the updated contents of the file. To test this, we define the mapping from synthetic events to real state changes, and then write tests for each possible arrival order case.

Without visn, managing these changes becomes rather challenging. With a framework in place to manage the state and order of arrival, however, all synthetic event translation can be done in a single function, and each test is decoupled from the rest of the software, so true unit tests can be used.

The Big Picture

Now that visn is in a working state, I can use it to help build out the libqaul API and start testing its components. This will help make future development faster, as well as providing a target for more detailed network simulation in the future.

GSoC 2019 – Evaluating options to do unit and integration tests in LibreMesh (and a first working example)

Prior experience

Some people have written about unit testing in Lua, and also about Lua for embedded systems:

  • http://lua-users.org/wiki/UnitTesting
  • https://blog.freifunk.net/2019/05/26/gsoc-2019-unit-testing-libremesh/

Requirements

The set of requirements for the LibreMesh project in regard to testing is the following:

  • Must support Lua 5.1, as it is the version packaged in OpenWRT.
  • Must provide helpful assert functions, like table diffs or output formatters, that make differences easy to understand.
  • Must provide mock functionality, because a lot of functions are hardware related and it may not be possible to test them on hardware all the time.
  • It is desirable for the library to be a one-file import, so we can use it on the routers and in continuous integration the same way we do on the desktop.

Options

We need to consider unit testing, mocking and coverage tests.

Unit Testing

There is a list of unit testing libraries in the Lua package manager, luarocks:
https://luarocks.org/labels/test?non_root=on

These are the ones analyzed:

LuaUnit

URL: https://github.com/bluebird75/luaunit

Upsides:

  • No external dependencies (single file).
  • It’s well maintained,
  • popular (200k downloads on luarocks, 200 stars on GitHub),
  • supports multiple versions of Lua.
  • Has TAP support (for CI).
  • It is being used in other OpenWRT-based images like OpenWISP: https://github.com/openwisp/openwisp-config/blob/master/openwisp-config/tests/test_utils.lua

Downsides:

  • It doesn’t have mocking helpers; it could be combined with a one-file mocking library such as mockagne.

Telescope

They describe themselves as “a highly customizable test library for Lua that allows declarative tests with nested contexts”.
URL: https://github.com/norman/telescope/

The last release was in 2013. Lua 5.1’s own last release was in 2012, so by itself that is not that big of a deal… but the library has not received any updates since, so it might not have evolved either.

Busted

URL:

  • https://github.com/Olivine-Labs/busted
  • http://olivinelabs.com/busted/

Upsides:

  • Very well maintained by olivinelabs and contributors,
  • very popular (900k downloads on luarocks, 800 GitHub stars).
  • It has setup/teardowns and also mocks, spies, and matchers.
  • Has TAP support.
  • Has good documentation.
  • It is integrated with luacov for test coverage.

Downsides:

  • Must be installed using luarocks (it is a lot of files). A question has been posted to luarocks to explore the possibility of creating bundles for a package (one file with all dependencies), which would simplify its use: https://github.com/luarocks/luarocks/issues/1023

Mocking libraries

lua-mock

URL: https://github.com/henry4k/lua-mock

Mach

URL: https://github.com/ryanplusplus/mach.lua/

More or less well maintained, though not very popular.

Coverage reports

luacov

URL: https://github.com/keplerproject/luacov

Upsides:

  • Well maintained
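
As a sketch of the typical workflow (assuming busted's --coverage flag, which drives luacov, and the luacov command-line report tool):

$ busted --coverage tests/
$ luacov
$ less luacov.report.out   # per-file, per-line hit counts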

Unit testing Architecture for LibreMesh (only for LibreMesh?)

The idea is to allow unit testing packages, and also the integration between them, as some packages depend on other packages.

Context:

  • LibreMesh enables functionality by selecting which packages must be installed and by enabling/disabling the exposed features in configuration files.
  • In some packages all the code lives inside the executable Lua file (not structured like a library).
  • Some packages are independent and provide functionality without depending on lime-system. These packages are in lime-packages for convenience.
  • Packages could (should) be migrated into the OpenWrt repositories. This migration may happen in steps, and once migrated the code may live in an independent repository for the package itself.
  • Many packages are wrappers around shell code, and this complicates the tests, as you need a running system to test them.

This context is not an easy one to test, as it comes with a lot of trade-offs!

Options

Single and global tests directory

The easiest architecture is to have a global tests directory and some utility functions that allow “installing” a certain module for testing.

Directory structure:

lime-packages/package/package1/
lime-packages/package/package1/...
lime-packages/package/package2
lime-packages/package/package2/...
lime-packages/tests/utils.lua
lime-packages/tests/fake_modules/nixio.fs
lime-packages/tests/test_package_1.lua
lime-packages/tests/test_package_2.lua
lime-packages/tests/test_package_1_and_2_integration.lua
lime-packages/run_tests.sh

Example of an (integration) test that uses libraries and fake modules, test_lime_proto_anygw.lua:

utils = require("test.utils")
-- installs required modules in the lua path
utils.install_limesystem_module() -- to allow access to lime.network, etc
utils.install_module("packages/lime-proto-anygw/src/anygw.lua", "lime.proto.anygw")
utils.install_module("tests/fake_modules/nixio.fs", "nixio.fs")

-- now we can load the modules
anygw = require("lime.proto.anygw")

function test_foo()
  assert(anygw.foo() == 'bar')
end

Pros:

  • Easy to start with and to understand

Cons:

  • As all the tests live together, it is not easy to move a package to another repository or even to its own repository.

Tests inside each module and a shared tests directory to test integrations

Directory structure:

lime-packages/package/package1/
lime-packages/package/package1/tests/test_foo.lua
lime-packages/package/package2
lime-packages/package/package2/tests/test_bar.lua
lime-packages/tests/utils.lua
lime-packages/tests/fake_modules/nixio.fs
lime-packages/tests/test_package_1_and_2_integration.lua
lime-packages/run_tests.sh

Pros:

  • Each package has more independence

Cons:

  • ?

Testing in a fully working image with all packages and libraries installed

Tests can be run by installing all the files of the packages (by some script that parses the Makefiles, or “by hand” in a helper script).

Pros:

  • It requires less boilerplate to test package integration
  • Libraries of the target system can be used directly
  • Other packages may be installed

Cons:

  • Less control over what is really happening
  • Slower than the other options, as it needs to load a full system

Fake modules as library

Testing executable modules

Executable Lua modules can be tested with a simple modification to the file: wrap the main code in a main() function and then use something like:

function main()
  --- the main code in here
end

-- detect if this module is run as a library or as a script
if pcall(debug.getlocal, 4, 1) then
  -- Library mode, do nothing
else
  -- Main script mode
  main()
end

Then, from a test file, the module can be loaded like any normal module and all its functions can be accessed without executing main().

Testing environment

A docker environment (or multiple ones, even using “qemu-user” under docker) with the testing libraries and the target Lua version is loaded by the “run_tests.sh” executable. This environment can provide some useful libraries for testing (coverage reporting, etc.).

Direct import and testing can be done for unit tests where functions do not use system libraries, or when these are simple enough to be mocked (e.g. mocking shell() calls).
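
For example, a function that wraps a shell call can be tested by temporarily replacing the wrapper with a canned response; the module and function names below are hypothetical:

-- hypothetical module under test
local net = {}

function net.shell(cmd)
    local handle = io.popen(cmd)
    local output = handle:read("*a")
    handle:close()
    return output
end

function net.first_interface()
    return net.shell("ls /sys/class/net"):match("[^\n]+")
end

-- busted test: monkey-patch net.shell so no real command runs
describe("first_interface", function()
    it("parses the first interface name", function()
        local original = net.shell
        net.shell = function(cmd) return "eth0\nlo\nwlan0\n" end
        assert.is.equal("eth0", net.first_interface())
        net.shell = original
    end)
end)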

luarocks router-local environment

described by this guy, thanks!

We did a trial of running busted inside a router… that would have been useful for in-router tests and also for tests inside a virtual environment.

It used the strategy of installing luarocks dependencies in a separate directory and copying them to the router.

The steps are pretty straightforward:

$ sudo apt install luarocks
$ luarocks install --tree lua_modules busted
$ cat > test.lua << 'EOF'
require 'busted.runner'()

describe('Busted unit testing framework', function()
  describe('should be awesome', function()
    it('should be easy to use', function()
      assert.truthy('Yup.')
    end)

    it('should have lots of features', function()
      -- deep check comparisons!
      assert.same({ table = 'great'}, { table = 'great' })

      -- or check by reference!
      assert.is_not.equals({ table = 'great'}, { table = 'great'})

      assert.falsy(nil)
      assert.error(function() error('Wat') end)
    end)

    it('should provide some shortcuts to common functions', function()
      assert.unique({{ thing = 1 }, { thing = 2 }, { thing = 3 }})
    end)

    it('should have mocks and spies for functional tests', function()
      local thing = require('thing_module')
      spy.spy_on(thing, 'greet')
      thing.greet('Hi!')

      assert.spy(thing.greet).was.called()
      assert.spy(thing.greet).was.called_with('Hi!')
    end)
  end)
end)

EOF
$ cat > set_paths.lua << 'EOF'
-- set_paths.lua
local version = _VERSION:match("%d+%.%d+")
package.path = 'lua_modules/share/lua/' .. version .. '/?.lua;lua_modules/share/lua/' .. version .. '/?/init.lua;' .. package.path
package.cpath = 'lua_modules/lib/lua/' .. version .. '/?.so;' .. package.cpath
EOF
$ scp -r test.lua set_paths.lua lua_modules root@thisnode.info:~
$ ssh root@thisnode.info 'lua -l set_paths test.lua'

The output of this command, though, was not what we expected:

$ ssh root@thisnode.info 'lua -l set_paths test.lua'
lua: lua_modules/share/lua/5.1/pl/path.lua:28: pl.path requires LuaFileSystem
stack traceback:
    [C]: in function 'error'
    lua_modules/share/lua/5.1/pl/path.lua:28: in main chunk
    [C]: in function 'require'
    lua_modules/share/lua/5.1/busted/runner.lua:3: in main chunk
    [C]: in function 'require'
    test.lua:1: in main chunk
    [C]: ?

Deeper inspection showed that the library’s dependencies had C bindings that were compiled for a different architecture, so that strategy was not feasible for routers anymore:

find . -name \*.so
./lua_modules/lib/lua/5.1/lfs.so
./lua_modules/lib/lua/5.1/term/core.so
./lua_modules/lib/lua/5.1/system/core.so

where term/core.so and system/core.so are system libs, but lfs.so is from LuaFileSystem.

There is a Lua-only implementation of LuaFileSystem, https://github.com/sonoro1234/luafilesystem, but as it doesn’t support luarocks, a deeper understanding of the platform is needed to attempt to replace the only binary binding with this implementation.

We found a sister library of that one in luarocks, https://luarocks.org/modules/3scale/luafilesystem-ffi, based on this repo: https://github.com/spacewander/luafilesystem. So:

$ luarocks install --tree lua_modules luafilesystem-ffi

We installed it and then patched the place where lfs is imported by the penlight library that busted uses:

$ grep -r require.\*lfs *
path.lua:local res,lfs = _G.pcall(_G.require,'lfs')
$ pwd
/home/nico/tmp/lua_local_test/lua_modules/share/lua/5.1/pl

But this library, as it depends on ffi (a module of luajit), relies on a C extension too.

Also, luafilesystem exists as a native library in OpenWRT, so it could have been included just for the sake of the exercise: https://openwrt.org/packages/pkgdata/luafilesystem … but not this time.

Docker with LuaRocks and Lua 5.1

Some docker images already exist:

  • https://github.com/akornatskyy/docker-library/
  • https://hub.docker.com/r/abaez/luarocks/
  • https://hub.docker.com/r/abaez/lua
  • https://github.com/martijnrondeel/docker-luarocks

A simple Dockerfile would be:

FROM abaez/luarocks:lua5.1

WORKDIR /root

RUN luarocks install luacov; \
    luarocks install busted

Excellent blog post on handling Lua paths: http://www.thijsschreijer.nl/blog/?p=1025

Example of a LUA_PATH that loads executables (files not ending in .lua): LUA_PATH="packages/safe-upgrade/files/usr/sbin/?;;". The double ;; at the end means “append the default paths”.

First attempt running tests

I selected the safe-upgrade LibreMesh module to start doing unit tests because I know the module (I wrote it), so I already know which code would gain the most value from being tested. I am also confident refactoring the module if needed.

First I started using the busted unit testing library with a simple test of the function get_current_partition(), which must return the number of the partition that is currently running. As this is determined by reading /proc/mtd, I refactored the function so that the expected content can be passed in from the outside.

Content of lime-packages/packages/safe-upgrade/tests/test_safe_upgrade.lua:

local su = require "safe-upgrade"

describe("safe-upgrade tests", function()

    it("test get current partition", function()

        proc_mtd = [[#!
        dev:    size   erasesize  name
        mtd0: 00020000 00010000 "factory-uboot"
        mtd1: 00020000 00010000 "u-boot"
        mtd2: 00180000 00010000 "kernel"
        mtd3: 00d40000 00010000 "rootfs"
        mtd4: 00b10000 00010000 "rootfs_data"
        mtd5: 000f0000 00010000 "config"
        mtd6: 00010000 00010000 "firmware"
        mtd7: 00ec0000 00010000 "fw2"
        mtd8: 00ec0000 00010000 "ART"
        ]]
        assert.is.equal(su.get_current_partition(proc_mtd), 1)

        proc_mtd = [[#!
        dev:    size   erasesize  name
        mtd0: 00020000 00010000 "factory-uboot"
        mtd1: 00020000 00010000 "u-boot"
        mtd2: 00180000 00010000 "kernel"
        mtd3: 00d40000 00010000 "rootfs"
        mtd4: 00b10000 00010000 "rootfs_data"
        mtd5: 000f0000 00010000 "config"
        mtd6: 00010000 00010000 "fw1"
        mtd7: 00ec0000 00010000 "firmware"
        mtd8: 00ec0000 00010000 "ART"
        ]]
        assert.is.equal(su.get_current_partition(proc_mtd), 2)

    end)
end)

The modifications I had to make to the safe-upgrade module are:

  • refactor get_current_partition() into get_proc_mtd() and get_current_partition(proc_mtd). This way we can inject different /proc/mtd values for testing.
  • return a table containing the module’s exported functions when running in library mode. In this case we are exporting get_current_partition.
  • move the argparse module loading into the parse_args() function, which only gets executed when the module is run in script mode (not in library mode).

These changes may not be the best way of handling testing, but for now they allow us to move forward without digging too deep a hole:

[san@jones lime-packages]$ git diff
diff --git a/packages/safe-upgrade/files/usr/sbin/safe-upgrade b/packages/safe-upgrade/files/usr/sbin/safe-upgrade
index 8aeece4..fc6d467 100755
--- a/packages/safe-upgrade/files/usr/sbin/safe-upgrade
+++ b/packages/safe-upgrade/files/usr/sbin/safe-upgrade
@@ -17,7 +17,6 @@
 ]]--

 local io = require "io"
-local argparse = require 'argparse'

 local version = '1.0'
 local firmware_size_bytes = 7936*1024
@@ -114,10 +113,15 @@ function get_current_cmdline()
     return data
 end

-function get_current_partition()
+function get_proc_mtd()
     local handle = io.open('/proc/mtd', 'r')
     local data = handle:read("*all")
     handle:close()
+    return data
+end
+
+function get_current_partition(proc_mtd)
+    local data = proc_mtd or get_proc_mtd()
     if data:find("fw2") == nil then
         return 2
     else
@@ -289,6 +293,7 @@ end


 function parse_args()
+    local argparse = require 'argparse'
     local parser = argparse('safe-upgrade', 'Safe upgrade mechanism for dual-boot systems')
     parser:command_target('command')
     local show = parser:command('show', 'Show the status of the system partitions.')
@@ -338,6 +343,9 @@ end
 -- detect if this module is run as a library or as a script
 if pcall(debug.getlocal, 4, 1) then
     -- Library mode
+    local safe_upgrade = {}
+    safe_upgrade.get_current_partition = get_current_partition
+    return safe_upgrade
 else
     -- Main script mode

To run the test inside the docker container we have to add the module under test to the LUA_PATH (note that it is an executable module that does not end with .lua, so the expression is ? instead of ?.lua):

(docker) [san@jones lime-packages]$ LUA_PATH="packages/safe-upgrade/files/usr/sbin/?;;" busted packages/safe-upgrade/tests/test_safe_upgrade.lua
●
1 success / 0 failures / 0 errors / 0 pending : 0.001127 seconds

Sum up

  • We did an evaluation of testing libraries and narrowed our selection down to busted or luaunit. We are selecting busted, as it has more pros than luaunit, mainly integrated mocking and coverage.
  • Some architectural options were proposed as a starting point. We discussed them with @nicopace and we will be moving forward iterating on the “Tests inside each module and a shared tests directory to test integrations” idea.
  • Running tests on a local OpenWrt-based device was investigated (thanks @nicopace!).
  • A working Dockerfile is proposed.
  • I did a real-world example of unit testing a single function of a simple module. Little but working 🙂

GSoC 2019 – qaul.net HTTP API

qaul.net allows spontaneous, ad-hoc networks to form between any wireless-enabled devices over whatever medium is available. Currently, the project is undergoing a Rust rewrite, which will provide enhanced security, modularity, and performance.

Project Overview

To make networking with qaul.net as easy as possible, an HTTP API layer is being developed. This should enable applications to be written on top of qaul.net in any language with a decent HTTP implementation, lowering the bar of entry substantially.

In the next couple of weeks the majority of my focus will be on making sure that we’re building from a strong base. As with all projects, we want to make sure that we won’t end up with a hacky mess in a month or so, and with qaul.net we want to keep the binary size to a minimum. Rust is a language with many, many web frameworks, so I’ll be spending a lot of time evaluating them, looking mostly at ease of use and binary size.

Presently my top two contenders are `actix-web` and `gotham` but in the coming weeks I’ll be evaluating as many as I can so stay tuned for the results.

About Me

Hi, I’m jess, a Computer Engineering student in the Boston area.

GSoC 2019 – Building a Network Simulator for qaul.net

qaul.net allows spontaneous, ad-hoc networks to form between any wireless-enabled devices over whatever medium is available. Currently, the project is undergoing a Rust rewrite, which will provide enhanced security, modularity, and performance.

As with any rewrite, testing is a major component of the process. In the case of qaul.net, that requires a fairly sophisticated model of the underlying network protocols in use and even some of the physics behind them (like jitter, also known as “lag spikes”, which has the potential to foul up any routing protocol).

Project Overview

The problem of building a simulator is a common one, but in this case it is somewhat complicated by its dual use here, in both automated testing and as a way to quickly provide (fake) data to new applications, UIs, and other software components.

The simulator needs to be able to quickly simulate a network of hosts communicating over various media, accurately portray a qaul.net network complete with cryptographic identity management, and finally provide an HTTP API mirroring that of the real qaul.net service, so applications can develop against a known-good testing state.

Quickly simulating a network of hosts is relatively easy. Using an existing graph data structure library (petgraph), along with ancillary data structures, the simulator will build a world state model representing hosts as vertices and connections (over Wi-Fi, Bluetooth, and even Internet overlay connections) as edges. Then, the simulator will provide a “tick”-based simulation model in which messages are passed from host to host via each medium in a simulated timestep.

Accurately portraying a qaul.net network is a little harder, but not much more so. With the network simulator built up, each host needs only to be given a small amount of behavior and a cryptographic identity to begin to act like a real qaul.net network. This behavior will be parametric, so testing all kinds of scenarios (including adversarial ones) will be possible.

Providing an HTTP API mirroring the real qaul.net service is the final step, and will probably mean wrapping the simulator in an asynchronous shell so that a Futures-based HTTP library can run alongside it. Once this is complete, the webserver can be spun up and applications can be pointed at it as though it were a real network!

Progress So Far

Up to now, I’ve spent my time getting comfortable in the qaul.net Zulip chat and evaluating the fundamental abstractions that I’ll use to build the simulator core. I currently have about a hundred lines of code written, in addition to a few “spike” components that I wrote just to test out various concepts. Starting today, though, the real code is happening!

About Me

I’m Leonora, a computer science student at Beloit College. I’ve written a lot of Rust and really like the language, but I’ve spent most of the last two years working on data analysis applications like the Open Energy Dashboard and CancerIQ. I’m super excited to be working on the next generation of qaul.net and the open, decentralized internet!