GSoC 2019 – Import public datasets to Retroshare network: final evaluation

Hi all!

For this final evaluation we focused the work on adding OpenAPI (Swagger) specification support to the automatic API wrapper generator.

As Wikipedia puts it:

The OpenAPI Specification is a specification for machine-readable interface files for describing, producing, consuming, and visualizing RESTful web services.

In addition, there are lots of tools that automatically generate frontend and backend code from an OpenAPI specification. So, in theory, a wrapper for an API defined with the OpenAPI specification can easily be generated in a number of languages using the OpenAPI Generator project.

So my mentor and I decided that this was the more useful priority for the RetroShare project, because anybody could then generate their own wrapper in a lot of languages, in an automated way, from the latest commit of the RetroShare master branch, without writing any code.

So the workflow should be:

  1. Compile the desired RetroShare commit to generate the Doxygen documentation.
  2. Run the jsonapiwrapper-generator-openapi.py script, which generates the OpenAPI YAML specification for the RetroShare API.
  3. Generate the client code in the desired language using the OpenAPI Generator project. For example, you can check the result for Python on this example.

Following these steps, you can easily get an up-to-date RetroShare API wrapper without coding anything!

An interesting perk of the auto-generated code is that its documentation is generated automatically too, complete with working examples.

Workaround

After we decided to support the OpenAPI specification, we started adapting the previous script, https://gitlab.com/jpascualsana/retroshare-api-wrapper-generator/blob/master/jsonapiwrapper-generator.py, to the new specification.

Running the generated YAML through the generator's validator showed us a lot of warnings and crashes of our script, so we needed to fix lots of bugs we didn't even know existed, bugs related to RsClasses generation, types, etc.: https://github.com/RetroShare/RetroShare/pull/1614

We also studied in depth how to translate the RetroShare API into the OpenAPI specification: how to map the types, the different kinds of functions, the authentication, and so on.

Then we worked out how to embed the documentation into the YAML, following the specification, so that it shows up in the generated code.

Finally, we ran into trouble with the asynchronous functions, which are not fully supported by the Python generator; we are still looking for a solution.

Once we had the generated API wrapper, we started to test it:

  • Tests for models: when model objects are created with the OpenAPI Generator, you can pass an entire object as a dictionary or instantiate it with the class provided by the generator. For example, below we can see how to instantiate the objects instead of passing them as dictionaries.
groupMetada = RsGroupMetaData(m_group_name="Just another test channel2", m_group_flags=4, m_sign_flags=520)
channel = RsGxsChannelGroup(m_meta=groupMetada, m_description="Test it!")
req_rs_gxs_channels_create_channel = {"channel": channel}
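
Calling the corresponding endpoint with that object through the generated client then looks roughly like the sketch below. The module, class and method names are assumptions based on the OpenAPI Generator's usual Python output (and on the /rsGxsChannels/createChannel endpoint), and depend on the generator configuration:

import openapi_client
from openapi_client.api import rs_gxs_channels_api

# Point the generated client at a running retroshare-service instance
# (host and credentials here are illustrative).
configuration = openapi_client.Configuration(host="http://127.0.0.1:9092")

with openapi_client.ApiClient(configuration) as api_client:
    api = rs_gxs_channels_api.RsGxsChannelsApi(api_client)
    # Pass the request object assembled above.
    response = api.rs_gxs_channels_create_channel(
        req_rs_gxs_channels_create_channel=req_rs_gxs_channels_create_channel
    )
    print(response)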

Conclusion

We started this GSoC thinking that we were going to code a way to import public datasets into the RetroShare network.

To do that, we understood the need for an automatically generated API wrapper, so as not to hand-code each endpoint of the API.

Finally, we saw the need for a way to generate wrappers automatically for any required language, and found an already existing solution in the OpenAPI Generator project.

Now it is easier for developers to write applications on top of the RetroShare network using the RetroShare JSON API, with wrapper generation unified.

For now we set aside the project to import public datasets; I would like to finish it in the months after GSoC, applying the OpenAPI wrapper to the “import public datasets” repository.

Repositories

Freifunk posts

  1. Arrival
  2. First evaluation
  3. Second evaluation
  4. Third evaluation

GSoC report

https://gitlab.com/jpascualsana/gsoc-2019-report

GSoC 2019: Building a Testing Framework for qaul.net

During this Google Summer of Code, I built visn, a testing framework designed specifically to integration-test Rust projects that rely on eventual consistency.

Originally, I imagined that a full network simulator was needed to thoroughly exercise the qaul.net API, as I discussed in my initial blog post. While sketching out designs for the libqaul API, as seen in this pull request (which influenced the actual service API), I discovered that it made much more sense to simply simulate the events coming into an individual instance. This is simpler to design, easier for developers to use, and less computationally complex.

That insight led to the idea of a generic, easy-to-extend testing framework which could be used to test qaul at multiple levels at once. This framework, discussed in more detail in my second blog post, essentially boils down to a mechanism for writing tests in an easy-to-read format which is transformed into calls into the system under test.

While working on this idea, I was also involved in several pull requests which added various capabilities to the libqaul API, including adding the Message struct and designing security and consistency validation steps for messages.

Eventually, my first pull request for visn itself came along. I got a lot of useful feedback and decided that the easiest way to make sure I was writing the right kind of test framework was to actually use it, by implementing features in libqaul and testing them at the same time as I implemented features in visn. This led to my third blog post, on the value of this approach to what I call “conceptual” testing, and my first PR that actually implemented and tested some libqaul features, along with the PR that fixed the issues I mentioned in that blog post.

Around this time, the project moved to a GitLab instance, so my PRs were somewhat split. In addition to a first stab at a contacts book API, which I later revised, I added the most recent visn feature and, in my opinion, one of the most useful: permutation testing.

In essence, rather than simply testing a single order of events, visn now allows testing all possible orderings of a set of events. This is, of course, an O(n!) algorithm, since there are n! orderings of n events, but I was able to do it pretty efficiently with the use of what eventually became the permute crate.

permute uses a data structure, called an ArbitraryTandemControlIterator, that stores both a reference to an array (a “slice”) and an iterator over indices to that array and transparently iterates over references to elements in that array. This way, copying of the array’s elements is kept to an absolute minimum, reducing both computational and memory footprint.

For example, reversing the elements of a vector using an ATCI:

let data = vec![
    String::from("red"),
    String::from("orange"),
    String::from("yellow"),
    String::from("green"),
    String::from("blue"),
    String::from("indigo"),
    String::from("violet"),
];

let control = vec![6, 5, 4, 3, 2, 1, 0];

let atci = ArbitraryTandemControlIterator::new(&data, control.clone().into_iter());

for (atci_val, rev_val) in atci.zip(data.iter().rev()) {
       // Critically, these are both &std::string::String. No copying occurred.
       assert_eq!(atci_val, rev_val);
}

Of course, control could just as easily be vec![6, 5, 6, 4, 4, 3, 2, 1] or any other combination of valid indices.

Combined with an implementation of Heap’s algorithm for permuting lists, permute can provide an efficient method of generating all possible orderings in a deterministic way.
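
For illustration, here is a compact sketch of Heap's algorithm, written in Python for brevity (permute itself is Rust). Each permutation is produced from the previous one by a single swap, which is what makes the enumeration cheap:

def heap_permutations(items):
    """Yield all len(items)! orderings (iterative Heap's algorithm)."""
    items = list(items)
    n = len(items)
    c = [0] * n  # per-level swap counters
    yield tuple(items)
    i = 0
    while i < n:
        if c[i] < i:
            # Swap with the first element on even levels, with c[i] on odd ones.
            j = 0 if i % 2 == 0 else c[i]
            items[j], items[i] = items[i], items[j]
            yield tuple(items)
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1

for ordering in heap_permutations(["a", "b", "c"]):
    print(ordering)  # 3! = 6 orderings, each differing from the last by one swap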

With that done, I was able to add a sample all-orders test for some aspects of the libqaul API, and tidy up the internal testing crate so that others can use this work. Once these two merge requests are reviewed and accepted, the visn testing framework and the service_sim crate in the qaul.net tree will be fully able to support ongoing libqaul development.

Google Summer of Code with Freifunk has been a wonderful experience during which I’ve worked with cool technologies, amazing peers, and extremely helpful mentors. Overall, I think it’s made me a much better technologist and programmer, and I’d like to wholeheartedly thank Google, Freifunk, and my mentors for the amazing opportunity.

Load-correlated distributed bandwidth analysis for LibreMesh networks – #4: Conclusions and further work

Here I describe everything I did for my Google Summer of Code project this year.

First of all, thanks to Freifunk and LibreMesh communities and developers for the opportunity!
The work I did is spread quite widely, from general documentation to bug fixing and actual coding; I'll try to collect everything in a more-or-less ordered fashion.

Compiling the firmware: methods, fixes and documentation

At the beginning of my GSoC, I tested various methods for compiling the latest LibreMesh firmware.

OpenWrt buildroot

At first I tried using the LibreRouter organization's fork of the OpenWrt source code repository. After updating a small thing here (merged PR), I decided to use the OpenWrt repository directly, on the openwrt-18.06 branch, in order to have all the fixes that will enter the next OpenWrt release of the 18.06 family.

As explained in my second blog post, I decided to compile all the LibreMesh packages but not to include them in the binary image; this allowed me to flash a safe image (plain OpenWrt) and to add the juicy bits using opkg from a local-network package repository. Looking back, maybe this was overkill, and including all the packages in the images would have been just fine.

The list of packages I selected, and suggest using as the default for the next LibreMesh release, is:

check-date-http first-boot-wizard hotplug-initd-services lime-app lime-debug lime-hwd-ground-routing lime-hwd-openwrt-wan lime-proto-anygw lime-proto-babeld lime-proto-batadv lime-proto-wan lime-system shared-state shared-state-babeld_hosts shared-state-dnsmasq_hosts shared-state-bat_hosts shared-state-dnsmasq_leases shared-state-nodes_and_links lime-docs lime-docs-minimal libremap-agent

I documented the process here.

lime-sdk

lime-sdk was the recommended local compilation method for the last stable release, LibreMesh 17.06. I fixed its master branch here (merged PR), but more problems persist; see my issues here and here. I didn't try to fix them, as the main developer decided to drop support for the latest stable release (see here), and lime-sdk won't be used for the next release anyway.

openwrt-metabuilder

I stumbled upon Paul Spooren's openwrt-metabuilder, which has the potential to provide the same user experience as lime-sdk. I fixed a small thing in the examples here (merged PR) and created two new examples: one for compiling LibreMesh 17.06 and another for compiling the latest code; they can be found here (open PR). This system downloads and installs compiled packages, which in the case of the latest LibreMesh code are compiled by Travis continuous integration. The Travis configuration was broken; I updated it here (merged PR) and it works again. The list of packages being compiled was not complete, so some of those needed for the latest LibreMesh could not be installed; I added all of them to the to-be-compiled list here (open PR).

Documentation on compilation

What I concluded was used to update the compilation instructions on the LibreMesh website, together with plenty of other updates and improvements; the result can be read here (open PR).

One thing still missing from the documentation is how to use the network-profiles (introduced with LibreMesh 17.06, to be used with lime-sdk for community-wide network customization) with the OpenWrt buildroot (openwrt-metabuilder already supports them: simply indicating the network-profile's name as a package to install works). I started some discussion on the topic here.

Test network: supporting unsupported routers and unexpected bugs

Supporting more routers

LibreMesh's default configuration creates three interfaces on each radio (two access points with different ESSIDs and one IEEE802.11s mesh). This works on a very limited set of routers, the officially LibreMesh-supported ones. I own many home routers from various ISPs which are perfectly supported by OpenWrt but not by LibreMesh, and I wanted to expand LibreMesh support to these abundant and “free as in free beer” routers.

On LibreMesh, by default, the routing (BATMAN-adv and Babeld) happens on top of IEEE802.11s mesh interfaces. To use these routers I had to expand the configuration scope to AP and client interfaces; the result can be seen here (open PR).

Memory leak of YouHua WR1200JS on ethernet when using VLAN 802.1ad

While testing with the LibreMesh-supported routers I have, a TP-Link WDR3600 and a YouHua WR1200JS, I ran into some interesting trouble. The first router saw the routing peers via the Ethernet cable connection too, while the second didn't. Digging into the packets with tcpdump on various interfaces, I realized that the YouHua WR1200JS leaks memory (I don't know which memory) into the packets' content when using VLANs of type 802.1ad (the common 802.1q VLANs work just fine), breaking the packets and leaking information.

I reported this here and here, but have received no answer or confirmation yet.

Data collection: lime-report and bandwidth-test

The objective of my GSoC included the development of reporting utilities and the smart scheduling of their execution.

Regarding the first part, I completed the development of lime-report (based on a draft by Paul Spooren) and developed from scratch bandwidth-test. The former can be seen here (open PR) and the latter here (open PR).

lime-report

lime-report is a very simple shell script that outputs the results of a set of debugging commands and the content of configuration files. A few options allow the user to select the type of information needed.

bandwidth-test

bandwidth-test is a tool for estimating the maximum available download bandwidth from the internet. In order to work even on restricted connections, it just uses HTTP connections on port 80. It has been designed to work on a common Linux machine too (it requires lua, wget and pv), not only on OpenWrt.

By default, a few large files are downloaded, each for 20 seconds. After this timeout the download is interrupted and the speed is estimated. Failed downloads are ignored and more files are downloaded until there are 5 successful tests. At that point the output value is the median of the 5 results.
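
A sketch of that measurement logic, in Python rather than the actual Lua implementation (the chunk size and URL handling are illustrative):

import statistics
import time
from urllib.request import urlopen

def measure_once(url, duration=20):
    """Download url for duration seconds; return the speed in bytes/second."""
    start = time.monotonic()
    total = 0
    with urlopen(url, timeout=duration) as response:
        while time.monotonic() - start < duration:
            chunk = response.read(64 * 1024)
            if not chunk:  # the file ended too early to be a fair sample
                return None
            total += len(chunk)
    return total / duration

def estimate_bandwidth(urls, wanted=5):
    """Median of `wanted` successful samples; failed downloads are skipped."""
    speeds = []
    for url in urls:
        if len(speeds) == wanted:
            break
        try:
            speed = measure_once(url)
        except OSError:  # download failed: ignore it and try the next file
            continue
        if speed is not None:
            speeds.append(speed)
    return statistics.median(speeds)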

Tests scheduling at peak and night time

In order to yield interesting information, the network status and performance measurements have to be related to the network load. Active tests, which risk affecting the user experience, should be run at night, when the network is at rest, while passive tests can safely be done at the network usage peak time, when problems are more likely to show up. The test results should be stored on the router, to allow diagnosing problems after an incident.

Each router determines the peak time based on three different commands giving an estimate of the number of clients connected network-wide. Once one full day of load data has been collected, each router starts scheduling the passive tests at the peak time, using the classic at command. The load time-profile is constantly updated, taking both the previous days' and today's loads into account.

The heaviest test to be run at night is the bandwidth test. In order to avoid cross-correlation between the tests, they have to be performed at different times. Synchronization is obtained using the shared-state routine and by assuming that all the routers' clocks are synchronized (we are performing bandwidth tests towards the internet, so it's safe to assume the clocks are synchronized, either via NTP or via the check-date-http routine). The implemented strategy, sketched in code below, is:

  1. Run the tests-scheduler routine at a randomized time, so that each router runs it at a different moment.
  2. Select the 6 hours of the day in which the network load (number of clients) is lowest.
  3. Read the times at which the other routers announced they will run their tests; this works via shared-state.
  4. Among these 6 hours, choose the one with the fewest tests scheduled by other routers.
  5. Within this hour, group the other routers' scheduled tests into 5-minute groups and choose the least populated group.
  6. Randomize the execution time within this 5-minute range.
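
A condensed Python sketch of that selection logic (the real routine is shell code built around shared-state and at; the data shapes here are illustrative):

import random

def choose_test_time(hourly_load, announced):
    """Pick when to run the bandwidth test, in seconds since midnight.

    hourly_load: 24 values estimating the clients connected in each hour.
    announced:   times (seconds since midnight) other routers announced
                 for their own tests via shared-state.
    """
    # The 6 hours of the day with the lowest network load.
    quiet_hours = sorted(range(24), key=lambda h: hourly_load[h])[:6]
    # Among them, the hour with the fewest tests scheduled by others.
    hour = min(quiet_hours,
               key=lambda h: sum(1 for t in announced if t // 3600 == h))
    in_hour = [t for t in announced if t // 3600 == hour]
    # Split the hour into twelve 5-minute groups; pick the least populated.
    group = min(range(12),
                key=lambda g: sum(1 for t in in_hour if (t % 3600) // 300 == g))
    # Randomize the execution time within the chosen 5 minutes.
    return hour * 3600 + group * 300 + random.randrange(300)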

The code is not yet tested enough to be considered ready, but can be seen at this commit. The actual PR will have a rewritten version of this, from another branch, but this link will be kept valid for GSoC reference.

More minor fixes and documentation

I reported here and proposed a fix here (open PR) for a problem noticed by a user. Some very minor errors I noticed and fixed are here (merged PR), here (merged PR) and here (open PR).

In the already mentioned pull request I also updated and expanded the lime-example file, which is the most complete documentation of the LibreMesh configuration. Some more improvements to the website are here (merged PR), here (merged PR) and in the already mentioned pull request.

Further work

  • Complete the testing of tests-scheduler
  • Use LibLogNorm for normalizing the logs collected by lime-report and reducing their size
  • Make the test results available to an external Prometheus monitor
  • Implement a strategy for saving the test results to flash memory rather than to RAM (so that they persist across reboots): frequent writes have to be avoided to limit flash wear, so logs can be written to flash just when certain problems are detected (e.g. loss of internet connectivity)
  • Implement a strategy for deleting old test results when RAM or flash starts getting full

Maaany hugs!
Ilario

“Das größte Bürgernetz Deutschlands” – Freifunk in c’t 17/2019

In its current issue 17/2019, c’t reports extensively on the Freifunk project. The feature spans a total of four interesting articles:

Das größte Bürgernetz Deutschlands

“Freifunk: Das größte Bürgernetz Deutschlands” (“Freifunk: Germany's largest community network”) describes the project from the user's perspective. The article explains how Freifunk networks can be built technically and how users connect to them. In his piece, Andrijan Möcke compares Freifunk to a commercial hotspot provider. However, he overlooks that Freifunk is not just about free internet access but about much more: “We are jointly building a free and self-managed wireless network that connects all the people around me and that no one can simply switch off” – as Lisa says in the video “Freifunk verbindet”.
Freifunkas don't build people's hotspot networks for them – they enable people to do it themselves: exchanging experience and sharing knowledge are at the core. There are joint projects, for example, with CryptoParty, Chaos macht Schule, some CCC local chapters (Erfas) and the Jugendnetz Berlin. A DIY approach without a hotline or contract can hardly be compared to service providers; the community hopes for engagement with its project goals. The many references to clubs one could join are also surprising: the several hundred loose communities are matched by only about 50 registered associations.

Haftungsentspannt – Ihr Gastnetz, Freifunk und die Störerhaftung

In his article “Haftungsentspannt – Ihr Gastnetz, Freifunk und die Störerhaftung” (roughly: “Relaxed about liability – your guest network, Freifunk and secondary liability”), lawyer Nick Akinci gives a legal assessment of operating Freifunk routers and open wireless networks. He focuses in particular on the situation after the end of the Störerhaftung (secondary liability) for WLAN operators. Beyond Nick Akinci's piece, Reto Mantz provides further assessments and background on his blog “Offene Netze und Recht”. Reto works as a judge at the Landgericht Frankfurt am Main and, among other things, dealt with legal questions around open networks in his dissertation. His analysis “BGH „Dead Island“ – Wie der BGH zwar die Abschaffung der Störerhaftung (bei WLANs) bestätigt, ihr Grundübel aber weiter beibehält” gives a comprehensive insight into the current situation regarding the Störerhaftung.

Einmal Hotspot, bitte!

In the third article, “Einmal Hotspot, bitte!” (“One hotspot, please!”), Vincent Wiemann gives buying recommendations for Freifunk routers, comparing five devices from different manufacturers head to head. He shows that older devices like the Linksys WRT54G and the TP-Link TL-WR841N no longer have enough resources for the ever-growing Freifunk networks. Tested are the TP-Link RE450 v1, Archer C7 v5 and Archer C50 v4, as well as the Ubiquiti AC Mesh and the AVM Fritzbox 4040. The recommendation for the RE450 is problematic, however, since some devices of that series come with too little RAM. In many communities other devices can be used as well: the OpenWrt project has published a list of Freifunk routers compatible with, for example, Gluon.

Gemeinsam funken – Zu Besuch bei Freifunk-Communities in Stadt und Land

Finally, in “Gemeinsam funken – Zu Besuch bei Freifunk-Communities in Stadt und Land” (“Transmitting together – visiting Freifunk communities in town and country”), Keywan Tonekaboni gives a fascinating overview of various Freifunk communities and projects across Germany. It starts with the Freifunk community in Hanover at the Leinelab hackerspace; from there the journey leads to the Wireless Community Weekend, the annual Freifunk community meeting in Berlin, where Keywan Tonekaboni introduces founders of the Freifunk movement. Finally, there is a glimpse of the network expansion in Wittmund and the point-to-point wireless backbone on the North Sea coast.

Overall, the heise publishing house presents a well-rounded picture of Freifunk that should spark interest in many readers. Many thanks for that.

qaul.net – The Conceptual Value of Testing

While working on the visn eventual consistency testing framework for the qaul.net project, I've run into an excellent example of one of the most important reasons to test software, in some ways more important than the discovery of regressions, design defects, or other functional issues: the ability to uncover problems in the conceptual model around which the software is built.

Conceptual Testing

Unit, integration, and acceptance tests are well known for their value in detecting regressions, ensuring that functions, classes, and other units are written in a self-contained and composable style, and ensuring that design goals are met throughout the lifecycle of the project.

In statically typed languages like Rust, however, it can often be tempting to eschew the fine-grained level of unit testing used in dynamic languages, since the compiler checks many of the constraints unit tests are designed to impose. Rust, for example, permits encoding a lot of detail about the presence, or absence, of values with the type system.

In qaul.net’s libqaul, we provide a model for metadata about a user in the UserData struct (from libqaul/src/users/mod.rs):

/// A public representation of user information
///
/// This struct is used for both the local user (identified
/// by `UserAuth`) as well as remote users from the contacts book.
#[derive(Default, Debug, PartialEq, Clone)]
pub struct UserData {
    /// A human readable display-name (like @foobar)
    pub display_name: Option<String>,
    /// A human's preferred call-sign ("Friends call me foo")
    pub real_name: Option<String>,
    /// A key-value list of things the user deems interesting
    /// about themselves. This could be stuff like "gender",
    /// "preferred languages" or whatever.
    pub bio: BTreeMap<String, String>,
    /// The set of services this user runs (should never be empty!)
    pub services: BTreeSet<String>,
    /// A users profile picture (some people like selfies)
    pub avatar: Option<Vec<u8>>,
}

This struct provides Option types for every field, except those which can themselves be empty (like the BTreeMap and BTreeSet), since by design a user may not have set any of these fields. This works really well for user storage, which was the original purpose of the data structure, but, as I found out, it does not work well for transmitting user information.

Conceptual Problems in the User Model

Initially, libqaul was designed to use the UserData struct for all user data needs, including transmission. Some of the UserData related API surface was the first that I implemented during the initial deployment of visn, and therefore the first API surface to be tested. During that process, I mapped the function Qaul::user_update() to a visn synthetic event, UserUpdate, which carried a UserData to be passed to user_update().

While writing these tests, I encountered a problem: what happens in the case that a user wants to clear a field in their UserData? Do they issue a UserData in which there is an Option::None value in that field (like a null), which is interpreted to mean that the field should be cleared?

This made the user_update() function very easy to implement, since it could simply assign the newly received UserData as the new canonical UserData for that user. That, however, leads to a problem when it comes to data transmission over the actual network. When, for example, a user has set a profile photo or a lot of biography fields, the UserData can be pretty large, and retransmitting it on every subsequent update is not very practical.

The act of writing these tests, which were primarily designed to prevent regressions, led me to implement a delta-based UserData update scheme, wherein the UserData is updated incrementally with small changes. This provides other advantages, too, such as allowing more orderings of those events' arrivals to result in a valid state for the UserData.
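
To make the distinction concrete, here is a hypothetical sketch of the delta idea, in Python for brevity (libqaul itself is Rust): an explicit null clears a field, an absent key leaves it untouched, so only the small diff needs to travel.

def apply_update(user_data: dict, diff: dict) -> dict:
    """Apply a partial update to a stored user record."""
    updated = dict(user_data)
    for key, value in diff.items():
        if value is None:
            updated.pop(key, None)  # an explicit null clears the field
        else:
            updated[key] = value    # any other value overwrites it
    return updated                  # absent keys stay untouched

profile = {"display_name": "@foobar", "real_name": "Foo Bar"}
# Clear the real name and add a bio entry, without resending the whole record.
profile = apply_update(profile, {"real_name": None, "bio": {"gender": "-"}})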

Conceptual Problems in visn

In addition to uncovering problems in the design of libqaul, this process helped me refine my ideas for the visn testing framework. Initially, visn assumed that all operations modelled by synthetic events were infallible, or at least that failure to perform an action should lead to test failure. In fact, a critical component of eventually consistent systems is their ability to reject invalid states, in order to remain robust in the face of serious network problems or malicious input.

Originally, the resolve function took the state of the system under test and an event, and returned the new state (Fn(Event, System) -> System).

To address this problem, visn's type model became even more complex, incorporating a separate return type rather than requiring that the function which resolves events always return a successfully transformed state (Fn(Event, System) -> Return), and the infallible variant now simply sets Return to System.

In addition, rather than taking a singular System argument when resolving events, visn now takes a function returning a System, laying the groundwork for supporting multiple permutations of the ordering of queued events.

Conclusion

Testing is important for both compiled and dynamic languages to prevent defects and enforce good factorization, but the benefits to compiled languages can, like many design processes, be moved “left”, into the pre-execution step. As seen here, the simple act of writing tests often leads to conflict with the type system and compiler that can reveal conceptual and design defects in the system being tested.

qaul.net – Strictly Typed Code in a Stringly Typed World

The new qaul.net HTTP API speaks JSON, as do increasingly many things. It allows you to express complex types, it maps well to most programmers’ mental models, it’s self describing, and there’s a decent library for it in every language under the sun. In the Rust world JSON is primarily dealt with using the serde_json crate (https://crates.io/crates/serde_json) which allows the programmer to easily map strictly typed structures into JSON and back. Today we’re going to be talking about the difficulties we encountered building a type for JSON:API’s relationship data field (https://jsonapi.org/format/#document-resource-object-linkage).

The data field of a relationship can be any of the following things:

  • non-existent
  • null
  • a single object
  • []
  • an array of objects

Each of these options semantically represents a distinct thing, so we should be able to tell them apart. We will use the following enum to represent our value:

use serde::Serialize;

#[derive(Serialize)]
#[serde(untagged)]
enum OptionalVec<T> {
  NotPresent,
  One(Option<T>),
  Many(Vec<T>),
}

Non-existent relationships will be represented by OptionalVec::NotPresent, to-one relationships (empty or otherwise) will be represented by OptionalVec::One, and to-many relationships will be represented by OptionalVec::Many.

Serializing

To allow our enum to serialize properly we just need to implement a method to tell if it’s supposed to be present or not:

impl<T> OptionalVec<T> {
  pub fn is_not_present(&self) -> bool {
    match self {
      OptionalVec::NotPresent => true,
      _ => false,
    }
  }
}

Now when we wish to use our enum we simply need to put the #[serde(skip_serializing_if = "OptionalVec::is_not_present")] attribute before the field.

Deserializing

To cover the OptionalVec::NotPresent case we will need to implement std::default::Default for OptionalVec as follows:

impl<T> Default for OptionalVec<T> {
  fn default() -> Self {
    OptionalVec::NotPresent
  }
}

Now whenever we use OptionalVec we need to add the #[serde(default)] attribute to the field. This tells serde to fill the field with a default value if the key isn’t present.

For the other options, we need to implement a custom deserializer. The technically proper way to do this is to build a Visitor, but we’re going to take the simpler route and deserialize it to serde_json::Value first. Our deserializer is as follows:

// Items this impl needs in scope:
use serde::de::{Deserialize, DeserializeOwned, Deserializer, Error};
use serde_json::Value;

impl<'de, T> Deserialize<'de> for OptionalVec<T>
where T: DeserializeOwned {
  fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
  where D: Deserializer<'de> {
    let v = Value::deserialize(deserializer)?;
    match serde_json::from_value::<Option<T>>(v.clone()) {
      Ok(one) => Ok(OptionalVec::One(one)),
      Err(_) => match serde_json::from_value(v) {
        Ok(many) => Ok(OptionalVec::Many(many)),
        Err(_) => Err(D::Error::custom("Neither one nor many")),
      },
    }
  }
}

And that's it! Effectively, we first try to deserialize the singular case, and if that fails we try the multiple case. The first case catches null, since Option<T> deserializes null as None.

Conclusion

This is just one of the many challenges we faced writing a framework for JSON:API parsing in Rust. The contents of this article in their proper context can be found here: https://github.com/qaul/json-api/blob/master/src/optional_vec.rs

Load-correlated distributed bandwidth analysis for LibreMesh networks – #3: Completed test network and broadened scope of the work

The planned test network has been built, employing both fully supported routers (I just documented them in the tested routers list here) and common home routers (officially unsupported by LibreMesh but supported by OpenWrt).

Employing unsupported routers required expanding my previous work on making an AP–sta network architecture possible (point-to-multipoint: access point to clients) instead of the default IEEE802.11s mesh. My previous solution relied on BMX6, which will not be included in the next release in favor of Babeld, so the problem is open again. I provisionally managed to run Babeld on AP and client interfaces by adding the following settings in /etc/config/lime on the access point:

config wifi 'radio0'
     list modes 'apname'
     option country 'ES'
     option channel_2ghz '11'
     option apname_ssid 'LibreMesh.org/%H'
     option apname_key 'someAPpassword'
     option apname_encryption 'psk2'
     option distance '100'

config net 'wirelessap'
     option linux_name 'wlan0-apname'
     list protocols 'babeld:17'

and the following in the /etc/config/lime of the client (taking advantage of the client protocol I added some time ago here):

config wifi 'radio0'
     list modes 'client'
     option country 'ES'
     option channel_2ghz '11'
     option client_ssid 'LibreMesh.org/LiMe-eb7f64'
     option client_key 'someAPpassword'
     option client_encryption 'psk2'
     option distance '100'

config net 'wirelessclient'
     option linux_name 'wlan0-sta'
     list protocols 'client'
     list protocols 'babeld:17'

For some reason this solution does not propagate the default route obtained via Babeld to the whole network. This does not directly affect my project, and I'll surely manage to fix it in the coming days.
In case the use of such perfectly working trashware turns out to be a blocker, I will receive a few more supported routers in the coming days and will just use those.

Also due to the switch to Babeld, obtaining a complete graph of the network is not yet possible (Babeld, being based on the distance-vector principle, does not know the whole topology, so we'll have to aggregate it using the new shared-state LibreMesh feature).

While building the test network, the planned topology changed a bit, resulting in this one (solid lines are cabled connections, dotted lines with arrows point from the client to the access point, non-directional dotted lines are proper IEEE802.11s mesh):

All the routers were flashed with an OpenWrt 18.06-SNAPSHOT image, which is OpenWrt 18.06.4 plus the additional fixes that appeared in the release branch here, compiled locally using the OpenWrt buildroot. The LibreMesh packages were compiled in the same process but not included in the image; they were installed later using opkg, with the packages served over the local network. This approach proved more convenient than expected; additionally, the fallback image is plain OpenWrt, which decreases the risk of “bricking” the devices.

The complete list of LibreMesh packages I installed is:

check-date-http first-boot-wizard hotplug-initd-services lime-app lime-debug lime-hwd-ground-routing lime-hwd-openwrt-wan lime-proto-anygw lime-proto-babeld lime-proto-batadv lime-proto-wan lime-system shared-state shared-state-babeld_hosts shared-state-dnsmasq_hosts shared-state-bat_hosts shared-state-persist shared-state-dnsmasq_leases shared-state-pirania

Lately, I also got involved in the development of lime-log-review, which uses liblognorm to decrease the volume of the logs; it can be used in my project to store the key information from the voluminous logs when an incident is detected.

BMX7: Wireguard Tunneling – 2nd Update

The second phase is officially over, and a lot has happened in its span. First things first: this year's Battlemesh took place in Paris and it was a blast, the wg_tun plugin saw some great changes, and the documentation PR has been closed.

For my personal experience on WBMv12 refer to this blog post.

Wireguard on BMX7: wg_tun plugin

[Image: the successful announcement and reception of keys and IPv6 crypto-addresses.]

Initial work on the plugin was done during the first phase, and subsequent effort was put in before Battlemesh. At Battlemesh, together with Axel, we got a session established between two wg-bmx7 nodes.

This work can be found here: https://github.com/bmx-routing/bmx7/tree/WBMv12_session_with_harry

These nodes are able to exchange the public keys of their own interfaces (devices) and to register each other as peers.

The key decisions of the current implementation are as follows:

  • BMX7 traditionally uses “fd70” as the prefix of an instance's primary IP(v6), auto-assigned on startup. The primary IP is the product of this prefix together with the first 14 bytes of the public SHA224 key of the interface's unique identification mechanism.
    In the wg approach we adhere to this scheme and configure the primary IP of an interface as “fd77” plus the 14 bytes of the SHA224 key, as in the sketch after this list.
    (Credits to Axel for the idea.)
  • The implementation is auto-configuring: whenever an announcement carrying the fd77 prefix is received by another wg-bmx7 interface, the two try to establish a session between them.
  • Cryptographic keys (private and public) exist only in the scope of a single session. Every time an instance restarts, it generates new keys.
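
A small Python sketch of that address derivation (the real implementation is C inside BMX7; the key bytes below are placeholders):

import hashlib
import ipaddress

def primary_ip(public_key: bytes, prefix: bytes = b"\xfd\x77") -> ipaddress.IPv6Address:
    """2-byte prefix + the first 14 bytes of SHA224(key) = a 16-byte IPv6 address."""
    digest = hashlib.sha224(public_key).digest()
    return ipaddress.IPv6Address(prefix + digest[:14])

# Placeholder key material, for illustration only.
print(primary_ip(b"\x01" * 32))  # an address inside fd77::/16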

After Battlemesh and this happy success, effort is being put into handling some minimal command-line arguments, as well as routing data inside the established tunnels.

Further work will take advantage of the SEMTOR mechanisms provided by BMX7 in combination with the WG plugin.

For more info: either refer to the wg_tun plugin branch, or have a chat with me @luserx0:matrix.org.

BMX7 Github Documentation

The documentation PR was completed, and it gives the GitHub repo a revamped feel. Details can be found in the PR.

The overall aim of this effort has been to make BMX7 more approachable for new contributors and users.

The bad

Two goals for this phase had to be pushed back (and were not achieved):

  • Work on mlc to port its functionality to mlc-ng,
  • The bmx7 Debian Package.

Having secure WG tunnels working on BMX7 is undoubtedly of utmost importance, so these goals have been pushed back until secure tunneling is completed.

Final Thoughts and Future Goals

The project has proved more challenging than expected, and it has thoroughly tested my abilities as a developer. There is a lot of knowledge to be gained, and day by day we get closer to a functioning version of the plugin.

The goals for the final phase should be:

  • The establishment of routing between bmx7-wg instances and the successful completion of the beta.
  • Documentation of the added code and its functionality.
  • Proper command-line controlling facilities for users (experienced and not).
  • Research on reusing cryptographic keys (WG keys alongside BMX7 keys) and proper handling of the wg interfaces in the bmx7 ecosystem (kernel calls, avl trees and argument hierarchy).
  • Further work on contributor and user documentation.
  • Stretch: Work on MLC to mlc-ng

GSoC 2019 – Upgrading the Meshenger App – Update 2

Meshenger App

In my previous blog post I described achieving authentication at the initial handshake in the app. Since then, there has been quite a lot of progress in upgrading the Meshenger app.

Progress Till Now

1.) Refactoring of the codebase

Initially, I started with refactoring the codebase in order to allow different means of connecting to a client: for example, contacting a client over a server or the Internet, or enabling direct calls in layer 3 networks with the help of multicast groups and pim6sd. I am currently working on enabling calls over the Internet.

First, I removed the “challenge” from the entire codebase. The challenge was used as a security parameter, but now that authentication has been implemented it became redundant. Second, I refactored the Contact (client data) and AppData (user data) classes to hold different connection data such as the MAC address, port and hostname. I moved the identifier and address into the “connection_data” ArrayList and stored the data structure, serialized as a string, in the Contacts database, which needed to hold different contact data of the form List<ConnectionData>. I added this data to the QR-Presenter activity's QR code and parsed it in the QR-Scan activity. I followed the same procedure for the AppData database. Lastly, I removed “username” and “identifier” from the call JSON.

2.) Implementing client online/offline detection over the Internet

To implement client online/offline detection over the Internet, I needed to hold a persistent TCP/IP connection to a signalling server. So I start a thread when the app starts, and that thread opens a persistent TCP/IP socket for each SignalingServer object in connection_data. The sockets are held open for as long as the app is running. I used a signalling server written in node.js and ran it on a laptop. After connecting phone A and phone B to the laptop's hotspot, I ran the app on phone A, and the server showed the user as online. I then checked the client status by running the app on phone B while keeping the app on phone A running: the client was reported online, so the client's status was detected over the Internet.
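
The mechanism boils down to presence-by-connection: the server considers a client online for as long as its socket stays open. A minimal Python sketch of the client side (the app itself does this from a Java thread; host and port are illustrative):

import socket
import threading

def hold_presence(host, port):
    """Hold one persistent TCP connection to a signalling server open."""
    sock = socket.create_connection((host, port))
    try:
        while True:
            # Block on reads; an empty read means the server hung up.
            if sock.recv(1024) == b"":
                break
    finally:
        sock.close()

# One thread per SignalingServer object in connection_data.
threading.Thread(target=hold_presence,
                 args=("signalling.example.org", 3000),
                 daemon=True).start()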

Next Steps

The next phase, i.e. the final phase of GSoC 2019, will be about getting calls over the Internet working, adding other features and polishing the code.

OpenWrt Firmware Wizard Update – 2nd phase completion

After 4 more weeks, there has been progress on the agenda. The goal for this phase was to create the web interface with which users can download appropriate images for their devices and also build custom images.

What has been achieved

A refined version of the web application has been completed, and the functionality for creating custom images has been added.

The application has been moved to ReactJS. The finished application looks like this:

Note: there are minor bugs and issues in the app which will be rectified in later versions.

You can look at the code for the interface here and can use the interface here (currently in beta).

Next Steps

In the next phase, an OpenWrt tool has to be engineered which can be used to upgrade OpenWrt automatically.
An interface for the tool has to be created for LuCI, which will house all the settings and preferences of the user. The tool will periodically check whether there is a new version of OpenWrt; if so, it will download and apply the upgrade package automatically.

The foundation for checking whether there is a new version was laid during the first phase with the JSON metadata.
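
As an illustration of how such a check could work, here is a hypothetical Python sketch; the URL and the JSON field names are invented, since the actual schema is defined by the wizard's metadata:

import json
import urllib.request

def newer_release_available(installed, metadata_url):
    """Fetch the (hypothetical) release metadata and compare versions."""
    with urllib.request.urlopen(metadata_url) as response:
        metadata = json.load(response)
    latest = metadata["version"]  # field name is an assumption
    return latest != installed, latest

# Illustrative usage on a router:
# available, latest = newer_release_available(
#     "18.06.2", "https://example.org/openwrt/releases.json")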