DAWN – Final Post

So did I achieve my aims with DAWN?

GSoC Aims

  1. Simple Installation
  2. All patches Upstream
  3. Configuration of the nodes should be simplified
  4. Visualize the information of the participating nodes
  5. Improve the controller functionality by adding mechanisms like channel interference detection and other useful features

1 and 2:


Everything is upstream!
All hostapd patches are merged. I even added some patches that extend the hostapd ubus functionality.
The iwinfo patches are merged too; in the end, another contributor's patch that incorporated my patch #1210 was the one that landed.
You can now simply add the feed and compile DAWN.
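
For reference, the feed-based install typically looks like this in an OpenWrt buildroot (the feed URL is my assumption here; check the DAWN README for the authoritative one):

src-git dawn https://github.com/berlin-open-wireless-lab/DAWN.git   # add to feeds.conf
./scripts/feeds update dawn
./scripts/feeds install dawn
make package/dawn/compile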

3 and 4:

I added a LuCI app called luci-app-dawn, in which you can configure the daemon. If you change the configuration there, it is sent to all participating nodes, so you don't have to edit the config on every node.
Furthermore, the app shows all WiFi clients participating in the network and the AP each one is connected to, as well as the hearing map for every client.

 

5:

So I’m still refactoring my code. Some code snippets are ugly. :/
I read stuff about 802.11k and 802.11v.
802.11v is very interesting for DAWN. It would allow DAWN to hand clients over more gracefully: instead of disassociating a client, it can be steered to the next AP using a BSS Transition Management Request frame.
An AP can send this request either autonomously or in response to a BSS Transition Management Query frame from a station.

I want to send this request autonomously instead of disassociating clients, if they support 802.11v.
For that I would set the Disassociation Timer (the time after which the AP disassociates the client if it has not roamed to another AP) and add another AP as a candidate. Furthermore, I should enable 802.11r for fast roaming…
If you want to play around with 802.11v you need a full hostapd installation and have to enable BSS transition in the hostapd config:

bss_transition=1

When associating with an AP, a station signals in its association frame whether it supports BSS transition.
My plan is to extend the hostapd ubus call get_clients with this information, like it is already done with the 802.11k flags (see the example below).
After that I need a new ubus call in which I build such a BSS Transition Management Request, like it is done in the neighbor report ubus call.
I found a patch on a mailing list that adds a function to build such a BSS transition frame in an easy way.

wnm_send_bss_tm_req2

Sadly, it was never merged. The 802.11v implementation can be found in hostapd.
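
For reference, the call I want to extend can already be inspected on a running AP (the object name depends on the interface, e.g. hostapd.wlan0):

ubus call hostapd.wlan0 get_clients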

Furthermore, I could use 802.11k to ask a client which APs it can see. This is a better approach than collecting all the probe entries. The hearing map is very problematic, because clients do not scan continuously in the background (or they don't scan at all). Furthermore, a client can move around. A typical question is how long such a probe entry should be considered valid. If that time span is set too long and the client moves around, it cannot leave its AP although the RSSI is very bad (and a bad RSSI is the worst thing you can have!). A bad RSSI can trigger the client's internal roaming algorithm, so the client keeps trying to roam to another AP and keeps getting denied, because there is still a hearing-map entry with a very good RSSI. But that entry is no longer valid, because the client has moved on.
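
To illustrate the trade-off, here is a simplified sketch (not DAWN's actual code; the 30-second validity window is an arbitrary assumption):

import time

PROBE_VALID_SECS = 30          # assumed validity window for a probe entry

hearing_map = {}               # (client_mac, ap_id) -> (rssi, timestamp)

def update_entry(client_mac, ap_id, rssi):
    hearing_map[(client_mac, ap_id)] = (rssi, time.time())

def best_ap_for(client_mac):
    now = time.time()
    candidates = [(rssi, ap_id)
                  for (mac, ap_id), (rssi, ts) in hearing_map.items()
                  if mac == client_mac and now - ts <= PROBE_VALID_SECS]
    # If the window is too large, a stale "good" RSSI from an AP the
    # client has long walked away from can win here and block roaming.
    return max(candidates)[1] if candidates else None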

My Merged Pull Requests:

My Open Pull Requests:

My Declined Pull Requests:

GSoC 2018 – Better map for nodewatcher (Final update)

Hello everyone,

In my last update I presented solutions for most of the goals that I set in my first post. There was still one feature to implement, and I worked hard to have it finished in time for GSoC.

Problem

The last feature that I am talking about is the ability to show recently offline nodes on the map. This was the hardest part to implement but also the most important: with it you can see which nodes are offline and need maintenance, and exactly where they are located. Until now there was only an email alert system, but it sent out an email for every change to a node, there was no filtering option, and it did this for every node, so the inbox got cluttered really fast. With this feature you get a list of all nodes that went offline in the past 24 hours, and that list is updated alongside the map.

Solution

In my last post I talked about adding a sidebar with a list of all nodes that are currently online and shown on the map. So I added a new tab that shows the recently offline nodes. The hardest part was that I had to use nodewatcher's API v2, which was still in development and had not been fully documented. I still wanted to use it because in the newest nodewatcher version every API v1 request will be replaced by v2. This way there would be less work in the future, and I also took some time to document everything I have learned about it. This document contains everything I was able to gather from the nodewatcher code, with examples of how to use it. In the picture below you can see how the sidebar currently looks, including the list of recently offline nodes. It has the same functionality as the online node list: the search bar, the option to show the selected node on the map, and a link to that node's page.
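
The client-side logic boils down to something like this sketch (field names such as status and last_seen are illustrative assumptions, not the actual API v2 schema):

from datetime import datetime, timedelta, timezone

def recently_offline(nodes, hours=24):
    # Keep nodes that are down but were last seen within the time window.
    # Assumes last_seen is an ISO timestamp that includes a UTC offset.
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return [n for n in nodes
            if n["status"] == "down"
            and datetime.fromisoformat(n["last_seen"]) >= cutoff]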

What’s next?

GSoC has provided me with a unique opportunity to work on a large-scale open source project and I have learned a lot in the past three months, mostly about time management and not putting too much on my plate. It was truly an experience that will help me later in life. I will certainly work on other open source projects and continue my work with nodewatcher, because I have analysed and figured out most of the code. It would be a shame to just let that knowledge go and move on to another project before being sure that someone else takes over and continues the work.

Important links:

Freifunk blog posts:

https://blog.freifunk.net/2018/05/14/gsoc-2018-better-map-for-nodewatcher/

https://blog.freifunk.net/2018/06/11/gsoc-2018-better-map-for-nodewatcher-1st-update/

https://blog.freifunk.net/2018/07/09/gsoc-2018-better-map-for-nodewatcher-2nd-update/

Github pull requests:

Main map code: https://github.com/wlanslovenija/nodewatcher/pull/69

API v2 documentation: https://github.com/wlanslovenija/nodewatcher/pull/70

 

nodewatcher: Build system rework and package upstreaming – Final update

Hi, everybody.

This is my last post regarding my GSoC project for 2018.
My work can be found here:

A quick summary of what this project was about: move away from building Nodewatcher-supported imagebuilders from source and instead use the upstream-provided OpenWrt imagebuilders; also, build our custom, not yet upstreamed packages using the OpenWrt SDK.

Current status

Most of the code was merged into the relevant wlanslovenija repositories, but some of it is still waiting to be merged.

Nodewatcher

Various fixes to make Nodewatcher run on newer kernels and distributions like Ubuntu 18.04 were merged into the main branch of the wlanslovenija/nodewatcher repository.
This includes fixes for known issues with the newer pip tool, as well as for multiple packages with new names.
Still to be submitted is an update of the various Python packages which are currently outdated; this is waiting for thorough testing.

firmware-packages-opkg

firmware-packages-opkg is the wlanslovenija repository containing our custom packages used by Nodewatcher, such as Tunneldigger.
A big part of the changes is already merged in wlanslovenija/firmware-packages-opkg.
This was the first big cleanup in a long time: a lot of packages that were not used, and a lot of those that relied on custom patches, were dropped.
Before dropping those with custom patches, I manually verified that their patches had been upstreamed. This now enables us to use new iwinfo versions that include many fixes.
It also enables compiling packages such as curl with GCC 7.3.
Fixes for packages that refuse to compile or have dead sources are currently waiting in my tree on GitHub.

firmware-core and the build process

firmware-core is the wlanslovenija repository where all files pertaining to the building of Nodewatcher-compatible, Docker-based imagebuilders are located.
This repository was the target of the bulk of my effort.
The previous code was almost completely dropped or significantly reworked, which in the end resulted in removing 3,964 lines of code while adding only 255.
This significantly reduces the maintenance burden, as almost no maintenance is needed except for adding or removing required Ubuntu packages in our Docker images.

Big changes that were made are:

  • LEDE and OpenWrt are remerged in our build process
  • Building of OpenWrt versions prior to 17.01 (CC 15.05 etc.) was completely removed.
    Supporting them was unnecessary and only caused legacy code to stick around.
    There is no justification for using OpenWrt Chaos Calmer or even older versions now that OpenWrt and LEDE have merged.
    Those versions have numerous known exploits that have been fixed in 17.01 and now in 18.06.
  • Both our build and runtime Docker base images now use Ubuntu 18.04 instead of the old 14.04.
    This enables us to fully utilize the fact that OpenWrt uses GCC 7.3 as its default compiler, since Ubuntu 18.04 finally ships with it as the default too.
    The size of the base image has shrunk because fewer unnecessary packages are shipped with it.
  • We now use imagebuilders provided by the upstream OpenWrt project.
    This significantly reduces the build time, as most of the packages and the whole toolchain are no longer built.
    The fact that we can no longer apply custom patches to the sources does not matter, as we were not using any important patches.
    Unfortunately, because most of the packages needed for Nodewatcher to function are custom written and were never upstreamed, we still need to build them ourselves.
    Thankfully, upstream OpenWrt provides an SDK alongside the imagebuilders, meant for exactly what we need: building packages only.
    The SDK ships an already-built toolchain and all of the needed tools, which saves a lot of time; but since our packages have a lot of dependencies, building them still takes a while.
    The built packages are then simply copied into the imagebuilder and we manually trigger regeneration of the package index, since Nodewatcher uses that index to generate metadata about what packages, and which versions, are inside each imagebuilder. This enables configuring packages on a per-version basis (see the command sketch after this list).
    Since we can now easily download all of the community packages, we don't have to compile them in like we did so far.
    This completely removes the need for us to run package mirrors. In the end, this has reduced the time needed for each target by a factor of 3-4.
  • Configuration of the build process was greatly reduced, as was its complexity.
    There is no longer a need for a separate Dockerfile and configuration for each of the targets.
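
To make the new flow concrete, here is a rough command sketch (package and profile names are placeholders; the exact index-regeneration step may differ between OpenWrt releases):

# inside the OpenWrt SDK: build only our custom packages
./scripts/feeds update -a
./scripts/feeds install tunneldigger
make package/tunneldigger/compile

# copy the resulting .ipk files into the imagebuilder, regenerate its
# package index, then build an image listing the wanted packages
cp bin/packages/*/*/tunneldigger*.ipk <imagebuilder>/packages/
make image PROFILE=<profile> PACKAGES="tunneldigger ..."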

Currently, all of these changes have been merged into the main repository, wlanslovenija/firmware-core.

Future

I did not have time to do all of the things I wanted.
The main one is upstreaming as many of our packages as possible, as they are the biggest time consumer during building.
This will be dealt with after GSoC.

Nodewatcher also needs to be updated for the LEDE/OpenWrt merge, as we have checks that enable some of the more advanced features only on LEDE because OpenWrt did not have them at the time.
This will be dealt with after GSoC too.

I also wanted to add some new features to our imagebuilders, but since I hit a lot of bugs and unexpected issues during development I did not have time for these; like the previous two points, this will be dealt with after GSoC.

So to sum this up: this was a really good experience.
I got to focus on two things I enjoy working on: FOSS software and OpenWrt.
It enabled me to learn a lot about the inner workings of Nodewatcher, the OpenWrt imagebuilder and especially the OpenWrt SDK.

Thanks to Google for organizing GSoC, and to Freifunk for enabling me to give back to the community in a useful way.
And special thanks to my mentor Valent Turković.

Best regards
Robert Marko

LibreNet6 – final report

A tale of dependencies

Creating the LibreNet6 service is highly dependent on LibreMesh, as the former builds on the latter's existing scripts. So issues within the LibreMesh framework broadened my working scope to other areas as well. In the process I discovered some general flaws in the framework which I will now happily fix over the next several weeks, independent of the Google Summer of Code. A focus will be the network profile management, which is currently somewhat messy, to allow new users to set up their own networks without a deeper understanding of the lower levels of LibreMesh configuration. The network profile issue is closely related to LibreNet6: as of now, users still need some SSH and shell skills to connect.

To understand the current problem, feel free to check the mailing list archive.

The output

On the surface, this project resulted in an English manual on how to set up the IPv6 gateway and server. Instead of just translating (and correcting) the existing manual, I read up on Tinc 1.1 and its new features, which vastly simplify the setup process for clients. It is meant as a step-by-step manual that only requires the user to know basic SSH and shell usage.

For the backend, Altermundi provided a VM which serves as the IPv6 server, acting as the link between the IPv6 gateways (the client devices) and a real IPv6 uplink. The server is set up as described in the previously mentioned manual.

IPv6 broker vs LibreNet6

As the network uses Tinc, all IPv6 gateways build up a mesh network. When routing traffic between networks, they can use the IPv6 server to route the IPv6 traffic, but they may also connect directly to other gateways via IPv4. This behaviour was one of the initial motivations for LibreNet6, as it greatly reduces ping latency in cases where the IPv6 server is on another continent but two mesh clouds are close to one another: both IPv6 gateways connect directly to each other, routing traffic over IPv4 without using the IPv6 server.
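
On the gateway side, the Tinc configuration for this is small; a minimal sketch (the net and node names here are made up for illustration, the real values come from the LibreNet6 manual):

# /etc/tinc/librenet6/tinc.conf on an IPv6 gateway
Name = mygateway
ConnectTo = ipv6server
# Only the server is listed here; direct gateway-to-gateway connections
# are negotiated by tinc on its own whenever they are possible.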

Interest of LibreRouter

People from the LibreRouter project wrote to me about their interest in integrating this feature into the LibreRouter v2. There it would not only enable IPv6 connectivity but also work as a remote help system, where users may receive setup help from the LibreRouter team. This feature is planned for the near future; the details are not yet settled.

Migrating from existing LibreNet6 setups

Now that the server works, future work has to be done to migrate all existing setups to the new server. I'll work on that over the next few months, independently of GSoC.

Final thoughts

This was my second time participating in the Google Summer of Code, both times for a LibreMesh project. I'm happy they were satisfied with last year's project, as they chose me again this year. Last year's project took quite some time until users started to use it, but I'm happy to see it now being used on a daily basis. I will try to improve LibreNet6 just as actively as the image server.

GSoC 2018 – Easily Expandable WIDS – Final report

This summer I worked on an Easily Expandable Wireless Intrusion Detection System (called Eewids). In this blog post I’d like to present what I’ve done, the current state of the project, the lessons learned and what needs to be done in the future.

Project repository on GitHub: techge/eewids

What I’ve done

Analyzing existing projects

Before actually starting this project I analyzed existing Free and Open Source Software (FOSS) related to the IEEE 802.11 standard. I did this in part by looking through all the projects listed in the extensive wifi-arsenal of GitHub user 0x90, categorizing most of the entries myself.

I realized that there just is no complete ready-to-go WIDS yet, at least regarding FOSS. For me a WIDS should

  • detect most of the known Wi-Fi attacks,
  • scale easily and thus be able to work within big organizations and
  • be easily expandable.

Although there is indeed software on GitHub that can be used to detect Wi-Fi attacks, it is usually specialized in particular attacks and/or consists of hobby projects that would not fit the setups of bigger environments. Have a look at the defence-related Wi-Fi tools on the wifi-arsenal list.

A distributed, highly scalable system

Based on a proposal by my mentor Julius Schulz-Zander, I created a framework with a microservice approach. Instead of creating a monolithic solution, Eewids is supposed to consist of different well-known and actively developed pieces of software. The central point of Eewids is a message broker. The advantage is the independence between the different system components: new detection methods can be added to the system without worrying about the capture process, the parsing or even the presentation of the results. Everything goes through the message broker.

RabbitMQ is an excellent choice for such a message broker. It is not only the most widely deployed one, it also supports a lot of programming languages. No matter whether you want to add a detection method in C, Python or Java, there already exists a library to easily attach to RabbitMQ, get the captured and parsed data and send the results back (see RabbitMQ's devtools).
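
As a sketch of how little glue such a detection method needs, here is a minimal Python consumer using pika (the exchange and routing-key names are assumptions, not Eewids' actual configuration):

import json
import pika

def looks_suspicious(frame):
    return False               # placeholder for real detection logic

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="eewids", exchange_type="topic")
q = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="eewids", queue=q, routing_key="80211.parsed")

def on_frame(channel, method, properties, body):
    frame = json.loads(body)   # the parser publishes json-formatted data
    if looks_suspicious(frame):
        channel.basic_publish(exchange="eewids",
                              routing_key="alerts.detected",
                              body=json.dumps(frame))

ch.basic_consume(queue=q, on_message_callback=on_frame, auto_ack=True)
ch.start_consuming()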

The message broker also makes the whole system highly distributed and scalable, and thus helps fulfill the requirement of working for bigger organizations as well. Many organizations may already have experience with RabbitMQ clusters.

System components

Eewids consists of:

  • a capture tool to get the actual 802.11 data packets
  • a parser to transform the information into simple json-formatted data
  • a message broker (RabbitMQ)
  • detection methods
  • a visualization

The project aimed to find a solution for all components.

The capturing was supposed to be done by Kismet, which turned out to be a bad idea (see the section “Lessons learned”). Therefore, a simple capture tool was created, which will be further improved (see the section about future work). For parsing the packets, a parser was created (“eewids-parser”), in part based on the existing project radiotap-python. As a kind of proof of concept, the visualization software Grafana was added (in tandem with InfluxDB). This turned out to be a very good choice and is now integrated as the main visualization in Eewids. Grafana connects easily to various systems, so that other data within a company may be connected and visualized as well.

Overview of Eewids' components: RabbitMQ as the central point and a custom capture tool based on standard tools

Lessons learned

At first, I didn't dare to create a capture tool myself. I thought that this would take a lot of programming experience. Therefore, I chose Kismet as the basis for capturing the 802.11 data. Kismet is well-known and very popular. Still, it does not fulfill the above requirements: it is not a full-featured WIDS, it is monolithic, and its code proved difficult to read and change. I invested a lot of time in integrating Kismet into Eewids, building a Docker container (which did not exist before), reading out the pcap export from the REST interface, and so on. In the end I had integrated a feature-rich system only to use a single function: the capturing. After several crashes of the Kismet server and an analysis of the network performance, I decided to try to create a simple capture tool myself (see this blog post).

It turned out that it is not that difficult to create a simple capture tool with the help of libpcap. Although the capture tool is still very rudimentary, it already fulfills the function that Kismet served before. While I still think that it is always a good idea to look at existing projects first, next time I would trust my own skills more, to avoid wasting too much time integrating a system that is not suitable for the project at hand.
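
The actual tool is written against libpcap, but to show how small a minimal capture loop is, here is a rough Python analogue using scapy (the interface name is an assumption, and the card must already be in monitor mode):

from scapy.all import Dot11, sniff

def handle(pkt):
    if pkt.haslayer(Dot11):
        # hand the raw frame over for parsing/publishing
        print(pkt.addr2, pkt.type, pkt.subtype)

sniff(iface="wlan0mon", prn=handle, store=False)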

What needs to be done in future

While the framework itself is ready and works fine, there is still a lot to do. The capture tool should be further improved as described in this blog post. Getting used to libpcap took more time than anticipated, so I was not able to include all the features yet.

The parser should be extended depending on upcoming needs. For example, only some element IDs are parsed for now, although all main 802.11 fields are parsed. See the project page for an overview of all parsed data.

More dashboards for Grafana could be added to visualize as much data as possible. Furthermore, some useful notifications could be added by default.

Most of all, Eewids should gain more actual detection methods in the end. The RogueAP detection that was implemented during the summer is a start, but it served mainly as a proof of concept and shall be improved. As Eewids is supposed to be easily expandable, this last objective should partly be tackled by other people in the future.

In the end, I am eager to further improve the system even after the end of this Google Summer of Code. I am glad that I had the opportunity to work continuously on this project. I am now more confident in my skills than before, and I will surely have fewer doubts about starting a project from scratch in the future.

Lime-app ground routing configuration page (Final Report)

In this last report on what was done during the Google Summer of Code I want to review the tasks completed, the ones still pending, and the new tasks that arose from this work.

The main goal was to have a ground routing configuration page in Lime-app (the simple GUI for LibreMesh router management). This main goal was achieved.

Currently there are two pull requests that incorporate this functionality: one for the view (https://github.com/libremesh/lime-app/pull/153) and one for the ubus modules (https://github.com/libremesh/lime-packages-ui/pull/20).

In addition, the interface is translated into English and Spanish and incorporated into the wikitranslate scheme that LibreMesh uses.

Spanish translation

Unsuccessful goals

I had set myself the extra goal of designing the same user interface for LuCI, which unfortunately I didn't get to implement.

To do

In the interface you can currently configure a single link (link1); the job that remains is to support saving multiple links and editing them one by one. It is not a huge task, and I'll keep working on it until it's done.

Another pending task is to hide the administrative pages from the menu until the administrator logs in; this is related to the lime-app design and must be solved. The average LibreMesh user does not need ground routing, so displaying it in the menu would only generate confusion and possibly configuration errors.

Acknowledgements

In this last post I want to thank the Freifunk community, the LibreMesh team and especially Gio for his work as a mentor; he was always there to answer my questions and concerns. Finally, I would like to thank Google Summer of Code for its efforts during all these years and for its commitment to the development of open source software. Thank you very much, everyone.

A module for OLSRv2 for throughput estimation of 2-hop wireless links

Hi to community members!

In the phase 2 period, we set up an emulation environment in CORE and tested PRINCE with an iperf client/server (https://github.com/pasquimp/prince/tree/iperf) in it. We built a simple three-node topology (n1, n2, n3) where n1 is directly connected (within wireless coverage) to n2 and n2 is directly connected to n3; n1 reaches n3 through OLSR. The estimated neighbor throughput at the IP level is about 43 Mbps on a physical link of 54 Mbps (the figure showed the throughput estimated from n2 towards n1).

We also tested the initial version of the OONF plugin in CORE (https://github.com/pasquimp/OONF/tree/neighbor-throughput). The plugin is now able to send a pair of probe packets towards each neighbor and to record the reception times of the packets. I am currently investigating a problem in the reception of the probes.
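
The idea behind the probe pair is the classic packet-pair estimate: two packets sent back-to-back arrive separated by roughly the time the link needs to serialize one of them. A toy version of the arithmetic (illustrative only, not the plugin's code):

def packet_pair_throughput(size_bytes, t_first, t_second):
    # The gap between the two reception times approximates the
    # serialization time of one probe on the bottleneck link.
    gap = t_second - t_first          # seconds
    return size_bytes * 8 / gap       # bits per second

# e.g. a 1500-byte probe pair received 0.25 ms apart -> 48 Mbit/s
print(packet_pair_throughput(1500, 0.0, 0.00025))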

In the next weeks we will run further tests of PRINCE with iperf and of the OONF plugin, to resolve the problems in the reception phase. We will then perform timestamp-based throughput estimation, in order to compare the results obtained by PRINCE with iperf against those of the OONF plugin. We will update you in the coming weeks!

GSoC 2018 – Kernel-space SOCKS proxy for Linux – July progress

What we have so far

Last month I introduced my test setup intended for fast kernel trials and network development. After that I updated my shadowsocks-libev fork to version 3.2.0, the latest upstream stable release. This fork does not do any encryption, which is less secure but faster, and it fits our new approach: we can put the data plane into the kernel (because we cannot do any data modification in user space).

Possible solutions

The same problem emerged in a different environment recently: the cloud/datacenter scope. In the cloud, transmission between containers (like Docker) happens exactly as in our SOCKS proxy case: from user space to kernel, then back to user space (through the proxy), then back to kernel, and to user space again. Lots of unnecessary copies. There was an attempt to solve this: kproxy. It works pretty well, but there are two drawbacks: it is not merged into the kernel (the main part is a module, but it also modifies kernel headers), and in my tests it is slower than the regular proxy with the extra copies. Sadly I don't know the exact cause, but in my loopback tests on a patched 4.14 kernel it was about 30% slower than a regular proxy.

As far as I know, kproxy is not in development anymore; for TCP zero-copy there is a better solution, zproxy, but it is not released yet. However, some of the original kproxy code has already been merged into the kernel as part of the eBPF socket redirect feature: https://lwn.net/Articles/730011/
This is nice because it is standard and already in the vanilla 4.14 kernel, but it is a bit more complicated to instrument, so I will test it later.

The backup solution, if none of these works out, is a netfilter hook using the skb_send_sock function, but that version is very fragile and hacky.

GSoC 2018 – Ground Routing in LimeApp – 2nd update

Hello! This past month I was working on validation of the configuration in both the frontend and the backend.

Basically, it confirms that the minimum parameters needed to generate the basic configuration are present and of the corresponding types. The validation is done twice because the ubus module may be used by other applications in the future, so its correct use is guaranteed on the backend, while validation in the frontend allows a faster response to the user.
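
As an illustration of the kind of check involved, here is a sketch with made-up parameter names (not the actual lime-app/ubus code):

REQUIRED = {"interface": str, "vlan_id": int, "protocol": str}

def validate(config):
    # Confirm the minimum parameters exist and have the expected types.
    errors = [key for key, expected in REQUIRED.items()
              if not isinstance(config.get(key), expected)]
    return len(errors) == 0, errors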

View for LuCI

While doing all this I started to develop the basic view for LuCI; although the goal of GSoC is to develop the view for the Lime-app, I can do both by reusing much of the code. In the next few days I will upload some screenshots.

GSoC 2018 – Better map for nodewatcher (2nd update)

Hello everyone,

I am very happy to say that since my last update I have implemented most of the features I talked about and was able to test them with real data.

In the last update I talked about how I started my own local Leaflet map with which I wanted to test every feature before implementing it. While doing that I also needed to go through most of the nodewatcher code to see how the map is generated. The problem was that nodewatcher uses Django templates and many custom scripts placed in multiple locations. It took some time to figure out what each part was doing, because the map was made at the start of nodewatcher and wasn't documented well. This took most of my time, but once I had figured out where everything was I was able to start implementing most of my code.

The implementation went surprisingly fast, so I was able to test everything on my own nodewatcher server that I had started at the beginning of GSoC. The only problem was that I didn't have any nodes to show on my map. I worked around this by redirecting my API calls to gather node data from the nodes.wlan-si.net server, the wlan slovenija nodewatcher server, which has over 300 active nodes. In the pictures below you can see what I have currently implemented:

  • The fullscreen map option
  • A popup with some general information about the node, shown when you click on it; clicking the name in the popup takes you to that node's page
  • A sidebar that gives you a list of all currently online nodes, with a search bar and the ability to show each one on the map

The next thing for me is to try to implement one more feature: the ability to see nodes that have gone offline in the past 24 hours. I say try because I have already looked into it, and the problem is that the current API doesn't have a filtering option, so I can't request only the nodes that have the location parameter set. I will also focus on writing good documentation, because that is something nodewatcher currently lacks, and it would have helped me a lot.