LibreNet6 – final report

A tale of dependencies

Creating the LibreNet6 service depends heavily on LibreMesh, as the former builds on the latter's existing scripts. Issues within the LibreMesh framework therefore broadened my working scope to other areas as well. In the process I discovered some general flaws in the framework, which I will now happily fix over the next several weeks, independently of the Google Summer of Code. One focus will be network profile management, which is currently somewhat messy, so that new users can set up their own networks without a deeper understanding of the lower levels of LibreMesh configuration. The network profile issue is closely related to LibreNet6: as of now, users still need some SSH and shell skills to connect.

To understand the current problem, feel free to check the mailing list archive.

The output

On the surface, the result of this project is an English manual on how to set up the IPv6 gateway and server. Instead of just translating (and correcting) the existing manual, I read up on Tinc 1.1 and its new features (such as the invite and join commands), which vastly simplify the setup process for clients. It's meant as a step-by-step manual that only requires the user to know basic SSH and shell usage.

For the backend, Altermundi provided a VM which will serve as the IPv6 server, acting as a link between the IPv6 gateways (the client devices) and a real IPv6 uplink. The server is set up as described in the previously mentioned manual.

IPv6 broker vs LibreNet6

As the network uses Tinc, all IPv6 gateways form a mesh network. When routing traffic between networks, they can use the IPv6 server to route the IPv6 traffic, but they may also connect directly to other gateways via IPv4. This behaviour was one of the initial motivations for LibreNet6, as it greatly reduces ping latencies in cases where the IPv6 server is on another continent but two different mesh clouds are close to one another: both IPv6 gateways connect directly to each other, routing traffic over IPv4 without using the IPv6 server.

Interest of LibreRouter

People from the LibreRouter project wrote to me about their interest in integrating this feature into the LibreRouter v2. In that case it would not only enable IPv6 connectivity but also work as a remote help system, where users may receive setup help from the LibreRouter team. This feature is planned for the near future, but the details are not yet settled.

Migrating from existing LibreNet6 setups

Now that the server works, future work has to be done to migrate all existing setups to the new server. I'll work on that over the next few months, independently of the GSoC.

Final thoughts

This was my second time participating in the Google Summer of Code, both times for a LibreMesh project. I'm happy they were satisfied with last year's project and chose me again this year. Last year's project took quite some time until users started to use it, but I'm happy to see it now being used on a daily basis. In the future I will try to improve LibreNet6 just as actively as the image server.

GSoC 2018 – Easily Expandable WIDS – Final report

This summer I worked on an Easily Expandable Wireless Intrusion Detection System (called Eewids). In this blog post I’d like to present what I’ve done, the current state of the project, the lessons learned and what needs to be done in the future.

Project repository on GitHub: techge/eewids

What I’ve done

Analyzing existing projects

Before actually starting this project I analyzed existing Free and Open Source Software (FOSS) connected to the IEEE 802.11 standard. I did this in part by looking through all the projects listed in the extensive wifi-arsenal of GitHub user 0x90, categorizing most of the entries myself.

I realized that there simply is no complete, ready-to-go WIDS yet, at least as far as FOSS is concerned. In my view, a WIDS should

  • detect most of the known Wi-Fi attacks,
  • scale easily and thus be able to work within big organizations and
  • be easily expandable.

Although there is indeed software on GitHub that can be used to detect Wi-Fi attacks, these tools are usually specialized in a few attacks and/or they are hobby projects that would not fit the setups of bigger environments. Please have a look at the defence-related Wi-Fi tools on the wifi-arsenal list.

A distributed, highly scalable system

Based on a proposal by my mentor Julius Schulz-Zander, I created a framework with a microservice approach. Instead of being a monolithic solution, Eewids is supposed to consist of several well-known and actively developed software components. The central point of Eewids is a message broker. The advantage is independence between the different system components: new detection methods can be added to the system without having to worry about the capture process, the parsing or even the presentation of the results. Everything leads to the message broker.

RabbitMQ is an awesome choice for such a message broker. It is not only the most widely deployed message broker, but it also supports many programming languages. Whether you want to add a detection method in C, Python or Java, there already exists a library to easily attach to RabbitMQ, get the captured and parsed data and send the results back (see RabbitMQ's devtools).
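As an illustration, here is a minimal sketch of a detection service attached to the broker using the Python library pika; the exchange, queue and routing-key names are my own assumptions, not Eewids' actual naming:

```python
# Minimal sketch of an Eewids-style detection service (names are
# illustrative): consume parsed frames, publish alerts back.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="eewids", exchange_type="topic")

# An exclusive, auto-named queue bound to all parsed frames.
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="eewids", queue=queue, routing_key="frames.parsed")

def on_frame(ch, method, properties, body):
    # Run the actual detection logic on the parsed frame here, then
    # publish any alert back for the visualization to pick up.
    ch.basic_publish(exchange="eewids", routing_key="alerts", body=b"...")

channel.basic_consume(queue=queue, on_message_callback=on_frame, auto_ack=True)
channel.start_consuming()
```

The point of this pattern is that the capture and parsing services never need to know that this consumer exists.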

The message broker also helps to make the whole system highly distributed and scalable, and thus helps fulfill the requirement of working for bigger organizations as well. Many organizations may already have experience with RabbitMQ clusters.

System components

Eewids consists of:

  • a capture tool to get the actual 802.11 data packets
  • a parser to transform the information into simple json-formatted data
  • a message broker (RabbitMQ)
  • detection methods
  • a visualization

The project aimed to find a solution for all components.

The capturing was supposed to be done by Kismet, which turned out to be a bad idea (see section "Lessons learned"). Therefore, a simple capture tool was created, which will be further improved (see the section about future work). For parsing the packets, a parser ("eewids-parser") was created, based in part on the existing project radiotap-python. As a kind of proof of concept, the visualization software Grafana was added (in tandem with InfluxDB). This turned out to be a very good choice and is now integrated as the main visualization in Eewids. Grafana connects easily to various systems, so that other data within a company may be connected and visualized as well.
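To give an idea of what travels through the broker, here is a hypothetical example of a json-formatted parsed frame; the field names are illustrative, not eewids-parser's actual schema:

```json
{
  "frame_type": "mgmt",
  "frame_subtype": "probe-req",
  "mac_source": "aa:bb:cc:dd:ee:ff",
  "channel": 6,
  "rssi": -61,
  "capture_source": "capture-01"
}
```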

Overview of Eewids' components: RabbitMQ as the central point and a custom capture tool based on standard tools

Lesson’s learned

At first, I didn't dare to create a capture tool myself. I thought that this would take a lot of programming experience. Therefore, I chose Kismet as a basis for capturing the 802.11 data. Kismet is well-known and very popular. Still, it does not fulfill the above requirements – it is not a full-featured WIDS, it is monolithic, and the code proved to be difficult to read and change. I invested a lot of time to integrate Kismet into Eewids, to build a Docker container (which did not exist before), to read out the pcap export from the REST interface, etc. In the end I had integrated a feature-rich system only to use one function – the capturing. After several crashes of the Kismet server and an analysis of the network performance, I decided to try to create a simple capture tool myself (see this blog post).

It turned out that it is not that difficult to create a simple capture tool with the help of libpcap. Although the capture tool is still very rudimentary, it already fulfills the function which Kismet served before. While I still think it is always a good idea to look at existing projects first, next time I would trust my own skills more, to avoid wasting too much time integrating a system that is not suitable for the project at hand.
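The actual tool is built on libpcap; purely to illustrate how little code basic capturing needs, here is a rough Python equivalent using scapy, assuming a wireless interface already in monitor mode (the interface name mon0 is an assumption):

```python
# Minimal 802.11 capture sketch with scapy (not the actual Eewids tool,
# which uses libpcap). Requires root and a monitor-mode interface.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

def handle(pkt):
    if pkt.haslayer(Dot11):
        h = pkt[Dot11]
        # At this point the frame would be handed to the parser/broker.
        print(h.type, h.subtype, h.addr2)

sniff(iface="mon0", prn=handle, store=False)
```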

What needs to be done in the future

While the framework itself is ready and works fine, there is still a lot to do. The capture tool should be further improved as described in this blog post. Getting used to libpcap took more time than anticipated, so I was not able to include all features yet.

The parser should be extended depending on upcoming needs. For example, only some element IDs are parsed for now, though all main 802.11 fields are parsed. See the project's page for an overview of all parsed data.

More dashboards for Grafana could be added to visualize as much data as possible. Furthermore, some useful notifications could be added by default.

Most of all, Eewids should eventually get more actual detection methods. The RogueAP detection that was implemented during the summer is a start, but it served mainly as a proof of concept and shall be improved. As Eewids is meant to be easily expandable, this last objective can hopefully be tackled in part by different people in the future.
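To sketch the general idea (this is an illustration, not the actual implementation, and the field names are again hypothetical), a whitelist-based rogue-AP check over parsed frames could look like this:

```python
# Hypothetical whitelist-based rogue-AP check; a real detector would run
# as its own service attached to the message broker.
KNOWN_APS = {("MyCorpWiFi", "aa:bb:cc:dd:ee:01"),
             ("MyCorpWiFi", "aa:bb:cc:dd:ee:02")}

def check_beacon(frame: dict):
    seen = (frame["ssid"], frame["mac_source"])
    if frame["frame_subtype"] == "beacon" and seen not in KNOWN_APS:
        return f"possible rogue AP: {seen[0]} from {seen[1]}"
    return None
```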

In the end, I am eager to further improve the system even after the end of this Google Summer of Code. I am glad that I had the opportunity to work continuously on this project. Now I am more confident in my skills than before, and I'll surely have fewer doubts about actually starting a project in the future.

Lime-app ground routing configuration page (Final Report)

In this last report on what was done during the Google Summer of Code, I want to review the completed tasks, the pending ones, and the new tasks that arose from this work.

The main goal was to have a ground routing configuration page in Lime-app (a simple GUI for LibreMesh router management). This main goal was achieved.

Currently there are two pull requests that incorporate this functionality, one for the view (https://github.com/libremesh/lime-app/pull/153) and one for the ubus modules (https://github.com/libremesh/lime-packages-ui/pull/20).

In addition, the interface is translated into English and Spanish and incorporated into the wikitranslate scheme that LibreMesh uses.

Spanish translation

Unsuccessful goals

I had set myself the extra goal of designing the same user interface for LuCI, which unfortunately I didn't get to implement.

To do

In the interface you can currently configure a single link (link1); the job that remains is to save multiple links and edit them one by one. It is not a big job, and I'll keep working on it until it's done.

Another pending task is to make the administrative pages hidden from the menu until the administrator logs in; this is related to the Lime-app design and must be solved. The average user of LibreMesh does not need ground routing, so displaying it in the menu would only generate confusion and possibly configuration errors.

Acknowledgements

In this last post I want to thank the Freifunk community, the LibreMesh team and especially Gio for his work as a mentor; he was always there to answer my questions and concerns. Finally, I would like to thank Google Summer of Code for its efforts during all these years and for its commitment to the development of open source software. Thank you very much, everyone.

A module for OLSRv2 to estimate the throughput of 2-hop wireless links

Hi community members!

In the phase 2 period, we set up an emulation environment in CORE, then tested PRINCE with an iperf client/server (https://github.com/pasquimp/prince/tree/iperf) in CORE. We built a simple three-node topology (n1, n2, n3) where n1 is directly connected (within wireless coverage) to n2 and n2 is directly connected to n3; n1 reaches n3 through OLSR. The estimated neighbor throughput at the IP level is about 43 Mbps on a 54 Mbps physical link (the figure shows the throughput estimated from n2 towards n1).

We also tested the initial version of the OONF plugin in CORE (https://github.com/pasquimp/OONF/tree/neighbor-throughput). The plugin is now able to send a pair of probe packets towards each neighbor and to read the packets' reception times. I'm currently investigating a problem in the reception of the probes.
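For context, a pair of back-to-back probes enables a simple dispersion-based estimate: the bottleneck link spaces the two packets by size/capacity. Here is a toy Python illustration of that idea (my reading of the approach, not the plugin's actual code):

```python
# Packet-pair dispersion: two back-to-back probes of S bytes leave the
# bottleneck link spaced by S / capacity, so capacity ≈ S / spacing.
def estimate_throughput_bps(size_bytes: int, t_first: float, t_second: float):
    dispersion = t_second - t_first  # seconds between the two receptions
    if dispersion <= 0:
        return None
    return size_bytes * 8 / dispersion

# Two 1500-byte probes received 0.28 ms apart -> ~43 Mbit/s at IP level,
# roughly the figure measured above.
print(estimate_throughput_bps(1500, 0.0, 0.00028))
```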

In the next weeks, we will run further tests with PRINCE plus iperf and with the OONF plugin to resolve the problems in the reception phase, and then perform timestamp-based throughput estimation in order to compare the results obtained by PRINCE with iperf against those of the OONF plugin. We will update you in the coming weeks!

GSoC 2018 – Kernel-space SOCKS proxy for Linux – July progress

What we have so far

Last month I introduced my test setup intended for fast kernel trials and network development. After that, I updated my shadowsocks-libev fork to version 3.2.0, the latest upstream stable release. This fork doesn't do any encryption, which is less secure but faster – and it enables our new approach: we can put the data plane into the kernel (because we no longer do any data modification in userspace).

Possible solutions

The same problem emerged in a different environment recently: the cloud/datacenter scope. In the cloud, transmission between containers (like Docker) happens exactly as in our SOCKS proxy case: from user space to kernel, then back to user space (through the proxy), then back to kernel, and finally to user space again. Lots of unnecessary copies. There was an attempt to solve that: kproxy. This solution works pretty well, but there are two drawbacks: it is not merged into the kernel (the main part is a module, but it also modifies kernel headers), and in my tests it is slower than a regular proxy with the extra copies. Sadly I don't know the exact cause, but my loopback tests on a patched 4.14 kernel were about ~30% slower than a regular proxy.
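To illustrate what avoiding those copies means in practice: even a user-space process can ask the kernel to move bytes between two sockets without ever copying them into the process, e.g. with splice(2) through a pipe. A rough Python 3.10+ sketch of such a relay loop follows; this illustrates the copy-avoidance idea, not kproxy's actual mechanism:

```python
# Relay src -> dst without the payload entering this process's buffers:
# splice() moves the data kernel-side through an intermediate pipe.
# Linux-only; assumes blocking sockets.
import os
import socket

def relay(src: socket.socket, dst: socket.socket, chunk: int = 65536):
    rpipe, wpipe = os.pipe()
    while True:
        n = os.splice(src.fileno(), wpipe, chunk)  # socket -> pipe
        if n == 0:  # peer closed the connection
            break
        while n:
            n -= os.splice(rpipe, dst.fileno(), n)  # pipe -> socket
```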

As far as I know, kproxy is not in development anymore, because with TCP zero-copy there is a better solution, zproxy, but it's not released yet. However, some of the original kproxy code has already been merged into the kernel as part of the eBPF socket redirect function: https://lwn.net/Articles/730011/
This is nice because it's standard and already in the vanilla 4.14 kernel, but it is a bit more complicated to instrument, so I will test it later.

As a backup solution, if none of these works, I will try a netfilter hook using the skb_send_sock function, but that approach is very fragile and hacky.

GSoC 2018 – Ground Routing in LimeApp – 2nd update

Hello! This past month I was working on validating the configuration in both the frontend and the backend.

Basically, this confirms that the minimum parameters needed to generate the basic configuration are present and of the corresponding types. The validation is done twice because the ubus module may be used by other applications in the future, and this way its correct use is guaranteed, while validation in the frontend allows a faster response to the user.
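As an illustration of the kind of check involved (the real validations live in the Lime-app frontend and the ubus module; the parameter names below are hypothetical):

```python
# Illustrative presence/type validation for a ground routing config.
REQUIRED = {"vlan": int, "interface": str, "protocol": str}

def validate(config: dict) -> list:
    errors = []
    for name, expected_type in REQUIRED.items():
        if name not in config:
            errors.append(f"missing parameter: {name}")
        elif not isinstance(config[name], expected_type):
            errors.append(f"{name} must be of type {expected_type.__name__}")
    return errors

# validate({"vlan": "12", "interface": "eth0"}) ->
# ["vlan must be of type int", "missing parameter: protocol"]
```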

View for LuCI

While doing all this I started to develop the basic view for LuCI; although the goal of GSoC is to develop the view for Lime-app, I can do both by reusing much of the code. In the next few days I will upload some screenshots.

GSoC 2018 – Better map for nodewatcher (2nd update)

Hello everyone,

I am very happy to say that since my last update I have been able to implement most of the features I talked about and to test them with real data.

In the last update I talked about how I started my own local Leaflet map with which I wanted to test every feature before implementing it. While doing that I also needed to go through most of the nodewatcher code to see how the map is generated. The problem here was that nodewatcher uses Django templates and many custom scripts placed in multiple locations. It took some time to figure out what each part was doing, because the map was made at the start of nodewatcher's development and wasn't documented well. So this took most of my time, but after I figured out where everything was I was able to start implementing most of my code.

The implementation went surprisingly fast, so I was able to test everything on the nodewatcher server that I set up at the beginning of GSoC. The only problem was that I didn't have any nodes to see on my map. I was able to work around this by redirecting my API call to gather node data from nodes.wlan-si.net, the wlan slovenija nodewatcher server, which has over 300 active nodes. In the pictures below you can see what I have currently implemented:

  • The fullscreen map option
  • A popup with general information about the node, shown when you click on it; clicking the name in the popup takes you to that node's website
  • A sidebar that lists all currently online nodes, with a search bar and the ability to show each one on the map

The next thing for me is to try to implement one more feature: the ability to see nodes that have gone offline in the past 24 hours. I say try because I have already looked into it, and the problem is that the current API doesn't have a filtering option, so I can't request only the nodes that have the location parameter set. I will also focus on writing good documentation, because that is something nodewatcher currently lacks, and it would have helped me a lot.
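Without server-side filtering, the workaround has to happen on the client, along these lines (the API path and field names are illustrative, not the real nodewatcher schema):

```python
# Hypothetical client-side workaround: fetch all nodes and keep only
# those that have a location set.
import requests

nodes = requests.get("https://nodes.wlan-si.net/api/nodes/").json()
located = [n for n in nodes if n.get("location")]
print(f"{len(located)} of {len(nodes)} nodes have a location set")
```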

LibreNet6 – update 2

This is a quick update on my work on LibreNet6 and LibreMesh over the last weeks. The exam period in Tokyo started and I had a cold, which slowed me down a bit; once both have passed I will focus on the project with doubled concentration again!

Multiple servers

Using Tinc allows more than one IPv6 server, making it possible to connect the servers of multiple communities using different IPv6 subnets. Babeld automatically detects where to route traffic for external subnetworks. This matters because there can be high latency between a mesh gateway and its IPv6 server, which would slow down traffic. Using Tinc and babeld, I ran a setup with two mesh gateways using two different IPv6 subnets. While pings to the other network had high latencies at first (me in Tokyo, one IPv6 server in London and one in Argentina), Tinc automatically exchanged the IPv6 addresses of the mesh gateways, which could then connect directly, lowering the latencies. Summarizing this experiment: using Tinc makes the network independent of the servers' public IPv6 addresses.

No lime-app plugin

Initially I thought of creating a lime-app plugin to easily request access to a Tinc mesh. However, after an evaluation with my mentor and reading more about Tinc, we decided against it: the new 1.1 release of Tinc not only simplifies joining a mesh by offering the invite and join commands, but also does all configuration automatically with the help of an invitation file. These new features simplify the project much more than I thought when following the Spanish documentation on Altermundi.

Adding some security

As some parts turned out easier than expected, I thought of looking into additional tasks for the project. Currently the use of babeld requires all users of the mesh to fully trust one another, as babeld does not provide any security (that I could find) regarding announced routes. Secure mesh routing is offered by BMX7, which introduces a model to set (dis)trust between nodes. For this reason I've been in contact with Axel Neumann, the developer of BMX7, to fix a long-standing error in OpenWrt which led to wrong MTU values in BMX7. The fix was merged upstream and thereby makes it possible to test BMX7 over Tinc as a secure babeld alternative.

English documentation

Besides the experiments, I've started to translate (and simplify) the Spanish documentation of LibreNet6 and will upload it to the GitHub repository once finished. An important part is how to configure 6to4 tunnels, as surprisingly few VM providers offer IPv6 connectivity by default, often providing only a single public IPv4 address.

nodewatcher: Build system rework and package upstreaming – Second update

Hi,

The last weeks have been spent solely on reworking the build system.

First, it was a matter of rebranding the current LEDE back into OpenWrt and fixing a couple of hard-coded names that would cause issues with the OpenWrt name. It also involved dropping the old OpenWrt build system, which has not been used for years and most likely never will be again; that removes unnecessary code to maintain.

After rebranding, I spent some time verifying that the whole system still works.
Fortunately, there were only small bugs which were simple to fix.

And then came the main task of this project: to completely rework and massively simplify the whole job of building the image builder, making it a lot easier and less resource-intensive.

Firstly, since I was still going to use Docker images for the build environment, the base image (the actual build environment) needed updating from the old Ubuntu 14.04 Trusty to the fresh 18.04 Bionic. This proved to be mostly trial and error, as far fewer default packages are included in 18.04, so getting all dependencies working took a while. The base image is now working fine and is relatively small, actually smaller than the 14.04 base image, thanks to fewer unnecessary packages.

Once the base image was sorted out, I finally got to work on dropping the unnecessary scripts, Dockerfiles and all of the hardcoded build files.

This proved not to be so hard, so work on a new Docker-based build system started.

So far it’s broken into only 4 separate scripts:

  1. docker-prepare-buildsystem: as its name hints, this builds the base image and installs the needed packages. I am still considering pulling this from the auto-built image on Docker Hub instead.
  2. generate-dockerfiles: generates the temporary Dockerfiles needed for building inside the Docker 18.04 base image.
  3. docker-build: actually “builds” the image builder and SDK.
  4. build: the main script, which simply calls the others to configure and build everything.

The number of scripts will most likely grow by one or two, since the built image builder with all of its packages needs to be packaged and then deployed in a runtime-specific image which will contain only the bare minimum of packages, to keep it as lightweight as possible.

Currently, building works fine for most custom packages using the SDK, but it's stuck at building ncurses with a weird LC_TIME assertion error which I need to fix.

So the next period will be strictly for fixing bugs and finishing the build system.
After that is done, I will update the custom packages and try to get them upstreamed.

GSoC 2018 – DAWN a decentralized WiFi controller (2nd update)

Hi,
I am still trying to get my patches upstream.
For the libiwinfo patch I had to add the Lua bindings. I had never used Lua, so first I had to get comfortable with it. Additionally, I wanted to add the channel utilization to the LuCI statistics app, but suddenly LuCI is giving me a null pointer exception in the dev branch.


Additionally, I tried to get comfortable with LuCI in order to develop my own app.
Meanwhile, another developer created nearly the same patch for iwinfo, adding the survey data for the nl80211 driver… That patch has still not been accepted either. The only difference is that it returns the survey data for all channels (like iw dev wlan0 survey dump)…
Furthermore, my pull request for the hostapd ubus bindings that adds information about the HT and VHT capabilities had to be rewritten (https://github.com/openwrt/openwrt/pull/898). Again I have to wait for feedback. While rewriting this patch, I had a new idea: if you subscribe to hostapd via ubus and want to be notified about its messages, you have to activate this. It would be possible to add flags in hostapd_ubus_bss to select which information should be published via the ubus bus. Before doing so, I want some feedback on whether this is a good idea.

If somebody wonders why I am interested in the capabilities: I want to create a hearing map for every client. I'm building this hearing map from probe request messages, which contain information like the RSSI, capabilities, HT capabilities, VHT capabilities, and so on. VHT gives clients the opportunity to transfer at up to 1.75 Gbit/s (theoretically…), so when selecting an AP for a client, you should consider these capabilities. In the normal hostapd configuration you can even set a flag that forbids 802.11b rates. If you are interested in what happens when an 802.11b client joins your network, search for: WiFi performance anomaly. 🙂
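To make the hearing map idea concrete, here is a hypothetical sketch (DAWN itself is written in C; the names and fields here are illustrative only):

```python
# Hypothetical sketch of a hearing map: for every client, record which
# APs hear its probe requests and how well.
from collections import defaultdict

hearing_map = defaultdict(dict)  # client MAC -> {AP id: probe info}

def on_probe_request(ap_id, client_mac, rssi, vht):
    hearing_map[client_mac][ap_id] = {"rssi": rssi, "vht": vht}

def best_ap(client_mac):
    # Prefer VHT-capable links, then the strongest signal.
    aps = hearing_map[client_mac]
    return max(aps, key=lambda ap: (aps[ap]["vht"], aps[ap]["rssi"]))
```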

Summarizing: I spent a lot of time waiting for feedback, debugging, modifying my patches and replying on the mailing lists. It is a bit frustrating.
The cool part was that my project received its first pull request. 🙂 (it was just a typo fix ^^) But somebody took the time to fork my project and create a pull request. 😉
Furthermore, it is exam time and I have a lot of stuff to do for the university.

Actually, I wanted to move on to more interesting things, like connecting to the netifd daemon to get more information.

Or to look at PLC (power-line communication); there is an interesting paper on this: “EMPoWER Hybrid Networks: Exploiting Multiple Paths over Wireless and ElectRical Mediums”.