Hello again! For the past two months I have been working on my project, and I have achieved all of its goals.
An alpha version of my program was released for the mid-term evaluation. Since then I have fixed all the bugs, packaged the program for OpenWRT, tested the code, and written the documentation.
Everything is available on the GitHub repository.
I structured my work by opening issues for every bug I had to fix and everything I wanted to improve. Then I organized the issues into milestones. Milestone 0.1 is the one I had to complete to finish the project. When that milestone was closed, I created the “v0.1” branch.
To be sure that my program worked correctly, I wrote a simulator in Python. It consists of a small server with the same interface as OONF’s telnet plugin, plus two Python libraries written by my mentor: cn_generator and pop-routing. The first generates synthetic graphs using the Cerda-Alabern algorithm; the second is a pop-routing implementation in Python. In the server I implemented the commands to push the NetJSON topology and those to receive the timer values.
When Prince requests a topology from my test program, cn_generator generates a random graph and pushes it back; meanwhile, the reference values are computed using the Python implementation of pop-routing. The values received from Prince are then compared to those calculated in Python.
The goal of the tests is to verify that the difference between the reference and the measured values is always less than 1%.
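The comparison step can be sketched as a simple relative-error check. This is an illustrative snippet, not the simulator's actual code; the function name and the data are hypothetical.

```python
# Hypothetical sketch of the comparison step between Prince's measured
# values and the Python pop-routing reference values.
def within_tolerance(reference, measured, tol=0.01):
    """Return True if every measured value is within `tol` (1%)
    of the corresponding reference value."""
    for node, ref in reference.items():
        meas = measured[node]
        # Relative difference, guarding against a zero reference.
        denom = abs(ref) if ref != 0 else 1.0
        if abs(meas - ref) / denom > tol:
            return False
    return True

reference = {"node1": 0.50, "node2": 1.20}
measured = {"node1": 0.502, "node2": 1.195}
print(within_tolerance(reference, measured))  # True: both within 1%
```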
I'm going to use my work to write my Bachelor Thesis, so I wanted to perform some measurements to check how well it worked.
My goal was to implement the algorithm on an embedded device, so I chose to measure the execution time on a “Ubiquiti Picostation M2HP” to see how well it was performing.
I branched Prince, modifying the code to measure the time needed to calculate the betweenness centrality and to push it back along with the timers.
I used the graph generator to create graphs from 5 to 500 nodes, taking a sample every 10 nodes, and measured the computation time. For each sample I ran 5 tests, then calculated the mean and the standard deviation. The results are shown here:
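The per-sample statistics are straightforward; as a small sketch (the timing values below are made up for illustration):

```python
import statistics

# Illustrative only: timings in seconds for the 5 runs of one sample size.
runs = [7.9, 8.1, 8.0, 8.2, 7.8]
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)  # sample standard deviation
print(mean)  # 8.0
```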
As you can see from the graph above, the computation time on the embedded device is quite good with the heuristic enabled (8 s for 100 nodes), but it proved to be unusable without it (100 s for 100 nodes).
The last objective I set myself in the previous post was to write a plugin for OLSRd. Since OLSRd isn't maintained anymore - and since all the developers are working on OONF - I decided to focus on OONF and skip the plugin. Instead, I wrote a makefile and packaged Prince for OpenWRT/LEDE. The makefile hasn't been published in any OpenWRT feed yet, but it is hosted in my repository. Instructions are available along with the documentation.
I’ll keep working on this project, maintaining the code and fixing future issues. Since the graph-parser library is the last piece of code implemented in C++ and it depends on the Boost libraries, I’m looking forward to re-implementing it in pure C.
One of my goals was also to run Prince in my WCN (Ninux Firenze), but switching from OLSRd to OONF is taking more time than expected, so I hope to try it in the future.
Now that GSoC 2016 is coming to an end, please allow me to update you on the powquty project. Despite some delays regarding the hardware, the goals set for GSoC 2016 have been reached. The picture below shows a compact demonstration of powquty.
The picture depicts the set-up for powquty, which consists of a LEDE-based wireless router connected over a serial interface to a PC, where a terminal session to the router is running. The oscilloscope is attached to the router via USB. In the router's terminal session we see that powqutyd has been started, with the corresponding messages shown on stdout. These messages show the calculated power parameters.
To install powquty, follow these steps:
src-git powquty https://github.com/thuehn/powquty.git
Note that this package depends on the following libraries/packages, which have to be installed before installing powqutyd:
When successful, the powqutyd package will create:
Before running powqutyd you need to configure it.
For the powquty project, my mentor and I set up a dedicated Github page accessible under the following link:
There the list of my commits can be seen under:
The following picture depicts the folder structure of the work done during the project.
During the project I repeatedly had to reschedule the planned tasks according to the progress made and to other circumstances. The order of the tasks changed significantly from the original plan, yet this was necessary to achieve the goals of the project. In the end, I carried out the tasks in the following order:
Goals and future work
As described in the first blogpost, the goal of the powquty project within GSoC 2016 is to create a LEDE package which ensures the following three functionalities:
At this point the powquty project has reached its first big milestone: the basic functionalities set for GSoC 2016 have been completely implemented.
Yet the end of one milestone is the beginning of the next. This is why I conclude with some propositions for future work on powquty:
Nonetheless, please allow me to express my gratitude to my mentor, Dr. Thomas Hühn, for his outstanding support and constant availability. Great thanks are also due to Freifunk, whose distinguished voluntary efforts - not only in regard to GSoC - made this possible in the first place. Last but not least, great credit is due to Google for encouraging this innovative type of value creation.
Now the time has come to explain the full Google Summer of Code project. In the two previous blog posts I explained the work packages and the structure of the project. In the first post I described the three main subjects. Here is a short overview to remind you of the project structure:
The sub-projects are background work for community projects.
The mainline Google Summer of Code project is to develop a new firmware for routers based on LEDE. The third part consists of seminars to shed light on technical aspects of the Freifunk community.
First I would like to list all the sub-projects and their status.
The first of the sub-projects is the hoodselector. To explain this construct, I would like to walk through the following points to give you a good understanding of the concept. In the Nordwest Freifunk community we had one big problem: due to the batman-adv management traffic, the network setup does not really scale. This problem also exists in many other communities that have thousands of mesh routers inside one single network. If there are too many routers inside one layer 2 network, the batman-adv management traffic will flood the network and make it useless.
Therefore, in the Nordwest Freifunk community we decided to develop the concept of hood-networking. This concept consists of two main components: the hoodfile and the hoodselector. The hoodfile is a JSON file containing all the information necessary to define a geo-based hood. A hood is defined by geographically fixed quadrants, VPN-peering and wireless configurations. Multiple hoods are defined inside a hoodfile. Following is an example of the definition of a hood:
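As a rough illustration of the fields described below (name, bssid, defaulthood, vpn, boxes), a hood entry might look like this; the exact schema and values here are hypothetical:

```json
{
  "name": "ol",
  "bssid": "02:00:0A:12:A0:00",
  "defaulthood": false,
  "vpn": [
    { "host": "vpn1.example.org", "key": "..." }
  ],
  "boxes": [
    [ [53.10, 8.15], [53.18, 8.28] ]
  ]
}
```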
A hood has the following definition:
The name describes the region, depending on its geo coordinates. For example, if you create a hood over the city of Oldenburg (Oldb) (Germany), a good name could be Oldenburg, or ol for short. Every name has to be unique inside a hoodfile; redundant names are not allowed!
The bssid will be set on the ad-hoc wireless interface. This is the main part of splitting the layer 2 network. The IPv4 sub-network used inside the hood is encoded in the bssid. The JSON above encodes the IPv4 sub-network “10.18.160.0/21” as follows: dec to bin => “0000’1010 0001’0010 1010’0000 0000’0000”, bin to hex => 0x0A 0x12 0xA0 0x00, hex to MAC => 02:00:0A:12:A0:00. Therefore the bssid should also be unique.
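The encoding above can be reproduced in a few lines. This is an illustrative sketch of the described scheme, not the actual hoodselector code:

```python
import ipaddress

def subnet_to_bssid(cidr):
    """Encode an IPv4 sub-network into a locally administered BSSID,
    following the scheme described above (sketch, details may differ)."""
    net = ipaddress.ip_network(cidr)
    octets = net.network_address.packed  # e.g. 0x0A 0x12 0xA0 0x00
    # The 02:00 prefix marks a locally administered MAC address.
    return "02:00:" + ":".join(f"{b:02X}" for b in octets)

print(subnet_to_bssid("10.18.160.0/21"))  # 02:00:0A:12:A0:00
```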
The defaulthood boolean is true only for the default hood. The default hood doesn't have coordinates; it is the inverse of all other hoods with geo coordinates.
This contains an array of VPN connection details:
VPN server address (host)
VPN crypto key
One VPN server should be used for one hood only! If two hoods share the same VPN server, batman-adv will loop them over the VPN.
This three-dimensional array describes the geographical extent of the hood. The surface is rectangular, so you only need two points per box to reconstruct it. Here is an example:
Each hood can have any number of rectangles inside the boxes array.
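Checking whether a position falls inside a hood then reduces to a point-in-rectangle test over the boxes. A sketch (not the actual hoodselector code):

```python
# Each box is two corner points; a point is inside the hood if it
# lies within any of the hood's boxes.
def in_hood(lat, lon, boxes):
    for (lat1, lon1), (lat2, lon2) in boxes:
        if min(lat1, lat2) <= lat <= max(lat1, lat2) and \
           min(lon1, lon2) <= lon <= max(lon1, lon2):
            return True
    return False

boxes = [((53.10, 8.15), (53.18, 8.28))]
print(in_hood(53.14, 8.21, boxes))  # True
print(in_hood(52.00, 8.21, boxes))  # False
```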
To make your life a little easier you can use the hoodgen (and its source) to write your JSON with the required information. This is a simple web visualization tool to plan hoods and generate the right JSON format for the hoodfile. This tool was written by Eike Baran; big thanks to him for this helpful tool!
Now to the hoodselector.
The hoodselector is software that creates a decentralized, semi-automated ISO/OSI layer 2 network segmentation for batman-adv layer 2 routing networks. The program reads the geo-based sub-networks called hoods from the above-mentioned hoodfile. The decision for the right hood is made on the following points: first, the hoodselector checks whether the router has a VPN connection. If it has, the hoodselector then checks whether a static geoposition was set on the router. If not, it tries to get a position using wireless-based localization with the so-called geolocator. The geolocator is software that makes it possible to derive a position from the wireless networks “seen” around the router. This information is sent to the openwifi map project. Knowing the position of the router, the hoodselector can find the right hood, because each hood is defined by geo coordinates. If the router doesn't have a VPN connection, e.g. as a mesh-only router, the hoodselector triggers a Wi-Fi scan and searches for neighboring mesh routers in other hoods. If there is another router with a different BSSID but the same mesh SSID, the router chooses its hood based on the neighboring BSSID.

I got much positive feedback from many other Freifunk communities. Someone even created an integration request issue for Gluon. Gluon is a framework based on openWRT and is very popular in the Freifunk community. Before I send this as a patch to Gluon, one last urgent issue remains: the current hoodselector is not able to handle mesh-on-LAN or mesh-on-WAN connections, so there is still a potential point of failure, because people who are not familiar enough with the hood-networking concept can accidentally interconnect hoods over the wired mesh functions. I plan to fix this problem by mid-September. When this issue is closed, I would like to send patches for integration into Gluon. Other issues can be found here.
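The decision flow just described can be summarized in a small sketch. All names here are hypothetical placeholders for illustration, not the real hoodselector API (which is implemented as a shell-level program on the router):

```python
# High-level sketch of the hood decision flow: VPN routers pick a hood
# by position; mesh-only routers adopt the hood of a wireless neighbor.
def select_hood(has_vpn, position, neighbor_bssids, hoods, default):
    """`hoods` maps a hood name to (box, bssid); `position` may be None."""
    if has_vpn and position is not None:
        lat, lon = position
        for name, (box, _bssid) in hoods.items():
            (lat1, lon1), (lat2, lon2) = box
            if lat1 <= lat <= lat2 and lon1 <= lon <= lon2:
                return name
    if not has_vpn:
        # Mesh-only router: choose the hood of a neighboring BSSID.
        for name, (_box, bssid) in hoods.items():
            if bssid in neighbor_bssids:
                return name
    return default  # fall back to the default hood

hoods = {"ol": (((53.10, 8.15), (53.18, 8.28)), "02:00:0A:12:A0:00")}
print(select_hood(True, (53.14, 8.21), [], hoods, "default"))             # ol
print(select_hood(False, None, ["02:00:0A:12:A0:00"], hoods, "default"))  # ol
print(select_hood(True, None, [], hoods, "default"))                      # default
```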
In the Nordwest Freifunk community we currently have 10 active hoods, including a default hood. That is a special hood that all routers connect to if they are not able to choose a hood, including routers that are out of range of all real hoods. After the last three months we can safely say that the setup works. Commits can be found here. All currently active hoods can be seen in this picture.
Next I would like to tell you about the second sub-project, which is preparatory work for the mainline project. In this part I worked on a proper workflow with GitLab's continuous-integration (CI) system. As I explained in the midterm evaluation, in our Nordwest community we started automatically building nightly testing firmware images for our community firmware. The CI now works with a dynamic multi-core build process and auto-generates the architecture targets from source. At the moment GitLab cannot handle highly verbose output in the web engine during the build process; I discussed the problem with the GitLab team and opened an issue. The CI builder is very helpful for the development process of the monitoring drone. Here you can see the results for the local community image and for the monitoring drone.
The mainline project was to create a new firmware for monitoring and quality assurance of open wireless networks. I started by reading up on openWRT and LEDE, and decided to use LEDE as the base system. I know there is still no release to use as a defined base structure (we are all looking forward to that moment), but I have been on the developer list since July 2016 and the direction LEDE is heading looks good. Next I looked for a build management script. At first I thought about using make and Makefiles, but that was not my favourite, so I decided to base my work on the build script of the Franken Freifunk community, which is written in bash. Now I'll explain its structure and how to work with it. The following directories and files are important for basic work:
buildscript ← File
BSP ← Dir
Community ← Dir
build_patches ← Dir
modules ← File
The buildscript is mainly a bash script that makes working with this buildroot human-friendly. In other words, it is an abstraction over the LEDE build environment.
Inside the BSP directory is all the necessary architecture-specific information. BSP means Board Support Package. This directory also contains defaults such as the shell banner, system configs, and so on.
The community directory contains community-specific configuration, similar to the Gluon site.conf. Currently there are only two config parameters: the "AP SSID", which sets the default SSID of the Wi-Fi network the monitoring drone should connect to, and the "AP BSSID", which pins the monitoring drone to one specific node in case more than one router with the same SSID is present. The latter parameter will be dropped in the future, because setting a default BSSID is not really effective. In the future I plan to configure these parameters via an extra web interface.
In the build_patches directory you can put patches for LEDE or, if you want, patches for each package repository. Here is a schema:
0001-this is a patch.patch
0002-this is another one.patch
0001-this is a patch.patch
0002-this is another one.patch
The last file is called modules. In this file you can add external package repositories and select specific packages from those repositories. For example:
OPENWRT_PKGS="libwlocate lwtrace ffnw-node-info hoodselector"
Clemens and I discussed the API design for the communication between the monitoring drone and the netmon core, so we met in person. Here is a picture:
Finally, the seminars. During Google Summer of Code we started giving seminars on technical aspects of Freifunk, because we do not have enough developers and system administrators - a big problem in most volunteer activities. At the hacking sessions we follow a simple structure:
– two lectures about Freifunk technical aspects.
– discuss the content of the lectures
– work session on projects
At the first hacking session, on May 28 at 2:00 PM, we created video recordings of the two lectures; you can find them here. The following hacking sessions fell through because of Clemens's and my exams. In the future there will be further streams about technical aspects of Freifunk.
Last but not least, my future plans:
For the hoodselector, I plan to close the last urgent issue that I mentioned above for the migration into Gluon. I have also started preparatory work on Gluon for this; one part was the implementation of a sequential code-minifying process at compile time. Some other issues are still open as well, so I will continue working on the hoodselector.
The monitoring firmware is currently only configurable over SSH. A web interface should follow soon, as well as a plugin system for community-specific monitoring data requests.
At the Kieler Linux information days in the Kiel Innovation and Technology Center I will give, among others, four presentations on Freifunk-related topics:
Hoodselector - Network segmentation for Layer 2 routing at 11:00 (16.09.2016)
Wireless-based localization (openwifi.su project) at 13:00 (16.09.2016)
OpenWRT Embedded Linux distribution at 16:00 (16.09.2016)
Freifunk Kiel/Nordwest (2016) - year review at 16:00 (17.09.2016)
This blog post concludes the work I did during the Google Summer of Code 2016.
Spending the summer on nodewatcher was a pleasantly laborious exercise. Although my proposal concerned the development of a new algorithm, I realized that I also had to prepare various other components so that the algorithm would work. I started by sketching a short plan of the different components I had to integrate for my algorithm to work. To remind the readers: SWOON is an algorithm for the efficient allocation of wireless frequencies in a densely populated area where many wireless access points interfere with each other.

Before any work could begin, I had to install a local version of nodewatcher. I did not want to interfere with wlanslovenija's mature network, and I needed a small network of my own to run these tests. To find such a network, I collaborated with a non-profit organization based in Berkeley, California. They provided me with access to a building outfitted with more than 10 wireless access points. The wireless access points in this testbed are extremely diverse: they come from various manufacturers, and not all broadcast on the 5GHz band. This allowed me to work with issues unique to community networks: software has to be built from the ground up to be extremely modular, and it has to support a diverse range of equipment, since there is no central planning authority.
I started with writing the data collection module, on which I have reported in my previous blog posts. Once I was able to work with the data, I worked on a different algorithm which detected rogue nodes in networks. This is especially useful for network maintainers who now have information on which nodes are most harmful to their network. They can now significantly reduce interference by working together. Algorithms on this scale have to be extremely efficient: I derived a complexity bound of O(E * log(V)) where E is the number of edges and V is the number of vertices in a graph. This allows the algorithm to scale all the way to wlanslovenija's network or more, as it is almost linear. In order to achieve such scalability and efficiency, I had to peruse state-of-the-art graph theory research and utilize complex data structures (the one I used is called a disjoint-set data structure) to quickly order datasets in accordance with our needs.
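The disjoint-set (union-find) structure mentioned above is what keeps the near-linear bound achievable: with path compression and union by rank, a sequence of unions and finds is almost constant time per operation. This is a generic sketch of the structure, not nodewatcher's actual code:

```python
# Minimal disjoint-set (union-find) with path compression and
# union by rank, for quickly merging and querying node groups.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression keeps the trees flat.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

ds = DisjointSet(4)
ds.union(0, 1)
ds.union(2, 3)
print(ds.find(0) == ds.find(1))  # True
print(ds.find(1) == ds.find(2))  # False
```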
I then moved on to the main algorithm, which took weeks to even rigorously define, let alone code up. I first considered implementing the Beigel-Eppstein coloring algorithm, which is known to run in ~O(1.3^n) time, but the complexity of the algorithm rendered it impractical. It relies on 19 different subcases which are iteratively used to simplify the constraint satisfaction (CSP) instance. Instead, I reverted to a custom implementation of a state-of-the-art greedy approach, which had to be modified from the source code of NetworkX, a leading open-source graph manipulation package. I devised a custom way of ordering the nodes in a graph to correctly prioritize wireless access points. The algorithm starts with looking at all the 'neighbors' (other nodes your radio detects) and computes the 'busyness' of each frequency band. We then minimize joint interference through the algorithm.
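The flavor of the greedy approach with a custom node ordering can be sketched as follows. This is a deliberately simplified, hypothetical illustration, not the SWOON implementation: it orders access points by how many neighbors they detect and greedily assigns each the first channel no assigned neighbor uses.

```python
# Greedy channel assignment with a custom node ordering:
# busiest access points (highest degree) are assigned first.
def greedy_channels(graph, channels):
    """graph: symmetric adjacency dict; channels: available channels."""
    order = sorted(graph, key=lambda n: len(graph[n]), reverse=True)
    assignment = {}
    for node in order:
        taken = {assignment[m] for m in graph[node] if m in assignment}
        # Pick the first channel no already-assigned neighbor is using.
        assignment[node] = next(c for c in channels if c not in taken)
    return assignment

graph = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(greedy_channels(graph, [1, 6, 11]))  # {'a': 1, 'b': 6, 'c': 11, 'd': 6}
```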
We observed an interesting property when running this algorithm on our wireless testbed. Most frequencies were moved to minimize interference, but some got worse. At first, we thought it was a bug, but we did eventually discover that some nodes must pick up more interference for others to remain noiseless. This perfectly coincides with our goal, which is to minimize the interference over all nodes, not every single node.
One major problem with the algorithm is how unreliable 'signal' data is. We were thinking of coming up with a better metric to measure link quality instead of using signal strength. We were considering using the number of packet collisions instead. The modular design allows us to painlessly replace the metric without having to understand the entire algorithm.
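The modularity point can be made concrete: if the algorithm only consumes a link-quality function, swapping signal strength for a collision-based metric touches one function, not the whole algorithm. The interface below is a hypothetical illustration, not nodewatcher's actual one:

```python
# Pluggable link-quality metrics: the rest of the algorithm only
# calls `metric(link)`, so the metric can be replaced painlessly.
def signal_metric(link):
    # Stronger signal (closer to 0 dBm) means a better link.
    return link["signal_dbm"]

def collision_metric(link):
    # Fewer packet collisions means a better link.
    return -link["collisions"]

def best_link(links, metric=signal_metric):
    return max(links, key=metric)

links = [
    {"id": 1, "signal_dbm": -40, "collisions": 120},
    {"id": 2, "signal_dbm": -70, "collisions": 3},
]
print(best_link(links)["id"])                    # 1
print(best_link(links, collision_metric)["id"])  # 2
```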
We also noticed openWRT's iwinfo doesn't provide information about channel widths of access points. Adding this data would allow more accurate calculations instead of assuming all channels are 20MHz wide.
Finally, nodewatcher doesn't yet have support for issuing node-specific warnings. I'd love to use such a warning to let a maintainer know that their node needs to change the channel.
However, I am confident this algorithm will enhance our users' experience.
All my contributions can be seen on this link:
If you're interested in the timeline of the development, look at the following pull requests on nodewatcher's GitHub:
Bug fixes related to a local installation of nodewatcher:
Data collection module:
Command that users can use to export data from the data collection module:
Improving the data collection module:
Algorithm that detects rogue nodes in one's network:
Additional options for user-friendly data export command:
Testing framework for the rogue node algorithm + improvements:
Channel allocation algorithm implementation (not merged yet):
Updated data in nodewatcher/core:
Please allow me to update you on the progress of the powquty project within Google Summer of Code 2016 at Freifunk.
As mentioned in my last blog the goal of the project is to create a LEDE package which ensures three functionalities:
Herein I will give a short update about the progress of these functionalities.
For this project we are using an off-the-shelf USB-based oscilloscope, “WeSense”, from the company A.Eberle. The oscilloscope provides real-time samples of the voltage measured at the power plug via the USB bus, using a binary protocol and a sample frequency of up to 10 kHz. Initially the oscilloscope's USB bus supported only the host functionality, so the router would have needed to act in USB device mode, which is a rather unusual mode for today's WiFi router platforms. To overcome this limitation, the aforementioned company agreed to provide us with another hardware implementation that offers the USB device functionality, with optional five-volt power feeding. The new hardware is expected on my desk in mid-July.
As a countermeasure for this delay, we started implementing an emulator that locally generates signal samples, which are then organised into packets similar to those of the binary protocol.
Regarding the calculation of the power quality parameters, we successfully ported A.Eberle's power quality library (in ANSI C) to compile and run under LEDE Linux. The library calculates the frequency, effective voltage, harmonics, and phase shift from the signal samples in an efficient way. We provide this library as a binary blob, since it is not open source (yet) and originates from the manufacturer himself. It is now ported to LEDE and can be used for our purposes.
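To give a feel for one of these parameters: the "effective voltage" of a block of samples is its RMS value. The closed library computes this (and more) efficiently in C; the snippet below only illustrates the underlying math on emulator-style samples and is not the library's code:

```python
import math

# Effective (RMS) voltage of a block of signal samples.
def effective_voltage(samples):
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# One 50 Hz cycle of a 230 V RMS mains waveform, sampled at 10 kHz,
# as the emulator might generate it.
fs, f, vrms = 10_000, 50, 230.0
amplitude = vrms * math.sqrt(2)
samples = [amplitude * math.sin(2 * math.pi * f * n / fs)
           for n in range(fs // f)]
print(round(effective_voltage(samples), 1))  # 230.0
```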
For the provisioning of the calculated parameters, we intend to implement a LuCI app that shows them.
The rest of the project timeline is depicted below:
Working phase: June 20th – July 10th
Working phase: July 11th – July 24th
Working phase: July 25th – August 7th
Working phase: August 8th – August 21st
More updates in the upcoming weeks.
The midterm evaluation started today, and the deadline is next Monday, so I have to show the work I have done until now. It can be summarized in the following parts:
1) Refactoring of graph-parser and C Bindings
During the community bonding period I started working on the code of Quynh Nguyen’s M.Sc. thesis. She wrote a C++ program capable of calculating the BC of every node of a topology. I re-factored the code, and now it is a C/C++ shared library. I’ve also applied some OOP principles (single responsibility and inheritance) and unit tests to make it more maintainable.
The interface of the library is well defined, and it can be re-used to implement another library that performs the same tasks (parsing the JSON and calculating the BC).
2) Prince basic functionalities
After I completed the library, I started working on the main part of the project: the daemon. We decided to call it Prince in memory of the pop star.
This daemon connects to the routing protocol using a protocol-specific plugin (see below), calculates the BC using graph-parser, computes the timers, and then pushes them back, again using the specific plugin. With this architecture it can be used with any routing protocol. I wrote the specific plugins for OONF and OLSRd. It has been tested with both, but I still need to write an OLSRd plugin to change the timers at runtime. For OONF I used the RemoteControl plugin.
With these features Prince can pull the topology, calculate the BC and timers, and push them back to the routing protocol daemon.
3) Additional features: configuration file, dynamic plugins
I wrote a very simple reader for a configuration file. Using the configuration file, the user can specify: the routing protocol host and port, the routing protocol (olsr/oonf), the heuristic, and (un)weighted graphs.
As you can see from this issue, I’m going to use INI instead of this home-made format.
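Based on the options listed above, such an INI-style configuration could look roughly like this; the section and key names here are hypothetical, not Prince's actual ones:

```ini
; Hypothetical sketch of a Prince INI configuration.
[proto]
protocol = oonf        ; or olsr
host = 127.0.0.1
port = 2009

[graph-parser]
heuristic = 1          ; enable the centrality heuristic
weights = 0            ; 0 = unweighted graph
```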
As I said before, I moved all the protocol-specific methods (pulling the topology and pushing the timers) into a dedicated plugin. To keep the daemon light, I decided to load this plugin dynamically at runtime: if you specify “olsr” in the configuration file, only the OLSRd-specific plugin will be loaded.
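The idea is the same as dlopen() in C: derive the plugin's library name from the configured protocol and load only that one object. A rough Python analogue (ctypes.CDLL wraps dlopen); the library naming here is purely hypothetical:

```python
import ctypes

# Only the plugin named in the configuration gets loaded at runtime.
def plugin_name(protocol):
    return f"libprince_{protocol}.so"  # hypothetical naming scheme

def load_plugin(protocol):
    # ctypes.CDLL is dlopen() under the hood; fails if the .so is absent.
    return ctypes.CDLL(plugin_name(protocol))

print(plugin_name("olsr"))  # libprince_olsr.so
```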
At the moment I consider this an “alpha” version of Prince. In the next two months I’ll be working on making it stable and well tested. The next steps will be:
This is my first update on working on SWOON, an algorithm to automatically allocate channels to nodes in nodewatcher. Not a lot of time has passed since I started working, but the wlanslovenija community was immensely helpful in getting me started. Up to now, I had only worked on my own coding projects, where I got to be the main architect behind the codebase: I decided where functions were defined, I decided on the coding style, I did code reviews, and I was the one to approve branch merges (of which there were none, because I wrote all the code :-) ). Now, GSoC threw me in the middle of a well-established open source project where you have to get accustomed to existing coding practices. Adjusting to this was more difficult than I expected, so the people from wlanslovenija had to be really patient with me (and I think they'll have to keep this up for a while :-) ). But I slowly got the hang of it.

My overarching plan for the algorithm is to first read the data, then process the data, and finally issue a warning to those who maintain nodes with inefficient channel allocations. My work on reading the data from nodes is almost done (almost, because a new amazing feature was suggested only yesterday!). On a tangent, I think this shows another immense benefit of community-oriented projects - everyone's free to suggest additional features that would make the process more efficient.

To get the data about neighboring nodes, one has to temporarily turn the wireless radios off. Because of that, nodewatcher-agent only performs a survey of neighbors every two hours, but we collect data much more regularly. So how do we deal with this? My current implementation simply stores the data into the database every hour and does not take into account the age of the data. I will now work on comparing the current datapoint with the latest one and only inserting it into the datastream if the data has changed.
This will reduce the space we're using and could even be expanded for use by other modules. It's great to know that you can contribute to any aspect of the project. Mitar, a senior developer of nodewatcher, said that they built all of it themselves, so we can rewrite anything as well.
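The change-detection idea is simple to sketch: compare each new datapoint to the latest stored one and skip the write if nothing changed. The store below is a hypothetical stand-in, not nodewatcher's actual datastream API:

```python
# Only insert a datapoint if it differs from the latest stored one,
# saving space when surveys repeat unchanged.
class ChangeOnlyStore:
    def __init__(self):
        self.datapoints = []

    def insert(self, data):
        if self.datapoints and self.datapoints[-1] == data:
            return False  # unchanged: skip the write
        self.datapoints.append(data)
        return True

store = ChangeOnlyStore()
print(store.insert({"neighbors": ["ap1", "ap2"]}))  # True
print(store.insert({"neighbors": ["ap1", "ap2"]}))  # False (duplicate)
print(store.insert({"neighbors": ["ap1"]}))         # True (changed)
```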
I also worked on a parallel implementation of the mechanism, which read the data from old dumps. This gave me the insight into the architecture that made writing the actual implementation much easier.
My next steps are to finish the data collection part and create a module to analyze this data. This is exciting for everyone, because there's no documentation about writing such modules, so I'll get to be the first one to do it! I can only hope I'll document it well enough to make the process easier for those who will follow as new members of the community. The GitHub PR can be found here.