GSoC2016 Wrapping up: A crypto module for qaul.net

The last weeks

So it’s been a summer. Wow. It was an amazing experience taking part in this project, I feel like I have learned a lot and I hope that my work will be useful to others.

I want to take this opportunity to highlight what I set out to do and what I achieved in the end. The last weeks were probably the most satisfying bit of the entire project, as modules that I had written weeks prior were slotting together seamlessly and things suddenly just started to work. At the same time they were also the most frustrating, with two bugs in my code that took me almost two weeks to finally track down.

Most of my work can be found in this repository, on the qaul_crypto branch.

I did some work on a different project, mildly related to qaul.net, which might become useful soon.
Most of what I have done is backend work, but there is an example application you can run. The source code compiles and has been tested under Linux (Ubuntu 14.04 and Fedora 24). By running the qaul-dbg binary (under src/client/dbg/qaul-dbg) you will create two users (to emulate communication between two nodes), add their public keys to a keystore and then exchange and verify a signed message. Feel free to play around with the code for this demo, which can be found under src/qaullib/qcry_wrapper.c.

In this final blog post I will talk a bit about what I have done, what it means for the project and how my ~2.2k lines of C code will affect the future of qaul.net.

What I have done

The basic idea behind my project was to provide a cryptography submodule for qaul.net to make it a safer and more verifiable place to communicate in. Part of that is of course the ability to sign and verify messages, but user identities are an equally core aspect.

The two biggest parts of the crypto module are internally referenced as qcry_arbit and qcry_context. As I have already explained in previous blog posts, those are the Arbiter (which provides a static API to use in the rest of the library) and the Context (which holds the actual “magic” of signatures, identities, keys and targets). In an earlier diagram there was also a dispatcher. However, during testing it was deemed “overkill” and its functionality was integrated into the arbiter, meaning fewer abstraction layers and flatter code (which is always easier to understand and maintain).

With the arbiter in place it is now possible to create unique users with a unique fingerprint that can be shared on the network to identify people, as well as to sign and verify messages. New message types were also added to OLSR (the routing protocol used in qaul.net) to propagate this new information among nodes.
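To give a feel for how the arbiter is meant to be used, here is a minimal usage sketch in the spirit of the qaul-dbg demo mentioned above. The function names and signatures are hypothetical (the real ones live in the qcry_arbit sources and may differ); the point is the flow: create users, sign as one, verify against the other's flooded key.

#include <stdio.h>

#define QCRY_SIG_MAXLEN 512   /* placeholder; the real size depends on the key type */

int main(void)
{
    char *fp_alice, *fp_bob;             /* public-key fingerprints */
    char signature[QCRY_SIG_MAXLEN];

    /* hypothetical arbiter calls, names assumed for illustration */
    qcry_arbit_init();
    qcry_arbit_usrcreate("alice", "secret passphrase", &fp_alice);
    qcry_arbit_usrcreate("bob",   "other passphrase",  &fp_bob);

    /* Alice signs a message ... */
    qcry_arbit_signmsg(fp_alice, "hello bob", signature);

    /* ... and anyone holding her public key can verify it */
    if (qcry_arbit_verify(fp_alice, "hello bob", signature) == 0)
        printf("signature is valid\n");

    return 0;
}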

What I haven’t yet done

Unfortunately there were some delays due to problems with the main crypto library used in the project, as well as a few bugs that really slowed down progress in the last few weeks. Because of that, the entire encryption part has been delayed. Functions for these tasks are planned and partially stubbed out, but there is no code behind them yet.

The feature is currently in planning and will land in a later patch, probably after the initial merge of the module has gone through as there are a lot of things to be taken care of.

Current state

Initial User signup dialogue

As said above, it is currently possible to create users, sign and verify messages, and flood a user's fingerprint and public key to other nodes. However, this doesn't address a few problems we ran into while integrating the arbiter API into the rest of libqaul. Due to technical limitations it is currently not possible to sign private messages; this comes down to how the communication subsystem works, which is currently being redesigned from scratch.

This means that any merge that goes through now will need to be adjusted to work with a new communication backend in a few weeks. As such, it is a bit unclear when exactly and how the crypto code will be merged. I am looking forward to working with the team on resolving these issues, and to a shiny new communication backend. For now, though, it means that merging the code I've written for the Summer of Code might be difficult.

New message types were added as well to provide a way to exchange these new payloads (signatures, public keys, fingerprints, etc.) without breaking backwards compatibility with older nodes. What happens is that new nodes send out both a new and an old hello message. Other new nodes ignore the old hello message and can address the new node via its fingerprint and use verifiable channels of communication, while old nodes are oblivious to any change. Until they are updated 🙂
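The post does not show the actual wire format, so the following struct is purely illustrative of what such a new hello payload could carry on top of the old one; the field names and sizes are assumptions.

#include <stdint.h>

#define QAUL_MAX_USER_LEN 20    /* assumption: username length limit */
#define QCRY_FPR_LEN      64    /* assumption: fingerprint length    */

/* Illustrative only: a "new hello" that adds identity material to what
 * the old hello already announced. */
struct qcry_hello_msg {
    char     name[QAUL_MAX_USER_LEN];   /* username, as in the old hello    */
    uint8_t  fingerprint[QCRY_FPR_LEN]; /* public-key fingerprint           */
    uint16_t pubkey_len;                /* length of the appended key blob  */
    uint8_t  pubkey[];                  /* public key, flooded to all nodes */
};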

In conclusion

The Google Summer of Code brought qaul.net a very big step closer to being a safer and more secure place to communicate. The crypto submodule is tested and easy to use. What might happen is that parts of the code (the crypto submodule itself) get merged without any of the code that hooks into the communication stack.

There are still things to be optimised, probably some hidden bugs to be fixed, and there definitely is some work needed to make private messaging actually private.

This might be the end of the Summer of Code and a “final evaluation” for this part of the project, but I am by no means done. I was interested in open source software before: I wrote my own projects and contributed small chunks of code to other people's. But this is the first time I have been deeply involved in a large community project and the first time I have learned to deal with integrating with other people's code. And that is very largely thanks to the Summer of Code, Freifunk and my mentor.

I had a lot of fun working on this project and I am looking forward to more contributions. Even if it was frustrating and very stressful at times, I'm glad to have participated. The experience taught me a lot about communication in teams, software development and, funnily enough, signature alignments 🙂

Acknowledgements

First of all I want to thank my mentor Mathias Jud for introducing me to qaul.net and the Google Summer of Code. I also want to thank him for his support and guidance with problems I had while interfacing the arbiter with the rest of libqaul.

In addition I want to thank saces, another member of the qaul.net team, who helped me resolve dependency issues as well as problems I caused in the build system that prevented my code from building on Windows (whoops).

Lastly I want to thank Freifunk for providing me with this opportunity, as well as Google for organising and sponsoring the Summer of Code projects.

Google Summer of Code 2016: External netifd Device Handlers – Final Milestone

TL;DR

FINISHING THE PROJECT

The past weeks have felt very satisfying in terms of coding. With the main structure in place, all that I had to do was add features, test, iron out the wrinkles and prepare my code for submission to the maintainer. Adding features was fun because it felt like taking big steps forward every day with immediate feedback.
There were a few stubborn bugs, though. Despite taking the better part of a week to find, they were — luckily — easy to fix. They usually required little more than changing the order in which code executed, which meant swapping a few lines of code.
Rebasing the netifd source code onto the current version went very fast and resulted in three separate commits; two small ones to prepare the existing code base for my additions and one pretty big one adding my ~2000-line .c file.

As this is the blog post wrapping up my Google Summer of Code, I am going to summarize the entire project. This probably means that I will repeat some of the points I have already explained in my previous posts.

GOALS MET

I set out to implement a way to open up OpenWrt/LEDE's network interface daemon (netifd) for new device classes.
Until now, netifd was only able to handle device classes for which a handler was hard-coded into the daemon. I added a way to generate the necessary device handler stubs from JSON descriptions and to interface over ubus with another process that handles all device-related actions such as creation and configuration. This makes netifd more open to experimentation and means it no longer has to be modified by someone introducing a new device class.
Configuration of these devices still happens in the familiar /etc/config/network file.

Along with the ubus interface in netifd, I wrote ovsd, an external device handler for Open vSwitch. Together with ovsd, netifd can create and configure Open vSwitch bridges without any Open vSwitch-specific code added to it. It simply parses the configuration in /etc/config/network, sees that there is an interface on a device with type ‘Open vSwitch’, uses this name to look up the corresponding device handler from a list and calls the ‘create’ function that is part of the device handler interface.
The Open vSwitch device handler stub then relays the command along with the configuration blob to ovsd via ubus.

Ovsd processes the command asynchronously and answers using the ubus subscription mechanism. Netifd can then bring up the device as usual and attach protocol handlers and interfaces to it.
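For readers unfamiliar with ubus, here is a stand-alone sketch (not the actual netifd code) of roughly what such a relay boils down to with libubus: pack part of the device configuration into a blobmsg and invoke the external handler's “create” method on its ubus object. The object name “ovs” and the attribute names are taken from the JSON description and config example shown further down; the real message layout may differ.

#include <libubus.h>
#include <libubox/blobmsg.h>

static struct blob_buf b;

int main(void)
{
    struct ubus_context *ctx = ubus_connect(NULL);   /* connect to ubusd */
    uint32_t id;

    if (!ctx || ubus_lookup_id(ctx, "ovs", &id))     /* find the external handler */
        return 1;

    /* pack (a subset of) the device configuration into a blobmsg */
    blob_buf_init(&b, 0);
    blobmsg_add_string(&b, "name", "ovs-lan");
    blobmsg_add_string(&b, "ofcontrollers", "tcp:1.2.3.2:9999");

    /* relay the "create" command to ovsd and wait up to 3 seconds */
    ubus_invoke(ctx, id, "create", b.head, NULL, NULL, 3000);

    ubus_free(ctx);
    return 0;
}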

My work on netifd is not likely to get included upstream before GSoC ends, which is why I have created a patch with my changes and put it in a repository here. The repo includes the LEDE source code as a git submodule and is easy to build.
The ovsd source code is hosted at GitHub along with instructions on how to install it.

DETAILS OF THE SUBMITTED WORK

This example file demonstrates all the available configuration options for Open vSwitch devices in a possible scenario featuring an Open vSwitch bridge with interface eth0 and a fake bridge on top of it:

# /etc/config/network
config interface 'lan'
    option ifname 'eth0'
    option type 'Open vSwitch'
    option proto 'static'
    option ipaddr '1.2.3.4'
    option gateway '1.2.3.1'
    option netmask '255.255.255.0'
    option ip6assign '60'

    option ofcontrollers 'tcp:1.2.3.2:9999'
    option controller_fail_mode 'standalone'
    option ssl_cert '/root/cert.pem'
    option ssl_private_key '/root/key.pem'
    option ssl_ca_cert '/root/cacert_bootstrap.pem'
    option ssl_bootstrap 'true'

config interface 'guest'
    option type 'Open vSwitch'
    option proto 'static'
    option ipaddr '1.2.3.5'
    option netmask '255.255.255.0'

    option parent 'ovs-lan'
    option vlan '2'
    option empty 'true'

The lines set apart are Open vSwitch-specific options. They configure OpenFlow settings, control channel encryption and the nature of the device itself, which can be either a ‘real’ bridge or a ‘fake’ bridge that sits on a parent bridge with VLAN tagging enabled.

This is the JSON description from which the device handler stub is generated:

# /lib/netifd/ubusdev-config/ovsd.json
{
    "name" : "Open vSwitch",
    "ubus_name" : "ovs",
    "bridge" : "1",
    "br-prefix" : "ovs",
    "config" : [
        ["name", 3],
        ["ifname", 1],
        ["empty", 7],
        ["parent", 3],
        ["vlan", 5],
        ["ofcontrollers", 1],
        ["controller_fail_mode", 3],
        ["ssl_private_key", 3],
        ["ssl_cert", 3],
        ["ssl_ca_cert", 3],
        ["ssl_bootstrap", 7]
    ],
    "info" : [
        ["ofcontrollers", 1],
        ["fail_mode", 3],
        ["ports", 1],
        ["parent", 3],
        ["vlan", 5],
        ["ssl", 2]
    ]
}

The first four fields tell netifd the type of the devices and the name of the external handler to connect to via ubus. “bridge” and “br-prefix” signal the bridge capabilities of the devices and give a short prefix to prepend to devices of this type, creating devices named “ovs-lan” and “ovs-guest” for the interfaces “lan” and “guest”, respectively.
The “config” and “info” fields detail the configuration parameters and their data types for device creation and for responses to the ‘dump_info’ method of the external device handler (see my earlier blog post).
I have created a manual on how to write a JSON description of an external device handler that goes into greater detail. You can find it here.

POSSIBLE LIMITATIONS AND OPEN WORK

Because of the nature of the device handler I implemented, I could only test scenarios that arise from using Open vSwitch. These are bridge-like devices by design, and not every device class someone might want to integrate with netifd will behave like them. I had to anticipate much of the behavior of non-bridge-like devices when trying to make the ubus interface device-class agnostic.

Also, while I tested ovsd with all the complex setups I could think of using fake bridges and wireless interfaces, there are probably edge cases where my implementation is insufficient. Ovsd may have to become stateful at some time.

At the moment, the JSON files from which the device handler stubs are created are a bit “human-unfriendly”. Parsing the JSON objects fails unless they are written on a single line with proper EOF termination. Allowing pretty-printed JSON would greatly increase readability. What’s more, the parameters’ data types are cryptic enumeration values from libubox. Another translation layer turning numbers into readable types like “string” would also help users read and write description files.
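For reference, the numeric type codes in the JSON description above appear to correspond to libubox's blobmsg_type enumeration (abridged excerpt below); this is exactly the mapping such a translation layer would hide from the user.

enum blobmsg_type {
    BLOBMSG_TYPE_UNSPEC,    /* 0 */
    BLOBMSG_TYPE_ARRAY,     /* 1: e.g. "ifname", "ofcontrollers", "ports"        */
    BLOBMSG_TYPE_TABLE,     /* 2: e.g. "ssl"                                     */
    BLOBMSG_TYPE_STRING,    /* 3: e.g. "name", "parent", "ssl_cert"              */
    BLOBMSG_TYPE_INT64,     /* 4 */
    BLOBMSG_TYPE_INT32,     /* 5: e.g. "vlan"                                    */
    BLOBMSG_TYPE_INT16,     /* 6 */
    BLOBMSG_TYPE_INT8,      /* 7: used as boolean, e.g. "empty", "ssl_bootstrap" */
};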

IMPROVEMENTS TO MAKE

In the future, I would like to add a way for external device handlers to send detailed feedback about requests they receive back to netifd within the context of the request. This way authors of external device handlers could provide device-specific information such as specific error messages when a command fails.
All the building blocks for such a mechanism are there:
  – ubus replies that are logically connected to an ongoing request and
  – the possibility to tell netifd what messages to expect via an entry in the JSON description file.

This way, messages and their formats could be defined by the authors of external device handlers on a per-device-class — and even a per-command — basis. Netifd would then be able to handle these messages and write them to the logs along with information about the context of the ongoing transaction.

MY EXPERIENCE WITH GSOC

All in all, GSoC was a blast. Although challenging and frustrating at times, I am happy to have done it. This was the first time I had to read other people’s code to this extent and at this level of — for lack of a better word — sophistication. It took me the better part of a year to really get a grip on the code base when I was working with it as part of the student project at TU Berlin but once I had a feeling for its structure, playing around with the features I added was fun. It feels rewarding to have worked on and contributed to “real world” free software.
From the organizational side, I am very happy with both Freifunk's and Google's way of managing the process. I always knew what was expected of me and was provided the necessary information in advance.

ACKNOWLEDGEMENTS

Now that my first GSoC is over, some thanks are due. First and foremost, I would like to thank my mentor and advisor Julius Schulz-Zander for introducing me to GSoC and for his counsel throughout the summer. Thanks to Felix Fietkau, original author of netifd and its maintainer, of whom I’ve learned a lot both indirectly by working with his code and directly when receiving feedback on my work. Finally, I want to thank Freifunk for letting me do this project and — of course — Google for organizing GSoC.

Update on the powquty project

Dear Freifunkers,
please allow me to update you on the progress of the powquty project within Google Summer of Code 2016 at Freifunk.

As mentioned in my last blog post, the goal of the project is to create a LEDE package which ensures three functionalities:

  • Retrieving sampled voltage data from the power quality measurement device via USB
  • Calculating statistical power quality parameters (such as average voltage, harmonics, etc.) from the retrieved voltage samples
  • Provisioning of the calculated parameters for retrieval and graphical representation

Herein I will give a short update about the progress of these functionalities.
For this project we are using an off-the-shelf USB-based oscilloscope “WeSense” from the company A.Eberle. The oscilloscope provides real-time samples of the measured voltage from the power plug over USB, using a binary protocol and a sample frequency of up to 10 kHz. Initially the oscilloscope's USB port supported host functionality only. Hence the router would have needed to act in USB device mode, which is a rather unusual mode for today's Wi-Fi router platforms to support. To overcome this limitation, the company agreed to provide us with another hardware revision that implements USB device functionality with optional five-volt power feeding. The new hardware is expected on my desk in mid-July.
As a countermeasure for this delay, we started implementing an emulator that locally generates signal samples, which are then organised into packets similar to those of the binary protocol.
Regarding the calculation of the power quality parameters, we successfully ported the power quality library (in ANSI C) from A.Eberle to compile and run under LEDE/Linux. The library allows calculating the frequency, effective voltage, harmonics and phase shift from the signal samples in an efficient way. We provide this library as a binary blob, since it is not open sourced (yet) and originates from the manufacturer. It is now ported to LEDE and can be used for our purposes.
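To make the idea more concrete, here is a tiny stand-alone sketch (not the A.Eberle library, and without its binary packet framing): it emulates one 200 ms block of 50 Hz mains voltage at the 10 kHz sample rate mentioned above and computes the effective (RMS) voltage from it, which is one of the parameters the ported library provides. The block length and nominal voltage are my own assumptions.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE_HZ 10000                 /* sample frequency of the device */
#define BLOCK_LEN      (SAMPLE_RATE_HZ / 5)  /* 200 ms worth of samples        */
#define MAINS_RMS_V    230.0                 /* nominal European mains voltage */
#define MAINS_FREQ_HZ  50.0

int main(void)
{
    double sum_sq = 0.0;

    for (int i = 0; i < BLOCK_LEN; i++) {
        /* emulated sample: a clean sine with ~325 V amplitude */
        double s = MAINS_RMS_V * sqrt(2.0) *
                   sin(2.0 * M_PI * MAINS_FREQ_HZ * i / SAMPLE_RATE_HZ);
        sum_sq += s * s;
    }

    /* effective (RMS) voltage over the block; ~230 V for the clean sine */
    printf("effective voltage: %.1f V\n", sqrt(sum_sq / BLOCK_LEN));
    return 0;
}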
For the provisioning of the calculated parameters, we intend to implement a LuCI app that shows the calculated parameters.
The rest of the project timeline is depicted below:
Working phase: June 20th – July 10th

  • Finalize the emulator
  • Integration with the power quality library.
  • first prototype of the provisioning functionality

Working phase: July 11th – July 24th

  • Finalize the provisioning functionality
  • prototype implementation of sample retrieval (given that the hardware is delivered)

Working phase: July 25th – August 7th

  • Integration of the three functionalities into a working implementation
  • Software Testing and bug fixing
  • Documentation of the integrated  functionalities

Working phase: August 8th – August 21st

  • Buffer for possible delays

More updates in the upcoming weeks.
BR,
Neez

Implementing Pop-Routing – Midterm Updates

 

Hi Everyone!

The midterm evaluation started today; the deadline is next Monday, so I have to show the work I have done so far. It can be summarised in the following parts:


1) Refactoring of graph-parser and C Bindings

During the community bonding period I started working on the code of Quynh Nguyen's M.Sc. thesis. She wrote a C++ program capable of calculating the betweenness centrality (BC) of every node of a topology [1]. I refactored the code, and now it is a C/C++ shared library [2]. I've also applied some OOP principles (single responsibility and inheritance) and added unit tests to make it more maintainable.

The interface of the library is well defined and it can be re-used to implement another library that performs the same tasks (parsing the JSON and calculating the BC).


2) Prince basic functionalities

After I completed the library I started working on the main part of the project: the daemon. We decided to call it Prince in memory of the pop star.

This daemon connects to the routing protocol using a protocol-specific plugin (see below), calculates the BC using graph-parser, computes the timers and then pushes them back, again using the specific plugin. With this architecture it can be used with any routing protocol. I wrote specific plugins for OONF and OLSRd. At the moment it has been tested with both, but I still need to write an OLSRd plugin to change the timers at runtime. For OONF I used the RemoteControl plugin.

With these features Prince is capable of pulling the topology, calculating the BC and timers, and pushing them back to the routing protocol daemon.

 

3) Additional features: configuration file, dynamic plugins

I wrote a very simple reader for a configuration file. Using the configuration file the user can specify: routing protocol host and port, routing protocol (olsr/oonf), heuristic, (un)weighted graphs.

As you can see from this Issue [3], I’m going to use INI instead of this home-made format.

As I said before, I moved all the protocol-specific methods (pulling the topology and pushing the timers) into a specific plugin; to keep the daemon light I decided to load this plugin dynamically at runtime. So if you specify “olsr” in the configuration file, just the OLSRd-specific plugin will be loaded.
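A rough sketch of what such dynamic loading can look like with dlopen/dlsym follows; the struct layout, symbol name, shared object name and host/port are illustrative, not Prince's actual interface.

#include <dlfcn.h>
#include <stdio.h>

/* Illustrative plugin interface: one function to pull the topology and one
 * to push the computed timers back to the routing daemon. */
typedef struct routing_plugin {
    char *(*get_topology)(const char *host, int port);
    int   (*push_timers)(const char *host, int port, double hello, double tc);
} routing_plugin_t;

int main(void)
{
    /* the shared object is chosen based on the protocol set in the config file */
    void *handle = dlopen("./libprince_olsr.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* hypothetical exported symbol describing the plugin */
    routing_plugin_t *plugin = dlsym(handle, "olsr_plugin");
    if (!plugin) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    char *topo = plugin->get_topology("127.0.0.1", 2006);   /* illustrative host/port */
    /* ... feed topo to graph-parser, compute BC and timers ... */
    plugin->push_timers("127.0.0.1", 2006, 2.0, 5.0);        /* illustrative timers    */

    (void)topo;
    dlclose(handle);
    return 0;
}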

 

 

At the moment I consider this an “alpha” version of Prince. In the next 2 months I’ll be working on it to make it stable and well tested. The next steps will be:

 

  • Close all the Issues [4]
  • Write tests and documentation for Prince.
  • Write a plugin for OLSRd

 

Cheers, Gabriel

 

[1]: https://ans.disi.unitn.it/redmine/projects/quynhnguyen-ms

[2]: https://github.com/gabri94/poprouting/tree/master/graph-parser

[3]: https://github.com/gabri94/poprouting/issues/4

[4]: https://github.com/gabri94/poprouting/issues

Sharable Plugin System for qaul.net – Midterm Updates

Hi everyone, 

As we are in the midterm evaluation process, I would like to share what I have done so far. I created a small HTML5 application to share the location of a user; it uses the Geolocation API for this purpose. This works online, but offline it will only work if the device has a browser that communicates directly with the GPS hardware. This is how the application looks when the location has been found:

As per my plan I was supposed to create a JSON API the application can use to send requests to the qaul.net web server, plus a new message type for plugins in general. That is not yet completed, so I have created a button in the message page of the qaul.net GUI which invokes a JavaScript function to get the location of the user; it is then sent using the existing message type. There is more work to be done before the plugin mechanism in qaul.net works properly. Currently the location of the user is sent as longitude and latitude in the messaging system, as shown below.

Cheers,

Anu

Provide a cryptographic implementation for Qaul.net – midterm update

It’s been almost a month since official coding started for Google Summer of Code 2016. And it’s definitely been an interesting experience.

When I was told to write this article, some suggestions for content were “pictures” and to “provide links to prototypes”. Unfortunately with my subject (especially at this point in time) that’s not as easily possible, and I’m not sure how many of you want to see source code in the form of glorious macros in a blog post.

But I will try 🙂

Qaul.net is building its crypto on a small, embeddable crypto library maintained and mostly developed by ARM called “mbedtls”. Unfortunately the project just went through some massive refactoring and the API has completely changed. Thus a lot of documentation online is out of date or just plain doesn’t work. Most of my time so far I’ve spent crawling through the source code of the library components I need and learning how they link together in the background, to understand which files I need to compile into the project and which are optional. The goal here is to make the end binary and the memory footprint of qaul.net as small (and as embeddable) as possible.

Next up was the general structure of the crypto module. In libqaul I added a new folder, and thus function/field namespace, called “qcry”, which is short for Qaul CRYpto. Following is a quick graphic of the layout of the module.

qaul crypto structure

All orange modules are part of the crypto package, yellow modules show heavy modification to their code base, whereas grey modules will require very little modification. As you can see the crypto module is structured into five parts: Arbiter, Dispatcher, Context, Targets and Utilities. Following is a quick description of what they do; if you’re interested in more in-depth documentation, check out my wiki.

  • Arbiter: Provides abstraction layer to the rest of libqaul and handles “disputes” between different threads or application states.
  • Dispatcher: Acts as a load balancer between the non-threadsafe functions of the crypto and the multithreaded environment of libqaul. Also optimises CPU overhead for context switching.
  • Contexts: Contain raw data for crypto tasks (private keys, mode of operation, targets, etc.). Created by the arbiter but mostly handled by the dispatcher (a rough sketch of the data involved follows this list).
  • Targets: Contain the data required for public key crypto, mostly metadata and public keys of other users. Created by the arbiter.
  • Utilities: Base64 encodings, hash functions, verifications, creating user avatars, lots of stateless functions that are used throughout libqaul go here.
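To make the Context and Target descriptions a bit more tangible, here is a rough sketch of the kind of data they hold; the field and type names are my own and will differ from the actual qcry headers.

#include <stddef.h>
#include <stdint.h>

/* A target: metadata plus the public key of another user (illustrative). */
typedef struct qcry_trgt {
    char    *username;       /* metadata about the remote user          */
    char    *fingerprint;    /* identifies that user on the network     */
    uint8_t *pubkey;         /* their public key, used for verification */
    size_t   pubkey_len;
} qcry_trgt_t;

/* A context: this node's own key material plus the known targets (illustrative). */
typedef struct qcry_ctx {
    uint8_t      *private_key;   /* raw key material for signing       */
    int           mode;          /* mode of operation                  */
    qcry_trgt_t **targets;       /* users we can verify messages from  */
    size_t        no_targets;
} qcry_ctx_t;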

One thing I have started to work on too (but haven’t gotten far enough yet to show something off in the UI) is user verification. So…users have private keys and with the fingerprint of that key it is possible to tell if they really are who they say they are. But how to communicate that to users? Showing people a long number is very unintuitive and will just cause people not to care.

So instead of a longer number (or maybe additionally), on a new user profile screen we will show an image generated from the username of the person, their fingerprint and a signature (public key encrypted with private key). This way users will be able to quickly see if the person they are talking to is who they say they are.

To do so there are several libraries:

 – https://github.com/Ansa89/identicon-c
 – https://github.com/stewartlord/identicon.js
 – https://github.com/spacekookie/librobohash (My C re-implementation of https://robohash.org)

Which of these solutions will be used is not clear yet (personally I like robots, who doesn’t like robots? 🙂). But that should give you an idea of what is to come in the UI department.

Coming up in the next weeks are the arbiter and a rudimentary dispatcher. The arbiter needs to be hooked into the rest of libqaul and provide access to all the low-level functions the crypto module provides. From there on out, tests with the rest of qaul.net can be conducted.

 I hope I didn’t ramble on for too long. And I’m looking forward to the next few weeks of coding.

 Cheers,
 spacekookie

GSoC: The ECE configuration system – midterm update

Hi,
the GSoC is nearing its midterm evaluation, so let’s have a look at the current state of the “Experimental Configuration Environment” (working title)!

I started the project with defining models for configuration data, configuration diffs (that are used to modify data and store modifications) and schemas. All of the design documents I wrote can be found at: https://gitlab.com/neoraider/ece/wikis/design

I tried hard not to re-invent things that already exist in a more or less standardized way:

  • My configuration model is just JSON (without support for floating-point numbers right now, as the libubox blobmsg format ubus uses to represent JSON doesn’t support them – I plan to fix this)
  • JSON Pointers (https://tools.ietf.org/html/rfc6901) are used to refer to elements of the configuration tree (a short example follows this list)
  • The schema format uses a subset of JSON Schema (http://json-schema.org/)
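As a small illustration (my own example, not taken from the design documents): given the configuration fragment below, the JSON Pointer /system/hostname refers to the value "my-router", and a diff can record a modification against exactly that path.

{
    "system" : {
        "hostname" : "my-router",
        "timezone" : "UTC"
    }
}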

Based on this design, I’ve started implementing the configuration management daemon eced. The page https://gitlab.com/neoraider/ece/wikis/implementation gives a good overview of the current features and usage of eced. Quite a lot is already working:

  • Multiple schemas can be loaded and merged
  • Default configuration is generated from the schemas
  • A ubus interface allows querying and modifying the configuration
  • Config modifications can be loaded from and stored into a diff file

There’s still a lot missing, so here’s what I plan to do next:

  • Make something use ECE! UCI is well-integrated in OpenWrt/LEDE and can be used from C, Lua and shell scripts; all of these languages should also be supported by an ECE client library. An example schema to replace /etc/config/system already exists and could be used to experiment with partially replacing UCI with ECE.
  • Support schema upgrades: When schemas are updated, this can partially invalidate config diffs. A way to deal with such inconsistencies must be defined.

Discussions and feedback have also led me to the decision to put a stronger focus on UCI interoperability. While I had only planned import and export of UCI configuration at first, I've realized that a two-way binding between UCI and ECE (i.e. an API-compatible libuci replacement/extension that uses ECE as backend) will be necessary for ECE to find acceptance. This will also allow continuing to use existing configuration tools like LuCI with UCI configuration converted to ECE.

I also plan to have OpenWrt/LEDE packages for ECE ready soon, so you can play around with the current implementation yourselves. Maybe I’ll also set up a public host with ECE 🙂

Monitoring and quality assurance of open wifi networks out of client view (midterm evaluation)

Hey everyone,
 
Now we are at the midterm evaluation. I would like to tell you what I have done so far and what will come next. In the first post [0] I explained the work packages. In this post I will come back to the work packages and show you what I have done for each package.
 
The first sub-project was the hoodselector. At the beginning of the work period I did some bugfixing for the hoodselector so that we were able to deploy it in our live environment. The hoodselector creates decentralized, semi-automated ISO/OSI layer 2 network segmentations. You can find a detailed description here [0]. In retrospect I can say that the deployment of the hood system went without any major problems, contrary to my initial expectations. Currently we have 4 hoods active: around Oldenburg (Oldb), Ibbenbüren, Osnabrück and Friesland. More hoods will follow in the future. Open issues can be found here [1].
 
The second sub-project was to create a proper workaround for building images with the continuous integration (CI) system of GitLab using make on multiple cores.
The Freifunk Nordwest firmware now has automatically built testing images that are no longer built on a single core but on multiple cores. The architecture targets are also autogenerated from the source code. This makes it possible to generate images dynamically for all targets, including new targets that may come in the future. I implemented a small algorithm that manages the thread count of the make commands: it takes the number of CPUs from /proc/cpuinfo and multiplies it by 2, i.e. two threads per logical core. For example, our runner02.ffnw.de server has 8 cores, so the CI build process automatically builds with 16 threads [2]. Here is an example of a passed build process with our CI builder [3]. Currently it is not possible to build images with highly verbose output, because the CI log files would get too big. That makes it impossible to use the web frontend for analyzing the build processes. I opened an issue for this and discussed the problem with the GitLab developers [4].
The CI builder is very helpful for the development process of the monitoring drone.
 
Next, I would like to report on our first hacking seminar.
The first hacking seminar was on 28.05.2016. We started with two presentations: one about wireless geolocation and the second one about the hoodselector. We recorded the presentations with our new recording equipment [7], bought using some of the mentoring organisation money, and uploaded the recordings to YouTube [5].
 
The first presentation was about geolocating with wireless technologies, based on the Nordwest Freifunk geolocator [6].

The second presentation was about the function of the Hoodselector

 
After these two presentations we had a small discussion about the presentation topics and then we started our hacking session, where the developers began coding on their projects.
 
Now all sub-projects are finished and I will continue with the monitoring drone project after I finish my university exams. The date of the next hacking seminar is set for the 9th of July 2016. Again we will have two presentations: one on GitLab CI and one about how to use our new Puppet Git repositories, including the submodule feature. The presentations will be recorded, and after the presentations we will have a coding session like last time.
 
Timeline:
23. May: Community Bonding (3 weeks)
test and deploy hoodselector  <- Done
16. May 6:00 PM: GSoC Mumble  <- Done
Refine the roadmap  <- Done
23. May – 20. June: Work period 1 (4 weeks) <- Done
28. May 2:00 PM: hacker-session  <- Done
  1. Presentation about the hoodselector <- Done
  2. Presentation about the openwifi.su project[4] and the geolocator <- Done
13. June 6:00 PM: GSoC Mumble  <- Done
Midterm evaluation
Tarek & Clemens exams!!!
20. June – 15. August: Work period 2 (8 weeks)
9. July 2:00 PM: hacker-session
  1. Presentation about workaround with git CI processes.
  2. Presentation about puppet deployment system
13. June 6:00 PM: GSoC Mumble
25. June 2:00 PM: hacker-session
  1. Presentation about workaround with git CI processes.
  2. Presentation about puppet deployment system
18. July 6:00 PM: GSoC Mumble
30. July 2:00 PM: hacker-session
  1. actual unknown
  2. actual unknown
15. August 6:00 PM: GSoC Mumble
Final evaluation
 

DynaPoint update

Hi everyone,

here are some updates regarding DynaPoint. The idea was to create a daemon which regularly checks the Internet connection and changes the active access point depending on the result. That way handling APs becomes easier, as you can already tell the status from the AP's SSID.

A daemon with basic functionality is already working. After installation, there is one configuration step necessary.
You have to choose in /etc/config/wireless which AP should be used if Internet connectivity is available and which one if connectivity is lost. You can do that by adding “dynapoint 1” or “dynapoint 0” to the respective wifi-iface section.
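A minimal sketch of what that could look like; the radio name, SSIDs and other options here are placeholders, not defaults shipped by dynapoint:

config wifi-iface
    option device 'radio0'
    option mode 'ap'
    option ssid 'freifunk'
    option network 'lan'
    option dynapoint '1'

config wifi-iface
    option device 'radio0'
    option mode 'ap'
    option ssid 'freifunk-offline'
    option network 'lan'
    option dynapoint '0'

The first AP would be used while the connectivity check succeeds, the second one when it fails.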

You can also configure dynapoint via LuCI, although it’s not yet complete.
I was struggling a bit with it, because the documentation of LuCI is a bit minimalistic…
Here is a screenshot of how it currently looks:

Next steps

To verify Internet connectivity, it is probably better to make a small HTTP download than to just ping an IP address.
Using “wget --spider” should be suitable for that.

Also, I will see if I can get rid of the required configuration step in /etc/config/wireless in the next weeks and provide fully automatic configuration.

If you want to test dynapoint for yourself, just go to https://github.com/thuehn/dynapoint.

SWOON: Simultaneous Wireless Organic Optimization within Nodewatcher – in the middle of the road.

Hi everyone,

This is my first update on working on SWOON, an algorithm to automatically allocate channels to nodes in nodewatcher. Not a lot of time has passed since I started working, but the wlanslovenija community was immensely helpful in getting me started. Up to now, I had only worked on my own coding projects, where I got to be the main architect behind the codebase. I decided where functions were defined, I decided on the coding style, I did code reviews and I was the one to approve branch merges (of which there were none, because I wrote all the code 🙂 ). Now, GSoC threw me into the middle of a well-established open source project where you have to get accustomed to existing coding practices. Getting used to this was more difficult than I expected, so the people from wlanslovenija had to be really patient with me (and I think they'll have to keep this up for a while 🙂 ). But I slowly got the hang of it.

My overarching plan for the algorithm is to first read the data, then process the data and finally issue a warning to those who maintain nodes with inefficient channel allocations. My work on reading the data from nodes is almost done (almost, because a new amazing feature was suggested only yesterday!). On a tangent, I think this shows another immense benefit of community-oriented projects: everyone is free to suggest additional features that would make the process more efficient.

To get the data about neighboring nodes, one has to temporarily turn the wireless radios off. Because of that, nodewatcher-agent only performs a survey of neighbors every two hours, but we collect data much more regularly. So how do we deal with this? My current implementation simply stores the data into the database every hour and does not take into account the age of the data. I will now work on comparing the current datapoint with the latest one and only inserting it into the datastream if the data has changed. This will reduce the space we're using and could even be expanded for use by other modules. It's great to know that you can contribute to any aspect of the project. Mitar, a senior developer of nodewatcher, said that they built all of it themselves, so we can rewrite anything as well.

I also worked on a parallel implementation of the mechanism, which read the data from old dumps. This gave me the insight into the architecture that made writing the actual implementation much easier.

My next steps are to finish the data collection part and create a module to analyze this data. This is exciting for everyone because there’s no documentation about writing such modules so I’ll get to be the first one to do it! I can only hope I’ll document it well enough to make the process easier for those who will follow as new members in the community. The github PR can be found here.