The “Digital-O-Mat” for the elections in Bavaria and Hesse

Should all software developed by the public sector also be released as free and open-source software? Do Hesse and Bavaria need a freedom of information act worthy of the name? Should volunteer engagement in the digital sphere be supported just as much as in the analog world? As of now, all eligible voters can use a new online tool to help them decide in the state elections: in 10 clicks, the Digital-O-Mat tells voters which party they are on the same wavelength with on digital policy issues.

Why is guidance on digital policy issues needed?

Digital issues are becoming increasingly important in voters' everyday lives, yet they often get short shrift in general election coverage. At https://digital-o-mat.de, the tool gives an overview of each party's positions on relevant digital policy topics and thereby makes an informed voting decision easier. This time, the focus is on the parties' positions on freedom of information legislation, free access to publicly funded content, the use of open-source software in education and public administration, the public-domain status of art in the digital space, automated surveillance, and the implementation of the EU General Data Protection Regulation.

The Digital-O-Mat was developed by the Koalition Freies Wissen (Free Knowledge Coalition): Wikimedia Deutschland, Bündnis Freie Bildung, Chaos Computer Club, Digitale Gesellschaft, Freifunk, Free Software Foundation Europe, and Open Knowledge Foundation Deutschland. All parties were surveyed that are already represented in the state parliament or that polled above the 5 percent threshold at least once during 2018. In addition to their answers, the parties were asked to provide supporting evidence for their positions, for example from party resolutions, initiatives, or election manifestos.

How does the Digital-O-Mat work?

The politicians have answered; now it is the voters' turn. To compare the parties' positions with their own, voters first take a stand themselves: by clicking “agree”, “neutral”, or “disagree” on 10 statements covering various topics, they can easily find out which party best matches their own stance on digital policy.

Once all questions have been answered, a ranking shows which parties share the most positions on digital policy issues. In the results view, clicking on an individual party also shows its statements on each topic.

VRConfig Final

Hi,

this is the final blog post about my project VRConfig.
VRConfig aims to improve the accessibility and usability of OpenWrt and its web interface LuCI, especially for inexperienced users.
It achieves this by introducing a graphical configuration option: users configure their router by interacting with a picture of the exact router model they own, instead of digging through menus full of technical terms they do not understand.
To present every user with the correct picture of their device among the more than 1000 supported router models, the help of the community is needed.
Everyone can take a picture of the back of their router and annotate its ports with the annotation app I developed (https://vrconfig.gitlab.io/annotator/).

The annotator can be used to mark the location of all ports of the router

 

You can then submit the JPG file together with the annotation file (a JSON file) to the LuCI app via a merge request here: https://gitlab.com/vrconfig/luci-app-vrconfig.
During the build process, the Makefile automatically selects the right JPG/JSON pair based on the file names.
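To make the idea concrete, here is a rough Python sketch of how such an annotation could be used. The field names and structure below are my own assumptions for illustration, not the actual annotator's JSON schema:

```python
# Hypothetical annotation structure; the real annotator's format may differ.
annotation = {
    "model": "example-router-v1",
    "ports": [
        {"name": "WAN",  "x": 40, "y": 60, "w": 30, "h": 20},
        {"name": "LAN1", "x": 80, "y": 60, "w": 30, "h": 20},
    ],
}

def port_at(annotation, px, py):
    """Return the name of the port whose bounding box contains the
    clicked pixel (px, py), or None if the click hit no port."""
    for port in annotation["ports"]:
        if (port["x"] <= px < port["x"] + port["w"]
                and port["y"] <= py < port["y"] + port["h"]):
            return port["name"]
    return None
```

With such a mapping, a click on the router image can be resolved to a port name, which the interface can then translate into the corresponding configuration page.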

The LuCI application is currently a demo and will be improved in the future.
Currently, it looks like this:

You can hover over the different ports, and a click takes you to the corresponding configuration. The interface also marks in green those LAN ports that currently have a LAN cable connected.
For that I developed a Lua daemon which monitors the corresponding ports in real time and provides the interface with their status.
There is also a list of all currently configured virtual interfaces. Clicking on one of them highlights the associated physical ports on the image.
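The daemon itself is written in Lua, but the underlying mechanism can be sketched in a few lines of Python: on Linux, an interface's link state is exposed through sysfs, so a port-status daemon essentially boils down to polling a file. This is a conceptual sketch, not the VRConfig code:

```python
def link_up(iface):
    """Return True if the Linux kernel reports carrier (i.e. a cable is
    plugged in and the link is up) for the given interface, by reading
    the standard sysfs attribute. A daemon could poll this periodically
    and push status changes to the web interface."""
    try:
        with open(f"/sys/class/net/{iface}/carrier") as f:
            return f.read().strip() == "1"
    except OSError:
        # Interface missing or administratively down.
        return False
```

Polling this for each physical port and publishing the result is enough to drive the green/grey port markers described above.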

Future Plans

In the future I plan to continue polishing the LuCI interface. One extension could be to mark those ports which currently have Internet access. Another could revolve around making it possible to configure some settings via drag and drop on the image.

Acknowledgments

Thanks a lot to my mentor Thomas for his excellent support and for the long-term vision that made this project possible in the first place.
Thanks also to my colleague Benni for his extremely helpful suggestions throughout the project.
Finally, thanks to Freifunk for letting me work on this project and to Google for organizing GSoC.

The full source code of everything related to this project can be found here: https://gitlab.com/vrconfig

OpenWLANMap App Final Update

Hi,

This is my final update for GSoC.

In this blog post I would like to summarize all the work I have done in the last three months, as well as the remaining problems and future plans.

An introduction, my progress and further information can be found at [0] [1] [2].

The new app is compatible with the old app in all basic functionality [3]. Besides that, the code passes the Google style check, contains full Javadoc and clear interfaces, and the app's performance is partly improved.

Final architecture and app design:

Basic changes compared to the old app

  • The old, broken UI has been replaced by a newly designed one.
  • Storing: the old app uses a non-standard database, writing multiple access points as raw bytes into a file, which stores redundant data and is difficult to maintain. The new app uses SQLite, which is easier to maintain and extend. There are no duplicate access points in the database, since the BSSID is used as the primary key and an entry is only updated when the new RSSI is stronger. Storing is done not by the scan thread but by a separate thread (WifiStorer), which reads from a blocking queue (WifiQueue). A list of 50 APs (this can be raised if necessary) is put into the queue as one item, and the storer thread blocks while the queue is empty, to prevent it from writing to storage all the time.
  • Uploading: as in the old app, uploading depends on the user's settings: manual, automatic on any Internet connection, or automatic on Wi-Fi only. The user can also set the number of APs that triggers an automatic upload, from 5000 to 50000, and can trigger a manual upload only with at least 250 APs. The new app uploads at most 5000 APs at once, to prevent out-of-memory problems on devices with little RAM. A message containing an upload summary, the new rank, or an error is reported back to the user. The WifiUploader uses UploadingQueryUtils for openwifi.su and can be swapped out quickly if the backend changes.
  • Scanning: the scan period is set dynamically, depending not only on speed but also on night mode, and I am working on movement detection based on sensors. Every 2 s by default, the scanner thread sends WifiLocator a request for the position and the scanned Wi-Fi networks. WifiLocator uses GPS to determine the position; when no GPS fix is available, the scanned Wi-Fi networks are used to determine the user's position, which no longer works in the old app. The location method in use can be shown as an overlay (a big colored number), as chosen in the settings.
  • Resources are managed and can be controlled via user options (kill the app on low battery, after a long time without GPS, etc.). While the old app checks for a missing GPS fix on every scan, the new app starts a separate resource-checking thread only if the user configures it.
  • The old app exports only the user's own BSSIDs and puts the export file (and expects the import file) in external storage by default. The new app lets the user import and export an account with team and tag information, as well as reset all settings to defaults. The user can browse for the import file and save the export file to any writable place in storage.
  • The minimum API level has been raised to 19, which currently covers over 96% of devices [4]. Permissions are checked at runtime as required.
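The BSSID-as-primary-key storing rule described above can be sketched with plain SQLite: keep one row per BSSID and update it only when the new observation has a stronger RSSI. This is an illustration of the rule, not the app's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aps (bssid TEXT PRIMARY KEY, ssid TEXT, rssi INTEGER)")

def store_ap(conn, bssid, ssid, rssi):
    # Insert a new AP, or update the existing row only if the new
    # observation has a stronger (higher) RSSI.
    cur = conn.execute(
        "INSERT OR IGNORE INTO aps VALUES (?, ?, ?)", (bssid, ssid, rssi))
    if cur.rowcount == 0:  # row already existed; maybe update it
        conn.execute(
            "UPDATE aps SET ssid = ?, rssi = ? WHERE bssid = ? AND rssi < ?",
            (ssid, rssi, bssid, rssi))

store_ap(conn, "aa:bb:cc:dd:ee:ff", "freifunk", -70)
store_ap(conn, "aa:bb:cc:dd:ee:ff", "freifunk", -80)  # weaker: ignored
store_ap(conn, "aa:bb:cc:dd:ee:ff", "freifunk", -60)  # stronger: updates row
```

Because the BSSID is the primary key, duplicates are impossible by construction, and the conditional UPDATE keeps only the strongest observation per access point.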

What I learned

I learned a lot about Android development.

  • Life cycles: the UI and the service communicate via LocalBroadcast; the BroadcastReceiver has to be registered and unregistered in onResume/onPause, as does the SettingPreferenceListener, to give the activity proper lifecycle management.
  • Using LocalBroadcast instead of a global broadcast keeps data inside the app.
  • Since API 23, “dangerous” permissions have to be requested at runtime. Many system flags and parameters differ between API versions, which requires a lot of version checks at runtime.
  • Working with and managing a service with many parallel processes.
  • osmdroid: an open-source library for working with OSM, largely compatible with the Google Maps API.
  • SQLite with the Room library.
  • etc.

I also learned how important architecture is: I jumped into coding too fast, and my mentor had to stop me and give me some helpful advice. We re-designed the architecture with a component controller at the center. Every other component should only do its own job and communicate with the controller, not with other components directly, which makes it easy to extend or replace any component.

Difficulties I met

It was hard to work on the app while having no access to the backend. I had to test all the APIs by analyzing the old app, which is not nicely documented or implemented. Furthermore, the backend is quite unstable and sometimes unreachable. Another problem was testing: since the app works with collected Wi-Fi access points, testing and debugging at home was very hard.

Future plans

There are still some points of the app's performance I want to optimize further. I have already started working with the Android sensors to detect movement and scale the scan interval more effectively, since Wi-Fi scanning and GPS are two of the services that drain the phone battery the most.

The app is currently only in development mode, since I don't have a Google Play Store account yet. As soon as I do, I will release it. Until then, if you want to try it, an .apk can be downloaded here [5].

Acknowledgement

Many thanks to the Freifunk community and my mentor Jan-Tarek Butt for this amazing opportunity. Even though there are still some small things to do and fix, I am glad that a new wardriving app is coming soon for openwifi.su. Many thanks to the Google Summer of Code team for making this happen.

[0] https://blog.freifunk.net/2018/05/14/introduction-openwlanmap-app/

[1] https://blog.freifunk.net/2018/06/10/openwlanmap-app-update-1/

[2] https://blog.freifunk.net/2018/07/09/openwlanmap-app-update-2/

[3] https://github.com/openwifi-su/OpenWLANMap-App

[4] https://developer.android.com/about/dashboards/

[5] https://androidsmyadventure.wordpress.com/2018/06/03/openwlanmap/

 

Meshenger – P2P local network messenger – final update

Meshenger is an open-source P2P audio and video communication application that works without centralized servers, and thus without a connection to the Internet, does not need DHCP servers, and can be used in LAN networks such as Freifunk community networks.

It was brought to life to demonstrate uses of such networks beyond simple Internet access, as well as to explore the decentralized use of WebRTC in conjunction with IPv6.

I spent the last few weeks polishing and improving my project, getting it to a usable and stable state.

An APK with version 1.0.0 can be found here, as well as the whole source code.

In the last month I fixed some bugs, including a wrong serialization of IPv6 addresses, the phone not ringing with the screen off, duplicate contact entries, the app freezing, and some more.

Of course, the app also gained some new features, including a settings page (language, username, etc.), additional information for each contact, the ability to share contacts through third-party messengers or a QR code, ignoring calls from unsaved contacts, and several more.

Oh, and if you suddenly dislike someone, you can now simply delete them.

Settings
Contact options


The app now has an ‘about’ page containing some meta-data about Meshenger as well as the license:

About page

 

I extracted a lot of hard-coded strings to make it easier to translate the app into different languages.

 

For the future, it is planned to implement profile photos, file transfer, and asynchronous messaging.

All in all, I would conclude that Meshenger was a successful project and reached most of its goals.

It gave me the chance to dive into new subjects and learn a lot about VoIP and IPv6 as well as get to know the Freifunk community and learn about other interesting ideas.

A module for OLSRv2 for throughput estimation of 2-hop wireless links

Hi to community members!

Here is the final report! In this project, we introduced throughput estimation strategies for OLSRv2-based networks. We followed two strategies: the first relies on iperf3, the second on packet timestamping.

We prototyped the iperf3 strategy in PRINCE. The basic idea is that each node runs an iperf3 server, so a node can estimate the throughput to a neighbor by running an iperf3 measurement.

We set up an emulation environment in CORE and tested PRINCE with the iperf3 client/server there. We built a simple three-node topology (n1, n2, n3) where n1 is directly connected (within wireless coverage) to n2 and n2 is directly connected to n3; n1 reaches n3 through OLSR. The estimated neighbor throughput at the IP level is about 43 Mbps on a physical link of 54 Mbps (the figure shows the throughput estimated from n2 towards n1).

To introduce a lightweight measurement strategy (without an additional server process), we worked on an OONF plugin for throughput estimation based on packet timestamps. The basic idea is that the plugin sends a pair of probe packets towards each neighbor. The neighbor can estimate the throughput from the difference between the reception times of the second and the first probe: probe-size / (t2 − t1).
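As a minimal numeric illustration of the packet-pair formula above (a sketch of the estimation rule, not the plugin code):

```python
def packet_pair_throughput(probe_size_bytes, t1, t2):
    """Estimate link throughput in bit/s from the reception times t1 and
    t2 (in seconds) of two back-to-back probe packets of the given size."""
    if t2 <= t1:
        raise ValueError("second probe must arrive after the first")
    return probe_size_bytes * 8 / (t2 - t1)

# A 1250-byte probe arriving 1 ms after the first suggests ~10 Mbit/s;
# a 20 microsecond gap instead yields ~500 Mbit/s, which shows how a
# too-small measured gap inflates the estimate.
rate = packet_pair_throughput(1250, 0.000, 0.001)
```

The estimate is only as good as the timestamps: any delay added between the two probes before they hit the wire shrinks or stretches (t2 − t1) and directly distorts the result.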

We tested the plugin in our CORE environment. Unfortunately, the reception times of the probe packets in the plugin do not fit our needs: the pair of probe packets shows a time difference close to 20 µs, and thus an overestimated throughput close to 1 Gbps on a 54 Mbps link.

We experimented with taking socket timestamps in the reception phase (which required several changes in the OONF socket code), but the results were essentially unchanged. An approach based entirely on oonf_rfc5444 (the messaging system used in the plugin) is therefore not accurate, due to possible delays or message manipulation in the sending phase. A reliable procedure in OONF would require a different messaging system, probably in both the transmission and the reception phase.

The code is available at https://github.com/pasquimp/prince/tree/iperf and https://github.com/pasquimp/OONF/tree/neighbor-throughput.

Thank you for the opportunity, and thanks in particular to my mentors for their suggestions!

GSoC 2018 – Kernel-space SOCKS proxy for Linux – Final

Short description

The original plan was a full kernel-space SOCKS proxifier, but that would have been a bit too complex for the goal: a faster TCP proxy. I then found a very elegant solution to the problem: eBPF sockmap support. There is an API for redirecting packets between sockets in kernel space using a sockmap eBPF program. I decided to extend my shadowsocks-libev fork with eBPF support. The disabled encryption already gives some additional performance, so anyone already using this fork now has a new option to get even more.


GSoC 2018: qaul.net changes and experiences (final report)

This is my final report for Google Summer of Code, working on the userspace, backend-agnostic routing protocol for qaul.net. Alternatively titled: how not to go about writing a userspace, backend-agnostic routing protocol.

The work that was done

If you’ve been reading my first three blog posts, you will know that we had some issues designing and coming up with plausible ways for such a routing core to interact with network layers. The biggest challenge is the removal of ad-hoc Wi-Fi mode from Android, which means an app would need root in order to provide its own kernel module for that. Before specifying what I would be working on this summer, we had a very idealised view of what a routing protocol could depend on: we made a lot of assumptions about the availability (i.e. connection reliability) and usability of WiFi Direct, and were negatively surprised when we ran into various issues with the WiFi Direct standard.

Secondly, I built a prototype that uses Bluetooth mesh networking to let multiple phones communicate, which gave us much better results from the beginning. Connections are more stable, although their range is more limited than with Wi-Fi. It does, however, come with the benefit of power saving.

These prototypes will serve as a good base to play around with larger networks and more devices, but won't end up being part of the qaul.net code base. The code written to remain in qaul.net is relatively little: there is the routing-core repository, which provides a shim API between a generalized routing adapter and a generic network backend that can be Bluetooth, WiFi Direct, or even ad-hoc or Ethernet. We ended up not focusing on this code very much because there were too many open questions about the technologies at hand to proceed with confidence.

The code for both prototypes is available here; the routing core shims can be found here.

What wasn’t done

We didn’t end up writing a userspace, network-agnostic routing protocol along the lines of B.A.T.M.A.N. V. This is very unfortunate and probably comes down to the fact that, when Summer of Code started, we had only worked with WiFi Direct in theory, and so made a lot of assumptions that were ultimately wrong (and based on the way ad-hoc mode works).

The next steps

We will proceed with Bluetooth meshing as our primary network backend. We still have to figure out a few questions about captive portal functionality, how to subdivide a network into smaller chunks, and how moving between subnetworks will work. Bluetooth meshing isn't exactly made for what we're trying to do, but it's a close approximation.

When it comes to the actual qaul.net code, we need to write a Bluetooth mesh adapter which plugs into the routing core, at which point we can start testing the protocol layout that we designed and work on the actual routing heuristics. The groundwork for this is largely done, based mostly on the B.A.T.M.A.N. protocol documentation.

Acknowledgements

I want to thank the Freifunk organisation and community, and my mentor Mathias, who worked with me on figuring out how to get around the problems we encountered. We got a good step closer to moving qaul.net away from ad-hoc networking, even though we didn't reach all the goals we set out to. Finally, I would like to thank Google for the Summer of Code, its efforts during all these years, and its commitment to the development of open source software.

The Turnantenna – Final evaluation update

We are at the end of the journey. Today is the last day of the 2018 version of the Google Summer of Code.

So, here is what I have done during this month of hard (and hot) work!

State Machine

The state machine presented in the previous article has evolved into a newer and more complete version. The whole machine is defined through the following states and transitions:

# In controller.py

class Controller(object):
    states = ["INIT", "STILL", "ERROR", "MOVING"]
    transitions = [
        {"trigger": "api_config", "source": "INIT", "dest": "STILL", "before": "setup_environment"},
        {"trigger": "api_init", "source": "STILL", "dest": "INIT", "after": "api_config"},
        {"trigger": "api_move", "source": "STILL", "dest": "MOVING", "conditions": "correct_inputs",
         "after": "engine_move"},
        {"trigger": "api_move", "source": "STILL", "dest": "ERROR", "unless": "correct_inputs",
         "after": "handle_error"},
        {"trigger": "api_error", "source": "STILL", "dest": "ERROR", "after": "handle_error"},
        {"trigger": "engine_reached_destination", "source": "MOVING", "dest": "STILL",
         "before": "check_position"},
        {"trigger": "engine_fail", "source": "MOVING", "dest": "ERROR", "after": "handle_error"},
        {"trigger": "error_solved", "source": "ERROR", "dest": "STILL", "after": "tell_position"},
        {"trigger": "error_unsolved", "source": "ERROR", "dest": "INIT", "after": ["reconfig", "tell_position"]}
    ]

There are not many differences from the older graph but, behind the appearance, there is a lot of work: every arrow now corresponds to a series of defined actions, and the scheme has been implemented as a real working program.

The structure of the Turnantenna’s brain

During the last week I worked on the refactoring of all the work done until that time. The final code is available in the new dedicated “refactor” branch on GitHub.

The state machine above is implemented in the main process, which communicates with two other processes: the engine driver and the RESTful server.

# In turnantenna.py

from multiprocessing import Process, Queue
from controller import Controller      # import the states machine structure
from stepmotor import engine_main      # import the engine process
from api import run                    # import the api process

def main():
    engine_q = Queue()
    api_q = Queue()
    api_reader_p = Process(target=run, args=(api_q, ))
    engine_p = Process(target=engine_main, args=(engine_q, ))
    controller = Controller(api_q, engine_q)                  # start the SM

    api_reader_p.start()                                      # start the api process
    engine_p.start()                                          # start the engine process
    controller.api_config()

The processes communicate with each other through messages in the queues. Messages are JSON objects with the following format:

{
    'id': '1',
    'dest': 'controller',
    'command': 'move',
    'parameter': angle
}

The “id” key is needed to control more than one engine, which is useful for future upgrades. “dest” specifies the process that should read the message and avoids wrong deliveries. “command” is the central content of the message, while “parameter” carries detailed (optional) information.

Processes are infinite loops in which the queues are checked continuously. An example of such a loop:

# In api.py

from queue import Empty

while True:
    try:
        msg = queue.get(block=False)
        if msg["dest"] != "api":
            queue.put(msg)       # send back the message
            msg = None
    except Empty:
        msg = None

    if msg and msg["id"] == "1":
        command = msg["command"]
        parameter = msg["parameter"]
        if command == "known_command":
            # do something

API

To interact with the Turnantenna, I defined three methods: get_position(), init_engine() and move().

It is possible to call them through an HTTP request. A JSON body needs to be attached to the request to make things work: the API needs some critical data, e.g. the id of the specific engine targeted, or a valid angle value to move the engine by that amount. If the request comes without a JSON body, or with a wrong one, the RESTful service responds with error 400.

Here is an example of the input checks:

from flask import abort, request

if not request.json or 'id' not in request.json:
    abort(400)
id = request.json['id']
if id != '1':            # still mono-engine
    abort(404)

For the moment the system works with only one engine, but in the future it will be very simple to handle more motors:

...
# if id != '1':
if id not in ('1', '2'):
    abort(404)
...

Final results

In these months we started from an idea and a basic implementation, and we built up a complete system ready to be tested. You can see the Turnantenna logic run by cloning the Turnantenna code from GitHub at Musuuu/punter_node_driver/tree/refactor.
Following the instructions in the readme file, you can run the turnantenna.py file and observe how it reacts to HTTP requests made with curl.
The full documentation of the project can be found at turnantenna.readthedocs.io.

We are proud of the work done, and we’re ready to implement the whole system onto the hardware and make the Turnantenna turn!

DAWN – Final Post

So, did I achieve my aims with DAWN?

GSOC Aims

  1. Simple Installation
  2. All patches Upstream
  3. Configuration of the nodes should be simplified
  4. Visualize the information of the participating nodes
  5. Improve the controller functionality by adding mechanisms like channel interference detection and other useful features

1 and 2:


Everything is upstream!
All hostapd patches are merged. I even added some patches extending the hostapd ubus functionality.
The iwinfo patches are merged too; in the end, another contributor's patch was merged which contained my patch #1210.
You can now simply add the feed and compile DAWN.

3 and 4:

I added a LuCI app called luci-app-dawn, in which you can configure the daemon. When you do, the daemon configuration is sent to all participating nodes, so you don't have to change the config on every node.
In the app you can also see all participating Wi-Fi clients in the network and the corresponding APs, as well as the hearing map for every client.

 

5:

I'm still refactoring my code; some snippets are ugly. :/
I also read up on 802.11k and 802.11v.
802.11v is very interesting for DAWN: it would allow a better handover for clients. Instead of disassociating a client, the client can be guided to the next AP using a BSS Transition Management Request frame.
This request can be sent by an AP or station (?) in response to a BSS Transition Management Query frame, or autonomously.

I want to send this request autonomously instead of disassociating clients, provided they support 802.11v.
For that I would set the disassociation timer (the time after which the AP disassociates the client if it has not roamed to another AP) and add another AP as a candidate. Furthermore, I should enable 802.11r for fast roaming…
If you want to play around with 802.11v, you need a full hostapd installation and have to enable BSS transition in the hostapd config:

bss_transition=1

A station signals in its association frame whether it supports BSS transition when associating with an AP.
My plan is to extend the hostapd ubus call get_clients with this information, like it is already done for the 802.11k flags.
After that I need a new ubus call in which I build such a BSS Transition Management Request, like it is done in the neighbor report ubus call.
I found a patch on a mailing list that adds a function to build such a BSS transition frame in an easy way:

wnm_send_bss_tm_req2

Sadly, it was never merged. The 802.11v implementation can be found in hostapd.

Furthermore, I could use 802.11k to ask a client to report which APs it can see. This is a better approach than collecting all the probe entries. The hearing map is very problematic, because clients do not scan continuously in the background (or don't scan at all), and a client can move around. A typical question is how long a probe entry can be considered valid. If that time span is set too big and the client moves around, the client cannot leave its AP even though the RSSI is very bad (and a bad RSSI is the worst thing you can have!). A bad RSSI can trigger the client's internal roaming algorithm, so the client keeps trying to roam to another AP and keeps getting denied, because there is still a hearing map entry with a very good RSSI. But that entry is no longer valid, because the client moved quickly.
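One way to bound this problem is to attach a validity window to each probe entry and drop anything older. The sketch below is my own illustration of such an expiry rule; the names and the TTL value are assumptions, not DAWN's implementation:

```python
import time

PROBE_TTL = 60.0  # seconds a probe entry stays valid (hypothetical value)

class HearingMap:
    """Keep, per (client, AP) pair, the freshest RSSI observation, and
    drop entries older than PROBE_TTL so a fast-moving client is not
    pinned to a stale 'good RSSI' AP."""

    def __init__(self):
        self.entries = {}  # (client, ap) -> (rssi, timestamp)

    def update(self, client, ap, rssi, now=None):
        ts = now if now is not None else time.time()
        self.entries[(client, ap)] = (rssi, ts)

    def valid_entries(self, client, now=None):
        now = now if now is not None else time.time()
        return {ap: rssi
                for (c, ap), (rssi, ts) in self.entries.items()
                if c == client and now - ts <= PROBE_TTL}
```

The trade-off discussed above is exactly the choice of PROBE_TTL: too large and a moving client is judged by stale data; too small and the map is almost always empty for clients that rarely scan.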

My Merged Pull Requests:

My Open Pull Requests:

My Declined Pull Requests:

GSoC 2018 – Better map for nodewatcher (Final update)

Hello everyone,

In my last update I presented solutions for most of the goals that I set in my first post. There was still one feature left to implement, and I worked hard to have it finished in time for GSoC.

Problem

The last feature I am talking about is the ability to show recently offline nodes on the map. This was the hardest part to implement, but also the most important: with it, you can see which nodes are offline and need maintenance, and exactly where they are located. Until now there was only an email alert system, but it sent out an email for every change to a node; there was no filtering option, and it did this for every node, so the inbox got cluttered really fast. With this feature you get a list of all nodes that went offline in the past 24 hours, and that list is updated alongside the map.

Solution

In my last post I talked about adding a sidebar with a list of all nodes that are currently online and shown on the map. So I added a new tab for the recently offline nodes. The hardest part was that I had to use nodewatcher's API v2, which was still in development and not yet fully documented. I still wanted to use it because in the newest nodewatcher version every API v1 request will be replaced by v2; this means less work in the future, and I also took some time to document everything I have learned about it. That document contains everything I was able to gather from the nodewatcher code, with examples of how to use it. In the picture below you can see how the sidebar currently looks, with the list of recently offline nodes. It has the same functionality as the online node list: the search bar, the option to show the selected node on the map, and a link to that node's page.

What’s next?

GSoC has provided me with a unique opportunity to work on a large-scale open source project, and I have learned a lot in the past three months, mostly about time management and not putting too much on my plate. It was truly an experience that will help me later in life. I will certainly work on other open source projects, and I will continue my work on nodewatcher, because I have analysed and figured out most of the code. It would be a shame to let that knowledge go and move on to another project before being sure that someone else takes over and continues the work.

Important links:

Freifunk blog posts:

https://blog.freifunk.net/2018/05/14/gsoc-2018-better-map-for-nodewatcher/

https://blog.freifunk.net/2018/06/11/gsoc-2018-better-map-for-nodewatcher-1st-update/

https://blog.freifunk.net/2018/07/09/gsoc-2018-better-map-for-nodewatcher-2nd-update/

Github pull requests:

Main map code: https://github.com/wlanslovenija/nodewatcher/pull/69

API v2 documentation: https://github.com/wlanslovenija/nodewatcher/pull/70