Google Summer of Code is Google’s summer program for students to learn about, and get involved in, open source. It’s happening again for the 17th year in 2021! Over 16,000 students from 111 countries have participated.
Motivate students to begin participating in open source development.
Help open source projects bring in new, excited developers into their communities who stay long after their GSoC ends.
Provide students in Computer Science and related fields the opportunity to do work related to their academic pursuits.
Give students exposure to real-world software development scenarios (e.g., testing, version control, software licensing, mailing-list etiquette, etc.).
Create more open source code.
How does GSoC work?
Programming online from home, student participants spend 10 weeks on their projects (about 175 hours in total), earning stipends upon completion of their milestones. Volunteer mentors help students plan their time, answer questions, and provide guidance on best practices, project-specific tools, and community norms while helping integrate students into their communities.
Students receive an invaluable learning experience, an introduction to the global FOSS community and something that potential employers love to see on resumes!
Mentoring orgs will gain new contributions & contributors along with recognition from Google and a higher profile for their project.
How to apply for freifunk @ GSoC 2021?
Pick an idea from our projects page and get in touch with mentors and the community.
Discuss your ideas and proposals.
Submit a draft of your proposal early, so we can give you feedback.
If you have any general questions, join our Matrix room.
Student applications are open March 29 – April 13, 2021.
Who can apply?
In short: you have to be at least 18 years old when you register, and you need to be enrolled in or accepted into a post-secondary academic program, such as a college, university, master’s program, PhD program, undergraduate program, or licensed coding school. For all details, please see GSoC’s FAQ.
On 23 April 2020, additional software was installed on the wiki.freifunk.net server through a security vulnerability; it apparently scanned other servers for similar vulnerabilities. As far as we can tell, no changes were made to the MediaWiki itself and no data was harvested. Access apparently remained limited to the web-server user. Since such an analysis cannot give 100% certainty, however, everyone with a wiki account should change their password there, even though the passwords in the wiki database are stored securely as hashes according to current best practice (pbkdf2/sha256). Everyone who registered after 23 April 2020 has been informed by us separately. The wiki uses accounts only to fend off spam and offers no personal mailboxes or the like, so apart from the hashed password (and, where given, the e-mail address) no personal data is stored in the wiki.
The hack was noticed on 27 April 2020; the wiki was taken offline, the machine was reinstalled from scratch, and the database was restored from a clean backup from before the hack. The new installation went online on 30 April 2020.
Technical background
The entry point was PHPUnit, which had inadvertently been installed as a development dependency via PHP Composer. Composer installs dependencies in the “vendor” directory, which was accessible from the outside in the old MediaWiki version we had installed.
Directories containing internal code were visible from the outside (this was fixed in MediaWiki some time ago via .htaccess).
By default, Composer also installs developer dependencies.
We did not keep the software up to date on a regular basis. Unfortunately, given dependencies such as the Freifunk skin, that is not always straightforward.
Measures
The process for setting up and updating the wiki has been largely automated. We will keep the wiki up to date from now on and, when in doubt, prioritise up-to-date software over functionality such as the skin.
To avoid piles of stale data, completely unused accounts will be deleted automatically in the future, inactive accounts will be deactivated after some time, and the password hashes and e-mail addresses associated with those accounts will be removed from the database.
When creating an account, users are now advised to choose a strong password that is not used for any other service.
Many thanks to everyone who reacted quickly and helped to close the security hole swiftly.
As expected, we had a great time at 36C3 and agree that there is hardly a better way to end the old year than to turn night into day with like-minded people, Mate or Tschunk in hand, and exchange ideas.
But after 36C3 is before 37C3, and to make sure no post-congress depression catches us, we practised ‘Refreshing Memories’ rather than ‘Resource Exhaustion’. At the beginning of the week, interested people, whom we had invited beforehand via the Freifunk forum and the WLAN News mailing list, met in a Mumble session. We openly discussed what went well and how we can make our Freifunk assembly at the next congress even more exciting and better for everyone involved.
Let’s start with a short intro.
Why OIO in the first place?
The OIO offers all Freifunk communities a galactic orbit to
present projects
realise projects on site with other Freifunk folks
discuss Freifunk topics
have a good time together
develop new projects
Who took part?
For 36C3, 3-4 communities regularly took an active part in organising the Freifunk assembly and contributed to it. On that basis we decided on 2 ships with a total of 24 seats.
A few numbers
On site in Leipzig, the OIO orga team put around 670 hours (!) into building the harbour, the ships, the lighthouse on the rock, the workshop dome and the OIO stage. Once again, respect and a big thank-you for that!!!! Even though we were able to reuse a lot from last year, some parts and the fancy bottle lamps had to be produced anew. Further expenses were incurred for the event technology, and unavoidable transport costs also added up. As of 26 January 2020, €1,900.42 was still missing from the till, which is why there now follows a sincere
Call for donations
We ask everyone, and especially the Freifunk communities, to take part in the fundraising campaign at http://spenden.oio.social/ under 36C3 OIO – Open Infrastructure Orbit and to chip in a few coins. Of course these donations are tax-deductible and help ensure that the Freifunk assembly can dock in the safe OIO harbour again at 37C3 ;-).
Afterburner and conclusions from the Mumble session
Active participation
Let’s start openly and self-critically with how active participation of Freifunk communities in the Freifunk assembly at the OIO actually turned out. Unfortunately there were only a few, and we learned that in general it was not a lack of interest or motivation. Rather, the information channels chosen in the run-up to 36C3 were not sufficient or not comprehensive enough; in future they need to carry clear information more strongly and more regularly so that more Freifunk communities take an active part in the Freifunk assembly at the OIO.
In 2019 we used these information channels:
It would be desirable for the local communities to point to 37C3 / OIO / the Freifunk assembly with up-to-date notes on their own pages and to use their own social media channels to spread this information further.
Content
Being well informed does not mean knowing a little about everything, but knowing everything about the things that matter. We organisers obviously did not communicate clearly enough that the OIO offers all Freifunk communities a galactic orbit to present projects or to realise them on site together with other Freifunk folks.
Furthermore, there was obviously some confusion about the fact that we wished for active involvement in the Freifunk assembly “in the harbour of the OIO” from everyone who had signed up for a voucher at https://wiki.freifunk.net/36c3/Participants.
The wording we chose in the wiki, “would like to share a seat with others”, was apparently also too vague and led to some tension at the congress because of differing expectations. The orga team had budgeted fixed seats for all active participants and thus arrived at 2 ships with a total of 24 seats. Many passive participants, however, assumed that a voucher automatically included a fixed seat.
We will do better in 2020. For 37C3 there will be a percentage split, still to be defined more precisely, between named seats (with a clear person and community assignment) and shared seats (not reservable). For the shared seats, sticking a community sticker on the table or leaving a laptop standing around will therefore not create a permanent claim to a seat. Naturally we will improve all the content and already promise clear and regular communication.
The corresponding paragraphs and wording in the voucher-code mails, and the mailing campaign regarding the call for donations and the call for participation sent to the “successful voucher-code users”, unfortunately also had hardly any noticeable effect.
Brainstorming
We asked everyone in the round what they would like to see and what we should keep in mind for the coming congress.
More participation in setup and teardown from all communities and Freifunk folks
Beat the donation drum harder and earlier
Ask Freifunk associations separately for donations
Recommend a donation amount, e.g. X € per person
Point out that donations at spenden.oio.social are tax-deductible
Apply for funding via the Freifunk state funding programme in NRW. Example: Freifunk community X presents project A and needs an exhibition area with X seats for it; this creates costs of X €, so Freifunk community X applies for funding from the state of NRW.
Vouchers in connection with the call for donations
Tie them more closely to activity at the Freifunk assembly; active participants are then given preference when vouchers are handed out, because given the variety of tasks involved, everyone can contribute something actively
Wiki
will be updated accordingly with clear wording.
Collection of ideas
The best comes last! Here are your ideas for what we should do in addition this year:
Hang up an analogue community map – size: 2 × A0
Print/spray/project the Freifunk logo onto a suitably large, prominent spot
Project Freifunk onto the floor from an elevated spot as a “moving logo”
Get to know Freifunk folks personally in an introduction round – “rent” the dome – loose exchange without defined topics
Present Freifunk projects / content and communicate them across all channels
Prepare projects that are explicitly worked on at the congress (working groups, workshops) so that we actually arrive at results there.
Many thanks to everyone who took part and to everyone who will keep taking part in the future.
It has been a wild ride, and this post marks the completion of GSoC 2019 for WireGuard Tunneling in BMX7. Throughout the summer a lot of effort has been put into coding, documentation and codebase restructuring; it has been an appetizer for what awaits us in the coming autumn.
During the final phase of GSoC 2019, the focus has been on recombining what had already been done. An ideal goal had been to extend the wg_tun plugin functionality with the routing of traffic between associated peers, but this remains a work in progress. Testing has taken place and quite a few SIGSEGVs have monopolized my interest. Also, research has been conducted on whether and how the cryptographic primitives used in BMX7 and WireGuard could be shared. This currently seems infeasible, due to the differences between the cryptographic keys of the two.
Here we see the successful communication between two bmx7-wg nodes and their association; of course, while testing, SIGSEGVs are out to hunt you.
The wg_tun Plugin
Our ultimate and prime goal has been the beta release of this plugin. The plugin gives BMX7 the capability to set up an iproute2 WireGuard device interface with a unique key pair that automatically looks for all available peers and establishes connections with them.
In the current approach, designed together with Axel, a unique cryptographic address is assigned to the wg interface (a concept similar to the existing one): the BMX7-WG auto-configured IPv6 address is a product of the unique prefix fd77::/16 and the first 14 bytes of the node’s SHA224 hash. Research continues to figure out the best approach (network- and cryptography-wise).
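To illustrate the derivation described above, here is a small sketch (written in JavaScript/Node for readability rather than in the plugin’s C code; the helper name and the exact hash input are assumptions, so check the plugin source for the real details):

// Sketch only: fd77::/16 prefix (2 bytes) + first 14 bytes of the node's SHA224 hash = 128-bit IPv6 address.
const crypto = require('crypto');

function wgAutoAddress(nodeKeyBytes) {          // hypothetical helper name
  const hash = crypto.createHash('sha224').update(nodeKeyBytes).digest();
  const addr = Buffer.concat([Buffer.from([0xfd, 0x77]), hash.subarray(0, 14)]);
  const groups = [];
  for (let i = 0; i < 16; i += 2) {
    groups.push(addr.readUInt16BE(i).toString(16));
  }
  return groups.join(':');                      // e.g. "fd77:1a2b:..."
}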
The past days saw the creation of the wg_status option and the addition of debug flags to inform an administrator about the state of their WG device (related commit).
At this stage, testing is being performed to analyse the behaviour of the plugin after peers associate, and to add more status information and more options for user control of the plugin.
The “misses”
Of course, when you go into the dark too optimistically, some things are bound not to work out.
In this project the misses can be summarized as: devoting more time than initially expected to studying and researching, and dropping some core goals for the sake of more important ones.
The goals we “skipped” were the Debian package and the refactoring of Mesh Linux Containers to aid with testing.
As for the studying part, I was supposed to devote only the community bonding period and part of the first phase to studying the white paper and getting up to speed with the theory behind what needed to be implemented and fiddled with. I ended up devoting time to it throughout all the phases, but I’m happy with what I’ve learned.
Further Work
Whether I intend to continue working on BMX7 is trivial to answer. GSoC has been a good experience and a good way to get started with the BMX7 codebase, and free-flowing work on it is scheduled for after the program.
This marks the end of GSoC 2019 and I am proud to announce the project has come to a mature stage. It has been a great pleasure to work with my mentors Paul Spooren and Moritz Warning.
System upgrades on OpenWrt have always been a tricky task. This application is made to make the process easier and more seamless. It is inspired by the Attended Sysupgrade app and simplifies updating to newer versions.
The sysupgrade app uses the JSON files created at build time, as proposed in openwrt/openwrt#2192 (comment) (which builds on the splitting of variables from phase 1).
Using this application:
Vanilla (default) sysupgrade images from the OpenWrt server can be flashed directly onto devices.
Installed packages can be retained by using a custom image generated via ASU (Attended SysUpgrade).
Work on the app is still in progress and has yet to be reviewed by the community, so there is still some time until it gets merged into LuCI. The PR can be found here.
Conclusion
It has been a great experience working throughout the summer. I have learnt quite a lot from it and would like to give back by being part of the community in the future too. I would like to thank my mentors and Freifunk for giving me the opportunity to work with them and for the world of open source.
Coverage report integration. Final coverage output
CI integration with Travis. The build was split into two stages: unit testing and package build.
I have written tests for the following core parts: lime.config, lime.network, lime.wireless, lime.utils. The quality of the tests varies; some are just stubs that we can improve in the future, but some are good.
Tests for packages: firstbootwizard has been improved in order to support unit testing, and a first simple test is in place. To write more tests, more changes to FBW are needed.
Integration tests: lime-config with device support
iwinfo fake library, with many helper functions to easily fake a device and a connected station, etc.
Uci testing environment helpers
Device support: simple device support was implemented. For the moment this needs the /etc/board.json of the device and the /etc/config/network and /etc/config/wireless files that are generated by OpenWrt on the first boot. With these files a testing environment is created using uci and iwinfo, so a device is emulated for the tests. Using this infrastructure, a lime-config test was implemented. For the moment only the LibreRouter-v1 device is supported, but it is very easy to add more devices.
Here is a reference PR with all the work I did for this GSoC. In order to have this work merged, I created many small PRs in the LibreMesh repository: #562, #563, #564, #565, #566, #567 and #568. Some of the work has not yet been submitted as PRs to LibreMesh, so as not to overwhelm the reviewers.
Future work
Add more devices.
Discuss whether an integration test that uses lime-defaults and lime-defaults-factory with a device and checks that the result is what is expected would be helpful, and if it is, write these tests.
Provide a way to test packages that use the ubus library.
Explore how to use this testing environment in other OpenWrt Lua packages outside of LibreMesh. Even C code could be tested easily through automatic Lua bindings.
Lessons learned
Unit testing framework
After working with Busted I think it has been an excellent choice as a unit testing framework. It is very well documented, very powerful and at the same time easy to use. I used it to write very different kinds of tests and never found anything missing. Mocks and stubs are good and the asserts are powerful.
At first my idea was to create a fake uci library because I thought this would be easy and at the same time very handy for the tests. I quickly implemented a fake library, but it did not behave the same as the real uci library in many corner cases. I realized that making it behave exactly the same would take a lot of work, and that if it did not, it would be very annoying because the tests would behave differently than production. And that is a very bad idea.
So I decided to use the real UCI library and create a clean environment for each test with helper functions. This was very easy to do, as UCI provides a way to change the config environment.
Docker image
A side effect of basing the testing Docker image on Alpine Linux is that it is ABI compatible with OpenWrt x86_64 packages, because both use the musl C library. This allows us to easily use some OpenWrt libraries like luci.ip, uci, etc. directly from public OpenWrt packages. This keeps the testing maintenance effort low, as we do not have to build these libraries ourselves.
Lua is powerful
Coming from a Python background I thought I would miss many things, but from a language perspective that was not the case!
GSoC 2019 is drawing to a close, and with it the first part of the HTTP API. To be completely honest, this is not where I’d hoped to be at this stage, but we’ve set up a solid foundation for future work to build the rest of the API.
A Framework
I wrote about choosing a web framework a while back. The choice I arrived at, Iron, was in retrospect not the best choice. I chose it because I liked its middleware model, but it has only recently been renewed and most of its ecosystem hasn’t been updated in three years. This meant writing a lot of new middleware for the API. Additionally the only existing Iron testing framework was difficult to use and relied on fragile string manipulation to generate Request objects. I wrote a new testing framework called Anneal which uses hyper to generate Request objects and follows a builder pattern to simplify testing.
A Service
The HTTP API is designed to operate as an independent service within a Qaul instance. An instance may disable the API, or never include it at all, and things should still work. Currently the mechanism by which services communicate is still a work in progress, but a big part of the API work was trying to pull as much of the boilerplate code used for validating incoming requests into the http-api service as possible.
Authentication is handled by the API, parsing of JSON:API requests (using the json-api crate I talked about in a previous post) is handled by the API, and scoping of requests is handled by the API. The benefits of this model will become more apparent as we start to give services HTTP APIs.
A Plan
While my work on Qaul under GSoC is coming to a close, I fully intend to complete my work on the HTTP API. I have a branch for adding unit tests to the API waiting on my user-creation merge request, and I have been experimenting with implementing an inter-service messaging system.
I added an http-api service, built login and logout endpoints, built a hot-pluggable mount middleware for use in mounting services, and implemented middleware for dealing with cookies and authenticating with them. I have written extensive error messages and documentation for all of these components, and hopefully the foundation they create will allow future development of the API to proceed with ease.
GSoC 2019 is coming to an end, so this is unfortunately my last blog post on freifunk. I would like to start by thanking the Freifunk community, Google and, in particular, my mentors for the opportunity to participate in this rather special program. In this post I will present what has been done, what has not worked out and what is still to be done.
The aim of my project, as I mentioned in my first post, is to build a mobile application oriented around chat, with a big focus on a modern look. The new logo, which is also part of this project, is intended to reflect the new direction of this software.
Logo
Below you can see the new Retroshare logo, which is the main logo for this application; in other projects its use will depend on acceptance by users and the profile of the project.
App
I posted my proposed design in this post. Now it’s time to show off the final design, but it is difficult to describe the appearance of the application and it makes little sense to paste several screens here. Because of this I recorded a walkthrough of the application. You can check it out here:
Illustrations for the empty screens are provided by Icons8. Thanks to them!
The application was written entirely in Dart using Flutter. This should ensure relatively readable code, an optional ability to port the code to iOS, and relatively fast application performance.
Features
Functionalities that are now available in the application:
add a friend via certificate and share ours,
create (with avatar), change and delete our identities,
see friends’ locations,
create public rooms (lobbies),
send and receive messages,
add and remove contacts,
search for chats, contacts and people,
discover public rooms,
see room participants,
sign in and sign up.
Roadmap
There are still many interesting things to do in this project and for this reason I intend to continue my work. In particular, I would like to focus on these features:
Make good use of and improve Retroshare’s JSON API event system. This will enable the app to have message notifications and will optimize its performance.
Bundle the backend and frontend into one app. The current system is confusing and leaves much to be desired; who wants to manage a process themselves that runs in the background anyway?
Explore the possibility of adding a Tor option, as in the Retroshare desktop app.
Improve the chat backend, especially the much-needed storage of history.
After Retroshare’s short certificates are merged, add QR codes as the default way to add friends. This will involve redesigning the way it works now.
At this point, the user still has to be aware of how Retroshare works under the hood. Future improvements have to be made so that the user only has to deal with identities. For example, right now, after adding a friend we still have to find their identity in the search box to add them as a contact, and only after those steps can we message them.
The website is already a bit outdated and could use a new look. As soon as the application is ready, I would like to refresh the website.
As you can see in the video, room names show up as ‘Error’, which of course is not the actual name of the room. Due to the lack of native 64-bit integer support in Dart, lobby ids can’t be loaded, and neither can the names. Thanks to my mentor, Gio, a solution is already in a PR, and after the merge some minor changes still have to be made on the frontend side.
Conclusion
During this program I tried to build a very good application base that can be developed further, which I hope will make future improvements easy. Unfortunately, during this period I was not able to meet all the milestones; specifically, the chat backend has not been improved. In spite of everything, I intend, according to my roadmap, to improve the chat backend as well as add more functions to the app so that it can be considered a production version.
I encourage everyone, especially current Retroshare users, to test the new app. I hope you like it.
Once again, I would like to thank my mentors for their help in recent months.
During this coding phase I added OpenWrt Makefiles to package conTest and the attenuator control software. In addition, some documentation was added, but most of the time I spent chasing down the errors mentioned in the last blog post.
Figure 1 and Figure 2
The error seems to be that a certain attenuation value gets repeated while the config says something different. After some long error-hunting sessions, RegMon [link] was added to the testbed to retrieve more information about the connection and, hopefully, the error itself. RegMon allows monitoring the time consumption of ath9k wireless cards, and the error does not show up there. While in figure 1 the attenuation for signal 1 appears to be repeated, the RegMon diagram (figure 2) does not show this behaviour. In figure 2 you can see the time a wireless card spends sending (red) or receiving (blue) data, idling (yellowish) or dealing with interference (pink). If the connection is attenuated, the time needed for sending/receiving data increases, as a more robust MCS, spatial stream and guard interval combination is selected. Based on figure 1, I would expect the cards to show a higher busy-time share in the problematic section than figure 2 does.
Unfortunately an update broke the RegMon evaluation scripts, so I started to port them from R to Python 3 to use them in future debugging sessions. Figure 3 shows console output from the control program. The attenuation values shown are read back from the digital attenuator shortly after the value was set by the software. These values are as expected and show the correct behaviour. The difference between the attenuation values on signal 0 and signal 1 originates from an additional analog attenuator.
Figure 3
Interestingly, after several tests with linearly increasing/decreasing attenuation, the behaviour in figure 1 could not be reproduced.
I started to dig into the driver software of the Vaunix Labbrick attenuator, but have not found problematic code so far. The next step should be to make the RegMon evaluation script usable again. After that I will continue to look into the driver of the digital attenuator and, in the worst case, dive into tcpdump to see how it acquires the signal strength values.
The GSoC program is about to finish, and this will be my last GSoC-related blog post for Retroshare’s new web interface.
I will use this post to provide an overview of how the app works, all the work done during this period, its features, completed milestones, what couldn’t be completed, and the future roadmap. I will also explain and document the code structure in the hope that potential contributors will find it easy to get started.
The purpose is very simple: a web app that can be used to manage your Retroshare node, interact with friend nodes, and make use of Retroshare’s features; in other words, an alternative to the Qt-based interface of the client app.
This is made possible through the JSON API provided by Retroshare, which allows everyone to utilize the power of Retroshare’s technology to create their own services, interfaces, or even build apps on top of Retroshare.
The web interface itself works in a pretty straightforward manner, making use of modern browsers to act as a front end for the Retroshare platform and its services. It is made using JavaScript, and the only external library used is Mithril, a very fast and lightweight framework for building single-page web applications.
Build process
If you look at the source code, you can see that it is built using qmake; the config file webui.pro executes the build scripts in webui-src/make-src.
The build scripts in webui-src/make-src (most notably build.sh) iterate over all files from the source directory (webui-src/app), copying files into their respective destinations.
All JavaScript files are compiled into app.js and CSS files into app.css; these compiled files are put into the destination directory, webui. The build scripts also copy all the static files from webui-src/assets over to the destination directory, maintaining their directory hierarchy. Static files are the ones that do not require any modification in order to be used, like the HTML, font files, some CSS styles, and so on.
Another important aspect of the build process is how it compiles all the JavaScript files into a single file. Since CSS is simply a set of rules without any structure, the output file can be built simply by appending all the source .css files together; JavaScript, however, doesn’t work that way:
You may have noticed another file in make-src called template.js. This file is used to create an entry point for the JavaScript files. It can be thought of as a kind of polyfill for require. What it essentially does is take all the .js source files and store them as objects in such a way that they are isolated from each other, and then enable interaction between them through exported objects.
To make a module’s components public, we have to refer to them in the module.exports object, and we can use them inside other modules by importing them with the require() method. The module.exports object is the only data that can be accessed outside of the module.
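As a minimal illustration of this pattern (the module and function names here are made up for the example, not taken from the actual source):

// greeter.js -- a hypothetical module exposing one function
module.exports = {
  greet: function (name) {
    return 'Hello, ' + name + '!';
  },
};

// any other module can then import it by file name through the require() polyfill
const greeter = require('greeter');
console.log(greeter.greet('Retroshare'));   // -> "Hello, Retroshare!"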
Structure
Now that we know how require() and module.exports work, we can look into how the source code functions:
The source files are all in webui-src/app. I have tried to implement a structure loosely based on the MVC design pattern. Aside from separating data and views as objects and components, it makes intuitive use of Mithril components and routing concepts.
Each folder contains the views and models for a single tab. All tabs have their own route resolver that takes in the route parameters and resolves them to return the correct views for rendering.
The entry point of all tabs happens in the resolver file, which also defines the layout of that tab. I will explain layout types shortly.
Note how the file names include their respective tab names too. This is not just a convention: our require polyfill does not yet have a concept of directory structure, so any file in any directory may be imported using only its name. This causes issues when accessing files with the same name, which is why the tab name is made part of the file name. It is important to have unique names for all files.
The main.js file contains Mithril’s m.route, which defines the routing table and enables all navigation in the app. It detects whether the login keys have been verified and, if not, reroutes to the login page using the onSuccess() callback.
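Schematically, such a routing table might look roughly like this (the routes and resolver names below are illustrative, not the actual contents of main.js):

const m = require('mithril');
const home = require('home_resolver');       // hypothetical tab resolvers
const config = require('config_resolver');

m.route(document.getElementById('main'), '/home', {
  '/home': home,                              // default tab
  '/config/:section': config,                 // the resolver receives :section as a route parameter
});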
The rswebui.js file contains methods that act as the bridge between the web interface and the Retroshare client, mainly abstracting the API calls and managing async background tasks.
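A minimal sketch of what such a bridge helper could look like, assuming the Retroshare JSON API listens on a local port and using an illustrative endpoint name (the real method names live in rswebui.js and the API documentation; authentication headers are omitted here):

// Illustrative helper in the spirit of rswebui.js; port, endpoint and error handling are assumptions.
const API_URL = 'http://127.0.0.1:9092';      // assumed local JSON API port

async function rsApiCall(path, body = {}) {
  const response = await fetch(API_URL + path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error('API call failed: ' + response.status);
  }
  return response.json();
}

// Hypothetical usage: fetch the list of friend locations to render in a tab.
rsApiCall('/rsPeers/getFriendList').then(console.log);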
In a previous post, I mentioned that I did a lot of reading on UI and UX design, highlighting how it shaped the look and feel of the web interface. After learning the importance of consistency when studying interaction design, I set out to make the interface more consistent.
In a nutshell, consistency refers to having uniformity in the UI, a form of repetition such that an action becomes predictable and intuitive to the user.
It can easily be achieved by having a predefined set of rules on how the UI should behave when interacted with, and the best way to do this is to make a set of reusable components. And since this is about the UI and visuals, it has more to do with CSS than JavaScript. Most of the layout rules are defined in theme.css.
The CSS class that houses all other widgets is the tab-page class. It’s the one containing all the elements under the navbar. All top level tab layouts use this and extend upon it. It can hold both full and half-width widgets, and position them according to the space taken by each.
The default blank layout created by the tab-page class.
The next is the sidebar class, which defines the sidebar on the left of some tabs, allowing the user to choose subsections within the tab. It must be used when there are multiple sections that are logically grouped inside the same tab. Since this is a very commonly used widget, I have put it inside widgets.js, a file containing a collection of the most used components, so that it is easily available everywhere. It takes two parameters: the list of sections and the base route link of the tab. Note that a section’s link must be the same as its name for it to be resolved properly:
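Purely as an illustration of those two parameters (the real signature is defined in widgets.js and may differ):

// Hypothetical call: a list of section names plus the tab's base route.
// Each section's link must equal its name so the route resolver can find it.
const sections = ['friends', 'identities', 'contacts'];
widgets.sidebar(sections, '/people/');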
The widget class is used as a preliminary frame for displaying small groups of input types together. For consistency, a directly interactable input must never be shown directly inside tab-page, but must sit inside at least one widget frame. For additional uniformity, I have been using an <h3> followed by an <hr> tag as the first elements, to label and categorize a frame’s contents.
A widget being used to create the interface for adding certificates.
The progressbar widget is a combination of a <span> tag relatively placed inside a <div> tag using the inline-block display attribute. To create the progressbar in Mithril, just use:
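(The exact snippet is not reproduced here; a reconstruction of the idea could look like this, with class names and the rate variable being illustrative.)

const rate = 42;                               // hypothetical progress percentage
m('.progressbar', [
  m('span.progress', { style: { width: rate + '%' } }, rate + '%'),
]);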
A modal or popup box can be used to display content triggered by a user’s action, or to show information that requires the user’s immediate attention. This is also present in the widgets.js file. It is made as a Mithril component, so it can be used normally with the m() selector. It also takes other Mithril components as attributes, allowing it to display any given HTML tag.
widgets.popupMessage([/* Array of components to render */]);
The popup view used in files tab.
Creating custom input types making use of the <input> tag is incredibly easy in mithril, but may initially be confusing to programmers used to vanilla JavaScript for event handling. Normally, to create a JS-controlled input field, you would do something like:
let text = document.getElementById('input').value;
But mithril components can be controlled very easily by making use of the onchange and oninput event handlers:
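(The original snippet is not shown here; a minimal sketch of the idea, with illustrative selector and variable names:)

let text = '';
m('input[type=text]', {
  // e.target.value holds whatever the user has typed so far
  oninput: function (e) { text = e.target.value; },
});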
And text gets automatically updated with any value the user enters. Use onchange to get the value after all the text is entered, and oninput for finer control, as it fires every time a key is pressed. This method can be used with any input type like text, number, radio, checkbox, etc., and can be made to run any function, allowing for huge flexibility and control.
Features
All the features and milestones that were successfully completed:
Get your certificate, add new friends by copying in their certificates.
View, manage your identities and get info about friend identities.
View and manage all your friend nodes and each of their locations, and basic info about them.
Get info about your upload/download files and manage them, add new downloads through links.
Check all your mails.
View info about subscribed chat lobbies and publicly available lobbies.
Change various configuration options of your Retroshare node such as network limits, file locations, default behavior, and such.
Roadmap
Retroshare has a huge number of features, and unfortunately this period wasn’t enough for me to cover all of them in the web interface. I plan to implement the incomplete tabs and then extend the app with new functionality:
Turtle search: As my mentor Cyril told me, this feature is very important since it makes it very easy to find and download new files, and it is one of the features that make use of stream data from the API. Getting stream data has been a problem due to CORS restrictions in browsers, which is why this feature couldn’t be finished. I am constantly looking for a viable solution and will finish implementing this as soon as I find one.
Sending mails: The web interface can only read mails for now, and it would be very nice to be able to send mails too.
Forums: I have already started work on the forums tab, and will finish it soon. This will allow users to interact with and manage forums entirely from inside the web interface.
Channels: Similarly, I am also working on channels. Another nice feature to have on the Web UI.
Build Process: As shown above, the current build process is very bare-bones, and the require polyfill has no concept of directory hierarchy. This will eventually cause issues as the app grows. We need to upgrade the build tools or find new ones. I think the most important point to keep in mind when choosing new tools is that the user should not have to install any additional dependencies.
That’s about it. I encourage everyone to try out the app; the web interface is very easy to install. There are even simple installation instructions on the source page! Feel free to get in touch if you have any suggestions or queries. You can generally find me lurking in the developer forums in Retroshare.
Many thanks to Google, and the amazing Freifunk community, especially my mentors, for giving me this opportunity. This has been a wonderful time for me, I learned a lot of new things that would help me contribute more towards free and open software.