GSoC 2024: New release for the LibreMesh Pirania project

Hi!

I am very happy to be part of this project. The Pirania captive portal solves a well-known problem in community networks: managing vouchers and access to the internet and to local services. As its README says:

It could be used in a community that wants to share an Internet connection where the users each pay a fraction, but the payment is needed from everyone. The vouchers allow controlling the payments by controlling access to the Internet.

My name is Henrique and I'm currently working as a substitute teacher. My background is in system administration and computer networks, so developing this project will be really challenging, but I feel comfortable taking it on.

I'm also part of Coolab, a collaborative laboratory that fosters community networks in Brazil.

Context

This project aims to develop the new release of Pirania, a LibreMesh package that enables community networks to set up a captive portal and control/share internet access in a sustainable way. Currently, Pirania is only supported on OpenWrt 19.07.

The following are objectives of this project:

  • Migrate from iptables to nftables
  • Include the Pirania package in the OpenWrt repository
  • Make the changes necessary to work with DSA on newer routers

The use of captive portals in communities enables vouchers and parental control; for example, it's possible to disable access to social networks during the night. Since community networks can have multiple gateways to the internet, information about current vouchers needs to be shared between them. This problem is solved by the shared-state package. Below is an illustration of a home user setup and a community network setup:

Regular internet access

Community network internet access with multiple gateways

Acknowledgment

I would like to thank Hiure and Ilario for being my mentors on this project. LibreMesh is an awesome project that enables non-technical people to deploy a mesh network in a matter of seconds.

Conclusion

I've never developed or upgraded a package before; I'm more into system administration, so it will be really challenging. 🙂

Thanks for reading and see you in the next post! Happy coding!

GSoC 2024: eBPF performance optimizations for a new OpenWrt Firewall

Introduction

Hello everybody! My name is Til, and I am currently a Master’s engineering student at the University of Applied Sciences Nordhausen. For the past year, I focused on Linux network programming, especially eBPF, which brings me to my GSoC 2024 topic.

Over the years, the network stack has evolved to support new protocols and features. The side effect is that this introduces overhead and might hinder network throughput, especially on CPU-limited devices. One solution to overcome this is eBPF.

eBPF and its hooks

eBPF (extended Berkeley Packet Filter) is a technology within the Linux kernel that allows dynamically re-programming the kernel without recompiling or restarting it. Developers can write programs in C, compile them to BPF objects, and attach them to several so-called hooks inside the Linux kernel. Two hooks can be used to redirect incoming packets to other NICs inside the Linux network stack: XDP and TC.

The XDP (eXpress Data Path) hook is located even before the network stack itself; its programs are attached to the driver of the NIC. That should make it the fastest eBPF hook throughput-wise for redirecting packets.
But there is a catch: NIC drivers must support this so-called "XDP native" mode. Otherwise, you need to attach programs through the so-called "XDP generic" mode inside the network stack, which is significantly slower than native mode, as you will see later in this post.

The TC (Traffic Control) eBPF hook is already located inside the Linux network stack but is driver-independent, so only the Linux kernel needs to support it, i.e., be compiled with the respective symbol enabled.

My GSoC 2024 project

My GSoC 2024 topic/goal is to introduce a new firewall software offloading variant to OpenWrt using eBPF. The idea is to intercept an incoming packet from the NIC as early as possible inside or even before the network stack through XDP or TC. Then, for the actual firewall part, apply possible NAT (Network Address Translation) and drop or redirect the intercepted packet to another network interface. That saves some CPU cycles and hopefully increases the network throughput on CPU-limited devices.

There are mainly three parts that need to be designed and implemented:

  • The eBPF program which intercepts, modifies, and redirects or drops network packets (a minimal sketch follows below this list)
  • A user-space program that attaches the eBPF program to the NIC(s) and checks the actual firewall rules
  • A BPF map used for the communication between the BPF and the user-space program
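As referenced in the first list item, here is a minimal XDP sketch in C. It is an illustration only, not the project's code: it parses the Ethernet header, performs the bounds check the BPF verifier requires, and passes every packet up the stack. The actual firewall program would additionally consult a BPF map, apply NAT, and redirect or drop the packet.

// Minimal XDP sketch (illustration only, not the project's implementation).
// Build e.g. with: clang -O2 -g -target bpf -c xdp_sketch.c -o xdp_sketch.o
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_fw_sketch(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    struct ethhdr *eth = data;

    /* Bounds check required by the BPF verifier. */
    if ((void *)(eth + 1) > data_end)
        return XDP_DROP;

    /* A real firewall offloading program would look up the connection in a
     * BPF map, apply NAT, and return bpf_redirect(ifindex, 0) or XDP_DROP. */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";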

The caveat of XDP generic

As mentioned, you can use the XDP generic mode if your NIC doesn't support XDP native. Theoretically, the XDP generic mode should be faster than TC because the XDP generic hook still comes before the TC hook. But there is a problem: for XDP generic, the Linux kernel has already allocated the SKB for the packet, and XDP programs expect a packet headroom of 256 bytes. That means if the pre-allocated SKB doesn't have 256 bytes of headroom, it gets expanded, which involves copy operations that effectively negate the performance gain.
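For reference, the corresponding check in the kernel's generic XDP path looks roughly like this (paraphrased and shortened from net/core/dev.c, not the exact upstream code): if the SKB is cloned, non-linear, or has less than XDP_PACKET_HEADROOM (256) bytes of headroom, it is expanded before the BPF program runs, and that expansion is the expensive copy.

/* Paraphrased sketch of the generic-XDP headroom check (simplified). */
if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
        int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);

        /* pskb_expand_head() copies the packet into a larger buffer; this
         * is the copy operation visible at the top of the perf report below. */
        if (pskb_expand_head(skb, hroom > 0 ? hroom : 0, 0, GFP_ATOMIC))
                goto do_drop;
}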

I created a patch that, rather than keeping this packet headroom constant, turns it into a Linux kernel variable exposed through the sysfs interface to user space. It still defaults to 256 bytes, but the user can explicitly lower the XDP generic packet headroom according to their requirements.

The following listing presents the head of a report generated by the Linux tool perf after running an iperf3 TCP test for 30 seconds through a MIPS router, first with an XDP generic packet headroom of 256 bytes and then with 32 bytes. It shows how many CPU cycles the copy operation wastes.

With a 256-byte headroom:

26.91% ksoftirqd/0  __raw_copy_to_user
3.73%  ksoftirqd/0  __netif_receive_skb_core.constprop.0
3.04%  ksoftirqd/0  bpf_prog_3b0d72111862cc6a_ipv4_forward_func
2.52%  ksoftirqd/0  __kmem_cache_alloc_node
2.32%  ksoftirqd/0  do_xdp_generic
2.06%  ksoftirqd/0  __qdisc_run
1.99%  ksoftirqd/0  bpf_prog_run_generic_xdp

With a 32-byte headroom:

5.70%  ksoftirqd/0  bpf_prog_3b0d72111862cc6a_ipv4_forward_func
5.23%  ksoftirqd/0  __netif_receive_skb_core.constprop.0
3.68%  ksoftirqd/0  do_xdp_generic
3.02%  ksoftirqd/0  __qdisc_run
3.00%  ksoftirqd/0  bpf_prog_run_generic_xdp

I will tidy up that patch and try to submit it to the upstream Linux kernel. Others have tried to fix the XDP generic performance by reducing the packet headroom constant, but those patches were never accepted, so I hope my different approach has more success.

What to expect throughput-wise

To get an impression of the performance potential of eBPF, I created a small BPF program that forwards all incoming IPv4 packets. To test it, I used an AVM FRITZ!Box 7360 v2 running OpenWrt with Linux kernel 6.6.30, whose CPU limits the throughput of its Gigabit ports. I then grabbed a PC with two network ports, connected each to one port of the FritzBox, and created two network namespaces on the PC to force the traffic through the FritzBox. I used iperf3 to generate TCP traffic for 60 seconds for each tested setting; you can find the results in the following plot:

The settings/parts are the following:

  • off: OpenWrt’s Firewall stopped (/etc/init.d/firewall stop)
  • on: OpenWrt’s Firewall started but without any offloading enabled
  • sw_flow: Netfilter’s software offloading enabled (flow_offloading)
  • hw_flow: Netfilter’s hardware offloading enabled (flow_offloading_hw)
  • xdp256: The eBPF IPv4 forwarding program attached to the XDP generic hook with the default packet headroom of 256 bytes
  • xdp32: The eBPF IPv4 forwarding program attached to the XDP generic hook with a custom packet headroom of 32 bytes, allowed by my patch
  • tc: The eBPF IPv4 forwarding program attached to the TC hook

Unfortunately, I couldn’t test XDP native yet because I don’t have any hardware around whose driver supports XDP.

As you can see, and as I already mentioned, there is no performance gain from XDP generic mode with the default 256-byte packet headroom due to the SKB re-allocation. With the patched XDP generic and with TC, in contrast, the network throughput roughly doubled compared to OpenWrt's firewall without any offloading.

Compared to Netfilter’s offloading implementations, there is also a performance gain, but admittedly only a small one. When we look at the Linux kernel source code here, this becomes plausible because the TC hook is located right before the Netfilter ingress hook. The XDP generic hook comes earlier than those two, even before taps (e.g., where tcpdump could listen).

So what’s next?

These are the upcoming milestones for the project:

  • Creation of a user-space program for loading and communicating with the eBPF program
  • Extending the user-space program with a firewall NAT/rule parser
  • Extending the eBPF program to apply the parsed firewall configuration on incoming packages
  • Evaluating the forwarding and dropping performance of the implementation

You can find the current eBPF IPv4 packet forwarder, the XDP generic headroom patch, and the measurement and plotting script in my GitHub repository here: https://github.com/tk154/GSoC2024_eBPF-Firewall.

I will also upload the source code of the firewall there, which I am excited to implement in the upcoming weeks. I think the first small forwarding performance test already shows the potential of eBPF as a new offloading variant, and I haven’t even tested XDP native yet.

If you have any questions, don’t hesitate to ask them, and thank you for reading my first post.

GSoC 2024: LibreMesh Cable Purpose Autodetection

Hi everyone! I’m Nemael. I’m a Software Developer with a few years of experience. I recently moved to Toronto, where I’m looking for a Software Development job.

During this summer, I will be participating in the Google Summer of Code 2024, where I will be working on Cable Purpose Autodetection software for LibreMesh. For this project, I will create special configurations that routers can automatically deploy if they detect specific recognizable scenarios.

This first blog post is meant to provide details to understand the project and its implementation.

LibreMesh?

LibreMesh is a modular framework for creating OpenWrt-based firmware for wireless mesh nodes. As it supports most OpenWrt-supported devices, you can follow the documentation to build the firmware and configure routers to use them as part of a mesh network. Many communities around the world already use LibreMesh and mesh networks as their network of choice.

There is also a list of available community-specific configurations on LibreMesh’s git. These configurations contain information about how to configure your router (and your network) to join the community mesh network.

Motivations

In some scenarios, routers running LibreMesh benefit greatly from having specific configurations applied, which are not yet natively integrated. As of now, an expert needs to step in and set up these configurations manually, which is not always an option for communities.

For this project, I will take a look at scenarios that would benefit from automatically applied configurations. I will focus on Ethernet ports and WAN networks. As some of these scenarios can be recognized automatically at runtime, the idea is to prepare configurations ahead of time and apply them (either automatically or after pressing a hypothetical "configure router" button) when one of these scenarios is recognized. Here are a few scenarios that I will be working on:

  1. A router is connected to the internet (most routers use the WAN port for this).
  2. A router running LibreMesh is connected to another router of the same network using an Ethernet port, to extend coverage.
  3. A router is connected to a dumb wifi device using Ethernet for a point-to-point link. This occurs, for example, when a router with its original firmware is employed for wifi links and connected to another router running LibreMesh, which manages the routing. This scenario is very uncommon, so it might not be worth auto-detecting.
  4. A wired client is connected via Ethernet to a router running LibreMesh.

More scenarios might come up, and some might be deemed too uncommon to warrant an auto-configuration.

Methodology

I will first work in a virtualized setting, using documentation from the VIRTUALIZING.md file available on LibreMesh's git. Once the project has progressed far enough to warrant testing on actual hardware, I will either buy a few routers or work with Toronto's hackerspace to set up LibreMesh on some of their routers.

I forked the LibreMesh project to work on my own side, and once my work is ready and mergeable, I will open a PR which will be ~hopefully~ integrated into LibreMesh's codebase.

Concluding thoughts

As I am not very experienced in network management, I took a lot of time before the coding period started to read up on LibreMesh's documentation, set up a development environment, and discuss with my mentors what the project will look like. I am not quite finished with that yet, but things are moving forward quickly! I am very excited about the advancement of this project and how it will fit within LibreMesh 😀


Thank you, Ilario, for your help writing the details in this post and for your information about the scenarios where auto-configuration would be useful.

See you next update!

GSoC 2024 Qaul: Qaul BLE module for Linux

Abstract

The project aims to create a qaul BLE (Bluetooth Low Energy) module in Rust for Linux that is compatible with the BLE modules for Android and iOS. The module should provide the following functionalities:

  1. BLE discovery and connections: The module should be able to scan for other discoverable devices and send BLE advertisements to nearby nodes at regular intervals.
  2. Establish data communication: Established connections should be able to exchange messages between qaul nodes.

Plan Of Action

This project aims to establish a BLE connection between Linux and other compatible devices.

BLE GATT Service

  • GATT Server: A GATT server corresponds to an ATT server. It receives requests from clients and sends responses back.
  • GATT Client: A GATT client corresponds to an ATT client. It sends requests to the server and receives responses. After discovering the server's service attributes, it can start reading and writing the attributes found on the server, based on the permissions the server provides.

GATT data hierarchy.

Implementation

  • Initiate the BLE GATT server:
    • Initialize the Bluetooth session and activate the Bluetooth adapter.
    • Initiate the local BLE GATT server using the bluer crate and set up the Bluetooth GATT services and characteristics, which become available to remote devices.
  • Initiate the BLE GATT client:
    • Initiate advertisement to look for nearby qaul nodes with the specified service UUID and characteristic UUID.
    • Connect to discovered devices and send RPC messages to libqaul.
  • Establish communication between devices:
    • On receiving a message.DirectSend request from libqaul, write the data into characteristic.value if service.UUID() and characteristic.UUID() are compatible with other qaul devices.
    • Create a message listener and forward the characteristic value passed in the event to libqaul, to be serialized as protobufs and displayed to the user.
Qaul BLE communication architecture

Deliverables:

  • Create a stable, working ble_module for Linux, including fixing and testing the current prototype.
  • Establish communication between the ble_module and libqaul.
  • Make sure the Linux Bluetooth code is interoperable with the Android and iOS Bluetooth modules.
  • Optimize the Bluetooth module for better stability and connections.

Concluding Thoughts:

I have always been inspired by the idea of a world built on free and decentralized architecture. When I read the project’s description, I felt motivated to join a community dedicated to achieving borderless, decentralized communication that rivals established centralized applications.

I would like to thank my mentors, Mathias Jud and Brenodt, for guiding me through the project and helping me understand the internal workings of the applications. I am eager to contribute to the project with great enthusiasm.

Our Google Summer of Code 2024 Projects

We are thrilled to announce that freifunk.net has been accepted to participate in the Google Summer of Code 2024! This year, we're again not just going solo; we're in great company alongside qaul.net, LibreMesh, Retroshare, and OpenWrt. Together, we are set to push the boundaries of open-source technology with a series of exciting projects.

How Does Google Summer of Code work?

Google Summer of Code is an international program that invites students and contributors to engage in open source software development. Each project has a predetermined size, with contributors dedicating between 95 and 350 hours to complete their tasks. The duration of the projects can vary, ranging from 8 to 22 weeks, though the typical period is set at 12 weeks. This structured timeline ensures that contributors have ample time to design, develop, and refine their software, while also fostering mentorship and growth within the open source community.

Our Diverse Project Lineup

This year’s lineup features innovative projects proposed by a talented group of contributors. These projects aim to enhance community networks and develop advanced tools for networks and applications. Here’s a quick introduction to the contributors and their projects:

Each contributor brings unique ideas and expertise, driving forward our mission to foster a vibrant community of open-source enthusiasts.

Follow Our Journey

These projects showcase our commitment to promoting open-source values and enhancing the functionality of community networks worldwide. We invite you to follow our progress as these projects unfold over the summer. Detailed descriptions of each project will be provided soon, allowing you to explore the innovative work our contributors are planning. If you are curious about what we have achieved in the past, these blog posts will give you a glimpse into our previous successes.

Stay tuned for updates, and join us in supporting these innovative endeavors in what promises to be a groundbreaking summer of coding!

GSoC ’23: Final Report on Joint Power and Rate Control in Userspace

Hello, everyone! In this concluding blog post, I’m excited to highlight the achievements in the field of Joint Power and Rate Control in User Space, and also offer insights into the future of this research and development endeavor. If this is your first encounter with my work, I strongly encourage you to explore the introductory blog posts from GSoC ’22 and GSoC ’23, as they provide a comprehensive overview of resource allocation in IEEE 802.11 networks.

GSoC ’22 blog posts: Introduction Mid-Term Final

GSoC ’23 blog posts: Introduction Mid-Term

Passive-Minstrel-HT in user space

After the first half of the GSoC '23 coding timeline, I made some more modifications to the passive user space Minstrel-HT so that it is more robust and also compatible with the new WPCA API. In my mid-term report, I conducted passive measurements on a link between a BananaPi with an MT7615 chip as the Access Point (AP) and a Xiaomi Redmi 4A Gigabit Edition with MT7621 as the Station (STA). However, it's worth noting that this setup did not support aggregation in the transmission frames.

Since the kernel Minstrel-HT also considers the real-time Aggregate MAC Protocol Data Unit (AMPDU) length to calculate the estimated throughput, it was crucial to test the behaviour of the user space Minstrel-HT (Py-Minstrel-HT) against its kernel counterpart in an aggregation context. To do this, the experiment setup was changed to two identical TP-Link WDR4900 routers, both equipped with ATH9K chips. One router operated as an Access Point (AP), and the other served as a Station (STA).

Initially, the rate selection between kernel Minstrel-HT and Py-Minstrel-HT for the pure ATH9k link with frame aggregation showed a significantly higher disparity compared to previous experiments on the MT76 link, where errors were consistently below 2%. Upon closer examination, it became evident that during the refactoring of Python-WiFi-Manager and Py-Minstrel-HT, there had been a change in how AMPDU length was calculated, resulting in an incorrect calculation.

Furthermore, upon observing a consistently higher error rate associated with the maximum probability rate annotated at the end of the Multi-Rate Retry chain (MRR), I discovered a peculiarity in the kernel Minstrel-HT code. Specifically, when encountering initial attempt statistics for a rate that wasn’t used, the code assigned a previous average success probability of 0. This behavior occurred exclusively in situations where data rates had inherited their success probabilities from higher rates within the same group. It remains uncertain whether this aspect of the code logic was intentional or not.

To enhance the robustness of the measurements, I made adjustments to the execution of Passive Py-Minstrel-HT. Now, before factoring in transmission data, it follows a new sequence. Initially, it explicitly sets the lowest supported data rate, then resets the kernel rate statistics, and subsequently waits until the effect of the kernel rate statistics becomes evident in the API output trace. This revised approach ensures strict alignment between the transmission statistics in user space and those in kernel space right from the outset. As a result of these modifications, the rate selection between the kernel and user space Minstrel-HT is now identical.

In the following section, I present two comparison experiments conducted between kernel Minstrel-HT and Py-Minstrel-HT. The results are presented in terms of the number of errors, discrepancies in rate selection between the kernel and user space Minstrel-HT, and the percentage of errors observed at each Multi-Rate Retry (MRR) stage. These MRR stages, numbered from 0 to 3, are populated with rates ranked in descending order of their estimated throughput, with the final MRR stage reserved for the maximum probability rate to enhance robustness.

Experiment 1

MRR Setting info:
0 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
1 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
2 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
3 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}
4 {'correct_instances': 19997, 'incorrect_instances': 0, 'percent_error': 0.0}

Experiment 2

MRR Setting info:
0 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
1 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
2 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
3 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}
4 {'correct_instances': 19981, 'incorrect_instances': 0, 'percent_error': 0.0}

Given the congruent behavior observed between the user space and kernel space Minstrel-HT in terms of statistics collection and rate selection, this sets a robust foundation for expanding the capabilities of the user space Minstrel-HT with power control. This extension will allow for a performance comparison between the user space Minstrel-HT and kernel Minstrel-HT, offering valuable insights into their respective impacts.

Extending Minstrel-HT with Power Control (Joint Controller)

In this section, I will detail the implemented joint power and rate controller, which draws its inspiration from Minstrel-Blues, a framework originally developed by Prof. Thomas Hühn in 2011. The joint controller has demonstrated its effectiveness by notably reducing interference and enhancing spatial reuse, particularly in scenarios involving multiple access points that utilize the joint controller. The subsequent sub-sections will present the algorithm in a format mirroring the structure of the proposed joint controller, facilitating a straightforward comparison between the two.

Configurable Parameters

The parameters listed in the table are the precise variable names used when specifying the rate control options (rc_opts) to Py-Minstrel-HT via WiFi-Manager.

Initialisation

During initialisation, the lowest supported rates are set using the reference power, unless Py-Minstrel-HT is executed with fixed power mode. Given that the implementation already incorporates a reference power, the ceiling mode has been eliminated from the extended controller.

Updating Rate Statistics

During each update interval, the extended Minstrel-HT algorithm updates the statistics for all the rates and power levels currently in use. Moreover, it also updates the success probability for unused rates at each power level, provided that the rate group contains at least one higher rate with attempt statistics from the same group and the same power level.

Following the statistics update, the selection of the reference power and sample power for all rates is adjusted based on their success probability and the constraints specified in the rc_opts parameters. If the success probability of the sample power is within the inc_prob_tol of the reference power, the sample power is decreased by pwr_dec. On the other hand, if the success probability of the sample power is lower than the dec_prob_tol of the reference power, the sample power is increased by pwr_inc to increase throughput. The reference power is updated in the same way, but with the tolerance based on a 100% (1.0) success probability.

For example, if the success probability of the reference power is 0.95, dec_prob_tol is 0.1, and pwr_dec is 1, then the reference power will decrease by 1 dB because the probability of the reference power is greater than (1.0 - 0.1) = 0.9. Finally, the best power level for all rates is set to sample_power + opt_pwr_offset.
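Written out for the reference power, the decrease rule from this example reads as follows (a sketch in my own notation; per the paragraph above, the sample power follows the same pattern, with the reference power's success probability taking the place of the 1.0 baseline):

p(P_{\mathrm{ref}}) > 1.0 - \mathrm{dec\_prob\_tol}
    \;\Longrightarrow\;
P_{\mathrm{ref}} \leftarrow P_{\mathrm{ref}} - \mathrm{pwr\_dec}
\qquad \text{(here: } 0.95 > 0.9 \Rightarrow P_{\mathrm{ref}} - 1\,\mathrm{dB}\text{)}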

Rate and Power Sampling

The rate sampling process remains unchanged, as it continues to be carried out by the Minstrel-HT algorithm just as it was before the extension. However, the power annotation for the sampled rates is now set to the reference power. Furthermore, the algorithm has been expanded to include power sampling, which cycles through the Multi-Rate Retry (MRR) selection process.

Selecting Best Rates for the MRR chain

In order to select the best rates, the algorithm defines a linear utility function that exposes the trade-off between the throughput (benefit) and the interference to other transmissions (cost) of each rate.

The benefit of a rate is defined based on its estimated throughput and the current maximal throughput.

The cost factor serves as a representation of the interference costs associated with achieving a specific throughput at a given power level. In essence, this interference cost depends on two key factors: the extent of interference coverage and the duration of the interference event. However, due to the inherent complexity of modeling such effects, the following definition opts for a simplified approach, approximating the interference area solely by power(rate_i). Similarly, the duration of interference is approximated as the inverse of the throughput, 1/throughput(rate_i).
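In my own notation, and assuming the two terms are combined linearly with the configurable utility weight factor w used in the experiments below (this combination is my reading of the description, not a formula quoted from the implementation), the utility of a rate r_i can be sketched as:

\mathrm{benefit}(r_i) = \frac{\mathrm{throughput}(r_i)}{\max_j \mathrm{throughput}(r_j)}, \qquad
\mathrm{cost}(r_i) = \mathrm{power}(r_i) \cdot \frac{1}{\mathrm{throughput}(r_i)},

\mathrm{utility}(r_i) = \mathrm{benefit}(r_i) - w \cdot \mathrm{cost}(r_i)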

Using the utility function, the extended joint controller selects the best rates as those with the highest utility, in contrast to the Minstrel-HT algorithm, which only considers the estimated throughput.

Analysing Performance of the Joint Controller

In this section, I will demonstrate the performance of the joint controller in comparison to the kernel Minstrel-HT and the Py-Minstrel-HT without power control extension.

Experiment Description

The experiments consisted of two variations based on the measurement tool:

Iperf3 and tcpdump

The traffic for the measurement was generated using iperf3, with the server hosted on the STA and the client on the AP. Consequently, the traffic flow traversed from the AP to the STA, aligning with the experiment’s objective of running resource controllers on the AP. Furthermore, the network traffic was monitored using tcpdump with a snapshot length of 150 bytes per packet to limit the amount of payload data captured. This data was then stored in a pcap file, which we subsequently parsed to extract the achieved throughput values.

The experiment setup involved two TP-Link WDR4900 routers, both featuring ATH9K chips. One router operated as an Access Point (AP), while the other functioned as a Station (STA).

Flent

Flent is a tool that serves as a wrapper around netperf and similar utilities to execute predefined tests, aggregate the outcomes, and generate plots. It also retains the raw data, providing users the opportunity to analyze and visualize the data, alongside the automated plots. The measurement traffic was generated using Flent, with the server hosted on the AP and the client on the STA. However, the traffic direction was inverted using the "--swap-up-down" option. Notably, Flent can measure packet latency, a capability unavailable with iperf3.

The experiment setup involved a TP-Link WDR4900 router as the Access Point (AP) and a 13-inch MacBook Pro (2017) as the Station (STA).

Results

The results have revealed a rather surprising trend: the joint controller consistently delivers improved throughput, even in single-link experiments. When compared to the kernel Minstrel-HT and Py-Minstrel-HT without the power control extension, the joint controller demonstrates a noteworthy throughput gain in UDP experiments, consistently falling within the range of 10-25%. These results highlight the joint controller’s capability to improve aggregation and dynamically select power levels.

Additionally, the results showcase various runs of the joint controller, each with a utility weight factor set to 1, 10, and 100. It is worth noting that the experiment using Flent utilised TCP traffic.

Iperf3 and tcpdump
Flent

Conclusion and Outlook

The analysis of the joint controller shows immense promise, and I'm excited about the prospect of continuing to run and test it with multiple interfering routers even beyond the conclusion of GSoC '23. Regrettably, newer WiFi chips are becoming increasingly closed-source, limiting access to information and functionality related to resource control for the general public. Fortunately, these newer chips still support power control and provide real-time estimates of throughput. As a result, our plan is to develop an independent power controller after completing Minstrel-Blues in the user space.

GSoC’23 (Final Report) : Qaul Matrix Bridge Tutorial

I am happy to share the completion of the Google Summer of Code program for 2023. In this post, you will find out how you can use my project to interconnect the qaul and Matrix chat applications.

I would recommend reading about the project via my previous blog posts.

Requirements

You should have a Matrix account that can act as the bridge bot.

For more secure communication you can run your own Matrix homeserver, but that is not necessary, since our bridge works well on the default Matrix server too.

You should have the bridge binary, either built from the code or from a distributed package.

We are still working on packaging the binary for end users; then, in place of `cargo run --bin qaul-matrix-bridge`, you will be able to simply run our binary.

Initialization and Configuration

As of now, we support the bridge only as a daemon process binary without any control over it via a CLI or GUI. All the logic is integrated on matrix-sdk or ruma and qaul or libqaul.

On the server where you wish to bridge a local qaul network, you need to have one node running the binary, and the rest will follow along the way.

After installing the qaul project you can run:

cargo run --bin qaul-matrix-bridge {homeserver-url} {bot-matrix-id} {bot-account-password}

- homeserver-url : The URL of the Matrix homeserver. The default can be [https://matrix.org]
- bot-matrix-id : The user account of the bot on Matrix, e.g. @qaul-bot:matrix.org, where the id is [qaul-bot]
- bot-account-password : The password for the bot account on Matrix

Inviting the Bridge to Matrix Room

Once the bridge is up and running, go to your own Matrix account and create a new room. Please make sure to turn off any encryption. Now invite the bot to your Matrix room, and it automatically joins the room.

1. Create a Matrix room and disable end-to-end encryption

2. Invite the bot account into the room

Navigating through Matrix Menu

We have multiple menu options available in our Matrix menu. You can see the list of all the possible functionalities with the !help command.

!help

!qaul

!users

Here the users on the local qaul peer network are displayed. The Qaul Matrix Bridge Bot is the default bridge user unless other users are opted in. The random strings shown along with each name are the PeerIDs of the users on the qaul network.

!invite

!group-info

!remove

Not just messages 🎉 but exchange files too

Closing Notes

I am glad to share the prototype of the bridge. It was a great experience learning from the different Matrix specifications and their protocol design. In the future, we are looking forward to incorporating a qaul menu, distributing the binaries over various channels, styling the bot responses via emotes, and a lot more.

A special thanks to my mentor MathJud, who helped with serious problems whenever we faced them; from him I have learned very important principles for designing a chat application. He also introduced me to the Matrix Camp at Chaos Communication Camp, where we presented the prototype (a bit buggy at that time) to people working on the Matrix protocol.

I would also encourage people to come and contribute to the qaul project, because it serves an important use case: allowing communication during Internet blackouts. Developing a solution to stay hidden from governments will help sustain openness of speech and freedom.

GSoC’23: Implementation of Web Interface of Retroshare – Final Report

Hello again folks 👋,
My journey in Google Summer of Code has come to an end, and I am excited to share all the things I have accomplished during this program and the progress done in the Implementation of Web Interface of Retroshare.

Project Description

About Retroshare

Retroshare provides a decentralized, encrypted connection with maximum security between nodes, where they can chat, share files, mail, etc. It uses GXS (Generic eXchange System), which provides asynchronous distribution, authentication, privacy, and security of generic data. It is designed to provide maximum security and anonymity to its users beyond direct friends. Likewise, it is entirely free and open-source software.

It is a C++ software program built around a headless library called libretroshare. This library is used to build a headless server (retroshare-service), a standalone app with a user interface built using Qt, an Android client, and more.

Project Goal

The main goal of my project was the Implementation of Web Interface of Retroshare, in which I had to improve the Web UI and add features missing compared to the Qt counterpart of Retroshare. The milestones I had to achieve during GSoC are described below.

Milestones Achieved

File Section

  • Implemented the File Search feature using a JavaScript Proxy, which had been pending for a long time.
  • Implemented a feature to view which chunks are being downloaded in the progress bar, together with their state. Also fixed the download action buttons and implemented a feature to manage the chunk strategy for a file being downloaded.
  • Implemented the Share Manager feature, which lets you add and manage the shared directories and manage different levels of access and permissions for each directory. It is still in progress and will be complete soon.

Config Section

  • Implemented the mail config panel to create and manage all the mail tags.
  • Implemented features to set Dynamic DNS and configure Tor/I2P Socks Proxy from Web UI and fixed NAT and some other existing options in the network config section.

Mail Section

  • Improved the mail composer and made it reusable, helping the user search and select nodes (contacts), and improved the composer UI.
  • Implemented a feature to reply to any mail from the Web UI by reusing the mail composer.
  • Implemented a feature to view all attachments in one place and also to view the attachments sent in a mail.
  • Fixed the mail view and the order of mails in all mail sections.

Forums Section

  • Implemented caching in the Forums section to reduce numerous repeated API calls, reuse already fetched data, and invalidate the cache and refetch data when needed. It is also a WIP.
  • Fixed the bug which was causing infinite calls to the same endpoint and freezing or crashing the Web UI.

I also improved the whole User Interface of the Web UI and made it more user-friendly. Furthermore, I refactored the directory structure, tidied up the repository and optimized the build.

You can view all of my contributions at once – GitHub Repo.

Other Contributions

Apart from the contributions to the Web UI, I also contributed to libretroshare, where I:

  • Added code to generate the jsonapi endpoints for the file section and other sections in Web UI.
  • Implemented getChunkStrategy() endpoint in the libretroshare in C++ and also added more code to generate other API endpoints.
  • Fixed MIME type for content-type in HTTP header and added routes for new directory structure.

What’s Next?

The Web Interface of Retroshare has improved a lot since I undertook this project, and it's almost ready for release in the next release cycle of Retroshare. Still, there is a lot of room for improvement in the Web UI, and I will keep adding more features to it.
I will complete all the features which are currently in progress, and even after GSoC I would love to keep contributing to and improving this truly wonderful open-source project created solely for the welfare of society.

Wrapping Up

My journey in GSoC with Retroshare has been an incredible learning experience for me. I got to learn so much all thanks to my mentor Cyril Soler, M. Saud and fellow community members. GSoC has bridged the gap between aspiring open source contributors and industry level experts and has enabled folks like me to gain experience by working on enterprise level projects under the guidance of wonderful people.

This summer has turned out to be a pivotal point in my career and has boosted my confidence and helped me to strengthen my arsenal by learning new skills, gaining hands-on experience and improving my programming skills.

As I keep moving forward in my coding journey after this project, I'm excited to utilize the knowledge and experience I've gained here in future projects. I plan to stay engaged in this community after GSoC, and my aim is to continue contributing actively to the project's growth and achievements.

GSoC’23: Automation tools for LibreMesh firmware build and monitoring – final

Previous post: https://blog.freifunk.net/2023/07/08/gsoc23-automation-tools-for-libremesh-firmware-build-and-monitoring-part-2/

Project results

These are the repositories with the produced code; the first is the main one:

https://gitlab.com/a-gave/libremesh-ansible-playbooks

https://gitlab.com/a-gave/libremesh-ansible-collection

https://gitlab.com/a-gave/ansible_openwrt_buildroot

Playbooks and roles to build releases

In this final part, besides improving the code, I also used it a lot to prepare a list of firmware images for the new releases of LibreMesh: v2020.3, based on OpenWrt 19.07.10, and the upcoming release v2023.1-rc1, which supports the latest OpenWrt 22.03.5.

I extended and tested the automation tools to build for all devices of a defined target/subtarget, in order to produce the precompiled firmware images for the latest releases of LibreMesh. The list of targets/subtargets is based on the advice from one of the last LibreMesh meetings about the most used architectures, which also largely match those with many low-cost devices.

Building for all devices meant encountering these kinds of issues:

  • the `default` set of lime-packages doesn't fit in the factory and/or sysupgrade image of a device, and this causes a build failure
  • as above, but the device fails silently without interrupting the build
  • multicore/parallel compiling randomly fails

For the first problem, the OpenWrt Buildroot has no mechanism to predict the resulting firmware size, e.g. based on the list of selected packages. This seems related to the compression tools used to produce the firmware, which can change depending on the device, and it leads to successful builds for one device with an image size of 7168k while another with the same image size fails. This means it is not possible to predict whether the produced LibreMesh firmware will fit in the device memory, and the devices that cause a build failure have to be identified one by one.

I took notes of the devices that cause a build failure for the selected list of targets/subtargets for the mentioned OpenWrt and LibreMesh releases using the default set of LibreMesh packages, first providing a mechanism to exclude the devices that throw an error. This doesn't mean that the other devices are somehow 'supported', but simply that the build doesn't fail. Since this work of mapping all devices is still in development, it is available in the dev branch of the collection of roles:

Since LibreMesh support for OpenWrt 22.03.5 is still in testing, and the amount of time and space needed to reproduce all the images for the selected architectures may be considerable, I put two mechanisms in place:

  • a list of `supported_devices`, which can be used to rebuild only a subset of devices, ideally those owned by people/communities who can test them
  • a set of Docker images for different OpenWrt targets/subtargets to build with the default LibreMesh packages, which I briefly explain below

Dockerized buildroot for each target/subtarget

There is a set of Dockerfiles that I'm including in the set of Ansible playbooks/roles to speed up the build process and to save space among different LibreMesh releases based on the same OpenWrt release.

https://github.com/a-gave/libremesh_openwrt_buildroot_docker

It is designed to rebuild different versions of LibreMesh in separate environments while avoiding rebuilding the same OpenWrt tools and toolchain multiple times. For instance, if the build of LibreMesh for all devices of the OpenWrt target ath79/generic takes 1 hour and has a size of 28.8 GB, a subsequent build with minor changes will take around 30 minutes and increase the size of the produced Docker image by only 15 GB. In a similar way, a Docker image with pre-compiled tools and toolchain, a pre-extracted kernel, and precompiled kmods and packages, built without specifying a device but only the target/subtarget, takes 11.2 GB of space but then allows building an image, with the set of LibreMesh suggested packages pre-selected, within 4 minutes. This time also depends on the number of additional packages selected and on the computing resources available on the builder machine.

Other contributions

In this period I also contributed to the LibreMesh project, providing metrics for this analysis of the effect of changing the default distance setting for long wireless links.

https://github.com/ilario/wifi-distance-setting-exploration

This is a kind of regression for outdoor devices, still unfixed due to two facts:

  • neither OpenWrt nor LibreMesh has an easy way to determine whether the device is manufactured to be used indoors (typically a router) or outdoors (an antenna).
  • to improve the performance of routers, LibreMesh chose to lower the default distance, but this leaves the antennas at a disadvantage. It makes them more susceptible to a broken wireless link (as if it were a cut cable) if the device is accidentally reset, so they require a custom build with different defaults.

Conclusion

Thanks to Ilario and Stefca for having been my mentors, to the Freifunk and LibreMesh organizations that made this work possible, and to all the folks of OpenWrt and Gluon who contribute to free and open source networks.

GSoC ’23: Migrating luci-app-mjpg-streamer to JavaScript: A Comprehensive Guide

The latest OpenWrt versions introduced a new web interface system that eliminates the need for Lua. Instead, the client's browser handles the rendering and computation, allowing routers to focus on their primary tasks. This change has the advantage of eliminating the Lua runtime, saving storage space, and making routers faster. In the previous CBI-based system, pages were rendered on the router and sent as HTML to the browser, increasing the load on the router, which could result in performance problems. To aid in this transition, LuCI offers the LuCI JavaScript API, which is now used for constructing web interfaces.

luci-app-mjpg-streamer

I have successfully migrated luci-app-mjpg-streamer to JavaScript, making it a valuable example for building or migrating LuCI apps. This tutorial covers the essential aspects of the process, providing a comprehensive guide.

Below is the tree view representation of the directory structure for the app:

.
├── Makefile
├── htdocs
│   └── luci-static
│       └── resources
│           └── view
│               └── mjpg-streamer
│                   └── mjpg-streamer.js
├── po
│   ├── ar
│   │   └── mjpg-streamer.po
│   ├── ...
└── root
    └── usr
        └── share
            ├── luci
            │   └── menu.d
            │       └── luci-app-mjpg-streamer.json
            └── rpcd
                └── acl.d
                    └── luci-app-mjpg-streamer.json

How to migrate your app:

ACLs

In the file  root/usr/share/rpcd/acl.d/luci-app-mjpg-streamer.json, we provide all the necessary access permissions for our application to function properly.

{
	"luci-app-mjpg-streamer": {
		"description": "Grant UCI access for luci-app-mjpg-streamer",
		"read": {
			"uci": [
				"mjpg-streamer"
			]
		},
		"write": {
			"uci": [
				"mjpg-streamer"
			]
		}
	}
}

For example, in my previously migrated app: luci-app-olsr, when there is a need to grant public access to specific pages, we ensure that all essential access permissions are appropriately configured in root/usr/share/rpcd/acl.d/luci-app-olsr-unauthenticated.json.

These permissions are necessary for the operation of our application when a user is not yet authenticated:

{
	"unauthenticated": {
		"description": "Grant read access",
		"read": {
			"ubus": {
				"uci": ["get"],
				"luci-rpc": ["*"],
				"network.interface": ["dump"],
				"network": ["get_proto_handlers"],
				"olsrd": ["olsrd_jsoninfo"],
				"olsrd6": ["olsrd_jsoninfo"],
				"olsrinfo": ["getjsondata", "hasipip", "hosts"],
				"file": ["read"],
				"iwinfo": ["assoclist"]

			},
			"uci": ["luci_olsr", "olsrd", "olsrd6", "network", "network.interface"]
		}
	}
}
 

To learn more about how ACLs (Access Control Lists) work, you can refer to this resource: OpenWrt's docs. It is important to apply the principle of least privilege when configuring ACLs.

MENU

In the file root/usr/share/luci/menu.d/luci-app-mjpg-streamer.json, we define the location where our view will be displayed in the admin menu. This is used for admin-specific views.

{
	"admin/services/mjpg-streamer": {
		"title": "MJPG-streamer",
		"action": {
			"type": "view",
			"path": "mjpg-streamer/mjpg-streamer"
		},
		"depends": {
			"acl": [
				"luci-app-mjpg-streamer"
			],
			"uci": {
				"mjpg-streamer": true
			}
		}
	}
}

The path indicates where the JavaScript view to be rendered is located, relative to the htdocs/luci-static/resources/view directory.

FORMS

To explore the JavaScript APIs offered by LuCI, you can visit the following link: LuCI client side API documentation. A recommended starting point is the core luci.js class.

LuCI forms allow you to create UCI- or JSON-backed configuration forms. To create a typical form, you start by creating an instance of either LuCI.form.Map or LuCI.form.JSONMap using new. Then, you can add sections and options to the form instance. Finally, invoking the render() method on the instance generates the HTML markup and inserts it into the Document Object Model (DOM). For a better understanding of how LuCI forms work, you can refer to the following: LuCI.form.

This is an example demonstrating the usage of LuCI.form within one of the admin’s views, using a small portion of the mjpg-streamer.js code. The full code for the file can be found here.

'use strict';
'require view';
'require form';
'require uci';
'require ui';
'require poll';

/* Copyright 2014 Roger D < rogerdammit@gmail.com>
Licensed to the public under the Apache License 2.0. */

return view.extend({
	load: function () {
		var self = this;
		poll.add(function () {
			self.render();
		}, 5);

		document
			.querySelector('head')
			.appendChild(
				E('style', { type: 'text/css' }, [
					'.img-preview {display: inline-block !important;height: auto;width: 640px;padding: 4px;line-height: 1.428571429;background-color: #fff;border: 1px solid #ddd;border-radius: 4px;-webkit-transition: all .2s ease-in-out;transition: all .2s ease-in-out;margin-bottom: 5px;display: none;}',
				]),
			);

		return Promise.all([uci.load('mjpg-streamer')]);
	},
	render: function () {
		var m, s, o;

		m = new form.Map('mjpg-streamer', 'MJPG-streamer', _('mjpg streamer is a streaming application for Linux-UVC compatible webcams'));

		//General settings

		var section_gen = m.section(form.TypedSection, 'mjpg-streamer', _('General'));
		section_gen.addremove = false;
		section_gen.anonymous = true;

		var enabled = section_gen.option(form.Flag, 'enabled', _('Enabled'), _('Enable MJPG-streamer'));

		var input = section_gen.option(form.ListValue, 'input', _('Input plugin'));
		input.depends('enabled', '1');
		input.value('uvc', 'UVC');
		// input: value("file", "File")
		input.optional = false;

		var output = section_gen.option(form.ListValue, 'output', _('Output plugin'));
		output.depends('enabled', '1');
		output.value('http', 'HTTP');
		output.value('file', 'File');
		output.optional = false;

		//Plugin settings

		s = m.section(form.TypedSection, 'mjpg-streamer', _('Plugin settings'));
		s.addremove = false;
		s.anonymous = true;

		s.tab('output_http', _('HTTP output'));
		s.tab('output_file', _('File output'));
		s.tab('input_uvc', _('UVC input'));
		// s: tab("input_file", _("File input"))

		// Input UVC settings

		var this_tab = 'input_uvc';

		var device = s.taboption(this_tab, form.Value, 'device', _('Device'));
		device.default = '/dev/video0';
		//device.datatype = "device"
		device.value('/dev/video0', '/dev/video0');
		device.value('/dev/video1', '/dev/video1');
		device.value('/dev/video2', '/dev/video2');
		device.optional = false;

                  //... This snippet represents only a small portion of the complete code.

		var ringbuffer = s.taboption(this_tab, form.Value, 'ringbuffer', _('Ring buffer size'), _('Max. number of pictures to hold'));
		ringbuffer.placeholder = '10';
		ringbuffer.datatype = 'uinteger';

		var exceed = s.taboption(this_tab, form.Value, 'exceed', _('Exceed'), _('Allow ringbuffer to exceed limit by this amount'));
		exceed.datatype = 'uinteger';

		var command = s.taboption(
			this_tab,
			form.Value,
			'command',
			_('Command to run'),
			_('Execute command after saving picture. Mjpg-streamer parses the filename as first parameter to your script.'),
		);

		var link = s.taboption(this_tab, form.Value, 'link', _('Link newest picture to fixed file name'), _('Link the last picture in ringbuffer to fixed named file provided.'));

		return m.render();
	},
});

Flexible Views

For enhanced flexibility in our pages, we have the option to manually define the HTML, which I have used in the status views. This approach allows us to have more control over the page structure and content, providing greater customization possibilities.

This is an example demonstrating the usage of flexible views within one of the status’s views, using a small portion of the topology.js code. The full code for the file can be found here.

'use strict';
'require uci';
'require view';
'require poll';
'require rpc';
'require ui';


return view.extend({
	callGetJsonStatus: rpc.declare({
		object: 'olsrinfo',
		method: 'getjsondata',
		params: ['otable', 'v4_port', 'v6_port'],
	}),

	fetch_jsoninfo: function (otable) {
		var jsonreq4 = '';
		var jsonreq6 = '';
		var v4_port = parseInt(uci.get('olsrd', 'olsrd_jsoninfo', 'port') || '') || 9090;
		var v6_port = parseInt(uci.get('olsrd6', 'olsrd_jsoninfo', 'port') || '') || 9090;
		var json;
		var self = this;
		return new Promise(function (resolve, reject) {
			L.resolveDefault(self.callGetJsonStatus(otable, v4_port, v6_port), {})
				.then(function (res) {
					json = res;

					jsonreq4 = JSON.parse(json.jsonreq4);
					jsonreq6 = json.jsonreq6 !== '' ? JSON.parse(json.jsonreq6) : [];
					var jsondata4 = {};
					var jsondata6 = {};
					var data4 = [];
					var data6 = [];
					var has_v4 = false;
					var has_v6 = false;

					if (jsonreq4 === '' && jsonreq6 === '') {
						window.location.href = 'error_olsr';
						reject([null, 0, 0, true]);
						return;
					}

					if (jsonreq4 !== '') {
						has_v4 = true;
						jsondata4 = jsonreq4 || {};
						if (otable === 'status') {
							data4 = jsondata4;
						} else {
							data4 = jsondata4[otable] || [];
						}

						for (var i = 0; i < data4.length; i++) {
							data4[i]['proto'] = '4';
						}
					}

					if (jsonreq6 !== '') {
						has_v6 = true;
						jsondata6 = jsonreq6 || {};
						if (otable === 'status') {
							data6 = jsondata6;
						} else {
							data6 = jsondata6[otable] || [];
						}

						for (var j = 0; j < data6.length; j++) {
							data6[j]['proto'] = '6';
						}
					}

					for (var k = 0; k < data6.length; k++) {
						data4.push(data6[k]);
					}

					resolve([data4, has_v4, has_v6, false]);
				})
				.catch(function (err) {
					console.error(err);
					reject([null, 0, 0, true]);
				});
		});
	},
	action_topology: function () {
		var self = this;
		return new Promise(function (resolve, reject) {
			self
				.fetch_jsoninfo('topology')
				.then(function ([data, has_v4, has_v6, error]) {
					if (error) {
						reject(error);
					}

					function compare(a, b) {
						if (a.proto === b.proto) {
							return a.tcEdgeCost < b.tcEdgeCost;
						} else {
							return a.proto < b.proto;
						}
					}

					data.sort(compare);

					var result = { routes: data, has_v4: has_v4, has_v6: has_v6 };
					resolve(result);
				})
				.catch(function (err) {
					reject(err);
				});
		});
	},
	load: function () {
		return Promise.all([uci.load('olsrd'), uci.load('luci_olsr')]);
	},
	render: function () {
		var routes_res;
		var has_v4;
		var has_v6;

		return this.action_topology()
			.then(function (result) {
				routes_res = result.routes;
				has_v4 = result.has_v4;
				has_v6 = result.has_v6;
				var table = E('div', { 'class': 'table cbi-section-table' }, [
					E('div', { 'class': 'tr cbi-section-table-titles' }, [
						E('div', { 'class': 'th cbi-section-table-cell' }, _('OLSR node')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('Last hop')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('LQ')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('NLQ')),
						E('div', { 'class': 'th cbi-section-table-cell' }, _('ETX')),
					]),
				]);
				var i = 1;

				for (var k = 0; k < routes_res.length; k++) {
					var route = routes_res[k];
					var cost = (parseInt(route.tcEdgeCost) || 0).toFixed(3);
					var color = etx_color(parseInt(cost));
					var lq = (parseInt(route.linkQuality) || 0).toFixed(3);
					var nlq = (parseInt(route.neighborLinkQuality) || 0).toFixed(3);

					var tr = E('div', { 'class': 'tr cbi-section-table-row cbi-rowstyle-' + i + ' proto-' + route.proto }, [
						route.proto === '6'
							? E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://[' + route.destinationIP + ']/cgi-bin-status.html' }, route.destinationIP)])
							: E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://' + route.destinationIP + '/cgi-bin-status.html' }, route.destinationIP)]),
						route.proto === '6'
							? E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://[' + route.lastHopIP + ']/cgi-bin-status.html' }, route.lastHopIP)])
							: E('div', { 'class': 'td cbi-section-table-cell left' }, [E('a', { 'href': 'http://' + route.lastHopIP + '/cgi-bin-status.html' }, route.lastHopIP)]),
						E('div', { 'class': 'td cbi-section-table-cell left' }, lq),
						E('div', { 'class': 'td cbi-section-table-cell left' }, nlq),
						E('div', { 'class': 'td cbi-section-table-cell left', 'style': 'background-color:' + color }, cost),
					]);

					table.appendChild(tr);
					i = (i % 2) + 1;
				}

				var fieldset = E('fieldset', { 'class': 'cbi-section' }, [E('legend', {}, _('Overview of currently known OLSR nodes')), table]);

                //... This snippet represents only a small portion of the complete code.

				var result = E([], {}, [h2, divToggleButtons, fieldset, statusOlsrLegend, statusOlsrCommonJs]);

				return result;
			})
			.catch(function (error) {
				console.error(error);
			});
	},
	handleSaveApply: null,
	handleSave: null,
});

Feel free to reach out to me via email if you have any doubts or questions. I’m here to help! Stay tuned for more valuable content as I continue to share useful information and resources. Thank you for your support!