You might have noticed a small change when running the dmesg command in Ubuntu 20.10 Groovy Gorilla, since it now errors out with:
dmesg: read kernel buffer failed: Operation not permitted
Don’t worry, it still works; reading the kernel log has just become a privileged operation, and it works fine as sudo dmesg. But why the change?
Well, I happen to be the one who proposed this change, and I followed up on getting the configuration changes made. This blog post describes how it slightly improves the security of Ubuntu, and the journey to getting the changes landed in a release.
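Under the hood, the behaviour is controlled by the kernel.dmesg_restrict sysctl, which Ubuntu 20.10 now sets to 1 by default. As a quick sketch, here is how to check the setting, and how to revert it on a running system if you really want the old behaviour back:

$ sysctl kernel.dmesg_restrict
kernel.dmesg_restrict = 1

$ sudo sysctl -w kernel.dmesg_restrict=0    # revert, for this boot only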
So stay tuned, and let’s dive in.
Recently I worked a particularly interesting case where an OpenStack compute node had all of its virtual machines pause at the same time, which I attributed to a reference counter overflowing in the kernel.
Today, we are going to take an in-depth look at the problem at hand, and see how I debugged and fixed the issue, from beginning to completion.
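If a "reference counter overflowing" sounds abstract, the failure mode is easy to show in miniature: a fixed-width counter that is only ever incremented eventually wraps back to zero, at which point the object it protects looks unreferenced. A tiny userspace sketch of the arithmetic (not the actual kernel code involved in this case):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t refcount = UINT32_MAX;  /* the counter has hit its maximum value */

    refcount++;  /* one more reference: unsigned arithmetic wraps to 0 */

    /* the object now appears to have no users at all */
    printf("refcount = %u\n", refcount);  /* prints: refcount = 0 */
    return 0;
}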
Let’s get started.
One of the more recent killer features implemented by most major Linux distros these days is the ability to patch the kernel while it is running, without the need for a reboot.
While this may sound like sorcery to some, it is a very real feature, called Livepatch. Livepatch uses ftrace in new and interesting ways, patching in calls at the beginning of existing functions that redirect execution to fixed replacement functions, delivered as kernel modules.
This lets you update code and fix bugs on the fly, although its use is typically reserved for security-critical fixes only.
The whole concept is extremely interesting, so today we will look into what Livepatch is, how it is implemented across several distros, we will write some Livepatches of our own, and look at how Livepatch works in Ubuntu for end users.
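To make that concrete, here is roughly what a livepatch module looks like. This closely follows the kernel’s own samples/livepatch/livepatch-sample.c, which redirects cmdline_proc_show() so that reading /proc/cmdline returns a patched message; the klp API has shifted a little between kernel versions, so treat this as a sketch:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/seq_file.h>

/* Replacement for the in-kernel cmdline_proc_show() */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* a NULL name means the function lives in vmlinux itself */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");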
The next article in my series of learning about cloud computing is tackling one of the larger and more widely used cloud software packages - OpenStack.
OpenStack is a service which lets you provision and manage virtual machines across a pool of hardware which may have differing specifications and vendors.
Today, we will be deploying a small five node OpenStack cluster in Ubuntu 19.10 Eoan Ermine, so follow along, and let’s get this cluster running.
We will cover what OpenStack is, the services it is composed of, how to deploy it, and how to use our cluster to provision some virtual machines.
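To give you a flavour of the end goal, once the cluster is up, booting an instance from the command line looks something like this (the image, flavour, network and keypair names here are placeholders for whatever your cluster defines):

$ openstack server create \
      --image eoan-amd64 \
      --flavor m1.small \
      --network internal \
      --key-name mykey \
      my-first-instance
$ openstack server list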
Let’s get started.
As mentioned previously, from time to time I will write on this blog about particularly interesting cases I have worked from start to completion.
This is another of those cases. Today, we are going to look at a case where creating a seemingly innocent RAID array triggers a kernel bug which causes the system to allocate all of its memory and subsequently crash.
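For context, creating a software RAID array is usually as harmless as the commands below; the exact level and device layout that triggers the bug is what we will unpick in the post (the devices here are illustrative):

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
$ cat /proc/mdstat    # watch the array build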
Let’s dig in and get this fixed.
In my previous blog post about Juju, a tool which lets you deploy and scale software easily, we learned what Juju is, how to deploy some common software packages, debug them, and scale them.
Juju deploys Charms, a set of instructions on how to install, configure and scale a particular software package. To be able to deploy software as a Charm, a Charm has to be written first. Usually Charms are written by experts in operating that software package, so that the Charm represents the best way to configure and tune that application. But what happens if no Charm exists for something you want to deploy?
Today we are going to learn how to write our own Charms using the original Charm writing method, by making a Charm for the Minetest game server. So fire up your favourite text editor, and let’s get started.
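In the original hooks-based style, a Charm is just a metadata.yaml describing the Charm plus a directory of executable hook scripts that Juju runs at the right moments in a unit’s life cycle. A minimal, hypothetical install hook for our Minetest Charm might be nothing more than:

#!/bin/bash
# hooks/install -- Juju runs this once, when the unit is first deployed
set -e
apt-get update
apt-get install --yes minetest-server

Other hooks, such as start, stop and config-changed, follow exactly the same pattern.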
The next piece of cloud software which has caught my attention is Juju, a tool which allows you to effortlessly deploy software to any cloud, and scale it with the touch of a button.
Juju deploys software in the form of Charms, a collection of scripts and deployment instructions that implements industry best practices and knowledge about that particular software package.
Today we are going to take a deep dive into Juju, and explore how it works, how to set it up and how to deploy and scale some common software.
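As a preview of how little ceremony is involved, the day-to-day workflow boils down to a handful of commands (the cloud and application names are just examples):

$ juju bootstrap aws                  # stand up a controller on your cloud of choice
$ juju deploy wordpress               # deploy an application from its Charm
$ juju deploy mysql
$ juju add-relation wordpress mysql   # wire the two applications together
$ juju add-unit wordpress             # scale out with another unit
$ juju status                         # watch it all converge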
Have you ever wondered what a day in the life of a Sustaining Engineer at Canonical looks like?
Well today, we are going to have a look into a particularly interesting case I worked from start to completion, as it demanded that I dive into the world of Linux performance analysis tools to track down and solve the problem.
The problem is a large performance degradation when reading files from a mounted read-only LVM snapshot on the Ubuntu 4.4 kernel, compared to reading from a standard LVM volume. Reads can take anywhere from 14x to 25x as long, which is a serious problem.
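Reproducing the symptom is straightforward if you want to follow along (the volume group, logical volume and file names are placeholders):

$ sudo lvcreate --snapshot --name lv0-snap --size 1G /dev/vg0/lv0
$ sudo mount -o ro /dev/vg0/lv0-snap /mnt

$ time dd if=/mnt/bigfile of=/dev/null bs=1M   # then compare against the origin volume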
Let’s get to the bottom of this, and get this fixed.
One of the major advancements in recent technology is the rise of cloud computing, and to be perfectly honest with you, I really don’t understand how the whole cloud thing works.
So, I’m going to start a series of blog posts where I will deploy some cloud services, and learn how they work.
Today we will learn how to deploy a Ceph cluster on Ubuntu 19.04 Disco Dingo, so get ready, fire up some VMs with me and follow along.
We will cover what Ceph is, how to deploy it, and what its primary use cases are.
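Once everything is deployed, the first command you will reach for is the cluster health check, which on a happy cluster looks roughly like this (output trimmed to the interesting parts):

$ sudo ceph status
  cluster:
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    osd: 3 osds: 3 up, 3 in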
Let’s get started.
If you have been reading this blog, you have probably noticed that all the debugging and analysis of applications so far has been on Windows executables, and although I did create my own Linux distribution, Dapper Linux, I haven’t written much about debugging on Linux.
Time to change that. Today, we are going to look at how to debug Linux kernel crash dumps on Ubuntu 18.10 Cosmic Cuttlefish. Fire up a virtual machine, and follow along.
We will cover how to install and configure kdump, a little on how the tooling works, and how to find the root cause of a basic panic.
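On Ubuntu the setup is pleasantly short; a sketch of the steps we will walk through is below (the final command panics the kernel on purpose, so only run it inside your throwaway VM):

$ sudo apt install linux-crashdump   # pulls in kdump-tools and friends
$ kdump-config show                  # confirm kdump is loaded and armed

$ echo 1 | sudo tee /proc/sys/kernel/sysrq    # allow sysrq triggers
$ echo c | sudo tee /proc/sysrq-trigger       # crash the kernel; a dump lands in /var/crash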
Let’s get started.
Recently I was asked to do some homework to prepare for an interview on Linux kernel internals, and I was given the following to analyse:
Specifically, we would like you to study and be able to discuss the code path that is exercised when a kernel caller allocates an object from the kernel memory allocator using a call of the form:
object = kmalloc(sizeof(*object), GFP_KERNEL);
For this discussion, assume that (a) sizeof(*object) is 128, (b) there is no process context associated with the allocation, and (c) we’re referencing an Ubuntu 4.4 series kernel, as found at
In addition, we will discuss the overall architecture of the SLUB allocator and memory management in the kernel, and the specifics of the slab_alloc_node() function in mm/slub.c.
I spent quite a lot of time, maybe 8-10 hours, studying how the SLUB memory allocator functions and looking at the implementation of kmalloc(). It’s a pretty interesting process, and it is well worth writing up.
Let’s get started, and I will try to keep this simple.
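First, here is the caller’s side of the story in full, so we agree on what the allocator has to service. With sizeof(*object) equal to 128, kmalloc() will route the request to the kmalloc-128 slab cache; the struct below is a stand-in, since the brief does not name one:

#include <linux/slab.h>

struct object_t {
        char data[128];   /* stand-in payload so that sizeof(*object) == 128 */
};

static int example_alloc(void)
{
        struct object_t *object;

        /* GFP_KERNEL: a normal allocation context, allowed to sleep */
        object = kmalloc(sizeof(*object), GFP_KERNEL);
        if (!object)
                return -ENOMEM;

        /* ... use object ... */

        kfree(object);
        return 0;
}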