Fixing yum repos on CentOS 6 now it’s EOL

First of all, if you can, you really should upgrade: either to CentOS Stream if a rolling release works for you, or to AlmaLinux or Rocky Linux if you want the same sort of release cadence that CentOS historically had. Before anyone points out that there’s no direct upgrade path: I know, and that makes upgrading basically a reprovision exercise, but in the longer term you’ll be better off. This is a small note I found regarding the current CentOS 6 status:

CentOS 6 is *dead* and *shouldn't* be used anywhere at *all*

Also, if you’re considering the last non-rolling release of CentOS, CentOS 8, keep in mind that it has had the rug pulled from under it in terms of lifecycle: it should have been supported until the end of 2029, but that was brought forward to the end of 2021, so it is also end of life.

For the purposes of what follows though, I’m assuming that you can’t upgrade easily for some reason and that’s why you’re here, stuck in the same hole I was.

So, you’ll see an error similar to the below when you run the usual yum update commands:

Setting up Upgrade Process
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again

The fix here is fairly simple: use the CentOS Vault repos, which are snapshots of older release trees.

So to fix the main repos, just copy the following into /etc/yum.repos.d/CentOS-Base.repo:

[C6.10-base]
name=CentOS-6.10 - Base
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-updates]
name=CentOS-6.10 - Updates
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-extras]
name=CentOS-6.10 - Extras
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-contrib]
name=CentOS-6.10 - Contrib
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/contrib/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

[C6.10-centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

Then to fix the epel repo, this is the archive config to go into /etc/yum.repos.d/epel.repo:

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=http://mirror.math.princeton.edu/pub/fedora-archive/epel/6/$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
baseurl=http://mirror.math.princeton.edu/pub/fedora-archive/epel/6/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
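
With both files saved, it’s worth flushing the metadata yum cached from the dead mirrors and checking that the new repos resolve; a quick check using standard yum commands:

yum clean all
yum makecache
yum repolist enabled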

If you prefer, you can just curl down the files containing the above config and overwrite the existing old configs:

curl https://www.mark-gilbert.co.uk/wp-content/uploads/2021/08/CentOS-Base.repo --output /etc/yum.repos.d/CentOS-Base.repo
curl https://www.mark-gilbert.co.uk/wp-content/uploads/2021/08/epel.repo --output /etc/yum.repos.d/epel.repo
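
If you’d like an easy rollback path, take a copy of the existing configs before overwriting them, along the lines of:

cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
cp /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.bak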

Update – Dec 2021 – Someone posted in the comments to say they couldn’t download the configs using the commands I included in the article. I realise this is due to the various https settings I employ on the website not being compatible with older CentOS installs, so I’ve added the same commands below, pulling from AWS S3 instead, to get around this.

AWS S3 hosted versions of the same files and the relevant commands are below:

curl http://mark-gilbert-co-uk.s3-website-eu-west-1.amazonaws.com/CentOS-Base.repo --output /etc/yum.repos.d/CentOS-Base.repo
curl http://mark-gilbert-co-uk.s3-website-eu-west-1.amazonaws.com/epel.repo --output /etc/yum.repos.d/epel.repo

Update – Feb 2022 – I’ve had to amend the details here again as more and more http mirrors are moving to redirect to https, meaning that on a server with extremely old software packages you won’t be able to connect, as you’ll be pushed to https. Unfortunately this is just going to happen more and more, and you really, really should migrate to something more modern that’s still supported.

Update – April 2022 – I’ve updated the epel repo config to use the Princeton University mirror in the US, as someone in the comments pointed out that the old epel mirrors had also stopped working.

Validity fingerprint sensor 06cb:009a on Lenovo X1 Carbon running Fedora

I’ve been using Linux on my work laptop for a while now, but found I really missed having the fingerprint sensor for login and authentication. I tried to get it working under Ubuntu sometime last year, and as I’d just switched to Fedora (because why not?), I thought I’d have another go.

So in really brief form, and apologies if I’ve missed anything here, the steps I took to get my fingerprint sensor working on my Lenovo X1 Carbon under Fedora 34 were as follows:


Grab RPM packages from this post – https://github.com/uunicorn/python-validity/issues/54#issuecomment-727092786 and extract RPMs from the zip files
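For example, with a hypothetical archive name, as the attachment names in that issue may vary:
$ unzip python-validity-rpms.zip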

Install RPMs
$ sudo dnf install python3-validity-0.12-1.fc33.noarch.rpm open-fprintd-0.6-1.fc33.noarch.rpm fprintd-clients-1.90.1-2.fc33.x86_64.rpm fprintd-clients-pam-1.90.1-2.fc33.x86_64.rpm

Create two empty files that the install doesn’t create for you
$ sudo touch /usr/share/python-validity/backoff && sudo touch /usr/share/python-validity/calib-data.bin

Find the driver file in the install folder; in my case there is a driver named “6_07f_lenovo_mis_qm.xpfwext”, so change the permissions on that
$ cd /usr/share/python-validity && ls -la
$ sudo chmod 755 6_07f_lenovo_mis_qm.xpfwext

Reset the fingerprint sensor, as this had been used previously
$ sudo systemctl stop python3-validity
$ sudo validity-sensors-firmware
$ sudo python3 /usr/share/python-validity/playground/factory-reset.py

Start the sensor package again and set to run at boot
$ sudo systemctl enable python3-validity && sudo systemctl start python3-validity

Check the validity package has started correctly
$ sudo systemctl status python3-validity

If the service is all running ok, then go ahead and enroll your fingerprints
$ fprintd-enroll
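
If you want to enroll a specific finger rather than the default right index finger, fprintd-enroll takes a finger name, e.g.
$ fprintd-enroll -f left-index-finger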

At this point it let me enroll my fingerprints. Go ahead and enable fingerprints as a login method, and then I had the fingerprint sensor working and could use it for login to Gnome and for sudo at the terminal
$ sudo authselect enable-feature with-fingerprint
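
To test it end to end, you can run a verification against an enrolled finger, which should prompt for a scan and report a match
$ fprintd-verify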

Sources
https://github.com/uunicorn/python-validity/issues/54#issuecomment-727092786
https://github.com/uunicorn/python-validity#errors-on-startup
https://github.com/uunicorn/python-validity/issues/3
https://bugzilla.redhat.com/show_bug.cgi?id=1943406

Benchmarking AWS ARM Instances Part 1

As AWS have made the t4g.micro instance free until the end of June, it’d be a crime not to test them out. My test comparisons will be between two small burstable instances of the sort you might use for a small website, a t3a.micro and a t4g.micro, and, for pure CPU benchmarking, two larger general purpose instances, an m5.2xlarge and an m6g.2xlarge. The ARM instances here run on the AWS Graviton processors, and AnandTech have a fantastic writeup on them.

So diving straight into my admittedly small and basic benchmarks, the first test I ran was ApacheBench against a website served from each instance. In this case I replicated this site onto a LEMP stack on each instance and ran the test against that; the command I ran is below:

ab -t 120 -n 100000 -c 100 https://www.test-site.com/

t3a.micro ApacheBench
Requests per second: 14.95 [#/sec] (mean)
Time per request: 6690.257 [ms] (mean)
Time per request: 66.903 [ms] (mean, across all concurrent requests)
Transfer rate: 1626.65 [Kbytes/sec] received

t4g.micro ApacheBench
Requests per second: 24.65 [#/sec] (mean)
Time per request: 4057.414 [ms] (mean)
Time per request: 40.574 [ms] (mean, across all concurrent requests)
Transfer rate: 2680.52 [Kbytes/sec] received

m5.2xlarge ApacheBench
Requests per second: 67.22 [#/sec] (mean)
Time per request: 1487.566 [ms] (mean)
Time per request: 14.876 [ms] (mean, across all concurrent requests)
Transfer rate: 6876.10 [Kbytes/sec] received

m6g.2xlarge ApacheBench
Requests per second: 67.88 [#/sec] (mean)
Time per request: 1473.144 [ms] (mean)
Time per request: 14.731 [ms] (mean, across all concurrent requests)
Transfer rate: 7502.57 [Kbytes/sec] received

The performance difference here on the smaller instance types is incredible really: the ARM instance is 64.8% quicker, and these instances cost about 10-20% less than the equivalent Intel or AMD powered instances. For the larger instances the figures are so similar that I suspect we’re hitting another limiting factor somewhere, possibly database performance.
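
That 64.8% figure is just the ratio of the two requests-per-second results; a quick sanity check at the shell:

echo 'scale=3; 24.65 / 14.95' | bc
1.648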

Next up was a basic Sysbench CPU benchmark, which verifies prime numbers by calculating primes up to a limit and gives a raw CPU performance figure. This also includes the larger general purpose instances. The command used is below, with the thread count set to match the vCPUs in the instance:

sysbench cpu --cpu-max-prime=20000 --threads=2 --time=120 run
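
If you’re running this across different instance sizes, you can let the shell fill in the vCPU count rather than hard-coding the thread count; the same command, assuming GNU nproc is available:

sysbench cpu --cpu-max-prime=20000 --threads=$(nproc) --time=120 run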

t3a.micro Sysbench
CPU speed: events per second: 500.22

t4g.micro Sysbench
CPU speed: events per second: 2187.58

m5.2xlarge Sysbench
CPU speed: events per second: 2693.14

m6g.2xlarge Sysbench
CPU speed: events per second: 8774.13

As with the ApacheBench tests, the ARM instances absolutely wipe the floor with the Intel and AMD instances, and they’re 10-20% cheaper. It’s no wonder AWS are saying that the Graviton 2 based ARM instances represent a 40% price-performance improvement over other instance types.

Based on this I’ll certainly be looking at trying to use ARM instances where I can.

Sonos S2 app stuck on “Sonos” splash screen

I came to start the Sonos S2 Windows application today and noticed that it wouldn’t run; it just sticks with the Sonos logo on screen and never actually launches into the application. So I checked the log files under C:\ProgramData\SonosV2,_Inc\runtime and looked at the most recent XML file in there, which should be the log from the last launch, and I could see the error below:

<ApplicationData>@Module:module @Message:Can&apos;t bind (port 3410)</ApplicationData>

This suggests that the Sonos app is trying to open a port and failing, presumably because the port is already reserved for something else. So I checked the currently excluded ports using the command below, and sure enough port 3410, which Sonos is trying to grab, is already excluded in a range spanning 3390-3489.

netsh int ipv4 show excludedportrange protocol=tcp

A little further Googling found that this might be related to Hyper-V grabbing ports, possibly more than it really needs, and as I use WSL2 on my machine I do have Hyper-V installed. So the solution was to simply disable Hyper-V, reboot, add an exclusion for port 3410 so that Hyper-V can’t grab it, then re-enable Hyper-V; the commands for that are below:

dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
netsh int ipv4 add excludedportrange protocol=tcp startport=3410 numberofports=1
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V
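
If you re-run the earlier show command, the new exclusion should now appear in the list (manually added exclusions are marked with an asterisk):

netsh int ipv4 show excludedportrange protocol=tcp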

After I’d done this, the Sonos app launches again, and Hyper-V works as expected. Hopefully this helps someone, as I found this problem mentioned in a closed topic on the Sonos forums, but with no solution listed.