“Windows could not prepare the computer to boot into the next phase of installation” error when installing Server 2022 on an HPE ProLiant server

I’d been trying to install Windows Server 2022 onto an HPE ProLiant server remotely, with the installation media mounted virtually via the iLO, and kept running into this error. The install target was a standard RAID 1 configured pair of disks, and I spent a lot of time looking at that disk setup, because the error is thrown right at the point where the disks are partitioned and the file copy starts.

However, the target was not the problem. What was causing the problem was the internal SD card slot, and the fact that it still had an SD card with the previous ESXi install on it. Following the HPE instructions to disable the SD card slot and then re-attempting the install was the fix in this case.

Ultimately I don’t know why having the SD card available interferes with the Windows install process, but clearly it does.

Adding root certificates to the system trust store in Amazon Linux 2023 in AWS

As AWS are constantly notifying customers about the need to update RDS certificates, I thought now would be a good time to do it, and also to ensure that WordPress is connecting to RDS via TLS. Updating the certificates sounds simple enough: it’s just a case of modifying the RDS instance and picking the new certificate. Getting the new root and intermediate CAs into the AL2023 system certificate store was another matter.

Once you’ve had a read of this page and downloaded the relevant bundles, in the case of WordPress you need to add them to the system store. I had a good look around the internet and couldn’t find much information on this at all, but knowing AL2023 is based on Fedora, I checked and they do have documentation covering it, so kudos to them for that. The basic process is as follows:

# Grab the correct bundle for your region
wget https://truststore.pki.rds.amazonaws.com/eu-west-1/eu-west-1-bundle.pem
# Copy the bundle to the trust anchors store
sudo cp ~/eu-west-1-bundle.pem /etc/pki/ca-trust/source/anchors/
# This ensures the relevant certificate bundles in app specific formats are updated
sudo update-ca-trust
# You should then be able to run this and see the newly added certificate bundles
trust list | grep eu-west-1

After this you can test connectivity to RDS via the MariaDB or MySQL CLI utility to confirm the new certificate is being picked up and works; if it is, this command should connect:

#  Connect to the database
mariadb -h DBHost --ssl-verify-server-cert -P 3306 -u DBUsername -p
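
To double-check that the session is actually encrypted rather than just connected, you can query the Ssl_cipher status variable, which should come back non-empty over a TLS connection. Using the same placeholder host and user as above:

# Confirm TLS was negotiated - Ssl_cipher should not be empty
mariadb -h DBHost --ssl-verify-server-cert -P 3306 -u DBUsername -p -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"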

Next, we can test PHP connectivity:

# Drop into a php prompt
php -a

// Define the database variables
$host = 'DBHost';
$username = 'DBUsername';
$password = 'DBPassword';
$db_name = 'DBName';

// Initializes MySQLi
$conn = mysqli_init();

// Point mysqli at the RDS CA bundle we added to the trust anchors
mysqli_ssl_set($conn, NULL, NULL, "/etc/pki/ca-trust/source/anchors/eu-west-1-bundle.pem", NULL, NULL);

// Establish the connection
mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL);

// If connection failed, show the error
if (mysqli_connect_errno())
{
    die('Failed to connect to MySQL: '.mysqli_connect_error());
}

// Run the Select query which should dump a list of WordPress users out if the connection was successful
printf("Reading data from table: \n");
$res = mysqli_query($conn, 'SELECT * FROM wp_users');
while ($row = mysqli_fetch_assoc($res))
{
    var_dump($row);
}

If that’s successful, the next step is to make sure WordPress uses TLS to connect to the database, which means adding the following to the wp-config.php file:

define( 'MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL );
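
For context, that define just sits in wp-config.php alongside the standard database settings; the values below are placeholders matching the earlier examples rather than anything you should copy verbatim:

// Standard WordPress database settings (placeholder values)
define( 'DB_NAME', 'DBName' );
define( 'DB_USER', 'DBUsername' );
define( 'DB_PASSWORD', 'DBPassword' );
define( 'DB_HOST', 'DBHost' );

// Force the mysqli client to use TLS when connecting to the database
define( 'MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL );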

At this point, the final step was to enforce SSL connectivity on the RDS instance itself, which just needed the require_secure_transport option turning on in the instance’s parameter group.
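
If you prefer the CLI to the console for that change, something along these lines should do it; the parameter group name here is a placeholder, and the instance needs to be using a custom parameter group rather than the default one:

# Turn on require_secure_transport in the custom parameter group attached to the instance
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-custom-db-params \
  --parameters "ParameterName=require_secure_transport,ParameterValue=1,ApplyMethod=immediate"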

I hope this helps someone enforce SSL connectivity, or at the very least update the root certificates in AL2023 for some other reason, as I found very little AL2023-specific documentation on that; luckily the upstream Fedora docs work.

GitHub Actions Failing – Error: Resource not accessible by integration

I suddenly came across various GitHub Actions across a number of different repos that all started failing at the same time, with errors similar to the below:

Error: Resource not accessible by integration
Exception: {
 "error": "403/Unknown Error:HttpError",
 "from": "undefined/Error",
 "message": "There was an error getting change files for repo:reponame owner:PR-Owner pr:593",
 "payload": "Resource not accessible by integration"
}

I couldn’t work out whether something had changed on the GitHub side, in terms of the permissions the default GITHUB_TOKEN allows, or whether someone on one of the dev teams had been fiddling with their repo permissions.
The solution here was to add additional permissions to the workflow, either at the top level or within a specific job. In my case it was permissions around the repository contents and pull requests.
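
For reference, the sort of block I added to the workflow file looks like the below; the exact scopes you need depend on what the action actually does, so treat this as a starting point rather than a definitive list:

# Grant the GITHUB_TOKEN the scopes the action needs, at the workflow (or job) level
permissions:
  contents: read
  pull-requests: write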

Validity fingerprint sensor 06cb:009a on Lenovo X1 Carbon running EndeavourOS

This is really a follow-up post to my previous one about getting the fingerprint sensor working correctly on my Lenovo X1 Carbon under Fedora, but as I like to switch my work laptop around, I’m now running EndeavourOS, which is based on Arch.

The Arch wiki actually has some quite comprehensive details on various laptops, including my Lenovo X1.

Doing this on EndeavourOS was actually easier than on Fedora and really just consisted of a couple of steps: install the python-validity driver from the AUR, then load the sensor firmware and factory-reset the sensor.

$ yay -S python-validity
$ sudo validity-sensors-firmware
$ sudo python3 /usr/share/python-validity/playground/factory-reset.py
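
If the validity service isn’t already running after the install, it may need enabling before enrollment will work. The unit name below is the one the Fedora package uses, so I’m assuming the AUR build ships the same one:

$ sudo systemctl enable --now python3-validity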

After that it was a straightforward enrollment of my fingerprints

$ fprintd-enroll -f right-index-finger
$ fprintd-enroll -f left-index-finger
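
Before relying on it for login, it’s worth a quick check that a verification pass actually succeeds against one of the enrolled prints:

$ fprintd-verify -f right-index-finger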

After that, it was just a case of checking that fingerprint login is enabled in settings under the user section.

Fixing yum repos on CentOS 6 now it’s EOL

First of all, if you can, you really should upgrade: to CentOS Stream if a rolling release works for you, or to Alpine or Rocky Linux if you want the same sort of release cadence as CentOS historically had. Before anyone points out that there’s no direct upgrade path: I know, and that makes upgrading basically a reprovisioning exercise, but in the longer term you’ll still be better off. This is a small note I found regarding the current CentOS 6 status:

CentOS 6 is *dead* and *shouldn't* be used anywhere at *all*

Also, if you’re considering the last non-rolling release of CentOS, CentOS 8, keep in mind that it has had the rug pulled out from under it in terms of lifecycle: it should have been supported until the end of 2029, but that was brought forward to the end of 2021, so it is also now end of life.

For the purposes of what follows though, I’m assuming that you can’t upgrade easily for some reason and that’s why you’re here, stuck in the same hole I was.

So, you’ll see an error similar to the below when you run the usual yum update commands:

Setting up Upgrade Process
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again

The fix here is fairly simple: use the CentOS Vault repos, which are snapshots of the older release trees.

So to fix the base repo, just copy the following into /etc/yum.repos.d/CentOS-Base.repo

[C6.10-base]
name=CentOS-6.10 - Base
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-updates]
name=CentOS-6.10 - Updates
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-extras]
name=CentOS-6.10 - Extras
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-contrib]
name=CentOS-6.10 - Contrib
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/contrib/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

[C6.10-centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=http://linuxsoft.cern.ch/centos-vault/6.10/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

Then to fix the EPEL repo, this is the archived-mirror config to go into /etc/yum.repos.d/epel.repo

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=http://mirror.math.princeton.edu/pub/fedora-archive/epel/6/$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
baseurl=http://mirror.math.princeton.edu/pub/fedora-archive/epel/6/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

If you prefer, you can just curl down the files containing the above config and overwrite the existing old configs:

curl https://www.mark-gilbert.co.uk/wp-content/uploads/2021/08/CentOS-Base.repo --output /etc/yum.repos.d/CentOS-Base.repo
curl https://www.mark-gilbert.co.uk/wp-content/uploads/2021/08/epel.repo --output /etc/yum.repos.d/epel.repo
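
Whichever way you get the new repo files in place, it’s worth clearing out the old cached metadata from the dead mirrors before running an update again:

# Clear the stale cached metadata and rebuild it from the vault repos
yum clean all
yum makecache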

Update – Dec 2021 – Someone posted in the comments to say they couldn’t download the configs using the commands I included in the article. I realise this is due to the various HTTPS settings I enforce on this website not being compatible with older CentOS installs, so I’ve added the same commands below, pulling the files from AWS S3, to get around this.

AWS S3 hosted versions of the same files and the relevant commands are below:

curl http://mark-gilbert-co-uk.s3-website-eu-west-1.amazonaws.com/CentOS-Base.repo --output /etc/yum.repos.d/CentOS-Base.repo
curl http://mark-gilbert-co-uk.s3-website-eu-west-1.amazonaws.com/epel.repo --output /etc/yum.repos.d/epel.repo

Update – Feb 2022 – I’ve had to amend the details here again as more and more HTTP mirrors are moving to redirect to HTTPS, meaning that on a server with extremely old software packages you won’t be able to connect, as you’ll be pushed to HTTPS. Unfortunately this is only going to happen more and more, and you really, really should migrate to something more modern that’s still supported.

Update – April 2022 – I’ve updated the epel mirrors to use the Princeton University servers in the US, as someone in the comments pointed out that the epel mirrors were also not working now.

Validity fingerprint sensor 06cb:009a on Lenovo X1 Carbon running Fedora

I’ve been using Linux on my work laptop for a while now, but found I really missed having the fingerprint sensor for login and authentication. I tried to get it working sometime last year under Ubuntu, but as I’d just switched to Fedora (because why not?), I thought I’d have another go.

So in really brief form, and apologies if I’ve missed anything here, the steps I took to get my fingerprint sensor working on my Lenovo X1 Carbon under Fedora 34 were as follows:


Grab RPM packages from this post – https://github.com/uunicorn/python-validity/issues/54#issuecomment-727092786 and extract RPMs from the zip files

Install RPMs
$ sudo dnf install python3-validity-0.12-1.fc33.noarch.rpm open-fprintd-0.6-1.fc33.noarch.rpm fprintd-clients-1.90.1-2.fc33.x86_64.rpm fprintd-clients-pam-1.90.1-2.fc33.x86_64.rpm

Create two files that the package expects but doesn’t create
$ sudo touch /usr/share/python-validity/backoff && sudo touch /usr/share/python-validity/calib-data.bin

Find the driver file in the install folder; in my case there is a driver named “6_07f_lenovo_mis_qm.xpfwext”, so change the permissions on that
$ cd /usr/share/python-validity && ls -la
$ sudo chmod 755 6_07f_lenovo_mis_qm.xpfwext

Reset the fingerprint sensor, as this had been used previously
$ sudo systemctl stop python3-validity
$ sudo validity-sensors-firmware
$ sudo python3 /usr/share/python-validity/playground/factory-reset.py

Start the sensor package again and set to run at boot
$ sudo systemctl enable python3-validity && sudo systemctl start python3-validity

Check the validity package has started correctly
$ sudo systemctl status python3-validity

If the service is all running ok, then go ahead and enroll your fingerprints
$ fprintd-enroll

At this point it let me enroll my fingerprints, so go ahead and enable fingerprint authentication as a login method; I then had the sensor working both for logging in to GNOME and for sudo at the terminal
$ sudo authselect enable-feature with-fingerprint

Sources
https://github.com/uunicorn/python-validity/issues/54#issuecomment-727092786
https://github.com/uunicorn/python-validity#errors-on-startup
https://github.com/uunicorn/python-validity/issues/3
https://bugzilla.redhat.com/show_bug.cgi?id=1943406

Benchmarking AWS ARM Instances Part 1

As AWS have made the t4g.micro instance free until the end of June, it’d be a crime not to test them out. My test comparisons will be between two small burstable instances, the sort you might use for a small website (a t3a.micro and a t4g.micro), and, for pure CPU benchmarking, two larger general purpose instances (an m5.2xlarge and an m6g.2xlarge). The ARM instances here run on the AWS Graviton processors, and AnandTech have a fantastic write-up on them.

So, diving straight into my admittedly small and basic benchmarks, the first test I ran was an ApacheBench test against a website being served from each instance. In this case I replicated this site onto a LEMP stack on each instance and ran the test against that; the command run is below:

ab -t 120 -n 100000 -c 100 https://www.test-site.com/

t3a.micro ApacheBench
Requests per second: 14.95 #/sec
Time per request: 6690.257 ms
Time per request: 66.903 [ms] (mean, across all concurrent requests)
Transfer rate: 1626.65 [Kbytes/sec] received

t4g.micro ApacheBench
Requests per second: 24.65 #/sec
Time per request: 4057.414 ms
Time per request: 40.574 [ms] (mean, across all concurrent requests)
Transfer rate: 2680.52 [Kbytes/sec] received

m5.2xlarge ApacheBench
Requests per second: 67.22 #/sec
Time per request: 1487.566 ms
Time per request: 14.876 [ms] (mean, across all concurrent requests)
Transfer rate: 6876.10 [Kbytes/sec] received

m6g.2xlarge ApacheBench
Requests per second: 67.88 #/sec
Time per request: 1473.144 ms
Time per request: 14.731 [ms] (mean, across all concurrent requests)
Transfer rate: 7502.57 [Kbytes/sec] received

The performance difference here on the smaller instance types is incredible really: the ARM instance is 64.8% quicker, and these instances cost about 10-20% less than the equivalent Intel or AMD powered instances. For the larger instances the figures are so similar that I suspect we’re hitting another limiting factor somewhere, possibly database performance.

Next up was a basic sysbench CPU benchmark, which verifies prime numbers by calculating primes up to a limit and gives a raw CPU performance figure. This also includes the larger general purpose instances. The command used is below, with the thread count set to match the vCPUs in the instance:

sysbench cpu --cpu-max-prime=20000 --threads=2 --time=120 run
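
That’s the command as run on the two micro instances; the two 2xlarge instances have eight vCPUs each, so for those the only change is the thread count:

sysbench cpu --cpu-max-prime=20000 --threads=8 --time=120 run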

t3a.micro Sysbench
CPU speed: events per second: 500.22

t4g.micro Sysbench
CPU speed: events per second: 2187.58

m5.2xlarge Sysbench
CPU speed: events per second: 2693.14

m6g.2xlarge Sysbench
CPU speed: events per second: 8774.13

As with the ApacheBench tests, the ARM instances absolutely wipe the floor with the Intel and AMD instances, and they’re 10-20% cheaper. It’s no wonder AWS are saying that the Graviton2-based ARM instances represent a 40% price-performance improvement over other instance types.

Based on this I’ll certainly be looking at trying to use ARM instances where I can.

Sonos S2 app stuck on “Sonos” splash screen

I came to start the Sonos S2 Windows application today and noticed that it wouldn’t run; it just sticks with the Sonos logo on screen and never actually launches into the application. So I checked the log files under C:\ProgramData\SonosV2,_Inc\runtime and looked at the most recent XML file in there, which should be the log from the last launch, and I could see the error below:

<ApplicationData>@Module:module @Message:Can&apos;t bind (port 3410)</ApplicationData>

This suggests that the Sonos app is trying to open a port and failing, presumably because it’s already reserved for something else. So I checked the current excluded port ranges using the command below, and port 3410, which Sonos is trying to grab, is already excluded in a range spanning 3390-3489.

netsh int ipv4 show excludedportrange protocol=tcp

A little further Googling and I found that this might be related to Hyper-V grabbing ports, possibly more than it really needs, and as I use WSL2 on my machine I do have Hyper-V installed. So the solution was to simply disable Hyper-V, reboot, add an exclusion for port 3410 so that Hyper-V can’t grab it, then re-enable Hyper-V; the commands for that are below:

dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
netsh int ipv4 add excludedportrange protocol=tcp startport=3410 numberofports=1
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V
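
Once that’s done, you can confirm the single-port exclusion stuck by running the same netsh command from earlier and filtering for the port:

netsh int ipv4 show excludedportrange protocol=tcp | findstr 3410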

After I’d done this, the Sonos app launches again, and Hyper-V works as expected. Hopefully this helps someone, as I found this problem mentioned in a closed topic on the Sonos forums, but with no solution listed.

Ubiquiti Home Network Setup – part 2

Finally, this is the follow-up post to My Experiences with consumer mesh wireless devices and Ubiquiti Home Network Setup – part 1

After quite a while at my parents’ taking care of family stuff, I have finally managed to get back home, and I bet my partner is thrilled, because as soon as I’m back, in goes more network stuff, including holes through walls and cabling.

So my home setup now consists of:
Draytek Vigor 130 modem
Ubiquiti Dream Machine
Two Ubiquiti Flex Mini switches
A single Ubiquiti AC-Lite AP with another following later this week:

[Image: Unifi devices]

The living room switch is wired directly from the office, with a Cat6 run going out from the office, around the outside of the house, and back in to the living room, leaving the current setup looking something like this:

[Image: Home network map]

The cabling wasn’t strictly necessary, as I could have used the second AP in wireless uplink mode, but since I can wire it in, I’ll certainly see better performance from doing so.

There are a few small annoyances that I’ve seen, things I’ve become aware of due to the wealth of features on the Ubiquiti kit, but certainly not caused by it. First, my partner’s work laptop is old enough that its wireless chip doesn’t support fast roaming, so it can be a bit of a pain when she moves between the two APs, whereas my work laptop does support it. Previously, on the consumer wireless devices that do support fast roaming, there was no way to see which client devices are capable of it, whereas now I can just look at my client device table and see exactly what does and doesn’t support it.
Second, the 2.4GHz wireless bands round here are just a noisy mess, with seemingly every house nearby blaring out 2.4GHz at full strength, although this seems to be a problem everywhere these days. The number of times I’ve heard our network guys at work cursing our very central London office, with its absolutely full 2.4GHz spectrum. (Side note: in that office even the 5GHz bands are full, and we get channel evacuation alerts far more frequently than anyone would want for anything above channel 48.)

I’ve dropped the transmit power down as appropriate, so don’t look at me for the noisy 2.4GHz bands, and when the second AC-Lite comes I’ll have to tune everything again. Wireless channel widths have been reduced, as I’m not trying to push huge data throughput over wireless, just to cover the Internet speed, which, until Openreach pull FTTP out of the bag in my area, isn’t that fast. Anything that needs big throughput gets wired in. That leaves me at 20MHz channels for the 2.4GHz band and 40MHz channels for the 5GHz band; nothing mad, but enough for what I need.

When summer rolls round next year, the new network should easily stretch to the garden, enabling me to work outside, and I can get my partner to start nagging work for a device that supports fast roaming to solve that little conundrum.


Doing two proper setups like this with Ubiquiti kit has been an absolute breath of fresh air compared to the various consumer mesh wireless kits I looked at, and I really can’t imagine I’d go back to consumer grade stuff again, even if tacking a long length of Cat6 to a wall was a bloody pain.

Ubiquiti Home Network Setup – part 1

Update 11/11/2020: I’ve followed this post up with Ubiquiti Home Network Setup – part 2

This is kind of a follow up to my previous post, My Experience With Consumer Mesh Wireless Devices

Due to unfortunate personal circumstances I’ve not actually been at home recently, so as far as my home setup goes, I’ve only made minor progress. I’ve decided I will go all in and refresh my home network to a Ubiquiti system. At present that’s involved getting a Ubiquiti UDM coupled with a Draytek Vigor 130 modem to replace the core of the network. This will be further supplemented with a FlexHD AP somewhere else in the house once I’ve had a chance to do a bit of a site survey. I’ve no space for racking any kit, so I’ve had to go with the UDM rather than the UDM Pro.

At my parents’ house, however, I’ve spent ample time, and that meant I had an urgent need to make some changes to their network, as I’ve had to work from there for the last three months. They were basically in the same boat as me in terms of wanting blanket wireless coverage, with the exception that they have a fairly large house that’s quite long and has been extended, so it has internal walls that were once external walls, making them thicker than usual.

I started with the same Ubiquiti UDM and Draytek 130, and added a FlexHD, two AC-Lite APs, and a couple of Flex Mini switches; with a few long cable runs, this has improved things massively for them.

[Image: Unifi devices]

In total there are approximately 40 network clients, and everything seems to be working great after a little tuning of channels and transmit powers. The FlexHD covers off the end of the house with the most wireless clients, as you can see on this cool map:

[Image: My parents’ home network map]

My goal here was stability and internet connectivity; there’s no need for them to be pushing insane speeds with 80 or 160MHz wide 5GHz channels, as it’d just create noise and use up airtime unnecessarily. This is purely for internet connectivity, and that’s exactly what I’ve managed to achieve. Problems with their previous kit were frequent, with drop-outs or weird software bugs, but since the upgrade everything has been very stable, and I must say I’m a full convert to Ubiquiti kit; it’s been absolutely great to use.

One cool thing that I have seen through the Ubiquiti kit is the absolute proliferation of wireless hotspots in cars and on phones. Sat in their house, I can generally see three other neighbouring access points when I check the wireless networks available, which are the neighbours either side and the house over the street. However, looking at the neighbouring access points list in the console shows me 206 access points over a 24 hour period!

[Image: Neighbouring access points]

All the static ones in neighbours’ houses will be there, but they’re swamped by the sheer number that pass by in cars and are only ever seen briefly, as their house backs onto a relatively busy road; the clue that these are in cars is in the AP names.

There are a lot of cool features on the Ubiquiti kit, and if you’re a bit of a network or IT nerd, I’d highly recommend it if you’ve been struggling with home wireless.

Still to come

Next up will be the network changes when I get home. I’d imagine it’ll be a very similar setup to the one I’ve done for my parents, as anything that needs high speeds or low latency, like the NAS or my gaming PC, will be wired, and wireless just needs to be stable and faster than the internet connection. So look out for that in the next few weeks hopefully; I’m looking forward to doing it all again at home.