Validity fingerprint sensor 06cb:009a on Lenovo X1 Carbon running EndeavourOS

This is really a follow-up to my previous post about getting the fingerprint sensor working correctly on my Lenovo X1 Carbon under Fedora. As I like to switch my work laptop around, I’m now running EndeavourOS, which is based on Arch.

The Arch wiki actually has quite comprehensive details on various laptops, including my Lenovo X1.

Doing this on EndeavourOS was actually easier than on Fedora and really just consisted of a couple of steps: install the python-validity driver from the AUR, then enable it and reset the firmware.

$ yay -S python-validity
$ sudo validity-sensors-firmware
$ sudo python3 /usr/share/python-validity/playground/

After that it was a straightforward enrollment of my fingerprints

$ fprintd-enroll -f right-index-finger
$ fprintd-enroll -f left-index-finger

After that, it was just a case of checking that fingerprint login is enabled in settings under the user section.

Fixing yum repos on CentOS 6 now it’s EOL

First of all, if you can, you really should upgrade: to CentOS Stream if a rolling release works for you, or to AlmaLinux or Rocky Linux if you want the same sort of release cadence CentOS historically had. Before anyone points out that there’s no direct upgrade path: I know, and that makes upgrading essentially a reprovisioning exercise, but in the longer term you’ll still be better off. This is a small note I found regarding the current CentOS 6 status:

CentOS 6 is *dead* and *shouldn't* be used anywhere at *all*

Also, if you’re considering the last non-rolling release of CentOS, CentOS 8, keep in mind that it had the rug pulled from under it in terms of lifecycle: it should have been supported until the end of 2029, but that was brought forward to the end of 2021, so it is also end of life.

For the purposes of what follows though, I’m assuming that you can’t upgrade easily for some reason and that’s why you’re here, stuck in the same hole I was.

So, you’ll see an error similar to the below when you run the usual yum update commands:

Setting up Upgrade Process
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again

The fix here is fairly simple: use the CentOS vault repos, which are read-only snapshots of the older release trees.
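Before pasting whole files in, it may help to see the shape of the change. This is an illustrative sketch only: the sample repo content below is a stand-in for the stock file, and the 6.10 vault path is an assumption you should match to your own minor release.

```shell
# Demo of pointing a stock CentOS-Base.repo at the vault: comment out the
# dead mirrorlist line and switch the commented baseurl to vault.centos.org.
# Working on a local copy here; on a real box edit /etc/yum.repos.d/.
repo=CentOS-Base.repo

# Sample of the stock [base] stanza, for demonstration purposes
cat > "$repo" <<'EOF'
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
EOF

sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e 's|^#baseurl=.*|baseurl=http://vault.centos.org/6.10/os/$basearch/|' \
       "$repo"

grep '^baseurl=' "$repo"
```

The same substitution works for the updates, extras, contrib and centosplus stanzas, just with the matching vault subdirectory in each baseurl.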

So to fix the base repo, just copy the following into /etc/yum.repos.d/CentOS-Base.repo

# Vault baseurls follow the standard vault.centos.org layout for 6.10;
# swap http/https depending on what this host's yum can still negotiate.
[base]
name=CentOS-6.10 - Base
baseurl=http://vault.centos.org/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1

[updates]
name=CentOS-6.10 - Updates
baseurl=http://vault.centos.org/6.10/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1

[extras]
name=CentOS-6.10 - Extras
baseurl=http://vault.centos.org/6.10/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1

[contrib]
name=CentOS-6.10 - Contrib
baseurl=http://vault.centos.org/6.10/contrib/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0

[centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=http://vault.centos.org/6.10/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0

Then to fix the epel repo, this is the vault config to go into /etc/yum.repos.d/epel.repo

# EPEL 6 now lives in the Fedora archives; adjust the mirror if this
# host can't negotiate https with it.
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=https://archives.fedoraproject.org/pub/archive/epel/6/$basearch
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
enabled=1

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
baseurl=https://archives.fedoraproject.org/pub/archive/epel/6/$basearch/debug
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
enabled=0

If you prefer you can just curl the files down that contain the above config and overwrite the existing old configs:

curl --output /etc/yum.repos.d/CentOS-Base.repo
curl --output /etc/yum.repos.d/epel.repo

Update – Dec 2021 – Someone posted in the comments to say they couldn’t download the configs using the commands I included in the article. I realise this is due to the various https settings I employ on this website, which older CentOS installs aren’t compatible with, so I’ve added the same commands below, pulling from AWS S3 instead, to get around this.

AWS S3 hosted versions of the same files and the relevant commands are below:

curl --output /etc/yum.repos.d/CentOS-Base.repo
curl --output /etc/yum.repos.d/epel.repo

Update – Feb 2022 – I’ve had to amend the details here again, as more and more http mirrors are moving to redirect to https, meaning that on a server with extremely old software packages you won’t be able to connect, as you’ll be pushed to https. Unfortunately this is only going to happen more and more, and you really, really should migrate to something more modern that’s still supported.

Update – April 2022 – I’ve updated the epel mirrors to use the Princeton University servers in the US, as someone in the comments pointed out that the epel mirrors were also not working now.

Validity fingerprint sensor 06cb:009a on Lenovo X1 Carbon running Fedora

I’ve been using Linux on my work laptop for a while now, but found I really missed having the fingerprint sensor for login and authentication. I tried to get it working sometime last year under Ubuntu, but as I’d just switched to Fedora (because why not?), I thought I’d have another go.

So in really brief form, and apologies if I’ve missed anything here, the steps I took to get my fingerprint sensor working on my Lenovo X1 Carbon under Fedora 34 were as follows:

Grab RPM packages from this post – and extract RPMs from the zip files

Install RPMs
$ sudo dnf install python3-validity-0.12-1.fc33.noarch.rpm open-fprintd-0.6-1.fc33.noarch.rpm fprintd-clients-1.90.1-2.fc33.x86_64.rpm fprintd-clients-pam-1.90.1-2.fc33.x86_64.rpm

Create two missing files that don’t get created
$ sudo touch /usr/share/python-validity/backoff && sudo touch /usr/share/python-validity/calib-data.bin

Find the driver file in the install folder; in my case there is a driver named “6_07f_lenovo_mis_qm.xpfwext”, so change the permissions on that
$ cd /usr/share/python-validity && ls -la
$ sudo chmod 755 6_07f_lenovo_mis_qm.xpfwext

Reset the fingerprint sensor, as this had been used previously
$ sudo systemctl stop python3-validity
$ sudo validity-sensors-firmware
$ sudo python3 /usr/share/python-validity/playground/

Start the sensor package again and set to run at boot
$ sudo systemctl enable python3-validity && sudo systemctl start python3-validity

Check the validity package has started correctly
$ sudo systemctl status python3-validity

If the service is all running ok, then go ahead and enroll your fingerprints
$ fprintd-enroll

At this point it let me enroll my fingerprints, so go ahead and enable fingerprints as a login method. After that I had the fingerprint sensor working and could use it both for login to Gnome and for sudo at the terminal
$ sudo authselect enable-feature with-fingerprint


Benchmarking AWS ARM Instances

As AWS have made the t4g.micro instance free until the end of June, it’d be a crime not to test them out. My test comparisons will be between two burstable small instances, the sort you might use for a small website, a t3a.micro and a t4g.micro, and, for pure CPU benchmarking, larger general purpose instances, an m5.2xlarge and an m6g.2xlarge. The ARM instances here run on the AWS Graviton processors, and AnandTech have a fantastic writeup on them.

So diving straight into my admittedly small and basic benchmarks, the first test I ran was an ApacheBench test against a website being served from each instance. In this case I replicated this site to a LEMP stack on the instance and ran against that, and again the command ran is below:

ab -t 120 -n 100000 -c 100

t3a.micro ApacheBench
Requests per second: 14.95 #/sec
Time per request: 6690.257 ms
Time per request: 66.903 [ms] (mean, across all concurrent requests)
Transfer rate: 1626.65 [Kbytes/sec] received

t4g.micro ApacheBench
Requests per second: 24.65 #/sec
Time per request: 4057.414 ms
Time per request: 40.574 [ms] (mean, across all concurrent requests)
Transfer rate: 2680.52 [Kbytes/sec] received

m5.2xlarge ApacheBench
Requests per second: 67.22 #/sec
Time per request: 1487.566 ms
Time per request: 14.876 [ms] (mean, across all concurrent requests)
Transfer rate: 6876.10 [Kbytes/sec] received

m6g.2xlarge ApacheBench
Requests per second: 67.88 #/sec
Time per request: 1473.144 ms
Time per request: 14.731 [ms] (mean, across all concurrent requests)
Transfer rate: 7502.57 [Kbytes/sec] received

The performance difference here on the smaller instance types is really incredible: the ARM instance is 64.9% quicker, and these instances cost about 10-20% less than the equivalent Intel or AMD powered instances. For the larger instances the figures are so similar that I suspect we’re hitting another limiting factor somewhere, possibly database performance.
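For transparency, the speed-up figure falls straight out of the requests-per-second numbers above:

```shell
# Percentage improvement of the t4g.micro over the t3a.micro,
# from the ApacheBench requests-per-second figures above
awk 'BEGIN { printf "%.1f%%\n", (24.65 / 14.95 - 1) * 100 }'
```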

Next up was a basic Sysbench CPU benchmark, which will verify prime numbers by calculating primes up to a limit and give a raw CPU performance figure. This also includes the larger general purpose instances here too. The command used for this is below, with threads to match the vCPU in the instance:

sysbench cpu --cpu-max-prime=20000 --threads=2 --time=120 run

t3a.micro Sysbench
CPU speed: events per second: 500.22

t4g.micro Sysbench
CPU speed: events per second: 2187.58

m5.2xlarge Sysbench
CPU speed: events per second: 2693.14

m6g.2xlarge Sysbench
CPU speed: events per second: 8774.13

As with the ApacheBench tests, the ARM instances absolutely wipe the floor with the Intel and AMD instances, and they’re 10-20% cheaper. It’s no wonder AWS are saying that the Graviton 2 based ARM instances represent a 40% price-performance improvement over other instance types.
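As a rough price-performance sketch from the sysbench numbers, assuming us-east-1 on-demand rates of about $0.384/hr for the m5.2xlarge and $0.308/hr for the m6g.2xlarge (assumed figures, check current pricing):

```shell
# Sysbench events per second, per dollar per hour, for each instance;
# the hourly prices are assumptions based on us-east-1 on-demand rates.
awk 'BEGIN {
  m5  = 2693.14 / 0.384   # m5.2xlarge:  events/sec per $/hr
  m6g = 8774.13 / 0.308   # m6g.2xlarge: events/sec per $/hr
  printf "m5: %.0f  m6g: %.0f  ratio: %.1fx\n", m5, m6g, m6g / m5
}'
```

On this particular benchmark the gap is far bigger than 40%, though a pure CPU prime test like this is close to a best case for Graviton.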

Based on this I’ll certainly be looking at trying to use ARM instances where I can.

Sonos S2 app stuck on “Sonos” splash screen

I came to start the Sonos S2 Windows application today and noticed that it wouldn’t run; it just sticks on the Sonos logo and never actually launches into the application. So I checked the log files under C:\ProgramData\SonosV2,_Inc\runtime and looked at the most recent XML file in there, which should be the last launch log, and could see the error below:

<ApplicationData>@Module:module @Message:Can&apos;t bind (port 3410)</ApplicationData>

This suggests that the Sonos app is trying to open a port and failing, presumably because it’s already reserved for something else. So I checked the currently excluded ports using the command below, and port 3410, which Sonos is trying to grab, is already excluded in a range spanning 3390-3489.

netsh int ipv4 show excludedportrange protocol=tcp
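Reading those excluded ranges by eye is easy to get wrong, so purely as an illustrative sketch (in POSIX shell rather than anything Windows-native), here’s the check being made: does the port fall inside an inclusive start-end range?

```shell
# Returns success if port $1 lies within the inclusive range $2-$3.
# 3390-3489 is the range Hyper-V had reserved on my machine.
port_in_range() {
  [ "$1" -ge "$2" ] && [ "$1" -le "$3" ]
}

if port_in_range 3410 3390 3489; then
  echo "port 3410 is inside the excluded range"
fi
```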

A little further Googling found that this might be related to Hyper-V grabbing ports, possibly more than it really needs, and as I use WSL2 on my machine I do have Hyper-V installed. So the solution was simply to disable Hyper-V, reboot, add an exclusion for port 3410 so that Hyper-V can’t grab it, then re-enable Hyper-V; the commands for that are below:

dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
netsh int ipv4 add excludedportrange protocol=tcp startport=3410 numberofports=1
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V

After I’d done this, the Sonos app launches again, and Hyper-V works as expected. Hopefully this helps someone, as I found this problem mentioned in a closed topic on the Sonos forums, but with no solution listed.

Ubiquiti Home Network Setup – part 2

Finally, this is the follow up post to My Experiences with consumer mesh wireless devices and Ubiquiti Home Network Setup – part 1

After quite a while at my parents’ taking care of family stuff, I have finally managed to get back home, and I bet my partner is thrilled, because as soon as I’m back, in goes more network stuff, including holes through walls and cabling.

So my home setup now consists of:
Draytek Vigor 130 modem
Ubiquiti Dream Machine
Two Ubiquiti Flex Mini switches
A single Ubiquiti AC-Lite AP with another following later this week:

Unifi Devices

The living room switch is wired directly from the office, with a Cat6 run out from the office, outside and round the outside of the house and back into the living room, leaving the current setup looking something like this;

Home network map

The cabling wasn’t strictly necessary, as I could have used the second AP in wireless uplink mode, but since I can wire it in, I’ll certainly see better performance from doing so.

There are a few small annoyances that I’ve seen, things I’ve become aware of due to the wealth of features on the Ubiquiti kit, but certainly not caused by it. First, my partner’s work laptop is old enough that its wireless chip doesn’t support fast roaming, so it can be a bit of a pain when she moves between the two APs, whereas my work laptop does support it. Previously, on the consumer wireless devices that did support fast roaming, there was no way to see which client devices were capable of it, whereas now I can just look at my client device table to see exactly what does and doesn’t support it.
Second, the 2.4GHz wireless bands round here are just a noisy mess, with seemingly every house nearby blaring out 2.4GHz at full strength, although this seems to be a problem everywhere these days. The number of times I’ve heard our network guys at work cursing our very central London office, with an absolutely full 2.4GHz spectrum. (Side note, in that office even the 5GHz bands are full, and we get evacuation alerts far more frequently than anyone would want, for anything above band 48)

I’ve dropped the transmit power down as appropriate, so don’t look at me for noisy 2.4GHz bands, and when the second AC-Lite comes I’ll have to tune everything again. Wireless channel widths have been reduced, as I’m not trying to push huge data throughput over wireless, just to cover my internet speed, which, until Openreach pull FTTP out of the bag in my area, isn’t that fast. Anything that needs big throughput gets wired in. That leaves me at 20MHz channels on the 2.4GHz band and 40MHz channels on the 5GHz band; nothing mad, but enough for what I need.

When summer rolls round next year, the new network should easily stretch to the garden, enabling me to work outside, and I can get my partner to start nagging work for a device that supports fast roaming to solve that little conundrum.

Doing two proper setups like this with Ubiquiti kit has been an absolute breath of fresh air compared to the various consumer mesh wireless kits I looked at, and I really can’t imagine I’d go back to consumer grade stuff again, even if tacking a long length of Cat6 to a wall was a bloody pain.

Ubiquiti Home Network Setup – part 1

Update 11/11/2020: I’ve followed this post up with Ubiquiti Home Network Setup – part 2

This is kind of a follow up to my previous post, My Experience With Consumer Mesh Wireless Devices

Due to unfortunate personal circumstances I’ve not actually been at home recently, so as far as my home setup goes, I’ve only made minor progress. I’ve decided I will go all in and refresh my home network setup to a Ubiquiti system. At present that’s involved getting a Ubiquiti UDM coupled with a Draytek 130 modem to replace the core of the network. This will be further supplemented with a FlexHD AP somewhere else in the house when I’ve had chance to actually do a bit of a site survey. I’ve no space for racking any kit, so have had to go with the UDM rather than the UDM Pro.

At my parents’ house, however, I’ve spent ample time, and that meant I had an urgent need to make some changes to their network, as I’ve had to work from there for the last three months. They were basically in the same boat as myself in terms of wanting blanket wireless coverage, with the exception that they have a fairly large house that’s quite long and has been extended, so it has internal walls that were once external walls, making them thicker than usual.

I started with the same Ubiquiti UDM, Draytek 130 and added a FlexHD, two AC Lite APs, and a couple of Flex Mini switches, with a few long cable runs this has improved things massively for them.

Unifi Devices

In total there’s approximately 40 network clients, and everything seems to be working great after I’d done a little tuning around channels and transmit powers. The FlexHD covers off the end of the house with the most wireless clients, as you can see on this cool map:

My parents home network map

My goal here was stability and internet connectivity; there’s no need for them to be pushing insane speeds with 80 or 160MHz wide 5GHz channels, which would just create noise and use up airtime unnecessarily. This is purely for internet connectivity, and that’s exactly what I’ve managed to achieve. Problems with their previous kit were frequent, with drop-outs or weird software bugs, but since the upgrade everything has been very stable, and I must say I’m a full convert to Ubiquiti kit; it’s been absolutely great to use.

One cool thing that I have seen through the Ubiquiti kit is the absolute proliferation of wireless hotspots in cars and on phones. Sat in their house I can generally see three other neighbouring access points when I check the wireless networks available: the neighbours either side and the house over the street. However, looking at the neighbouring access points list in the console shows me 206 access points in a 24-hour period!

Neighbouring access points

All the static ones in neighbours’ houses will be there, but they’re swamped by the sheer number that pass in cars and are only ever seen briefly, as their house backs onto a relatively busy road; the clue that these are in cars is in the AP names.

There’s a lot of cool features on the Ubiquiti kit, and if you’re a bit of a network or IT nerd, I’d highly recommend it if you’ve been struggling with home wireless.

Still to come

Next up will be the network changes when I get home. I’d imagine it’ll be a very similar setup to the one I’ve done for my parents: anything that needs high speeds or low latency, like the NAS or my gaming PC, will be wired, and wireless just needs to be stable and faster than my internet connection. So look out for that in the next few weeks hopefully; I’m looking forward to doing it all again at home.

Backing Up Data Centre Hosted Data To AWS

Currently at work, we back up the majority of our critical on-premises data to Azure, with some local retention onsite, as the majority of restores are needed for data from within the last week or so. This is done using a combination of Microsoft Azure Backup Server (MABS) and the standalone Microsoft Azure Recovery Services (MARS) software and agents.

In time the majority of the on-prem data is likely to move to the cloud, but this takes time, with various products and business functions to move, so for now we still have a large data set that we need to back up from our data centres. The cloud all this on-prem data is moving to is, and will continue to be, AWS, but the backups happen into Azure, for mainly historical reasons.

We started looking at whether we could move this from Azure to AWS to reduce complexity, simplify billing and potentially reduce cost, because as I said, a lot of our estate runs in AWS already. We fairly quickly found out that the actual “AWS Backup” solution doesn’t really cover on-premises data directly, and from there things started to get complicated.

So we looked at the various iterations of Storage Gateway, including file gateway and volume gateway.

Tape gateways are pretty much ruled out, as we don’t have any enterprise backup software that will write to tape storage, so we would incur additional licensing costs there.

File gateway does most of what we need, as we can write backup output from things like MSSQL or MySQL servers running on-prem to the file-gateway-presented volume and have it written back into S3 and backed up from there. However, this can’t be throttled in terms of bandwidth, and as we don’t have a Direct Connect available for this, we can’t risk annihilating our data centre egress bandwidth.

Volume gateways would do what we need in terms of presenting storage to a VM that backups are written to, which can then be throttled and sent into S3. From there we’d have to pick that data up and move it via AWS Backup into a proper backup with proper retention policies attached. However, as this bills as EBS rather than S3 storage, when we priced it all out it worked out considerably more expensive than our current solution of backing up into Azure, which again pretty much rules it out as an option.
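To give a feel for why the EBS billing matters so much, here’s a rough sketch; the per-GB-month prices (about $0.023 for S3 Standard and $0.05 for EBS snapshot storage) are assumptions from memory of us-east-1 rates, so check current pricing:

```shell
# Rough monthly storage cost for 10 TB of backup data, S3 Standard vs
# EBS snapshot storage; the per-GB prices are assumptions, not quotes.
awk 'BEGIN {
  gb = 10 * 1024
  printf "S3: $%.0f/mo  EBS snapshots: $%.0f/mo\n", gb * 0.023, gb * 0.05
}'
```

That’s storage alone, before request, restore and transfer charges, but it shows the shape of the problem.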

Now, if only Amazon would add an on-prem option for AWS Backup, we’d be laughing – oh well, we can dream.

Serving Index Pages From non-root Locations With AWS CloudFront

Note: adapted from an article I recovered from web caches, as the original site appears to have been taken offline, but I did find it useful, so thought it might be worth re-creating.

So, I was doing a quick experiment with hosting this site in static form in AWS S3; details on how that works are readily available, so I’ll not go into them here. Once you’ve got a static website, it’s not hard to add a CloudFront distribution in front of it for content caching and other CDN stuff.

Once setup and with the DNS entries in place, the Cloudfront distribution will present cached copies of your website in S3, and if you’ve got a flat site structure, such as this example below;

this will work fine.

However, if you have data in subfolders, i.e. non-root locations, for example if there was a folder in the bucket called “subfolder”, such as the example here;

and you want to be able to browse to


and have the server automatically serve out the index page from within this folder, you’ll find you get a 403 error from CloudFront. This problem comes about as S3 doesn’t really have a folder structure, but rather has a flat structure of keys and values with lots of cleverness that enables it to simulate a hierarchical folder structure. So your request to CloudFront gets converted into, “hey S3, give me the object whose key is subfolder/“, to which S3 correctly replies, “that doesn’t exist”.

When you enable S3’s static website hosting mode, however, some additional transformations are performed on inbound requests; these transformations include the ability to translate requests for a “directory” to requests for the default index page inside that “directory”, which is what we want to happen, and this is the key to the solution.
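That index-document translation can be sketched as a tiny function; the index.html default here is an assumption, as it’s whatever index document you configured on the bucket:

```shell
# Mimic S3 static website hosting's index-document rewrite:
# "subfolder/" becomes "subfolder/index.html"; other keys pass through.
rewrite_key() {
  case "$1" in
    */|"") echo "$1index.html" ;;
    *)     echo "$1" ;;
  esac
}

rewrite_key "subfolder/"           # subfolder/index.html
rewrite_key "subfolder/page.html"  # unchanged
```

The REST API endpoint does no such rewrite, which is exactly why pointing CloudFront at it produces the 403.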

In brief: when setting up your CloudFront distribution, don’t set the origin to the name of the S3 bucket; instead, set the origin to the static website endpoint that corresponds to that S3 bucket. Amazon are clear there is a difference here, between REST API endpoints and static website endpoints, but they’re only looking at 403 errors coming from the root in that document.

So, assuming you’ve already created the static site in S3 and that can be accessed on the usual URL, it’s example time;

  1. Create a new CloudFront distribution.
  2. When creating the CloudFront distribution, set the origin hostname to the static website endpoint and do NOT let the AWS console autocomplete an S3 bucket name for you, and do not follow the instructions that say “For example, for an Amazon S3 bucket, type the name in the format”.
  3. Also, do not configure a default root object for the CloudFront distribution; we’ll let S3 handle this.
  4. Configure the desired hostname for your site, such as your-site.tld as an alternate domain name for the CloudFront distribution.
  5. Finish creating the CloudFront distribution; you’ll know you’ve done it correctly if the Origin Type of the origin is listed as “Custom Origin”, not “S3 Origin”.
  6. While the CloudFront distribution is deploying, set up the necessary DNS entries, either directly to the CloudFront distribution in Route 53 or as a CNAME in whatever DNS provider is hosting the zone for your domain.

Once your distribution is fully deployed and the A record has propagated, browse around in your site and you should see all of your content, and it’ll be served out from CloudFront. Essentially what’s happening is CloudFront is acting as a simple caching reverse proxy, and all of the request routing logic is being implemented at S3, so you get the best of both worlds.

Note: nothing comes without a cost, and in this case the cost is that you must make all of your content visible to the public Internet, as though you were serving direct from S3, which means that it will be possible for others to bypass the CloudFront CDN and pull content directly from S3. So be careful to not put anything in the S3 bucket that you don’t want to publish.
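For reference, making the bucket contents publicly readable looks something like the minimal policy sketch below, where your-site.tld is a placeholder bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-site.tld/*"
    }
  ]
}
```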

If you need to use the feature of CloudFront that enables you to leave your S3 bucket with restricted access, using CloudFront as the only point of entry, then this method will not work for you.

My Experiences with consumer mesh wireless devices

Update 11/11/2020: I’ve followed up this post with two others, Ubiquiti Home Network Setup – part 1 and Ubiquiti Home Network Setup – part 2

UPDATE 15/08/2020: I’m currently testing Ubiquiti hardware, which initially seems much better, although I’ve not got as far as multiple APs yet, as I’ve been away from home. The current setup is a Ubiquiti UDM; I’ll add access points further down the line, as well as doing a bit of a write-up of the setup.

Over the previous 8 weeks I’ve been experimenting with various mesh wireless systems, and thought it might be worth at least recording some of my thoughts on them.

The Problem to be solved

My house isn’t huge, but unfortunately, as it was built in a time before internet connectivity was a consideration, the main BT socket is in the hallway, where it’s completely useless: having the router there would either involve wall-mounting it, which I don’t want to do, or having a small table in the hall right at the bottom of the stairs in front of the living room door, which I also don’t want to do. Thankfully the previous owner had taken care of this little trouble by drilling various holes through walls, meaning I can run the telephone cable through holes and under carpets to the router, which is in the office at the front of the house.

So far so normal, but this essentially means the router could not be any further towards the front of the house without being outside on the drive. To further complicate things, because of the way the room is set up for me and my partner to work, we have a large wine fridge next to the desk, so the router is immediately obscured by a large metal object.

Because of the direction the garden faces, we have our outside table set up at the far end of the garden, putting it as far from the router as it could possibly be while still on my property, and by the time you get down there the line of sight to the router is obstructed by a fridge, three internal walls, and an external wall.

Reception down there was patchy at best, and video calls were out of the question, which meant working from outside in the garden wasn’t an option.

We also wanted either me or my partner to be able to move quickly and work in the living room temporarily if we both ended up having calls at the same time, and her work systems are a little flaky when it comes to switching wireless networks, with long periods of reconnecting and Cisco IP Communicator being a real pain about it.

On top of this, we’ve got various Google Home devices around the house, Sonos speakers, multiple phones, laptops, tablets, and other smart home devices.

Essentially though, I’m only looking for internet access everywhere, not throwing large files around at as close to gigabit LAN speeds as possible, so that seems like a fairly straightforward requirement. My router in use here is either a Draytek 2860n or a Billion 8800NL; I list two for reasons that will become apparent later.

The solution

I tried four different mesh wireless systems over the previous few months, with various degrees of success. All systems have been three device systems, with the main device going in the office at the front of the house, a second in the living room at the back of the house, and a third upstairs in the main bedroom at the front of the house.

BT Whole Home WiFi Premium

I got the BT Premium whole home wifi system out of the box and set up, configured to act essentially as a wireless AP mesh, leaving my Draytek router doing all the heavy lifting for the network, including DHCP, DNS and other services.

The setup on the app reported the connection between all the BT devices was good and at first, all seemed well, I migrated all my devices to the new network and they all seemed happy enough. I did some performance testing from various places in the house and out in the garden, and all was good for internet access.

After a few hours though, I started to notice a problem: if a device was turned off, rebooted, or had its wireless disconnected by leaving the house or similar, it couldn’t then reconnect to the network. If I restarted the BT mesh system it worked again, until the same thing occurred a few hours later. After various testing and trawling of the BT product forums, I came to the conclusion that for some reason the devices stop passing DHCP packets through. If I assigned an IP manually, all was good; using DHCP, the wireless connection happens but no address is given. I tried resetting the whole thing and starting again, but couldn’t solve this. I did read from someone who had more time to look at this than I did that he thought they were fragmenting DHCP packets for some reason. – So back in their box they went

Amazon eero

The next contender was the Amazon eero mesh wireless system. I’ll start by saying the eero doesn’t do PPPoE, so forget that as an option here. I initially set this up in router mode and swapped the Draytek for the Billion, as the Billion will run in a mode I’ve seen various sources call PPP half-bridge mode, where the WAN IP received from the ISP is passed through to the device beneath it, eliminating the double-NAT problem, although this is ultimately trickery to get around the eero devices’ lack of PPPoE capability.

So the main eero had the WAN IP assigned by my ISP on its external interface, albeit via a half-cut method, so all good? At this point I very quickly discovered that the eero devices in “router” mode did not want to get an IPv6 address from the Billion device above them. They wanted the address handed out via DHCPv6 and Prefix Delegation, rather than using SLAAC. I tried every conceivable setup on the Billion but could not get this to work, so no IPv6 connectivity. Not a big deal you may think, but it bothers me. – Again, back in their box they went.

Google Nest WiFi

The next contender was the new Google Nest Wifi. Again, the setup was very simple and all seemed to work well; there was even an option to enable IPv6, which worked well at first. However, after a few hours IPv6 just stopped working across all devices on the network, meaning it had to be turned off and back on again on the Nest WiFi devices, after which it would work again, for another few hours.

We also noticed that when we were playing music through the Sonos devices, they kept dropping off the network. It didn’t seem to be any specific Sonos speaker; they all just dropped, then five minutes later all reconnected. This happened quite often, sometimes a few times a day, and enough to wind me up.

So I looked at putting the Google Nest Wifi devices into AP mode to see if this helped, except you can’t. Well, technically you can, but they don’t act as a mesh, so what exactly is the point? – Back in their box they went.

Linksys Velop AC6600

Next challenger was the Linksys Velop system, which was certainly the worst looking of the bunch. I had them in white, but in black I’d imagine you could mistake them for the monolith from “2001: A Space Odyssey”.

The app was also more clunky than the other devices, just as a point to note. Perfectly functional, just a bit more clunky.

Everything setup ok, but within 24 hours I had problems with the Linksys devices dropping their connectivity to each other, and I was getting fairly irritated with these sorts of problems by this point. – I’m sure you can probably guess, back in their box they went.

AMAZON EERO – Another attempt

I decided, since they were still back in their box where I left them, to give the eero another final shot before I returned them. This time I went for what eero call “bridge mode”, where my router does the work, and they act as mesh access points. So back in with the Draytek this time, and they seem….ok.

We’ve now been running off them for a few weeks; the Sonos devices seem fine, and everything works OK. I don’t get the fancy features of the eero devices, but then I’ve got what I need on the Draytek router, so all seems good.

I have had a few instances where the two satellite devices disconnect from the main unit, which stays connected and working during this time. Mostly they reconnect pretty fast, but sadly they did this twice in the space of 15 minutes this morning while I was on a video call, which irked me enough to question whether I should put them back in their box and have another go with something else.

Asus ZenWiFi XT8

I tried these as I was really at the end of my patience with the various other options. I also brought in a Draytek 130 to act as a pure modem for this setup.

Everything was plugged in and configured, and all seemed fine on initial inspection, until I did some performance testing, at which point I found that wired in to the Asus router unit I got what I would call “full speed” for my connection. Move to wireless, however, and the upload speed drops to somewhere between dire and unacceptable. Download speeds stay normal, but upload performance dies a terrible death.
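For this kind of testing a proper tool like iperf3 is the usual answer, but the idea is simple enough to sketch: push a known number of bytes from the wireless machine to a wired sink and time it, so upload speed can be measured without depending on an internet speed-test site. Everything below (host, port, byte counts) is illustrative, not from the original setup.

```python
# Toy throughput probe: run sink() on a wired machine, send_bytes() on a
# wireless one, and compare the reported Mbit/s in each direction.
import socket
import time

def send_bytes(host: str, port: int, total: int = 5 * 1024 * 1024) -> float:
    """Send `total` bytes to host:port and return throughput in Mbit/s."""
    chunk = b"\0" * 65536
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, port)) as s:
        while sent < total:
            n = min(len(chunk), total - sent)
            s.sendall(chunk[:n])
            sent += n
    elapsed = max(time.monotonic() - start, 1e-6)
    return (sent * 8) / (elapsed * 1_000_000)

def sink(port: int) -> None:
    """Accept one connection and discard everything it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass
```

Run the sink once per test, then swap which end sends to compare upload against download over the same link.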
So at this point I really have to wonder two things: what have I done to deserve this, and why is home mesh wireless kit so absolutely crap? It’s not a case of “there’s no perfect kit”; there’s just no kit that didn’t have some fairly critical functionality flaw.

I honestly don’t know why this has been so problematic. It seems like a fairly simple problem to solve, as I’m not after huge throughput, just stability at range and standard UK broadband speeds over wireless. All the mesh systems I’ve tried seem to have their own unique problems: the DHCP issue on the BT one, devices falling off the network and IPv6 failing to work on the Google devices, performance problems with the Asus, or just random disconnects between units, which both the eero and Linksys systems seem to suffer from.

I’m not sure what advice I’d give after all this. Perhaps “don’t bother with mesh wireless” is the right conclusion? All the ones I tested seemed flaky in one way or another, although reading the reviews you get the usual mixed bag: they’re terrible and don’t work at all, or they’re the best thing since sliced bread. My personal experience says there’s a reason why the ethernet cable is still king.

My problem now is I still need a mesh wireless system, and I know there are a few brands I haven’t tested, such as Netgear’s Orbi system, and a few other lesser-known devices, like the Plume kit, but after having tried so many, would I expect them to be any better? I simply don’t know what to do to solve this, beyond smashing my walls to bits and running cables.

As a note, my parents have a set of TP-Link Deco P9 devices. They have a fairly large house to cover, and so have four of these set up in various places. My experience of these is that the app isn’t as slick as the eero or Google apps, isn’t very feature rich, and is a bit flaky at times. My biggest gripe is that, since they support ethernet, wireless or powerline for backhaul, you have no way to pick which they use, and no way to see which the devices themselves have picked. My parents had a very similar problem to what I’ve seen with the other mesh devices I’ve tested, in that the mesh devices disconnected from the main device, but in their case they seemed to stay disconnected. This could be related to the backhaul method chosen; I have no idea, as you can’t control it or see what’s being used.

UPDATE: This little niggle has apparently now been improved slightly in a later firmware update, and you can now see the connection quality of the various backhaul methods, but you still can’t pick which one is used, or see which one the devices have chosen, and the quality of the firmware seems questionable. There are a lot of errors appearing in the log files that TP-Link are aware of, and are working on fixing at some point. In my experience, the Deco devices constantly go “offline” in the app, when in reality they’re still online and available. So, as for the Deco devices, I’d also avoid them.