Fixing WiFi Issues on the Lenovo P11 Plus Tablet

tl;dr – The Lenovo P11 Plus (and likely the P11 and possibly the P11 Pro too) doesn’t work well when Protected Management Frames (PMF) are enabled. If you are having problems with the tablet and can turn PMF off, doing so may solve your problem.

I recently bought a Lenovo P11 Plus tablet. I wanted a low-cost tablet for general media consumption, and as I’m already an Android user, an Android tablet made more sense than an iPad. The Lenovo P11 Plus also has a reasonably decent screen, sound (“Dolby Atmos”), battery life etc., and at £200 it was selling at a pretty decent price.

However, I immediately started having WiFi issues with it. I could connect to my home WiFi just fine, but after something like 10-30 minutes the WiFi would simply stop transferring traffic. The icon at the top still showed the tablet connected to the WiFi AP, but no app could access the network. This was the only device showing an issue, and I’ve got Macs, PCs, other Android devices (including TVs) and the usual assortment of smart devices all using the same WiFi.

I have separate WiFi access points at home, as that gives me three hardwired devices covering the whole house and outside areas. I also tend not to use the ISP-provided hardware, since it is inevitably built to a budget and hence tends not to be very good. Having my own equipment also means I don’t need to change much if I change ISP.

After messing around with various settings on the tablet, I re-enabled the WiFi on the ISP-provided router and connected to that. Everything worked fine, so I knew the tablet could work under the right conditions. I then took it into work, which has similar WiFi access points to mine (Ubiquiti), and found that it worked just fine there too. That let me compare system settings, and I found that my home network had PMF enabled, whereas our work network did not.

Changing this setting at home immediately solved my problem, and the WiFi on the tablet has been working ever since.

Listing database table sizes in SQL Server

Today I wanted to put a copy of a database onto my laptop so I could access it offline – when travelling or otherwise away from an Internet connection. Unfortunately the database is pretty big, and I didn’t really want to lose a lot of disk space just for that database. I only needed enough of it for development purposes, so I thought I’d delete as much data as I could, starting with the largest tables.

Looking at SQL Server, there isn’t an obvious way to see the sizes of all tables in a convenient form. However, Microsoft do provide a stored procedure, ‘sp_spaceused’, which gives information about the size of a single table. There is also a stored procedure, ‘sp_MSforeachtable’, which will run some SQL against each table in the database. Combine the two and you can produce something that puts the table usage data into a temporary table:

-- Gather sp_spaceused output for every table into a global temp table.
EXECUTE sp_MSforeachtable
@precommand = 'IF OBJECT_ID(''tempdb..##TableSizes'') IS NOT NULL
DROP TABLE ##TableSizes;
CREATE TABLE ##TableSizes
( name NVARCHAR(128),
  rows CHAR(11),
  reserved VARCHAR(50),
  data VARCHAR(50),
  index_size VARCHAR(50),
  unused VARCHAR(50)
)',
@command1 = 'INSERT INTO ##TableSizes EXECUTE sp_spaceused [?];';
GO

Unfortunately the result isn’t as useful as it could be – each size value has ‘ KB’ (three characters) on the end, and the columns are VARCHAR so they won’t sort numerically. Fixing this is fortunately quite easy:

-- Strip the trailing ' KB' (three characters) from each size value...
UPDATE ##TableSizes
SET reserved = SUBSTRING(reserved, 1, LEN(reserved)-3),
    data = SUBSTRING(data, 1, LEN(data)-3),
    index_size = SUBSTRING(index_size, 1, LEN(index_size)-3),
    unused = SUBSTRING(unused, 1, LEN(unused)-3);
-- ...then convert the columns to INT so they sort numerically.
ALTER TABLE ##TableSizes ALTER COLUMN reserved INT;
ALTER TABLE ##TableSizes ALTER COLUMN data INT;
ALTER TABLE ##TableSizes ALTER COLUMN index_size INT;
ALTER TABLE ##TableSizes ALTER COLUMN unused INT;

You now have a temporary table containing all the table size information and can easily query / sort / filter the information as you require, eg:

SELECT TOP 10 * FROM ##TableSizes ORDER BY data DESC;

– will show the ten largest tables in terms of data stored.
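Since the values are now plain integers (in KB), it is also easy to make the output more readable. For example, a follow-up query along these lines – just a sketch, dividing by 1024 to get MB – will list the tables by total reserved space:

SELECT [name], [rows],
       reserved / 1024 AS reserved_mb,
       data / 1024 AS data_mb,
       index_size / 1024 AS index_mb
FROM ##TableSizes
ORDER BY reserved DESC;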

No checksum on Apple iPhone UDIDs

Just found out the hard way that there is no checksum in the UDID of Apple devices (iPhone, iPad, iPod Touch etc.). The UDID is a unique identifier for the device, and is used during development to allow pre-release versions of apps to be tested on real hardware before submission to the App Store. This lets developers test the app, while letting Apple ensure the app isn’t generally released without going through their App Store.

For such a long number (a 40-digit hexadecimal string), I would have thought there would be some form of checksum in there – technology that has been around since I don’t know when, and is prevalent in all fields of computing where having correct data is important. Even credit card numbers have a form of checksum built in to guard against error.
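To illustrate what a checksum buys you, here is the Luhn algorithm that credit card numbers use – a sketch in Python, purely for illustration (UDIDs have nothing like it). Any single mistyped digit changes the sum, so the error is caught:

def luhn_valid(number: str) -> bool:
    # Luhn check as used by credit card numbers.
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("79927398713"))  # True  - a classic Luhn test number
print(luhn_valid("79927398714"))  # False - one digit wrong, and it's caught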

Alas, it seems there is no checksum in the number, as I’ve found out that a UDID we were provided had one digit wrong. That wouldn’t normally be a massive issue, but due to circumstances it will be a pain to rebuild the app we want and get it onto the device. So if you are getting a UDID from someone in the future, I recommend you ask for a screenshot to be sure.

OS/X Leopard and Samba Permissions

Finally solved a problem that has been causing me much grief. We have a fileserver running CentOS Linux, with Samba used to provide file shares to our desktops – a mixture of Windows, Mac and Linux. For some reason the file server was not setting the correct permissions on files copied onto it – using 644 (rw-r--r--) instead of 664 (rw-rw-r--). The consequence was that only the person who created a file (or directory) could edit it or move it to another directory (unless they owned that directory). This is something that I’m sure used to work for us, so I was a bit puzzled to hear reports of the problem.

Normally, setting either the “force create mode” and “force directory mode” options in the smb.conf configuration file, or the “inherit permissions” option, ensures that the correct permissions are set. The former options can be set to 664 and 775 respectively and will ensure that files copied onto the server get the right permissions – in particular the group write bit is set, allowing any user who has access to a file to write to it, or to copy files into a directory. The latter option makes new files inherit the permissions of their parent directory, so assuming the system starts with the correct permissions on everything, all new files should continue to have the right permissions.
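For reference, the relevant settings look something like this in the share definition (the share name and path here are just placeholders):

[shared]
   path = /srv/shared
   force create mode = 0664
   force directory mode = 0775
   ; or, alternatively:
   ; inherit permissions = yes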

In the end I decided to upgrade the server as we were running an old version of CentOS and it was beneficial to do it anyway. However, after upgrading and running some tests the problem still persisted. Very frustrating as there are lots of postings of people having similar issues, and they always get resolved by setting the above options.

Finally, I managed to get to the bottom of the issue. Samba has a feature (“Unix extensions”) that allows Unix clients to perform additional operations. OS/X takes advantage of this in Leopard (and now Snow Leopard, I imagine) and resets the permissions on a file after it is created. So, with the exception of the fix below, it doesn’t matter what options you set in smb.conf – any files created from recent Macs will have the wrong permissions. Most of our heavy users of the server use OS/X, so what I thought was an issue affecting everyone was actually specific to one operating system.

Looking around the web, I came across a post in the Apple support forums with the solution – set “unix extensions = no” in the global section of smb.conf and restart Samba. One simple line and suddenly everything is back to how it was. Reading around it appears to be an issue that is being resolved – although I imagine it will take a few months before it appears in mainstream releases.
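In smb.conf terms, the fix is just:

[global]
   unix extensions = no

followed by a restart of Samba (on CentOS, ‘service smb restart’).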

Building mpt-status for CentOS 5.2

At Vexed Digital, we recently migrated a large part of our internal infrastructure from a co-located server to a couple of machines that we lease from our hosting provider. These machines are used for things such as email, our website, FTP etc. and so we have RAID-1 (mirroring) with a hardware controller. Hardware RAID has always worked well for me, but we obviously want to be sure that the underlying physical drives are OK and for that we need to install tools to interrogate the controller for status updates.

Using ‘lspci’ I found that the card in one of the machines is an “LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS” controller, but unfortunately I could not find any vendor tools for it. Fortunately, there is an open source tool, originally developed by Matt Braithwaite and now maintained by Roberto Nibali, which you can find through Freshmeat.

If, like us, you run CentOS on your machines, you will find that it doesn’t compile straight away. You will first need to install the kernel source, which is easy enough to do if you follow the instructions on the CentOS website. Once you’ve done that, you will find that you still can’t compile it unless you perform one more step.

You will need to edit the Makefile and alter the CFLAGS line to add the ‘include’ directory of your kernel source tree, eg:

CFLAGS          := -Iincl -Wall -W -O2 \
                   -I${KERNEL_PATH}/include \
                   -I${KERNEL_PATH}/drivers/message/fusion \
                   -I${KERNEL_PATH_B}/drivers/message/fusion \
                   -I${KERNEL_PATH_S}/drivers/message/fusion

Here I’ve added the second line, “-I${KERNEL_PATH}/include”, so that it can find linux/compiler.h. You are now ready to compile, but make sure that you run ‘make’ with the path to your kernel source tree, eg:

make KERNEL_PATH=/home/ben/rpmbuild/BUILD/kernel-2.6.18/linux-2.6.18.i686

You should then (almost immediately – it’s not large) get a single binary, mpt-status. Run this and it will tell you in a few lines the key information about the controller, virtual disc and each of the physical discs. It couldn’t find the controller where it expected it when I ran it, but unlike some tools it is actually quite helpful and told me to use the ‘-p’ option to probe the SCSI bus. This told me that the card was on SCSI ID 6, and it suggested the arguments ‘-i 6’ would give me the information I was looking for:

ioc0 vol_id 6 type IM, 2 phy, 231 GB, state OPTIMAL, flags ENABLED
ioc0 phy 1 scsi_id 7 ATA      GB0250C8045      HPG2, 232 GB, state ONLINE, flags NONE
ioc0 phy 0 scsi_id 8 ATA      GB0250C8045      HPG1, 232 GB, state ONLINE, flags NONE

The documents that come with the package give some information on what this means, but in the above case the first line shows that the virtual disc is in an OPTIMAL (ie. good) state, is 231 GB, and has type IM, which means mirrored (RAID-1). There are two physical discs, and both are ONLINE (so OK).

Of course the next stage is getting the pertinent information into our Nagios system so that we get told when the drives fail. Note I didn’t say if – drives always fail eventually…it’s just a question of how soon 🙂
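As a starting point, a simple wrapper script could do the job. This is just a sketch, assuming the volume ID of 6 found above and the binary installed to /usr/sbin – adjust to your setup (and note that mpt-status needs to run as root to talk to the controller):

#!/bin/sh
# Hypothetical Nagios check: CRITICAL unless the volume reports OPTIMAL.
# Volume ID 6 as found with 'mpt-status -p' above.
STATUS=$(/usr/sbin/mpt-status -i 6 2>&1)
FIRSTLINE=$(echo "$STATUS" | head -1)
if echo "$STATUS" | grep -q "state OPTIMAL"; then
    echo "RAID OK: $FIRSTLINE"
    exit 0    # Nagios: OK
else
    echo "RAID CRITICAL: $FIRSTLINE"
    exit 2    # Nagios: CRITICAL
fi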

MacBook Pro

I’ve never been a great fan of the Mac interface – mainly, I just couldn’t get used to the idea of a menu bar at the top of the screen. It worked fine when we had smaller screens – for example with the ‘Compact Mac’ family – since realistically you had to run each application full screen anyway. On a small screen the menu bar is always close by, wherever you are working.

It became slightly more annoying as screens got larger, but was still bearable if you had a single screen. It took a bit of work to move the pointer up there, but it wasn’t a major inconvenience. I suspect Apple didn’t change it back then since it had become a trademark of the operating system, and it didn’t upset too many people.

One thing Macs have always been good at is second monitors – you just plug them in, and it usually works with little fuss. The first such setup that I used had an A3 monitor attached to a Mac IIci. You could move the menu bar onto the A3 monitor, but it was still all the way up there at the top – so if you were working elsewhere on screen, it was such a chore to get up there!

Monitors are still increasing in size, and nothing has really changed – it must be a nightmare if you have two 30″ Cinema Displays ;-). Well, perhaps not a nightmare given that it’s a setup most people would love to own, but a bit annoying all the same. I do wonder how long it will be before Apple do something.

Anyway, otherwise the Mac interface has improved over the years to the point where I’m beginning to quite like it. With OS/X having Unix underneath, the Mac has become every developer’s dream – a nice interface with good software support and a decent operating system underneath. It also has some very nice hardware – the MacBook Pro in particular looks like it was designed by engineers rather than managers. The only thing missing for me was the ability to run certain Windows programs, as my job quite often requires it.

Well, of course Apple changed to using Intel processors, and that barrier has gone. So when an opportunity came up recently to switch, I went out and got a MacBook Pro with VMware Fusion. So far it’s running nicely – with my previous laptop I always had to worry about blocking the air vents, since these are usually under the machine and at the side. The MacBook Pro vents air under the screen in a way that means I don’t have to worry about such things. And the illuminated keyboard really is the icing on the cake. The interface (OS/X 10.5, aka Leopard) runs very nicely – I know that a number of changes upset die-hard Mac users, but I think they will be appreciated by new converts.

There are a few annoyances – VMware Fusion won’t allow you to suspend a VM if it’s running from a Boot Camp partition – for good reasons, but it’s still annoying. And a number of applications such as Adium and Skype want to show an icon in the dock, even though under Windows or a Linux-based system they (or their equivalents, in the case of Adium) don’t. I’ve fixed it for Adium by following the instructions here, but when I tried it on Skype it stopped logging in! I’m not sure how well it would work for Skype anyway, as a few key functions require access to the full menu bar – sigh.

Flash Player 9 Update on Adobe Labs

It’s been a while since I updated my blog, mainly because I’ve had too much work on and not really much to say. However, today I discovered that the current pre-release version of Adobe Flash Player for Linux fixes the bugs that I wrote about previously. Fortunately, I’ve managed to run the Windows version under Wine up until now; it’s not ideal, but it works. Still, it’s always nicer to run the player natively, and also nice to see that Adobe are just as committed to Linux as to other platforms.

Linux Flash Player Bugs

As a user of Linux on the desktop (I use Ubuntu 7 on an HP nc8430 laptop), the release of a current version of the Flash Player for Linux was great news. As well as allowing me to look at the latest Flash web sites, it allowed me to develop and test content produced using the Flex SDK. Unfortunately there are a couple of annoying bugs in the player related to use of the FileReference class – the class that lets you upload files back to the server.

The first of these bugs relates to sending parameters back to the server at the same time as uploading the file. This is useful as it lets the page send back information that the user may have entered to be associated with the file, such as a title and description. Normally you use the URLVariables class to add variables to the upload request, for example:

var fileRef:FileReference = new FileReference();
fileRef.browse();
// In handler for Event.SELECT
var params:URLVariables = new URLVariables();
params.title = "Sleeping Cat";
params.description = "My cat having a sleep on the patio.";
var request:URLRequest = new URLRequest("http://localhost:8080/FileUploadServlet");
request.method = URLRequestMethod.POST;
request.data = params;
fileRef.upload(request);

Now, under Flash Player on Windows this works just fine – the parameters are encoded into the request and uploaded just as Adobe says they should be. Try it under Linux, though, and you just get the uploaded file – no parameters sent. This is very annoying as it means I have to encode the information I want to send differently if I want to support Linux – which I do, or otherwise I won’t be able to use my own application.

The workaround that I have implemented is to put the parameters into the URL itself, eg:

var requestString:String = "http://localhost:8080/FileUploadServlet?title="
    + encodeURIComponent(title)
    + "&description=" + encodeURIComponent(description);
var request:URLRequest = new URLRequest(requestString);

This works fine in both Linux and Windows; note the encodeURIComponent calls, which encode the parameter values so that illegal characters don’t break the URL.

But unfortunately that is not the only problem with FileReference in the Flash Player. Adobe introduced an UPLOAD_COMPLETE_DATA event that lets you get at the server’s response once your file is uploaded. Your server-side code can use this to send data back to the player, for example a reference number, or even XML with more complex data:

fileRef.addEventListener(DataEvent.UPLOAD_COMPLETE_DATA, fileUploadCompleteDataHandler);

…and then…

private function fileUploadCompleteDataHandler(event:DataEvent) : void
{
   Alert.show("File upload OK - response from server: " + event.data, "Debug");
}

This all sounds great, but unfortunately it doesn’t work under Linux either – although it’s just fine under the Windows player.

I’ve reported both of these issues to Adobe, so I hope that they get resolved soon. Sadly, though, it does make life more complex, because so many people will have older versions of the Flash Player that are broken. Fortunately, I suspect Linux users are more amenable to upgrading, and of course the majority of users are on Windows or OS/X.

Right tool for the right job

Most people who do any sort of software development have probably read something from Joel on Software. This week, Joel Spolsky commented on the idea that blogs shouldn’t have comments, an idea that Dave Winer had written about previously.

Comments on any item tend to end up as meaningless drivel as soon as a certain type of person – those who revel in the anonymity of the Internet – starts “contributing”. You just have to look at the example given by Joel, or in fact pretty much any video on YouTube, or any link on Digg, Reddit, or similar sites that allow commenting. Somebody creates interesting content, and assuming it gets publicised widely, it will be inundated with a mass of comments – mostly one-liners, probably slagging off the content in some way. Eventually the whole thing descends into a slanging match.

I believe that the problem with blog comments is that they are the wrong tool for the job. If people want to discuss a posting, they should do so in a place more suitable for discussions, such as a forum. Forum software has been around for ages and provides a much better environment for discussion than the simple comments system most blogs have: typically a threaded structure, plus better administrative controls than any blog offers.

Most forum software isn’t perfect though. Slashdot is an example of a site that effectively provides forum-like discussion on the back of each article posted. They have long had to deal with the problem of inane comments, trolling etc., and so have developed a heavily customised system where the community moderates itself (or at least tries to). I still read Slashdot partly because it is relatively easy to scan through the newest articles, but also because, when I look at the discussion, I can quickly find the best comments and ignore the rubbish.

Joel actually follows this tactic himself – no comments on blog entries, but the site has a forum system where people can comment. Unfortunately there is still a lot of rubbish in there, but it’s a lot better than just a linear comments system attached to each blog entry. Of course it’s just a fairly vanilla forum system, as are most out there. Improving forums is probably a discussion for another day though.