Man, it's all well and good to "explain" where these absolutely arbitrary measurements come from (mile? WTF?), but do you realize how many people have no idea how to convert between these units? It makes conversions for, say, mathematical calculations (which already use decimal units, why not continue that trend!) so much more of a pain than necessary. And besides, how relevant are any of those measurements now? None of them have any meaning to an average person beyond the fact that they are ingrained into our memory from an early age. The only arguments against switching are that it would be difficult to change things that are already established, like highway markers, and harder on people who are already used to this terrible, terrible system, but it would make everything ever so much easier in the long run. Oh, and our system is also probably easier on cooks, but they already use their own ridiculous measurements like dashes and pinches, so whatever.
RE: Computers: Hexadecimal was convenient for early computers because they were built on 16-bit architecture: a memory address was usually 16 binary digits (16-bit address space), and the information at any given address was also 16 digits long (16-bit addressability). So, instead of having to write out a sequence of 16 ones and zeros, you can use one hexadecimal digit to stand for each group of four binary digits (since 2^4 = 16). This doesn't necessarily make anything easier for the computer itself: when it executes an instruction in memory, it's still working on the binary form to decode the opcode, operands, etc., and even for simple calculations it's often easier to think in binary, as it simplifies the arithmetic (especially with two's complement calculations involving negative numbers; at least, I find binary easier to understand in that case). So computers only really "use" base-16 numbers for input/output related things, since it's easier to report a long binary number in this kind of hexadecimal shorthand. Interestingly, it's not much harder to convert from binary to decimal than it is from binary to hexadecimal. At least, the logic behind the algorithms is pretty similar.
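Here's a quick Python sketch of what I mean by the nibble grouping, just a toy illustration (the particular 16-bit value is made up), not how any real machine does it:

```python
word = 0b1010111100001101          # an arbitrary 16-bit value

# Group the binary form into nibbles (4 bits each)...
bits = format(word, "016b")
nibbles = [bits[i:i + 4] for i in range(0, 16, 4)]
print(nibbles)                     # ['1010', '1111', '0000', '1101']

# ...and each nibble maps directly to one hex digit.
print(format(word, "04X"))         # AF0D

# Two's complement: the same 16 bits reinterpreted as a signed number.
signed = word - (1 << 16) if word & (1 << 15) else word
print(signed)                      # -20723
```

The hex conversion is just relabeling each 4-bit group, which is why it's such a handy shorthand; the decimal conversion has to do actual arithmetic on the whole number.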
As far as Windows using hexadecimal goes, that's not quite true anymore. The x86 architecture, on which most processors have been based for many years now, is actually 32-bit: 32-bit address space, 32-bit addressability. So mostly it works with larger numbers, but still in the way I outlined above: eight hexadecimal digits per binary word instead of four. In recent years they've been developing stuff like the x86-64 architecture, which is 64-bit in contrast. This is why you have had 32-bit and 64-bit versions of Windows' latest OSs, and why programs designed for one aren't necessarily compatible with the other (a program built around 32-bit assumptions about word and pointer size can misread 64-bit data: depending on how it's designed, it may treat the first half of one value as a whole value, then the second half as the next, and so on).
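To make the word-size point concrete, here's another little Python sketch (the address and the values are made up; this just mimics the layouts, it doesn't peek at real memory):

```python
import struct

addr = 0x00400F3C2A7B9910   # a made-up 64-bit address

# A 32-bit word is eight hex digits; an x86-64 word is sixteen.
print(format(addr & 0xFFFFFFFF, "08X"))   # 2A7B9910 (only the low half fits)
print(format(addr, "016X"))               # 00400F3C2A7B9910

# Walking the same bytes with the wrong word size:
# pack two 64-bit values, then read them back as 32-bit chunks.
data = struct.pack("<2Q", 0x1111111122222222, 0x3333333344444444)
print([hex(x) for x in struct.unpack("<4I", data)])
# ['0x22222222', '0x11111111', '0x44444444', '0x33333333']
```

Each 64-bit value comes back split into two halves, which is roughly the kind of mismatch I was gesturing at above.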