Because OCT 31 = DEC 25
An explanation
The joke plays on the fact that the number 31 in octal (base 8, or just OCT) equals 25 in decimal (base 10, or DEC - the number system most people are taught). In decimal, the second column (the 2 in our joke) represents the number of "10s" in the number, while in octal that column (the 3) represents the number of "8s". So to verify, we would do 3*8 + 1 = 25.
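If you want to double-check that with a quick program, here's a minimal Python sketch (it only relies on Python's built-in base parsing, nothing specific to this post):

    # Parse "31" as a base 8 (octal) number and compare it to decimal 25.
    oct_31 = int("31", 8)    # 3*8 + 1
    print(oct_31)            # 25
    print(oct_31 == 25)      # True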
But why is this a joke about coding? What is significant about base 8 that programmers would be interested in it?
We'll explore that by looking at a little about the binary number system (base 2) and the history of computer systems.
Binary
It is overly simplistic, but fundamentally, nearly all modern computers are built around base 2 - an electrical circuit can be on or off. This gets translated to a 0 or a 1. To build larger numbers, we use these two binary digits (bits) in much the same way we use 10 digits (0-9) to build base 10 numbers. With base 10, each column represents 10 times the column to the right, so in binary, each column represents twice the column to the right of it.
I like to build a table when I'm computing binary, so it might look something like this:
256 128 64 32 16 8 4 2 1
To convert a decimal number to binary, we find the leftmost column whose value fits into our number, mark a 1 in that column, and subtract that value from our number. We keep repeating this with what's left until we get to 0. Every column we don't put a 1 in gets marked with a 0.
To convert the number 42, for example, we might go through this process:
- The largest number that fits is 32. We put a 1 in the 32 column, subtract it from 42 and get 10.
- The largest number that fits into 10 is 8. We put a 1 in the 8 column, subtract, and get 2.
- The largest number that fits into 2 is 2. We put a 1 in the 2 column, subtract, and get 0.
- We'll then put a 0 into all the other columns.
So our table would look something like this:
256 128 64 32 16 8 4 2 1
0 0 0 1 0 1 0 1 0
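If you'd rather have a program walk that table for you, here's a small Python sketch of the same subtract-the-largest-column process (the column values and the example number 42 are just the ones from above, so it only handles numbers up to 511):

    def to_binary(n, columns=(256, 128, 64, 32, 16, 8, 4, 2, 1)):
        """Mark a 1 wherever a column value fits, subtract it, and repeat."""
        bits = []
        for col in columns:
            if col <= n:
                bits.append("1")
                n -= col
            else:
                bits.append("0")
        return "".join(bits)

    print(to_binary(42))   # 000101010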
If we have a binary number and need to get the decimal number, we simply put it into columns and add up those columns that have 1s in them. So given the binary number 001000101, we would write it out like this:
256 128 64 32 16 8 4 2 1
0 0 1 0 0 0 1 0 1
and add 64+4+1.
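Going the other way is just as mechanical. Here's a sketch of that add-up-the-columns step, using the same 001000101 example (it assumes the bit string is as wide as the table):

    def to_decimal(bits, columns=(256, 128, 64, 32, 16, 8, 4, 2, 1)):
        """Add up the column values wherever the bit is a 1."""
        return sum(col for col, bit in zip(columns, bits) if bit == "1")

    print(to_decimal("001000101"))   # 64 + 4 + 1 = 69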
That seems well and good. But what does this have to do with octal?
Moving up to Octal
Binary numbers are good for computers, but are a bit long for humans to always write out. Decimal makes them shorter (more information dense - a subject for another time), but it's harder to see how the computer actually represents a decimal number, since you'd need to do the conversion math every time. Decimal also requires somewhere between 3 and 4 bits to represent one digit, which adds to the complexity.
Octal is convenient since three binary digits completely represent one octit (the equivalent of a digit). To help compute octits, we can rewrite our table thusly, clustering every three columns:
256 128 64 | 32 16 8 | 4 2 1
4 2 1 | 4 2 1 | 4 2 1
We will use the top row when converting between decimal and binary, and the second row when converting to octal. So given our joke's 25 (decimal), we would write it as
256 128 64 | 32 16 8 | 4 2 1
4 2 1 | 4 2 1 | 4 2 1
0 0 0 | 0 1 1 | 0 0 1
We can then go through each cluster and add up the columns that have 1s in them, using the values from the second row. Doing so gives us 031, which is the answer to our joke.
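Here's that clustering trick as a short Python sketch: pad the bits to a multiple of three, take them three at a time, and read each group against the 4-2-1 row. (Python's built-in oct() would do this in one call; the longhand version below just mirrors the table.)

    def to_octal(bits):
        """Group the bits in threes; each group becomes one octit."""
        while len(bits) % 3:            # pad on the left to a multiple of three
            bits = "0" + bits
        octits = []
        for i in range(0, len(bits), 3):
            group = bits[i:i + 3]
            value = sum(w for w, b in zip((4, 2, 1), group) if b == "1")
            octits.append(str(value))
        return "".join(octits)

    print(to_octal("000011001"))   # 031 -- our joke's 25 in octal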
That seems pretty easy, right? Certainly nothing to be scared about.
Who cares?
If you were using a PDP-8 computer, you would! (And 40 years ago, if you were using a computer, there was a good chance it was a PDP-8 - one of the most popular machines of its time, and one a lot of early coding was done on.) The hardware in a PDP-8 used 12 bits for most of its internal systems, which translates easily to 4 octits. Other systems used 18 or 36 bits, which correspond to 6 or 12 octits.
Since UNIX was first written on some of these systems, you see traces of octal around the system. Most notably, the UNIX permission structure gives read, write, and execute permissions to a file. Since that can be represented as 3 bits, it translates nicely to an octit, and you can still see this reflected in the octal modes of the UNIX chmod command.
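To illustrate (the rwxr-xr-x example here is mine, not anything particular from UNIX history), each permission triplet is three bits, so it reads off as exactly one octit:

    # Owner, group, and everyone-else permissions, three bits each.
    owner = 0b111   # rwx = 4 + 2 + 1 = 7
    group = 0b101   # r-x = 4 + 0 + 1 = 5
    other = 0b101   # r-x = 4 + 0 + 1 = 5

    mode = (owner << 6) | (group << 3) | other
    print(oct(mode))   # 0o755 -- the familiar "chmod 755"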
But largely, this is a relic of computing history. The wide popularity and adoption of the IBM System/360 line of computers established the 8-bit byte as the de facto standard in the late 60s and early 70s. The early Internet developers similarly adopted the "octet" (an 8-bit byte) when describing their Internet Protocol. Since 8 bits can't be split evenly into groups of three, octal slowly fell out of favor and was replaced by hexadecimal (base 16, which uses 4 bits for one hex digit). But that is some math for another time.
Pretty long explanation for one joke, huh? Kinda ruins the punch line. Anyway - enjoy the holiday!