May 15, 2017
When thinking about this blog, I was reminded of an old joke:
There are 10 kinds of people in the world: those who understand binary and those who do not.
Sorry, but it still makes me smile.
Now to the blog.
Embedded developers often need to work “close to the hardware.” Stated another way, the raw bit patterns used in control registers need to be visualized and manipulated. Clearly, the way to represent such a register would be in binary, just ones and zeros, but this is not a commonly available option in high-level languages or even assembly. So, it’s normal to resort to other options.
Early in my programming career, once I had passed my Fortran period, I used DEC minicomputers, which were very popular at that time. Originally, these machines had word sizes that seem odd by today’s standards—12 or 18 bits, for example. There’s nothing sacred about the 8-, 16-, or 32-bit words that we see universally today. With these older machines, grouping the bits into threes made sense, particularly as the instruction sets often used 3-bit fields. So, I became adept at working in octal and could see the binary values intuitively.
As the PDP-8, for example, was a “real” computer, I would commonly need to enter some binary code via a line of switches on the front panel. When DEC introduced the PDP-11, it started the trend toward the word sizes that are now common. These machines used 16 bits, but DEC persisted with using octal, even though the most significant digit could only be a one or a zero. As the instruction set still had some 3-bit fields, this made some sense. Later, when the 32-bit VAX came along, DEC started using hexadecimal like everyone else.
Despite the number of years that have passed, I’ve never felt 100% at home with hex. Visualizing 0x0C as 00001100 doesn’t come naturally. Some time ago, I wanted to be able to include binary values in my C/C++ code, so I created a binary.h file, which had lines in it like:
#define b00000000 ((unsigned char) 0x00)
#define b00000001 ((unsigned char) 0x01)
#define b00000010 ((unsigned char) 0x02)
This worked well. Let me know if you’d like a copy of the file.
Colin Walls is an Embedded Software Technologist at Mentor Graphics’ Embedded Software Division.