Re: [CODE] ( 1 << 01)

From: Jesse Becker (
Date: 11/12/02

On Mon, 11 Nov 2002, tap3w0rm wrote:

> ok so i am new at codeing c

As are/were we all now/then.

> and i dontunderstand .. after i converted to 128 i saw most of the
> #define Stuff ( 1 << 0 )
> #define Stuff ( 1 << 1)
> #define Stuff ( 1 << 2 )
> #define Stuff ( 1 << 3 )
> was replaced with just
> #define Stuff 0
> #define Stuff 1
> #define Stuff 2
> #define Stuff 3

That's a very astute observation. :-)

> please tell me the diffrence in the 2 ways of doing the defines

Disclaimer:  I don't use the 128-bit patch, so I can't tell you how it
actually uses the #defines, but I can tell you about bit shifting.

Also, if anyone spots a mistake, please tell me!

First things first:  C considers '<<' to be a left shift operator, and
'>>' to be a right shift operator.  To understand what this means you have
to start looking at binary numbers.  So, here are the decimal numbers 0-10
encoded in binary:

        0  = 0000
        1  = 0001
        2  = 0010
        3  = 0011
        4  = 0100
        5  = 0101
        6  = 0110
        7  = 0111
        8  = 1000
        9  = 1001
        10 = 1010

The two shift operators take those bits, and move them some number of bits
to the left or right.  So if you have the binary number 0110, and you
shift it two bits to the left, you have 1100.  Two more examples:
        0001 << 1       = 0010
        0111 << 2       = 1100
        1000 >> 3       = 0001
        1111 >> 2       = 0011

(Note the bits getting shifted "off" in the 2nd and 4th examples...)

One other interesting tidbit: shifting one bit left or right is the same as
multiplying (left shift), or dividing (right shift) by 2.  Take a look at
the binary forms of 1,2,4, and 8 above.  Notice anything special about
them?  Care to take a guess at what the representation of 16 is? :-D
Generally speaking, shifting bits left or right is equal to multiplying
your original number by 2^(bits shifted).  Or:
                "x << b" is equivalent to "x * 2^b"

How does this relate to CircleMud?  Well, each bit in a number can be
handled separately, and used to represent the presence or absence of some
flag.  A good example are the various player affects (blind, invisible,
detect alignment, detect invisible, etc).  Each one of those traits is
separate from the others, and you can have them in any combination.
CircleMud uses a 'bitvector' to manage these.

So, when you see "#define stuff (1 << 1)", it usually means that the
programmer is designating which bit in a bitvector will be used to store
which flag.

Let's pretend that there are only four affects in the game:  blind, invis,
detect alignment, and detect invisible.  We can represent these four
affects using four bits as follows:

#define   BLIND       (1<<0)
#define   INVIS       (1<<1)
#define   DET_ALIGN   (1<<2)
#define   DET_INVIS   (1<<3)

        |||+--- blind
        ||+---- invisible
        |+----- det. align
        +------ det. invis

If a bit is 0, the player is not affected by that particular trait; if
it is 1, then the player is affected.  Some examples:

        0000 = no affects at all
        0001 = blind
        1100 = det invis, det align
        1001 = det. invis, but also blind
        1111 = blind, invisible, det. align, and det. invis.

Naturally, CircleMud uses more than 4 bits to store affects; it uses a
32-bit number (instead of the 4-bit numbers I've used in my examples).

There are a few downsides to using bitvectors like this:

  1)  You can only have as many 'bits' as are in the underlying data
      type.  In the case of CircleMud (most of the time), that's 32 bits.
      If you want more than 32 different affects (and many people do),
      you have to resort to various bits of trickery to get it to work.

  2)  Bitvectors are tricky to make portable between architectures
      (anyone out there running CM on Alpha, Itanium, or 64-bit Sparc
      hardware?).

Getting back to the 128bit patch...  It appears (glancing over the patch
quickly) that the 128bit patch replaces the old 32bit bitvectors found in
stock circle with a small array of 4 unsigned integers (each of 32 bits in
their own right).  It then does some clever manipulation to space out the
various flags across the new arrays, effectively giving you a 128 bit
long bitvector.

Pretty clever, and if they force the components of the array to be 32 bits
long in a way that works across architectures, it'll probably be more
portable as well. :-)



This archive was generated by hypermail 2b30 : 06/25/03 PDT