Re: bitfields

From: Naved A Surve (
Date: 11/26/95

In message <> you said:
> ...
> Also, you can put in many blanks for future expansion by doing this:
> ...
>  aff_hide : 1
>  extras   : 100
> }
>  then to add a new one, simply:
> ...
>  aff_hide : 1
>  aff_new  : 1
>  extras   : 99  /* pfile is not corrupted */
> }
>  The size of the bitfield in bytes will be the number of bits allocated, 
> divided by 8, rounded up.

This is a false assumption.  From Kernighan & Ritchie, "All aspects of
the implementation of bitfields are undefined."  That may not be an exact
quote as I don't have my copy in front of me, but you get the idea.  For
all you know, the compiler may be allocating an entire byte for each
bitfield variable!  Granted, I am proposing an extreme case, but you never
know.  For that same reason, you cannot assume that the two structures
listed above will be laid out identically in memory.

> For the bitvectors, the operations required to check are this:
> if (ch->affected_by & aff_bit)
>  1. dereference ch pointer
>  2. bitwise AND it with a constant (fast, but also requires storage of that
> 	constant in memory)
>  3. test if the result is true
> for bitfield:
> if (ch->affected_by.aff_bit)
>  1. dereference ch pointer
>  2. calculate offset of aff_bit (probably the bottleneck)
>  3. test if the result is true
> Is this a correct analysis?  What are other reasons (not) to switch to 
> bitfields?

Like I said above, you cannot make any assumptions about the implementation
of bitfield variables.  For all you and I know, the operations could be
anything the compiler chooses to generate.

Personally, I think it would be a good idea to switch to the bitfield
paradigm, for the reason that it removes one dependency on a machine-specific
parameter, the word size.  It tears down that limitation of
8/16/32/64 fields to one bitvector and IMHO makes the code easier to
maintain.  However, it will not necessarily make the code smaller or
faster.


This archive was generated by hypermail 2b30 : 12/07/00 PST