Struct Packing in C (Linux vs Windows)

I’m currently taking an introductory computer graphics course using OpenGL and C++. As part of our first assignment we were required to export an image as a BMP. A BMP file has three or four sections: two headers, an optional color palette, and the bitmap data itself.

So given the following definitions

typedef struct {                             /**** BMP file header structure ****/
    unsigned short int bfType;               /* Magic identifier            */
    unsigned int bfSize;                     /* File size in bytes          */
    unsigned short int bfReserved1, bfReserved2;
    unsigned int bfOffBits;                  /* Offset to image data, bytes */
} BITMAPFILEHEADER;

typedef struct {                     /**** BMP file info structure ****/
    unsigned int   biSize;           /* Size of info header */
    int            biWidth;          /* Width of image */
    int            biHeight;         /* Height of image */
    unsigned short biPlanes;         /* Number of color planes */
    unsigned short biBitCount;       /* Number of bits per pixel */
    unsigned int   biCompression;    /* Type of compression to use */
    unsigned int   biSizeImage;      /* Size of image data */
    int            biXPelsPerMeter;  /* X pixels per meter */
    int            biYPelsPerMeter;  /* Y pixels per meter */
    unsigned int   biClrUsed;        /* Number of colors used */
    unsigned int   biClrImportant;   /* Number of important colors */
} BITMAPINFOHEADER;
// Code by Michael Sweet

we would expect we should be able to do something like

void exportBitmap(const char *ptrcFileName, int nX, int nY, int nWidth, int nHeight)
{
	BITMAPFILEHEADER bf;
	BITMAPINFOHEADER bi;

	// 3 bytes per pixel (BGR), each row padded to a multiple of 4 bytes
	int imageSize = nWidth * nHeight * 3 + ((4 - (3 * nWidth) % 4) % 4) * nHeight;

	unsigned char *ptrImage = (unsigned char*) malloc( imageSize );

	FILE *ptrFile = fopen(ptrcFileName, "wb");

	// read pixels from the framebuffer
	glReadPixels(nX, nY, nWidth, nHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, ptrImage);

	// zero the bitmap header and information header
	memset(&bf, 0, sizeof(bf));
	memset(&bi, 0, sizeof(bi));

	// configure the headers with the given parameters
	bf.bfType = 0x4d42;                 /* "BM" */
	bf.bfSize = sizeof(bf) + sizeof(bi) + imageSize;
	bf.bfOffBits = sizeof(bf) + sizeof(bi);
	bi.biSize = sizeof(bi);
	bi.biWidth = nWidth;
	bi.biHeight = nHeight;
	bi.biPlanes = 1;
	bi.biBitCount = 24;
	bi.biSizeImage = imageSize;

	// write headers and pixel data to the file
	fwrite(&bf, sizeof(bf), 1, ptrFile);
	fwrite(&bi, sizeof(bi), 1, ptrFile);
	fwrite(ptrImage, imageSize, 1, ptrFile);

	fclose(ptrFile);
	free(ptrImage);
}
//Modified from sample code by Dr. Fan

Unfortunately, when we tried it on our Linux machine we got a garbled BMP file which wouldn’t even open. Compiling the same code on Windows produced a proper bitmap file.

Digging deeper we looked at the raw hex data for the two files and found

42 4D B6 17 0E 00 00 00 00 00 36 00 00 00 //Good bmp
42 4D 00 00 B8 17 0E 00 00 00 00 00 38 00 //Bad bmp

This looks rather suspicious. Specifically, there are two extra bytes after the magic number in the bad BMP header, and both the file size and the data offset are two bytes bigger than in the good file.

Suspecting that the sizes of the fields might not be the same on Linux and Windows, I borrowed a small program from Stack Overflow to check the sizes of the various variable types.

#include <stdio.h>
#include "bitmap.h"

int main(void)
{
    printf("sizeof(char) = %zu\n", sizeof(char));
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(int) = %zu\n", sizeof(int));
    printf("sizeof(long) = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));
    printf("sizeof(float) = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    return 0;
}

and we found

         Linux | Windows
char         1 |  1
short        2 |  2
int          4 |  4
long         4 |  8
long long    8 |  8
float        4 |  4
double       8 |  8
long double 12 | 16        

Since we aren’t using long or long double we shouldn’t have a problem. Further down the rabbit hole, we found that on Linux sizeof(bf) returned 16 bytes rather than the expected 14 that Windows kindly returned. That explained the extra two bytes, but it didn’t explain where they came from. After a bit of Googling, it turns out GCC doesn’t pack struct fields tightly in memory unless explicitly told to: it inserts padding bytes so that each field starts at an address that is a multiple of its natural alignment. Here, two padding bytes after the 2-byte magic-number field push the 4-byte size field from offset 2 to offset 4.

It turns out the easiest fix is to add a compiler attribute to the struct definition telling GCC to pack it at compile time.

Specifically we get

typedef struct __attribute__((__packed__)) {
    unsigned short int bfType;               /* Magic identifier            */
    unsigned int bfSize;                     /* File size in bytes          */
    unsigned short int bfReserved1, bfReserved2;
    unsigned int bfOffBits;                  /* Offset to image data, bytes */
} BITMAPFILEHEADER;

Side note: the header file itself warns that

 * Bitmap file data structures (these are defined in <wingdi.h> under
 * Windows...)
 * Note that most Windows compilers will pack the following structures, so
 * when reading them under MacOS or UNIX we need to read individual fields
 * to avoid differences in alignment...

So if we wrote the fields to the file one at a time, we wouldn’t have a problem either.



  • Thanks to Dr. Fan for spending an hour helping me verify that I wasn’t crazy.
  • Thanks to David Brown for helping me find my way down the rabbit hole by suggesting I try sizeof(bf).