
mac2x · Original poster · Sep 19, 2009
Question for C programmers out there.

Do you prefer to use macros or the const qualifier when defining global constants? I see it both ways in books. Both seem to work equally well to me; I'm just wondering if anyone feels one is better than the other, and why.

Thanks!
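
For concreteness, the two styles being compared look roughly like this (MAX_ITEMS and kMaxItems are made-up names for illustration):

Code:
/* style 1: preprocessor macro */
#define MAX_ITEMS 64

/* style 2: const-qualified variable */
static const int kMaxItems = 64;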
 
Interesting question.
It depends ...

I do a lot of embedded programming. So a "const" will reside in ROM (typically Flash).

For a struct, I think the advantage goes to const. Same for arrays.
A const will use memory; a macro does not (assuming a simple macro).
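
A sketch of what that looks like in practice (invented names; on an embedded target the linker typically places these const-qualified objects in Flash):

Code:
/* lookup table the linker can place in ROM/Flash */
static const unsigned char kSineTable[4] = { 0, 50, 98, 142 };

/* a const struct works the same way */
struct calibration { int offset; int gain; };
static const struct calibration kCal = { 12, 3 };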
 
I use both const variables and enums in C-based languages because giving the compiler more information is generally a good thing, both for detecting potential problems and for debuggability.

EDIT:

A const will use memory; a macro does not (assuming a simple macro).

I hope I'm not quoting you out of context... but the following two functions will generate identical code:

Code:
int foo()
{
    const int kFoo = 42;
    return kFoo + 1;
}

Code:
int bar()
{
#define FOO 42
    return FOO + 1;
}
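
One concrete illustration of the "more information" point (my example, not from the thread): the preprocessor does blind text substitution, so an unparenthesized macro body can silently change a value through operator precedence, while a const is evaluated once by the compiler:

Code:
#define SIZE 4 + 4                 /* unparenthesized macro body */
static const int kSize = 4 + 4;    /* evaluated once by the compiler */

int macro_doubled(void) { return SIZE  * 2; }  /* expands to 4 + 4 * 2 == 12 */
int const_doubled(void) { return kSize * 2; }  /* (4 + 4) * 2 == 16 */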
 
The only benefit I can think of is for constant NSStrings: a const variable defined in one file, with an extern declaration in the header, will share a single entry in memory. A #define in the header will give you one constant NSString per binary.
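
The same idea in plain C, as a sketch (invented names; the header/source split is the point):

Code:
/* constants.h */
extern const char * const kGreeting;    /* every file shares one definition */

/* constants.c */
const char * const kGreeting = "hello";

/* versus a macro in the header: each file that uses it gets its
   own copy of the literal (unless the linker merges them) */
#define GREETING "hello"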
 
How does a macro (e.g., "#define PI 3.14159") not use memory? The value takes up space anywhere it is referenced.

The macro itself doesn't use memory (whereas a const int would). "#define name value" acts as simple text replacement.

To the OP:

Personally, I use #define if I really need a global const (even in C++).
 
How does a macro (e.g., "#define PI 3.14159") not use memory? The value takes up space anywhere it is referenced.

Like MacMax said.

Code:
#define PI 3.1415

int main(void)
{
    float pi;

    pi = PI;
    return 0;
}

after the preprocessor runs, what gets compiled is:
Code:
int main(void)
{
    float pi;

    pi = 3.1415;
    return 0;
}
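
You can watch the preprocessor make this substitution: gcc -E stops after preprocessing and prints the expanded source. Assuming the snippet above is saved as pi.c:

Code:
localhost:~$ gcc -E pi.c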
 
I hope I'm not quoting you out of context... but the following two functions will generate identical code: (the foo and bar functions quoted above)

Correct. But, assuming the optimizer isn't too smart, the following code will not generate identical machine code (note kFoo is now at file scope):

Code:
const int kFoo = 42;

int foo()
{
    return kFoo + 1;
}

Code:
int bar()
{
#define FOO 42
    return FOO + 1;
}
 
localhost:~$ cat testA.c
Code:
int main(int argc, char *argv[]) 
{
	const int kFoo = 42;
	return kFoo+1;
}
localhost:~$ cat testB.c
Code:
int main(int argc, char *argv[]) {
	#define FOO 42
	return FOO + 1;
}
localhost:~$ cat testA.s
Code:
	.text
.globl _main
_main:
	pushl	%ebp
	movl	%esp, %ebp
	subl	$24, %esp
	movl	$42, -12(%ebp)
	movl	-12(%ebp), %eax
	incl	%eax
	leave
	ret
	.subsections_via_symbols
localhost:~$ cat testB.s
Code:
	.text
.globl _main
_main:
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movl	$43, %eax
	leave
	ret
	.subsections_via_symbols
localhost:~$ diff testA.s testB.s
Code:
6,9c6,7
< 	subl	$24, %esp
< 	movl	$42, -12(%ebp)
< 	movl	-12(%ebp), %eax
< 	incl	%eax
---
> 	subl	$8, %esp
> 	movl	$43, %eax

This is with gcc on x86, but I would figure you'd get similar behavior on other compilers and architectures.

Despite the const, memory is set aside for kFoo in testA, and that is what's used when it's referenced. In testB, 42 + 1 is a compile-time constant, so 43 is just substituted in.

-Lee
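
Worth adding (my observation, not part of the test above): once the optimizer runs, the difference generally disappears, because gcc folds the local const into an immediate just like the macro. You can check by recompiling with -O2 and diffing again:

Code:
localhost:~$ gcc -O2 -S testA.c testB.c
localhost:~$ diff testA.s testB.s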
 
Enums are true constants and can be optimized away even without optimizations turned on.

Code:
int main(int argc, const char *argv[]) {
	enum {kFoo = 42};
	return kFoo + 1;
}
Code:
	.text
.globl _main
_main:
	pushq	%rbp
	movq	%rsp, %rbp
	movl	%edi, -4(%rbp)
	movq	%rsi, -16(%rbp)
	movl	$43, %eax
	leave
	ret

const variables might be modified if you cast away constness (I don't think the standard specifies what should happen when you do that), so further static analysis is necessary to determine whether they can be inlined or not.
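
One way to see that enumerators are "true constants" (a made-up snippet, not from the post above): an enumerator is a plain rvalue, so it has no address, while a const int is still an object in memory:

Code:
enum { kFoo = 42 };
static const int kBar = 42;

const int *p = &kBar;     /* fine: kBar is an object with an address */
/* const int *q = &kFoo;     compile error: kFoo is not an lvalue */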
 
Macros:
1. stripped away by the pre-processor
2. no symbol; the debugger does not know about them
3. no space allocated

enum:
1. handled by the compiler
2. is a symbol that can be used by the debugger
3. no space allocated

const int:
1. handled by the compiler
2. is a symbol that can be used by the debugger
3. space allocated; has an address

I prefer enums.
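
Side by side, the three forms from that list (illustrative names):

Code:
#define MAX_LEN_MACRO 128             /* gone after preprocessing; no symbol */
enum { kMaxLenEnum = 128 };           /* compiler symbol, no storage */
static const int kMaxLenConst = 128;  /* compiler symbol, storage with an address */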
 
enums can only handle integers, no?

If you want to define floating point constants or string constants, you will need to use consts or #defines.
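
For example (a minimal sketch): an enumerator must be an integer constant expression, so floating-point and string values need const or #define:

Code:
/* enum { kPi = 3.14159 };    error: enumerator value is not an integer constant */
static const double kPi = 3.14159;
#define GREETING "hello"
static const char kName[] = "world";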
 
const variables might be modified if you cast away constness (I don't think the standard specifies what should happen when you do that), so further static analysis is necessary to determine whether they can be inlined or not.

It is undefined behaviour, which means anything can happen.

Code:
const int three = 3;
* (int *)&three = 4;

Likely possible effects:

1. The assignment crashes.
2. Using the variable "three" after the assignment gives a value of 3.
3. Using the variable "three" after the assignment gives a value of 4.
4. Using the variable "three" after the assignment gives a value of 3 in some cases, and a value of 4 in other cases.
5. Something else goes wrong in weird and unpredictable ways.

You can check for yourself what happens in Xcode.
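
A complete program to try (my sketch; since the behavior is undefined, the result may vary with compiler, optimization level, and whether the const is local or global):

Code:
#include <stdio.h>

int main(void)
{
    const int three = 3;
    * (int *)&three = 4;            /* undefined behavior */
    printf("three = %d\n", three);  /* may print 3, 4, or crash */
    return 0;
}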
 
The macro itself doesn't use memory (whereas a const int would). "#define name value" acts as simple text replacement.

The text replacement does not remove the value from memory. The data is still compiled into the binary and has to be stored someplace.
 
The text replacement does not remove the value from memory. The data is still compiled into the binary and has to be stored someplace.

I think the point is that there isn't necessarily a fixed position in memory dedicated to storing that value. If the value is small enough to be encoded as an immediate operand, then the "memory" for the value is the few bits encoding it inside an instruction, rather than a separate chunk of memory used only for storing that value. If the instruction was going to be in memory anyway, you're not using any more. Plus, not having to load the value from somewhere probably saves you some instructions, hence saving memory.

All in all, though, write code that's understandable. The few instructions or bytes you're saving are very unlikely to make a difference at this point. I appreciate economical code, for sure, but unless you're in a very constrained environment and know it, tweaks like this for the sake of saving a few bytes aren't worth it if they make your code more difficult to read or write.

-Lee
 