"primitive"

Discussion in 'Mac Programming' started by Sydde, Feb 9, 2013.

  1. macrumors 68000

    Sydde

    Joined:
    Aug 17, 2009
    #1
    I was scribbling up a blather trying to create a definitive, clear explanation of pointers in C, wherein I was making a distinction between primitive and complex data types. The question that arose is: what is the largest datum that is, or has been, treated as a primitive in C? I know the x87 unit uses an extended FP type that is ten bytes (the M68K FPU used an 80-bit type stored with two extra bytes of padding), but AFAIK these are not particularly common.

    I regard a "primitive" as something the CPU can handle atomically, but with the prevalence of built-in vector units, the distinction gets blurry. Does C ever treat these big data types as primitives, or is long long the largest primitive in common use today?
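    (Here's a minimal sketch I was poking at; it assumes a typical 64-bit macOS/clang setup, and the exact numbers are implementation-defined:)

    #include <stdio.h>

    int main(void)
    {
        /* Sizes are implementation-defined; these are just what one compiler reports. */
        printf("long long   : %zu bytes\n", sizeof(long long));
        printf("long double : %zu bytes\n", sizeof(long double)); /* 80-bit x87 value, padded to 16 bytes on x86-64 */
        printf("void *      : %zu bytes\n", sizeof(void *));
        return 0;
    }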
     
  2. macrumors 6502a

    Joined:
    Jan 23, 2010
    Location:
    San Diego, CA USA
    #2
    C doesn't have "primitives"; what's fast or atomic is CPU-specific, and the sizes are up to the compiler. There are more than just x86 CPUs in this world, you know, and even on x86 it depends on what mode the chip is in. I think all that's really specified is that a 'char' is one byte and an int will be whatever is "natural" for the CPU. Other than that, it's up to the compiler.
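    A quick sanity check, if you're curious (just a sketch; _Static_assert needs a C11 compiler, and the int/long numbers are whatever your compiler picked):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; everything else is up to the implementation. */
        _Static_assert(sizeof(char) == 1, "sizeof(char) is always 1");
        printf("int  : %zu\n", sizeof(int));   /* the "natural" size for the target */
        printf("long : %zu\n", sizeof(long));  /* e.g. 4 on Win64, 8 on 64-bit macOS/Linux */
        return 0;
    }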
     
  3. macrumors 6502

    Joined:
    Sep 13, 2010
    #3
    There are minimum ranges for each of the fundamental data types, and while "char" and "byte" are synonymous in C, a C "byte" can be larger than 8 bits; 8 is just the minimum needed to provide the required range for the char types.

    To Sydde: Types aren't "primitive" and "complex". They are "fundamental" and "compound". At least that's what they're called in C++. C actually has a complex keyword these days (well, _Complex, plus the complex macro in <complex.h>), so "complex" is a pretty ambiguous term. I'm not sure what your data type size questions have to do with pointers, though. I guess intptr_t would be relevant.
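    For example (a rough sketch; CHAR_BIT lives in <limits.h> and must be at least 8, and intptr_t from <stdint.h> is an integer type wide enough to round-trip an object pointer):

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>

    int main(void)
    {
        int value = 42;

        /* CHAR_BIT is the number of bits in a C "byte"; the standard requires >= 8. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);

        /* intptr_t can hold a pointer value and convert back without loss. */
        intptr_t as_int = (intptr_t)&value;
        int *back = (int *)as_int;
        printf("round-trip ok: %d\n", *back == 42);
        return 0;
    }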
     
  4. macrumors 68040

    lee1210

    Joined:
    Jan 10, 2005
    Location:
    Dallas, TX
    #4
    I normally think of primitive data types as a foil for "complex" types, generally objects in OO languages. In C you could consider a struct (and maybe a union) complex, but beyond that I don't know what you would be comparing against that would require the primitive distinction. Are you trying to make a case for when passing a pointer costs less memory? That will certainly vary, since the sizes of these types vary widely from platform to platform. I can't tell you this with certainty, but my understanding is that sizeof(short), sizeof(int), sizeof(long), and sizeof(long long) can all be equal, and each may be larger or smaller than the size of a pointer to any of these types.
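    Something like this quick check (just a sketch; the numbers depend entirely on the compiler and target) shows there's no fixed relationship you can lean on:

    #include <stdio.h>

    int main(void)
    {
        /* All implementation-defined; only the ordering
           char <= short <= int <= long <= long long is guaranteed (by range, not size). */
        printf("short     : %zu\n", sizeof(short));
        printf("int       : %zu\n", sizeof(int));
        printf("long      : %zu\n", sizeof(long));
        printf("long long : %zu\n", sizeof(long long));
        printf("int *     : %zu\n", sizeof(int *));
        return 0;
    }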

    I think what this comes down to is letting go of the idea of a primitive in a language that doesn't have objects. Express what you need to express in terms of the data types that are available.

    -Lee
     
  5. macrumors G5

    gnasher729

    Joined:
    Nov 25, 2005
    #5
    The largest integer types supported by your compiler are intmax_t and uintmax_t (which on MacOS X compilers are defined as long long and unsigned long long). Pointer types can be different sizes. The largest real type is long double, 16 bytes on every current MacOS X compiler. C also supports complex numbers, which are considered primitives, so the largest one would be long double _Complex. Compilers on MacOS X also have "small vector" extensions; these would be primitive types as well.

    So the largest primitive that is standard is likely long double _Complex.

    In C99, the only type with any atomicity guarantee is sig_atomic_t (atomic with respect to signal handlers). In C11, you can use _Atomic to specify atomic types, for example _Atomic int x;. Which types are supported is defined by your C11 implementation.
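    Something along these lines (a sketch; the sizes are what current MacOS X compilers report, other implementations will differ, and <stdatomic.h> needs a C11 compiler, e.g. clang -std=c11):

    #include <stdio.h>
    #include <stdint.h>
    #include <complex.h>
    #include <stdatomic.h>

    _Atomic int counter = 0;   /* C11 atomic integer */

    int main(void)
    {
        printf("intmax_t             : %zu bytes\n", sizeof(intmax_t));
        printf("long double          : %zu bytes\n", sizeof(long double));
        printf("long double _Complex : %zu bytes\n", sizeof(long double _Complex));

        atomic_fetch_add(&counter, 1);   /* atomic read-modify-write from <stdatomic.h> */
        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }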
     
  6. macrumors 603

    Joined:
    Jul 29, 2003
    Location:
    Silicon Valley
    #6
    There's ANSI C, what the processor chips actually do, and what the processor and computer vendors hack into C compilers.

    On some processors, FP arithmetic is emulated and a double is really implemented as two non-atomic 32-bit loads/stores. So it may look like a primitive in C code, but it can't actually be trusted to behave as one.

    Apple's compilers picked up short vector intrinsics (NEON on ARM, SSE on Intel), and Cray added some seriously long vector types as "primitive" C data types.
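    For instance, with the GCC/Clang vector extension (non-standard; just a sketch of what a "vector as a primitive" looks like in C):

    #include <stdio.h>

    /* GCC/Clang extension: a 16-byte vector of four floats, usable with ordinary operators. */
    typedef float vec4 __attribute__((vector_size(16)));

    int main(void)
    {
        vec4 a = {1.0f, 2.0f, 3.0f, 4.0f};
        vec4 b = {5.0f, 6.0f, 7.0f, 8.0f};
        vec4 c = a + b;   /* element-wise add; typically one SIMD instruction */

        for (int i = 0; i < 4; i++)
            printf("%g ", c[i]);
        printf("\n");
        return 0;
    }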

    So it depends.
     
