What a contradiction!
Programming with many hardcoded values is not good practice: the code becomes harder to understand and harder to maintain. For example, when a condition changes for some reason, a previously well-tuned hardcoded value may silently stop working.
However, there are situations in which we have no choice but to write hardcoded values. What are good practices then? The rest of this article focuses on that question.
In the real world, hardcoded values can cause many serious issues; one of them is memory overwriting, i.e. buffer overflows. When such an issue occurs at runtime, it is very difficult to debug because the resulting behavior is undefined or unexpected. It is therefore better to catch the problem as early as possible.
Two methods are commonly used in software engineering: runtime checks and build-time (compile-time) checks. Whenever possible, the latter is preferable.
For runtime checks, the developer must always keep in mind that whenever a new hardcoded value is introduced, a corresponding runtime check must be added. For example, the amount of memory actually used must not exceed the pre-allocated size; an error or assertion must be triggered when such a case occurs.
Build-time checks are slightly different: not every hardcoded value can be verified at compile time, and even when a particular value can be, doing so is not always straightforward.
Jon Jagger proposed a good solution for build-time checks, shown below. The COMPILE_TIME_ASSERT(pred) macro expands to a switch/case statement. When the condition (pred) is 0 (logically false), the compiler rejects the statement because two case labels with the same value are not allowed; when (pred) is 1 (logically true), the statement is legal even though it does nothing.
__________________________________________
#include <limits.h>   /* for CHAR_BIT */

#define COMPILE_TIME_ASSERT(pred) \
    switch(0){case 0:case (pred):;}

#define ASSERT_MIN_BITSIZE(type, size) \
    COMPILE_TIME_ASSERT(sizeof(type) * CHAR_BIT >= size)

#define ASSERT_EXACT_BITSIZE(type, size) \
    COMPILE_TIME_ASSERT(sizeof(type) * CHAR_BIT == size)

void compile_time_assertions(void)
{
    ASSERT_MIN_BITSIZE(char, 8);
    ASSERT_MIN_BITSIZE(int, 16);
    ASSERT_EXACT_BITSIZE(long, 32);
}
_______________________________________________________________
When can this technique be used? It depends on the actual usage; here is one example. Suppose a data structure XYZ (e.g. typedef struct {X, Y, Z, ...}) is defined in a C header file. In C/C++ we can obtain its size automatically with the sizeof(XYZ) operator, but an assembler typically knows neither that operator nor the structure itself. In this case, we can define a hardcoded macro SIZE_OF_XYZ that is exactly equal to sizeof(XYZ) and use it in the assembly source file, then add a build-time check such as COMPILE_TIME_ASSERT(SIZE_OF_XYZ == sizeof(XYZ)) in the C source file. If the structure XYZ is changed later, e.g. fields are added or removed for some reason, the compiler will catch the stale hardcode immediately by raising a compile error.
<The End>