The forgotten problems of 64-bit program development

Discussion in 'C++' started by Karpov2007, Oct 19, 2007.

  1. Karpov2007

    Karpov2007 Banned

    Oct 14, 2007
    scientific adviser of OOO "Program Verification Sy
    Russia, Tula
    Annotation. Although the history of 64-bit systems spans more than a decade, the appearance of 64-bit versions of Windows has raised new problems in developing and testing applications. This article examines some mistakes connected with developing 64-bit C/C++ code for Windows, and explains why these mistakes are not reflected in the earlier articles devoted to migration, and why the majority of static analyzers detect them unsatisfactorily.


    The history of 64-bit programs is not new: it spans more than a decade already [1]. In 1991 the first 64-bit microprocessor, the MIPS R4000, was released [2, 3]. Since that time, discussions devoted to porting programs to 64-bit systems have appeared in forums and articles, covering the problems of 64-bit development in the C language: which data model is better, what long long is, and many other questions. See, for example, an interesting collection of messages [4] from the comp.lang.c newsgroup devoted to the use of the long long type in C, which in its turn was prompted by the appearance of 64-bit systems.

    C is one of the most widespread languages and one of the most sensitive to changes in the bit width of data types. Because of its low-level features, the correctness of a program ported to a new system must be constantly controlled. Naturally, with the appearance of 64-bit systems, developers all around the world again faced the problem of keeping old source code compatible with the new systems. One indirect sign of the difficulty of migration is the large number of data models that must constantly be taken into consideration. A data model is the correlation of the sizes of the base types of a programming language. Picture 1 shows the bit width of the types in the different data models, which we will refer to further on.

    Picture 1. Data Models.

    Existing Publications and Tools in the Sphere of Verification of 64-bit Applications

    Of course, this was not the first change of bit width. It is enough to recall the transition from 16-bit systems to 32-bit ones. Naturally, the experience acquired then had a good influence on the migration to 64-bit systems.

    But the migration to 64-bit systems had its own peculiarities, which gave rise to a number of investigations and publications on these problems, for example [5, 6, 7].

    The authors of those times pointed out mistakes of the following kinds:
    1. Packing pointers into types of a smaller bit width. For example, placing a pointer into an int type on a system with the LP64 data model results in truncation of the pointer value and the impossibility of using it further.
    2. Using magic numbers. The danger consists in using numbers such as 4, 32, 0x80000000 and some others instead of special constants or the sizeof() operator.
    3. Some shift operations that do not take into account the increased bit width of a number of types.
    4. Using incorrect unions, or structures that do not take into account the alignment on systems of different bit width.
    5. Mistakes in working with bit fields.
    6. Some arithmetic expressions. Example:
    int x = 100000, y = 100000, z = 100000;
    long long s = x * y * z;
    Some other, rarer mistakes were also considered, but the main ones are mentioned in the list.

    On the basis of these investigations of the verification of 64-bit code, solutions were offered that diagnose such dangerous constructions. For example, such verification was implemented in the static analyzers Gimpel Software PC-Lint and Parasoft C++test.

    The question arises: if 64-bit systems have existed for such a long time, along with articles devoted to this problem and even program tools that control dangerous constructions in code, why should we get back to this question?

    Unfortunately, yes, we should. The reason is the progress that has taken place during these years in the sphere of information technologies. And the urgency of this question is connected with the fast spread of 64-bit versions of OS Windows.

    The existing informational support and tools in the field of 64-bit development have gone out of date and need fundamental reworking. But, you will object, there are many modern articles (2005-2007) on the Internet devoted to the problems of developing 64-bit applications in C/C++. Unfortunately, they turn out to be no more than retellings of older articles applied to the new 64-bit Windows version, without taking into consideration its specific character and the changes in technology.

    The Untouched Problems of 64-bit Programs Development

    Let us start from the beginning. The authors of some articles do not take into consideration the large memory capacity that has become available to modern applications. Of course, pointers were 64-bit in ancient times too, but such programs had no chance to use arrays several gigabytes in size. As a result, both old and new articles miss a whole stratum of errors connected with the indexing of big arrays. It is practically impossible to find in those articles a description of a mistake similar to the following:
    for (int x = 0; x != width; ++x)
      for (int y = 0; y != height; ++y)
        for (int z = 0; z != depth; ++z)
          BigArray[z * width * height + y * width + x] = InitValue;
    In this example the expression "z * width * height + y * width + x", which is used for addressing, has the int type, which means the code becomes incorrect for arrays containing more than 2 GB of elements. On 64-bit systems, types such as ptrdiff_t and size_t, or their derivatives, should be used for safe indexing of large arrays. The absence of descriptions of this kind of mistake in the articles is easily explained: at the time they were written, machines with enough memory to store such arrays were practically unavailable. Now it is a common task in programming, and we watch with great surprise as code that has served faithfully for many years stops working correctly when dealing with big data arrays on 64-bit systems.

    Another stratum of problems that has not been touched upon comprises the mistakes connected with the possibilities and peculiarities of the C++ language. It is also quite explicable why this happened: when the first 64-bit systems were introduced, the C++ language either did not exist for them or was not widespread. That is why practically all the articles are devoted to problems in the C language. Modern authors have substituted the name C/C++ for C, but have not contributed anything new.

    But the absence in the articles of the mistakes typical of C++ does not mean that they don't exist. There are mistakes that show up during the migration of programs to 64-bit systems which are connected with virtual functions, exceptions, overloaded functions and so on. You may get acquainted with such mistakes in more detail in article [8]. Let us give an example connected with the use of virtual functions.
    class CWinApp {
      ...
      virtual void WinHelp(DWORD_PTR dwData, UINT nCmd);
    };
    class CSampleApp : public CWinApp {
      ...
      virtual void WinHelp(DWORD dwData, UINT nCmd);
    };
    Let us follow the life cycle of development of a certain application. Suppose that it was first developed for Microsoft Visual C++ 6.0, when the WinHelp function in the CWinApp class had the following prototype:
    virtual void WinHelp(DWORD dwData, UINT nCmd=HELP_CONTEXT);
    It was correct to override the virtual function in the CSampleApp class as shown in the example. Then the project was ported to Microsoft Visual C++ 2005, where the prototype of the function in the CWinApp class underwent a change: the DWORD type became DWORD_PTR. The program will continue working correctly on a 32-bit system, since the DWORD and DWORD_PTR types coincide there. The problems show up when the code is compiled for the 64-bit platform: there are now two functions with identical names but different parameters, and as a result the user's code will never be called.

    Besides the peculiarities of 64-bit development from the point of view of the C++ language, there are other things to pay attention to, for example the peculiarities of the architecture of the 64-bit Windows versions. We would like to make developers aware of the possible problems and to recommend paying more attention to testing 64-bit software.

    Now let us get back to the methods of verifying source code with static analyzers. I think you have already guessed that everything is not as nice here as it may seem. In spite of the declared support for diagnosing the peculiarities of 64-bit code, this support at the moment does not meet the necessary demands. The reason is that the diagnostic rules were created according to all those articles that take into account neither the specific character of the C++ language nor the processing of large data arrays exceeding 2 GB.

    For Windows developers the case is somewhat worse. The main static analyzers are aimed at diagnosing 64-bit errors for the LP64 data model, while Windows uses the LLP64 data model [10]. The reason is that the 64-bit Windows versions are young, and the older 64-bit systems were Unix-like systems with the LP64 data model.

    As an example, let us consider the diagnostic message 3264bit_IntToLongPointerCast (port-10), which is generated by the analyzer Parasoft C++test.
    int *intPointer;
    long *longPointer;
    longPointer = (long *)intPointer; //-ERR port-10
    C++test presupposes that from the point of view of the LP64 model this construction is incorrect. But within the data model accepted in Windows this construction is safe.

    Recommendations on the Verification of 64-bit Programs

    OK, you will say, the problems of 64-bit program versions are urgent. But how can all these mistakes be detected?

    It is impossible to give an exhaustive answer, but it is quite possible to give a number of recommendations that will provide safe migration to 64-bit systems and the necessary level of reliability.
    • Introduce your colleagues who deal with 64-bit application development to the following articles: [7, 8, 9, 10, 11, 12, 13, 14, 15].
    • Introduce your colleagues to the methodology of static code analysis: [16, 17, 18]. Static code verification is one of the best ways of detecting errors of this kind. It makes it possible to check the workability even of those parts of the code whose behavior is difficult to model on large data volumes, for example when using the unit-test methodology.
    • It will be useful for developers to get acquainted with static analyzers such as Parasoft C++test, Gimpel Software PC-lint and Abraxas Software CodeCheck.
    • For developers of Windows applications it will be especially useful to get acquainted with the specialized static analyzer Viva64, aimed at the LLP64 data model [19].
    • Upgrade your unit-testing system to include the processing of large arrays in the set of tests. You may find more detailed information about the necessity of testing on large data volumes in [9], and also learn how to organize such testing better.
    • Carefully execute manual testing of the ported code on real, difficult tasks that use the capabilities of 64-bit systems. The change of architecture is too considerable a change to rely completely on automated testing systems.


    1. John R. Mashey, The Long Road to 64 Bits.
    2. Wikipedia: MIPS architecture.
    3. John R. Mashey, 64 bit processors: history and rationale.
    4. John R. Mashey, The 64-bit integer type "long long": arguments and history.
    5. 64-bit and Data Size Neutrality.
    6. 64-Bit Programming Models: Why LP64?
    7. Transitioning C and C++ programs to the 64-bit data model.
    8. Andrey Karpov, Evgeniy Ryzhkov. 20 issues of porting C++ code on the 64-bit platform.
    9. Andrey Karpov, Evgeniy Ryzhkov. Problems of testing 64-bit applications.
    10. The Old New Thing: Why did the Win64 team choose the LLP64 model?
    11. Brad Martin, Anita Rettinger, and Jasmit Singh. Multiplatform Porting to 64 Bits.
    12. Migrating 32-bit Managed Code to 64-bit.
    13. Matt Pietrek. Everything You Need To Know To Start Programming 64-Bit Windows Systems.
    14. Microsoft Game Technology Group. 64-bit programming for Game Developers.
    15. John Paul Mueller. 24 Considerations for Moving Your Application to a 64-bit Platform.
    16. Wikipedia: Static code analysis.
    17. Sergei Sokolov. Bulletproofing C++ Code.
    18. Walter W. Schilling, Jr. and Mansoor Alam. Integrate Static Analysis Into a Software Development Process.
    19. Evgeniy Ryzhkov. Viva64: what is it and for whom is it meant?
    Last edited by a moderator: Jan 21, 2017
  2. Safari

    Safari New Member

    Oct 16, 2007
    Nice article. We just escaped the WinHelp mishap because we moved to HTMLHelp and converted the hlp files to chm's, but that was not because of WinHelp problems. It was a requirement.
  3. Izaan

    Izaan New Member

    Oct 16, 2007
    Nice article, and it can be one of the contenders for Article of the month.
  4. shabbir

    shabbir Administrator Staff Member

    Jul 12, 2004

