I have a short question: why does OpenGL come with its own data types for standard types like int, unsigned int, char, and so on? And do I have to use them instead of the built-in C++ data types?
For example, the OpenGL equivalent to unsigned int is GLuint, and for a C string there is GLchar* instead of char*.
No, it isn't, and that's exactly why you should use OpenGL's data types when interfacing with OpenGL.
GLuint is not "equivalent" to unsigned int. GLuint is required to be 32 bits in size; it is always 32 bits. unsigned int might be 32 bits in size, or it might be 64 bits. You don't know, and C isn't going to tell you (outside of sizeof).
These data types will be defined for each platform, and they may be defined differently on different platforms. You use them because, even if they are defined differently, they will always come out to the same sizes: the sizes that the OpenGL API expects and requires.