OK, firstly, I'm still a beginner - especially at OpenGL - so apologies for any stupid questions or if I've overlooked something simple.
I'm adding a view to my app which loads a 3D model (originally built in Blender) that can be rotated, zoomed and panned. I've tried unsuccessfully with a few different methods, but here's what I've got so far:
- Export the .obj model from Blender
- Convert the .obj file to a .h file using the obj2opengl.pl script
- Substitute the above .h file into Xcode's OpenGL Game template and add touch recognisers for arcball rotation etc.
Everything works fine except that it only displays a wireframe model. Looking through the code, I see the default model data used by the template combines vertex and normal data in a single interleaved array, whereas the converted .h file I'm using holds them in two separate arrays - I'm guessing this is the problem, and the easiest fix would be to combine the data into a single array, but I can't find an appropriate command to do such a thing. Is it even possible?
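To show what I mean, here's the sort of thing I imagined for combining them - a rough, untested sketch that I guess would replace the glBufferData call in my setupGL below (cvsaNumVerts is just my assumption of what the generated .h calls the vertex count):

Code:
// untested guess at interleaving the two arrays from the .h file
// assumes cvsaNumVerts exists and both arrays hold 3 floats per vertex
GLfloat *interleaved = malloc(cvsaNumVerts * 6 * sizeof(GLfloat));
for (int i = 0; i < cvsaNumVerts; i++) {
    // position x, y, z
    interleaved[i*6 + 0] = cvsaVerts[i*3 + 0];
    interleaved[i*6 + 1] = cvsaVerts[i*3 + 1];
    interleaved[i*6 + 2] = cvsaVerts[i*3 + 2];
    // normal x, y, z
    interleaved[i*6 + 3] = cvsaNormals[i*3 + 0];
    interleaved[i*6 + 4] = cvsaNormals[i*3 + 1];
    interleaved[i*6 + 5] = cvsaNormals[i*3 + 2];
}
glBufferData(GL_ARRAY_BUFFER, cvsaNumVerts * 6 * sizeof(GLfloat), interleaved, GL_STATIC_DRAW);
free(interleaved);

Is that roughly the right idea, or is there a proper command for it?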
The other option is to load the normal data separately, but this too escapes me (there's a rough guess at what that might look like after the code). Below is the setup code I'm using - cvsaVerts is the vertex array and cvsaNormals (not used yet) is the normal array.
Code:
- (void)setupGL
{
    [EAGLContext setCurrentContext:self.context];

    [self loadShaders];

    self.effect = [[GLKBaseEffect alloc] init];
    self.effect.light0.enabled = GL_TRUE;
    self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f);

    glEnable(GL_DEPTH_TEST);

    glGenVertexArraysOES(1, &_vertexArray);
    glBindVertexArrayOES(_vertexArray);

    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);

    // default data
    //glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData, GL_STATIC_DRAW);

    // from .h file
    glBufferData(GL_ARRAY_BUFFER, sizeof(cvsaVerts), cvsaVerts, GL_STATIC_DRAW);

    // stride of 24 bytes = 6 floats per vertex (3 position + 3 normal), as in the template's interleaved data
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));

    glBindVertexArrayOES(0);
}
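And for the other option (keeping the normals in their own buffer instead of interleaving), this is roughly what I've been attempting - again untested, and it assumes I add a GLuint _normalBuffer ivar alongside _vertexBuffer:

Code:
// second VBO just for the normals (untested guess, needs a _normalBuffer ivar)
glGenBuffers(1, &_normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _normalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(cvsaNormals), cvsaNormals, GL_STATIC_DRAW);

// normals are tightly packed, 3 floats each, so stride 0 and no offset
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));

// I'm guessing the position pointer above would then need a stride of 12 (or 0)
// rather than 24, since cvsaVerts only holds positions

Does either approach sound like the right way to go?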