By Adam Nagy
This is a continuation of the previous post Revit model viewer for iOS - part 1
The iOS App part
We already created a Revit AddIn that can upload geometry data to a storage service. Now we need an iOS application that can download the data and display it using OpenGL.
I started by creating a new iOS project (iOS >> Application >> Master-Detail Application). The Master list shows the names of the model geometries uploaded to the server, and the Detail view displays the selected model's geometry. I chose to use Storyboards, which make it easier to keep all the views together.
Note: to see how OpenGL and GLKit can be used on iOS, you could also create an 'OpenGL Game' project; that template displays a couple of rotating boxes.
First I started implementing the server part: talking to the Amazon service and downloading the geometry. For that I downloaded the SDK from here: Amazon SDK for iOS
Then I just needed to add the framework to my project: Project settings >> Targets >> Summary >> Linked Frameworks and Libraries, then locate the framework in the downloaded folder.
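The snippets below refer to ACCESS_KEY_ID, SECRET_KEY and MODEL_BUCKET. These are not part of the SDK; they are just constants for your own AWS credentials and for the bucket the Revit add-in uploads to, so you would define them yourself with placeholder values along these lines:
// Placeholder values - substitute your own AWS credentials
// and the bucket name used by the Revit add-in
#define ACCESS_KEY_ID @"<your AWS access key>"
#define SECRET_KEY    @"<your AWS secret key>"
#define MODEL_BUCKET  @"<your model bucket name>"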
Here is the code that gets the names of the uploaded models:
+ (NSMutableArray *)getItemNames
{
  AmazonS3Client * s3 =
    [[AmazonS3Client alloc]
      initWithAccessKey:ACCESS_KEY_ID withSecretKey:SECRET_KEY];
  NSMutableArray * names = [[NSMutableArray alloc] init];
  @try
  {
    // List the objects in the bucket the add-in uploaded to
    S3ListObjectsRequest * listObjectsRequest =
      [[S3ListObjectsRequest alloc] initWithName:MODEL_BUCKET];
    S3ListObjectsResponse * response =
      [s3 listObjects:listObjectsRequest];
    NSMutableArray * objectSummaries =
      response.listObjectsResult.objectSummaries;
    // Each object's key is the name of an uploaded model
    for (S3ObjectSummary * summary in objectSummaries)
    {
      [names addObject:[summary key]];
    }
  }
  @catch (AmazonClientException * exception)
  {
    [self showAlert:exception.message withTitle:@"Download Error"];
  }
  return names;
}
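Here is a rough sketch of how the Master list's table view controller might consume getItemNames and hand the selected name over to the Detail view. The ModelData class name, the _objects array and the setModelName: method on the detail controller are placeholders of mine, not code from the actual project:
// ModelData, _objects and setModelName: are placeholder names
- (void)viewDidLoad
{
  [super viewDidLoad];
  // Fill the table with the model names found in the bucket
  _objects = [ModelData getItemNames];
  [self.tableView reloadData];
}

- (void)tableView:(UITableView *)tableView
  didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
  // Tell the detail view controller which model to download and draw
  NSString * name = [_objects objectAtIndex:indexPath.row];
  [self.detailViewController setModelName:name];
}
Depending on how the storyboard segue is set up, the hand-off could just as well happen in prepareForSegue:sender: instead of didSelectRowAtIndexPath:.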
So we can fill the Master View (a table view) with the list of model names returned by getItemNames. Then, when the user selects one of the models, we need to download the data for that item. Here is the code to do that:
+ (NSMutableArray *)getFacets:(NSString *)withName
{
  @try
  {
    AmazonS3Client * s3 =
      [[AmazonS3Client alloc]
        initWithAccessKey:ACCESS_KEY_ID withSecretKey:SECRET_KEY];
    // Download the object with the given key from the bucket
    S3GetObjectRequest * request =
      [[S3GetObjectRequest alloc]
        initWithKey:withName withBucket:MODEL_BUCKET];
    S3GetObjectResponse * response = [s3 getObject:request];
    NSData * data = [response body];
    // Convert it to list of points
    return [self getFacetsFromData:data];
  }
  @catch (AmazonClientException * exception)
  {
    [self showAlert:exception.message withTitle:@"Download Error"];
  }
  return [[NSMutableArray alloc] init];
}
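One thing worth noting: these S3 calls are synchronous, so running them on the main thread freezes the UI for the duration of the download. A minimal sketch of keeping the UI responsive, assuming a hypothetical loadModel: method on the detail controller (ModelData and setupBuffers are placeholder names; initViewDirection and _faces appear later in this post):
- (void)loadModel:(NSString *)name
{
  // Download and parse on a background queue so the UI stays responsive
  dispatch_async(
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
      NSMutableArray * faces = [ModelData getFacets:name];
      // UIKit and OpenGL work belongs on the main thread
      dispatch_async(dispatch_get_main_queue(), ^{
        _faces = faces;
        // make sure the view's EAGLContext is current before
        // creating the GL buffers in this placeholder method
        [self setupBuffers];
        [self initViewDirection];
        [(GLKView *)self.view setNeedsDisplay];
      });
    });
}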
Now we need to display the geometry. In the storyboard I replaced the UIView of the Detail View Controller with a GLKView. This simplifies a couple of things: for example, we get the glkView:drawInRect: delegate callback, inside which we can do the drawing. But that function won't get called unless we also set up a couple of things for the GLKView. We also create a GLKBaseEffect that we configure each time before we start drawing:
- (void)viewDidLoad
{
  [super viewDidLoad];
  [_statusButton setTitle:@"Done"];

  // Listen for device rotation so we can update the projection's
  // aspect ratio - see updateTransformation / didRotate below.
  // Orientation notifications are only posted after we ask for them.
  [[UIDevice currentDevice]
    beginGeneratingDeviceOrientationNotifications];
  [[NSNotificationCenter defaultCenter]
    addObserver:self selector:@selector(didRotate:)
    name:UIDeviceOrientationDidChangeNotification object:nil];

  // Give the GLKView an OpenGL ES 2 context and set up its
  // drawable formats, otherwise nothing gets drawn
  GLKView * glView = (GLKView *)self.view;
  EAGLContext * context =
    [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
  glView.context = context;
  glView.drawableColorFormat = GLKViewDrawableColorFormatRGB565;
  glView.drawableStencilFormat = GLKViewDrawableStencilFormat8;
  glView.drawableDepthFormat = GLKViewDrawableDepthFormat16;

  // The effect we'll configure before each draw
  _baseEffect = [[GLKBaseEffect alloc] init];
}
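One more of those "couple of things": glkView:drawInRect: is a GLKViewDelegate method, so if the view controller is a plain UIViewController (not a GLKViewController) and the delegate is not already wired up in the storyboard, it has to be set in code and the class has to adopt the protocol. Assuming the template's DetailViewController class name:
// In the class interface:
//   @interface DetailViewController () <GLKViewDelegate>
// In viewDidLoad, after configuring glView:
glView.delegate = self;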
To make the drawing faster we can store the geometry on the GPU. Each time the user selects a different model, we get the geometry from the storage service, turn it into arrays of vertex and normal coordinates, then store them in vertex buffer objects like so:
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
//////////////////////////////////////////////////////////////////
// buffer data is in bytes =
//   size of float * number of facets *
//   vertices per facet * values per vertex
//////////////////////////////////////////////////////////////////
glBufferData(GL_ARRAY_BUFFER,
  sizeof(GLfloat) * facetCount * 3 * 3, vertices,
  GL_STATIC_DRAW);

glGenBuffers(1, &normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glBufferData(GL_ARRAY_BUFFER,
  sizeof(GLfloat) * facetCount * 3 * 3, normals,
  GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, 0);
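The code above assumes vertices and normals are plain GLfloat arrays with facetCount * 3 * 3 values each (three vertices per facet, three floats per vertex, with a normal stored per vertex). How they get filled depends on the format the add-in uploaded in part 1, so the following is only an illustration: FacetData and its pt1/pt2/pt3/normal properties are made-up names, not the project's actual classes. The only real requirement is that facets belonging to the same FaceData stay contiguous, because drawInRect below draws each face as one glDrawArrays range.
// Hypothetical flattening of _faces into the two GLfloat arrays;
// FacetData and its pt1/pt2/pt3/normal properties are assumptions.
long totalFacets = 0;
for (FaceData * face in _faces)
  totalFacets += face.facets.count;

GLfloat * vertices = malloc(sizeof(GLfloat) * totalFacets * 3 * 3);
GLfloat * normals  = malloc(sizeof(GLfloat) * totalFacets * 3 * 3);

long i = 0;
for (FaceData * face in _faces)
{
  for (FacetData * facet in face.facets)
  {
    GLKVector3 pts[3] = { facet.pt1, facet.pt2, facet.pt3 };
    for (int v = 0; v < 3; v++)
    {
      vertices[i]     = pts[v].x;
      vertices[i + 1] = pts[v].y;
      vertices[i + 2] = pts[v].z;
      // One normal per facet, repeated for each of its vertices
      normals[i]     = facet.normal.x;
      normals[i + 1] = facet.normal.y;
      normals[i + 2] = facet.normal.z;
      i += 3;
    }
  }
}
// ... glBufferData as above, then free(vertices); free(normals);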
Then, whenever glkView:drawInRect: is called and we need to draw our geometry, we can bind the stored vertex and normal buffers and draw from them:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
  if (_faces == nil)
  {
    // if there is nothing to draw let's just fill
    // the background with red
    glClearColor(1.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    return;
  }

  // colorMaterialEnabled = GL_FALSE
  //  - uses the material color I set here
  // colorMaterialEnabled = GL_TRUE
  //  - uses the material color that comes from the array
  _baseEffect.colorMaterialEnabled = GL_FALSE;
  _baseEffect.light0.enabled = GL_TRUE;
  _baseEffect.material.shininess = 50;
  _baseEffect.lightingType = GLKLightingTypePerPixel;

  // GLKit does not seem to have these
  glEnable(GL_DEPTH_TEST);
  glEnable(GL_CULL_FACE);
  glDepthFunc(GL_LEQUAL);

  glClearColor(0.0f, 0.5f, 0.0f, 1.0f);
  glClear(GL_COLOR_BUFFER_BIT);
  glClear(GL_DEPTH_BUFFER_BIT);
  glClear(GL_STENCIL_BUFFER_BIT);

  [self updateTransformation];

  // Do the drawing
  glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
  ////////////////////////////////////////////////////////////////////
  // array type, number of values per vertex, value type, normalize,
  // offset between values (0 unless using an interleaved array),
  // pointer to array
  ////////////////////////////////////////////////////////////////////
  glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT,
    GL_FALSE, 0, 0);
  glEnableVertexAttribArray(GLKVertexAttribPosition);
  glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
  glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT,
    GL_FALSE, 0, 0);
  glEnableVertexAttribArray(GLKVertexAttribNormal);

  // Draw each face's facets with that face's own diffuse color
  long facetCount = 0;
  for (FaceData * face in _faces)
  {
    // Use float literals so the division is done in floating point
    _baseEffect.material.diffuseColor =
      GLKVector4Make(
        face.red / 255.0f, face.green / 255.0f, face.blue / 255.0f, 1);
    [_baseEffect prepareToDraw];
    glDrawArrays(GL_TRIANGLES,
      (GLint)(facetCount * 3), (GLsizei)(face.facets.count * 3));
    facetCount += face.facets.count;
  }

  glDisableVertexAttribArray(GLKVertexAttribPosition);
  glDisableVertexAttribArray(GLKVertexAttribNormal);
}
When the user selects a specific model, then apart from storing the geometry information, we also need to set up the view direction so that it is looking at the center of the model:
// Needed to update the aspect when the view dimension changes,
// i.e. the user rotates the device - used in didRotate()
- (void)updateTransformation
{
  #define M_TAU (2*M_PI)
  float aspect =
    fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
  // float fovyRadians, float aspect, float nearZ, float farZ
  GLKMatrix4 mx =
    GLKMatrix4MakePerspective(0.1 * M_TAU, aspect, 2, -1);
  _baseEffect.transform.projectionMatrix = mx;
}
- (void)initViewDirection
{
  // Step back from the model's center along -Y by twice the
  // diagonal of its bounding box, looking at the center, Z up
  distance = GLKVector3Distance(
    GLKVector3Make(_minPt.x, _minPt.y, _minPt.z),
    GLKVector3Make(_maxPt.x, _maxPt.y, _maxPt.z)) * 2;
  GLKVector3 centerToEye = GLKVector3Make(0, -distance, 0);
  GLKVector3 eye = GLKVector3Add(_centerPt, centerToEye);
  _baseEffect.transform.modelviewMatrix =
    GLKMatrix4MakeLookAt(
      eye.x, eye.y, eye.z,
      _centerPt.x, _centerPt.y, _centerPt.z,
      0, 0, 1);
  [self updateTransformation];
}
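For completeness, the didRotate: selector registered in viewDidLoad only needs to rebuild the projection for the new aspect ratio and ask the view to redraw. Its body isn't shown in the listings above, so this is just a sketch of what it could look like:
- (void)didRotate:(NSNotification *)notification
{
  // The view's bounds (and so the aspect ratio) changed,
  // so recompute the projection matrix and redraw
  [self updateTransformation];
  [(GLKView *)self.view setNeedsDisplay];
}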
In the next post we'll see how to add view transformations when the user makes pinch, pan or rotate gestures on the device.