By Philippe Leefsma (@F3lipek)
My previous Forge-focused blog from last week dealt with the OAuth authentication workflow; in case you missed it, here it is:
Landing your Forge OAuth authentication workflow
If you practiced it a bit, authenticating through the Autodesk OAuth server should now be a breeze for you ;) ... This time we can start having some real fun, because we are going to use the actual Forge APIs!
Most likely, the first thing you will want to do is upload your own models to the Autodesk Cloud, in order to load them later into the viewer or extract some metadata. If for some reason you've got no design data at all, keep reading: we've got you covered with a bunch of free sample models ...
I - OSS: Autodesk Object Storage Service
Let's get started with the easiest part, the OSS API. What we now call the Forge Data Management API (see official documentation here) is actually a set of two distinct APIs which may be used independently, based on what you need to do. If you used the View & Data API in the past, chances are you're already familiar with OSS.
It is a rather basic file storage REST API that lets you securely host any data on the Autodesk Cloud. Typically that's an app-context API, which means you need a 2-legged OAuth token to use it. But for some workflows that we are going to cover in section II of this article, you may use a 3-legged token. The way the OSS API works is pretty straightforward:
- Obtain a valid OAuth token in order to perform authorized REST API calls
- Create a bucket: a logical storage unit like a folder
- Upload any file to the bucket (a file is called an object)
- Download your objects if needed
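As a sketch of what the steps above look like on the wire: the routes below are the OSS v2 endpoints from the official documentation, but the token, bucket and object names are made-up placeholders. Building each call as a plain request descriptor keeps the example self-contained (you would hand these to `request`, `fetch` or any HTTP client):

```javascript
// Sketch: the OSS workflow expressed as plain REST request descriptors.
// Routes are the documented OSS v2 endpoints; names are examples only.
var BASE = 'https://developer.api.autodesk.com'

// Step 2 - create a bucket with a retention policy
function createBucketRequest (token, bucketKey, policy) {
  return {
    method: 'POST',
    url: BASE + '/oss/v2/buckets',
    headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      bucketKey: bucketKey,
      policyKey: policy // 'transient' | 'temporary' | 'persistent'
    })
  }
}

// Step 3 - upload a file (an "object") to the bucket
function putObjectRequest (token, bucketKey, objectKey) {
  return {
    method: 'PUT',
    url: BASE + '/oss/v2/buckets/' + bucketKey +
      '/objects/' + encodeURIComponent(objectKey),
    headers: { 'Authorization': 'Bearer ' + token }
  }
}

// Step 4 - download the object back
function getObjectRequest (token, bucketKey, objectKey) {
  return {
    method: 'GET',
    url: BASE + '/oss/v2/buckets/' + bucketKey +
      '/objects/' + encodeURIComponent(objectKey),
    headers: { 'Authorization': 'Bearer ' + token }
  }
}
```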
Now here is the trick: a bucket is created with a specific policy, which defines how long the files are kept in that bucket. For info:
- transient: your files remain 24 hours in the bucket
- temporary: 1 month
- persistent: until you decide to delete them
The bucket itself is never deleted. I need to emphasize this because it has been a common misunderstanding in the past!
The bucket name has to be lower case, with no special characters, and has to be unique service-wide, across ALL the users of the service, not just you. So if you first try the API and attempt to create a bucket named "bucket1" ... unfortunately, chances are that some other developer already owns that name (you will get a 409 Conflict error in that case). So you can either come up with really fancy names or append a guid-like prefix/suffix to your bucket names in order to avoid collisions.
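A minimal sketch of the guid-suffix approach: the helper below is entirely my own (it is not part of the Forge API), and the exact suffix format is arbitrary. It lowercases the name, strips disallowed characters, and appends a random hex suffix:

```javascript
// One way to avoid bucket name collisions: sanitize to the allowed
// character set and append a random guid-like suffix.
// This helper is an example, not part of the OSS API.
function uniqueBucketKey (baseName) {
  var sanitized = baseName
    .toLowerCase()
    .replace(/[^a-z0-9\-]/g, '-') // lower case, no special characters

  var suffix = 'xxxx-xxxx'.replace(/x/g, function () {
    return ((Math.random() * 16) | 0).toString(16)
  })

  return sanitized + '-' + suffix
}

// e.g. uniqueBucketKey('My App Bucket')
//      -> something like 'my-app-bucket-3fa2-9c01'
```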
The object keys can be anything - no special characters though - but they need to be unique on a per-bucket basis: uploading an object with a name that already exists in the same bucket will simply overwrite the resource.
We are also often asked how many buckets you can create, how many objects a bucket can contain, and how large each object can be... There are no theoretical bounds to the API; the sky's the limit. We would advise you to create just three buckets (1 transient, 1 temporary and 1 persistent), but that's really up to you. If you plan to create multiple buckets based on some workflow, just think about how your logic will scale: creating one bucket per user may not scale nicely once you get a large customer base...
If you used version 1 of the OSS API, here are the enhancements in v2:
- You can now delete an object on a bucket
- Deleting a bucket is doable but NOT for everybody; because it's a dangerous operation, we decided to whitelist the users who can do it.
- You can iterate the list of buckets you created from your set of API keys
- Similarly you can iterate the objects stored in each bucket
- You can create signed resources, but I will tackle that in another post (can't get all the fun at once!)
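The bucket iteration mentioned above is paginated: to my understanding of the docs, the v2 bucket list returns an `items` array plus a `next` link while more pages remain. Here is a sketch of walking all pages; the HTTP call is injected as a `fetchJSON` function so the logic stays testable without network access (that injection is my own design choice, not part of the API):

```javascript
// Sketch: iterate all buckets page by page. The OSS v2 bucket list
// returns an "items" array, plus a "next" link when more pages remain.
// fetchJSON(url, options) -> Promise<parsed JSON body> is injected.
async function listAllBuckets (fetchJSON, token) {
  var url = 'https://developer.api.autodesk.com/oss/v2/buckets?limit=100'
  var buckets = []

  while (url) {
    var page = await fetchJSON(url, {
      headers: { 'Authorization': 'Bearer ' + token }
    })
    buckets = buckets.concat(page.items)
    url = page.next // undefined on the last page -> loop ends
  }
  return buckets
}
```

The same loop works for iterating the objects of a bucket, since that endpoint is paginated the same way.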
Below is the implementation of my server-side Node service that wraps the OSS API calls. It's pretty straightforward and doesn't do anything fancy. At the time of writing, I hadn't yet implemented the resumable upload that allows you to upload large files in multiple chunks. The reason is that we are working on auto-generated API wrappers for our REST APIs, so I'm waiting for those to be ready. Once the wrapper is available, I will simply replace the request calls with the wrapper methods inside the service. The impact on the rest of my server code will be nonexistent.
Using that, the REST API exposed by my server to its client application is just a thin Express wrapper around the service methods. For example, below is the "GET /buckets" implementation. For the full implementation, take a look there: oss endpoint
```javascript
var router = express.Router()

/////////////////////////////////////////////////////////////////////////////
// GET /buckets
//
/////////////////////////////////////////////////////////////////////////////
router.get('/buckets', async (req, res) => {

  try {

    // obtain forge service
    var forgeSvc = ServiceManager.getService(
      'ForgeSvc')

    // request 2-legged token
    var token = await forgeSvc.getToken('2legged')

    // obtain oss service
    var ossSvc = ServiceManager.getService('OssSvc')

    // get list of buckets by passing valid token
    var response = await ossSvc.getBuckets(
      token.access_token)

    // send json-formatted response
    res.json(response)

  } catch (ex) {

    res.status(ex.statusCode || 500)
    res.json(ex)
  }
})
```
You can refer to this page for the official documentation to see how to create a bucket and upload a file to it.
II - A360 Data Management API
The second set of endpoints within the Data Management API provides programmatic access to A360, the Autodesk Cloud at https://a360.autodesk.com. That's a 3-legged OAuth API that lets your application access and manage a user's files once it has been authorized through a web interface.
If you don't have an A360 account yet, you will need to sign up for one in order to test the API with your data. A set of basic design files will be provisioned automatically to your account upon first sign-in.
The underlying file storage system is based on the OSS API described above, but A360 adds an extra layer of features on top of it, such as sharing your data with other users, versioning your files, attaching metadata to them, and many more features to come... It is therefore more complex to use than OSS.
Your data is organized as follows inside A360:
- Hubs: the highest-level logical data storage on A360. Each user has their own hub by default, but you can create additional team hubs and share them with other users
- Projects: under each hub, you have projects, which can be seen as root folders containing your data
- Folders: A logical sub-folder inside a project or another folder
- Items: a file that you uploaded to a project or folder
- Versions: each item may contain one or multiple versions of the file. You can then select programmatically which version you want to access, download or load in the viewer
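The hierarchy above maps directly onto REST routes. The paths below are the documented Data Management endpoints; the ids are placeholders that you obtain from each previous level's response (a hub id from the hubs listing, a project id from the projects listing, and so on):

```javascript
// Sketch: the A360 hierarchy as Data Management REST routes.
// Each level's response contains the ids needed for the next level.
var BASE = 'https://developer.api.autodesk.com'

var dmRoutes = {
  // list hubs accessible to the 3-legged token
  hubs: function () {
    return BASE + '/project/v1/hubs'
  },
  // list projects under a hub
  projects: function (hubId) {
    return BASE + '/project/v1/hubs/' + hubId + '/projects'
  },
  // list the items and sub-folders of a folder
  folderContents: function (projectId, folderId) {
    return BASE + '/data/v1/projects/' + projectId +
      '/folders/' + folderId + '/contents'
  },
  // list the versions of an item
  itemVersions: function (projectId, itemId) {
    return BASE + '/data/v1/projects/' + projectId +
      '/items/' + itemId + '/versions'
  }
}
```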
In order to upload a file to an A360 user account, you have to follow several steps:
- Your web application needs to obtain a valid 3-legged token by requesting user approval
- Determine projectId and folderId which identify where on A360 you want to upload the data
- Create a storage location: basically the API will determine the underlying OSS bucketKey and objectKey you have to use to upload the file
- Upload the data using OSS API
- If the file doesn't exist create a new item, otherwise create a new version and add it to the corresponding item
Here is what the implementation of that workflow looks like in my service:
```javascript
/////////////////////////////////////////////////////////////////
// Upload file to create new item or new version
//
/////////////////////////////////////////////////////////////////
upload (token, projectId, folderId, file, displayName = null) {

  return new Promise(async(resolve, reject) => {

    try {

      var filename = file.originalname

      var storage = await this.createStorage(
        token, projectId, folderId, filename)

      var ossSvc = ServiceManager.getService('OssSvc')

      var objectId = ossSvc.parseObjectId(storage.id)

      var object = await ossSvc.putObject(
        token,
        objectId.bucketKey,
        objectId.objectKey,
        file)

      // look for items with the same displayName
      var items = await this.findItemsWithAttributes(
        token,
        projectId,
        folderId, {
          displayName: filename
        })

      if (items.length > 0) {

        var item = items[0]

        var version = await this.createVersion(
          token,
          projectId,
          item.id,
          storage.id,
          filename)

        resolve({
          version,
          storage,
          object,
          item
        })

      } else {

        var item = await this.createItem(
          token,
          projectId,
          folderId,
          storage.id,
          filename,
          displayName)

        resolve({
          storage,
          object,
          item
        })
      }

    } catch (ex) {

      reject(ex)
    }
  })
}
```
The complete implementation is available there: DM Service
In order to download a file, you first need to determine the available versions and select the one you want, then obtain the OSS objectId of that version and download the file using the OSS API.
Here is what an item versions response may look like; the OSS objectId we are interested in is available in relationships.storage.data.id (Line #66). The bucketKey to use would be "wip.dm.prod" and the objectKey "3d249b40-b15d-46a2-b684-001d4534129f.dwfx"
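The storage id is a urn of the form `urn:adsk.objects:os.object:<bucketKey>/<objectKey>`, so extracting the bucketKey and objectKey is a simple string split. Below is one plausible implementation of a `parseObjectId` helper like the one used in my service (a sketch under that assumed urn format, not the actual service code):

```javascript
// Sketch: split an OSS storage urn into its bucketKey and objectKey,
// assuming the format urn:adsk.objects:os.object:<bucketKey>/<objectKey>
function parseObjectId (objectId) {
  var path = objectId.split(':').pop() // '<bucketKey>/<objectKey>'
  var idx = path.indexOf('/')
  return {
    bucketKey: path.substring(0, idx),
    objectKey: path.substring(idx + 1)
  }
}
```

Applied to the example above, this yields bucketKey "wip.dm.prod" and objectKey "3d249b40-b15d-46a2-b684-001d4534129f.dwfx".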
III - Implementing the UI
Implementing a good web UI that interacts with your server in order to let your users upload, download and visualize their data is not straightforward at all. I spent a large portion of the development time working on the following control panel, which displays the list of all hubs/projects/folders/items for the signed-in user and the OSS buckets & objects linked to my app account, and also lets the user download or upload files either by drag and drop or file picking. Each item in the treeview also has a context menu that provides some actions based on the type of the item. If the item is design data supported by the viewer, you can double-click on it to import it into the viewer.
I used the following libraries to achieve this control:
- DropzoneJS on the client side + the Multer npm package for Node. The setup is very easy and they offer flexible options for customization. Having tried multiple other libraries, I highly recommend this combination!
- The treeview is the one provided by the viewer API: the Autodesk.Viewing.UI.Tree and Autodesk.Viewing.UI.TreeDelegate objects. It's a pretty interesting component because it loads the data associated with each hub/folder/item asynchronously: the tree populates progressively, but you can start interacting with it right away. You don't have to wait for a huge payload of all your items to initialize the whole tree, and subfolders are loaded only once the user expands them. It would deserve a blog post of its own to explain all the tricks you can achieve with that control ... You can find the implementation of the tree in Viewing.Extension.Storage.Panel.js
- The TabManager that lets you switch between hubs is a completely custom control. You can find its implementation here.
- You can reorder the tabs by simple drag and drop; this feature uses the dragula library, a must!
- Complete code for that panel is packed into a viewer extension: Viewing.Extension.Storage
That's it for today! The Data Management API official documentation is available here, the complete source code of my ongoing project is there, and the live sample runs at https://forge.autodesk.io ;) Feel free to try it today with your own A360 account!
In an upcoming post I will describe the work I did with the Model Derivative API, which lets you access metadata and properties of your model, convert design data into viewables that can be loaded into the viewer, or export the geometry to different CAD formats.