My Experience at the 2024 Forma Hackathon


In early February of this year, the first Forma Hackathon was held at the Autodesk offices in Oslo. This event gave my company, PIRO CIE, a real opportunity to dedicate time to Forma with the assistance of the engineering team. The event kicked off at 9 am, and we were invited to pitch our ideas. Our pitch was quite straightforward: "Make Augmented Reality from Forma".


A few days before the event, we started reading the documentation and discovered from one of the examples that we could get triangles from the elements. That was our starting point, but after some discussions with the engineering team, they convinced us to use the Forma API, which allowed us to access the data without an extension, and thus without opening the Forma web page.


We decided to give it a try and began making API requests to extract Forma data. Unfortunately, we quickly realized that this API was still new and required a few tips and hacks to obtain certain values. Our biggest challenge, however, was not only competing against great teams but also against time. While two days sounds like plenty, once introductions, breaks, discussions, setup, and preparation of the presentation materials were accounted for, we realistically had only about 5-10 hours of actual coding time.


Discovering the API, parsing responses, and writing recursive functions is not very complicated, but it can be quite time-consuming, and we were not sure we would achieve our goals by the end. Returning to our initial approach, we quickly set up a Forma extension, and with a few lines of code we were able to retrieve the geometry of our buildings. With a basic WebSocket server, we could then send this data to other clients.
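As a rough sketch of the extension side: in the embedded-view SDK, `Forma.geometry.getTriangles()` returns a flat `Float32Array` of vertex positions, which we forwarded over the WebSocket. The `packTriangles` helper and `MeshMessage` shape below are our own hypothetical illustrations of the message framing, not part of the Forma SDK:

```typescript
// Hypothetical framing helper for the geometry stream. In the extension,
// the buffer would come from Forma.geometry.getTriangles()
// (forma-embedded-view-sdk) and be pushed to the server, e.g.:
//   socket.send(JSON.stringify(packTriangles(triangles)));
type MeshMessage = {
  type: "mesh";
  vertexCount: number;
  positions: number[]; // x, y, z per vertex, three vertices per triangle
};

function packTriangles(positions: Float32Array): MeshMessage {
  if (positions.length % 9 !== 0) {
    throw new Error("expected 9 floats (3 vertices) per triangle");
  }
  return {
    type: "mesh",
    vertexCount: positions.length / 3,
    positions: Array.from(positions),
  };
}
```

On the other end, the client only has to parse the message and rebuild a mesh from the vertex list, which kept the protocol trivial under hackathon time pressure.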


Next, we created a fresh Unity project - our game engine of choice for easily prototyping and developing AR apps - and added a WebSocket client to receive our data.


Unity is very useful for creating augmented reality apps. With the help of the ARFoundation package, we only needed to drag the necessary components into the scene to have an AR app that we could deploy on our iPad. Now came the trickiest part: model placement. Usually, we work with markers or detected plane intersections to establish a real-world origin. However, for this example, we decided to use geolocation (the accuracy should be sufficient for this use case).


This part involved two different steps:

  1. Converting the GPS coordinates of the model to be placed, and of the device, to Cartesian coordinates.
  2. Finding the true north of the device to orient the model.
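For step 1, a minimal sketch of the conversion, using an equirectangular approximation that is accurate enough at building scale; `gpsToLocal` and its field names are our own illustration, not an SDK API:

```typescript
// Approximate Earth radius in meters; fine for offsets of a few hundred
// meters, where meters-per-degree is effectively constant.
const EARTH_RADIUS = 6371000;

type Gps = { lat: number; lon: number }; // degrees

// Convert a GPS point to meters relative to an origin (e.g. the device),
// with x pointing east and z pointing north, matching a ground plane
// laid out like Unity's XZ plane.
function gpsToLocal(origin: Gps, point: Gps): { x: number; z: number } {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(point.lat - origin.lat);
  const dLon = toRad(point.lon - origin.lon);
  // Longitude degrees shrink with latitude, hence the cosine factor.
  const x = EARTH_RADIUS * dLon * Math.cos(toRad(origin.lat));
  const z = EARTH_RADIUS * dLat;
  return { x, z };
}
```

With the model's offset expressed this way, step 2 reduces to rotating the scene so that its z axis lines up with true north, the part we ended up sidestepping at the event.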

Because we were working inside the building and did not have easy access to the outdoors, our GPS data was not perfect. Due to the time limit, we also streamlined the process a little: for example, we assumed the orientation by placing the device in a known position at launch.


In any case, our presentation and work impressed the judges, and we were one of the three winners (which was amusing for us as we weren't sure if we would even compete!).


Our source code is publicly available here via GitHub. Our presentation slides can be found here: !Anrq2_ANoVNNjP0QKUYY9PdwVxhapQ?e=2Ijl0o