AI-based Prototype
Technology Concept Demonstrations

Nvidia HQ by Gensler | A.I. Space Planning

My Role
Senior Director
Timeline
2017-2018
A custom GAN to save Gensler designers time in creating the Voyager office layout

How might machine learning impact the future of architecture?

In the past few years, Autodesk Research has put machine learning to the test with architectural design projects.

Nvidia, a Silicon Valley pioneer, sought a corporate headquarters that reflected its core belief in people as its greatest asset. The 250,000-square-foot floor plan of this triangular building is designed around how people move, demonstrating design from the inside out. Producing and revising a layout at this scale by hand is tedious, and in the future of architecture, automation and machine learning show promise in alleviating that pain, taking over menial tasks and giving architects more time for design creativity.

ML–assisted design can relieve drudgery and allow greater artistry in the future of architecture.

We see three ways that ML-assisted design has the potential to augment architects’ skills, improve productivity, and automate drudgery.

1. “design automation,” or generative design, where the designer inputs goals, constraints, or parameters, and the algorithm creates design options.

2. “design insight,” where the architect fully controls the design, but ML provides insight and suggestions on matters such as local building-code requirements. This gives architects more freedom to design, with helpful (but hands-off) guidance that can speed up their workflow—from planning to pre-construction.

3. True “design interaction,” where software co-creates the design along with the architect and automates the more mundane parts of the task. We recently explored this method with two of Autodesk’s partners, NVIDIA and Gensler. From the overlap of our common goals, a research project was born, one that suggests ML’s influence in building design and construction will be profound.

Moving from BIM to GAN

Using one of NVIDIA’s office-building projects (designed by Gensler) as an experiment, we tested the feasibility of a new AI-assisted workflow. With both data and guidance from NVIDIA and Gensler, we collected all variations of components used in their current projects and created a data set that contains all of the possible spatial combinations of each type.

An ML model “learns” by finding patterns in a large data set—in this case, interior-layout examples for an office building. One of the basic principles in ML is that the model must be trained on both good data and bad data—data that tells the model which outcomes you want (practical, pleasant, productive working environments) and which you don’t want. If we gave it only good data, it wouldn’t know when it did something wrong; it might let cubicles overrun walls or leave too little walking space between them.
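As a concrete (and much simplified) illustration of labeling good versus bad training examples, the sketch below scores toy desk layouts against two of the rules above: furniture must not overrun the floor boundary, and desks must keep a minimum walkway between them. The rectangle format, clearance threshold, and function names are illustrative assumptions, not the project’s actual data pipeline.

```python
# Illustrative sketch only: label simple rectangular-desk layouts as
# "good" or "bad" training examples. Rectangles are (x, y, w, h) tuples;
# the 1.0 m walkway clearance is an assumed value for the example.

def overlaps(a, b, clearance=0.0):
    """True if axis-aligned rectangles a and b are closer than clearance."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw + clearance <= bx or bx + bw + clearance <= ax or
                ay + ah + clearance <= by or by + bh + clearance <= ay)

def label_layout(floor, desks, min_walkway=1.0):
    """Return 'good' only if every desk fits the floor and keeps clearance."""
    fx, fy, fw, fh = floor
    for i, d in enumerate(desks):
        dx, dy, dw, dh = d
        if dx < fx or dy < fy or dx + dw > fx + fw or dy + dh > fy + fh:
            return "bad"              # desk overruns the boundary wall
        for other in desks[i + 1:]:
            if overlaps(d, other, clearance=min_walkway):
                return "bad"          # not enough walking space between desks
    return "good"

floor = (0, 0, 10, 10)
ok = [(0, 0, 2, 1), (4, 0, 2, 1)]     # well spaced
bad = [(0, 0, 2, 1), (2.2, 0, 2, 1)]  # only 0.2 m of walkway
print(label_layout(floor, ok), label_layout(floor, bad))  # good bad
```

Feeding a model both kinds of examples, with these labels attached, is what lets it learn which outcomes to avoid.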

For this project, we chose a specific type of model called a generative adversarial network (GAN). Like a human designer, an ML system can quicken and deepen its grasp of a domain of knowledge by repeatedly challenging its assumptions about what it’s already learned. In a GAN, there are two models challenging each other. Both are trained to “know” what good office layouts look like: combinations of furniture, infrastructure (such as HVAC and plumbing), light, and space that represent good office design. One model constantly generates combinations of these features and challenges the other model to accurately label each result a “good” or “bad” design.
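To make the adversarial dynamic concrete, here is a deliberately tiny, non-neural sketch: “good” layouts are reduced to a single feature (average walkway width), a stand-in discriminator picks a threshold that separates real samples from generated ones, and the stand-in generator shifts toward the side the discriminator classifies as real. The feature, the numbers, and the update rule are all illustrative assumptions; the actual project used a neural GAN, not this toy loop.

```python
import random
random.seed(0)

# Toy adversarial loop (illustrative only, not the project's model).
# Real layouts are summarized by one feature: average walkway width.
REAL_MEAN = 1.5   # assumed average walkway width in "good" layouts
gen_mean = 0.3    # the generator starts by proposing cramped layouts

for step in range(200):
    real = [random.gauss(REAL_MEAN, 0.1) for _ in range(32)]
    fake = [random.gauss(gen_mean, 0.1) for _ in range(32)]
    # Discriminator: the midpoint threshold between the two batch means
    # is its current best rule for telling real layouts from fakes.
    threshold = (sum(real) / len(real) + sum(fake) / len(fake)) / 2
    # Generator: move toward the region the discriminator currently
    # accepts as real (a crude stand-in for a gradient step).
    if gen_mean < threshold:
        gen_mean += 0.05 * (threshold - gen_mean)
    else:
        gen_mean -= 0.05 * (gen_mean - threshold)

print(round(gen_mean, 2))  # ends up close to the real mean of 1.5
```

The point of the sketch is the feedback loop: as the generator’s proposals improve, the discriminator’s threshold moves, which in turn pushes the generator further, so both players sharpen each other.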


Instantaneous Conversion of Sketches to CAD/BIM

Project VGAN lets users test-fit plans quickly by inputting a design program as a target and sketching a boundary around the area.

We created a sketch interface that followed a workflow familiar to Gensler designers called “test fitting”: optimally fitting a target number and ideal configuration of phone booths, conference rooms, and cubicles into an open floor plan.

The initial program targets were 20 conference rooms, 20 phone booths, and 200 workstations. Using a pen and tablet, designers drew the outline of the area to consider, labeled it with zone types, and then, as they sketched, instantly created layouts that were, under the hood, full-fidelity BIM geometry in Autodesk Revit.

The Workflow of the Future

Now, when an architect needs to make a change to a commercial office layout (for example, 400 cubicles instead of 200), such a change typically requires the designer to start from scratch. But when a GAN allows the exploration of alternative layouts, it saves an enormous amount of time and drudgery for designers, which frees them up to focus on the creative parts of their work. A sketch might take a few minutes to complete, but it can replace two to three weeks of back-and-forth CAD work and save a full five weeks in getting to a high-fidelity, real-time visualization of the new proposal.
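The “change the program, regenerate the plan” idea can be sketched as follows, with a naive grid packer standing in for the trained model. The element sizes, aisle width, floor dimensions, and function names are assumptions for illustration; the real system produces layouts with the GAN described above rather than a packer.

```python
# Hedged sketch: given new program targets, the whole layout is
# regenerated from scratch by the system instead of redrawn by hand.
# Element footprints (width, depth in meters) are assumed values.
SIZES = {"conference_room": (6, 4), "phone_booth": (2, 2), "workstation": (2, 2)}

def generate_layout(targets, floor_w=150, floor_h=120, aisle=1):
    """Place each requested element on a grid, wrapping into new rows."""
    placements, x, y, row_h = [], 0, 0, 0
    for kind, count in targets.items():
        w, h = SIZES[kind]
        for _ in range(count):
            if x + w > floor_w:          # wrap to the next row
                x, y, row_h = 0, y + row_h + aisle, 0
            if y + h > floor_h:
                raise ValueError("program does not fit this floor")
            placements.append((kind, x, y))
            x += w + aisle
            row_h = max(row_h, h)
    return placements

plan = generate_layout({"conference_room": 20, "phone_booth": 20,
                        "workstation": 200})
bigger = generate_layout({"conference_room": 20, "phone_booth": 20,
                          "workstation": 400})
print(len(plan), len(bigger))  # 240 440
```

Doubling the workstation count is a one-line change to the targets, and a complete new plan comes back in milliseconds; that is the workflow shift, even though the real generator is a trained model rather than this packer.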

Instant Immersion using VR

After the high-fidelity BIM geometry is created, NVIDIA GPUs can convert the design directly into a real-time VR environment. Minutes after completing a sketch, a Gensler architect or designer can put on a VR headset and walk around the space to experience that specific environment.

Additional content provided by Chin-Yi Cheng, principal research scientist, Autodesk.

Watch the video and read more about the project here.