Machine Dreams

25 May 2018



By automating repetitive, labour-intensive practices, artificial intelligence is promising to revolutionise everything from image recognition to driving. Can its influence meaningfully extend to architecture? Greg Noone speaks to architects about the inroads AI is making in design, construction and the very definition of built space.


Over the screech and clatter of the traffic below sounds the dull, rhythmic thud of the cities, as they slouch on pinion legs along the East River. From up here, on the viewing platform, it seems like the laws of perspective have momentarily changed, as if a trail of stag beetles had decided to march through the air just inches from your eyes, but the distant whine of hydraulics brings you back to your senses. The cities house thousands now beneath their steel carapaces, carrying their residents to wherever they are needed. Sometimes they trudge vast distances, picking paths through deserts, plains, mountain valleys. Today they’re heading south, towards the Brooklyn Bridge and the setting sun.

Ron Herron’s Walking City, published in 1964, is regarded as one of architecture’s stranger fever dreams. By imagining these vast, ambulatory metropolises in the pages of Archigram, Herron was stretching the contemporary vogue for blending urbanism with convenience to its logical extreme. Condemned by some, in its day, as inhuman, even totalitarian in scope, it was also one of the first depictions of what the practical application of artificial intelligence in architecture could look like. The steel behemoths Herron envisioned were intended to be not only self-sufficient, but also autonomous, existing in a world where humans put as much trust in machines as in the elimination of borders.

That this aspect of ‘Walking City’ has been largely overlooked since 1964 says a great deal about attitudes towards AI in architecture. Contemporary architects have no problem relying on complex algorithms to test and generate new designs, or making their structures ‘smart’ by scattering sensors inside and out, like so many oyster shells in tabby concrete. However, the idea that a machine can be trusted to create new designs independently has remained problematic. When Robert Woodbury, a professor of architecture at Simon Fraser University, was recently asked by Bloomberg whether generative design could edge the architect out of existence, his response was emphatic. “It won’t. It can’t.”

It is easy to see this attitude emerging from personal prejudice: as with all great art, the idea that a mechanical intelligence could conceive, say, La Sagrada Familia, is one that naturally forces us to reconsider from where inspiration truly springs, not least in ourselves.

It is also a position rooted in technical reality. All the talk of AI automating entire professions out of existence has been just that: talk. Algorithmic attempts to compose songs or write screenplays have produced work that is mostly garbled nonsense. Seen in that light, the idea of an AI architect seems positively kitsch, even dangerous.

As such, the influence of machine learning on our daily lives has been marginal. And yet, it is in the margins that AI has thrived. By rationalising repetitive processes, it has optimised working practices and freed human labour for other tasks. Learning algorithms suggest which products to buy, trade stocks, even translate languages. And their influence is beginning to be keenly felt in architectural practices, too.

Supervised learning

Viviane Hülsmeier is one of the founders of CoPlannery, a startup headquartered in Berlin. Set to open early next year, the practice will use a supervised learning algorithm to help first-time builders understand precisely what features they want to include or eliminate in their new projects. The ultimate goal is to deliver a bespoke design. “Each client is very individual,” says Hülsmeier. “There’s no off-the-shelf solution for architecture.”

CoPlannery works from the assumption that most new-build clients are laypeople who, for better or worse, cannot fully articulate what they want from their first project. Instead of having the architect sit down with the client and winnow their precise needs, the start-up deploys a robo-adviser to determine what they want to include and how they want it rendered.

“That’s where the AI aspect comes into play, because we can’t possibly expect a client to go through a checklist of 200 questions, [as] many of them might not be relevant to their project,” explains Hülsmeier. “Our goal is that the robo-adviser reacts dynamically to the answers it has received from a client, so that it can detect certain tendencies and then clarify specific topics in more detail.”
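CoPlannery has not published how its robo-adviser works, but the behaviour Hülsmeier describes can be sketched as an adaptive questionnaire, where each answer opens up only the follow-up topics it makes relevant. The sketch below is purely hypothetical (all question topics, answers and function names are invented for illustration), and it uses hard-coded rules where the real system would presumably use a model trained on past projects:

```python
# Hypothetical sketch of an adaptive questionnaire: each answer can
# activate follow-up topics, so the client never faces the full
# 200-question checklist, only the branches their answers open up.

QUESTIONS = {
    "garden": "Do you want outdoor space?",
    "garden_size": "Roughly how large should the garden be (m2)?",
    "floors": "How many storeys do you imagine?",
    "lift": "Would you like a lift between floors?",
}

# Answers that unlock more detailed follow-up questions.
FOLLOW_UPS = {
    ("garden", "yes"): ["garden_size"],
    ("floors", "3+"): ["lift"],
}

def run_survey(answer_fn, start=("garden", "floors")):
    """Walk the question graph, asking follow-ups only when relevant."""
    queue, answers = list(start), {}
    while queue:
        topic = queue.pop(0)
        answer = answer_fn(QUESTIONS[topic])
        answers[topic] = answer
        queue.extend(FOLLOW_UPS.get((topic, answer), []))
    return answers

# A client who wants a garden but a single storey is asked about
# garden size, and never about a lift.
scripted = {"Do you want outdoor space?": "yes",
            "Roughly how large should the garden be (m2)?": "50",
            "How many storeys do you imagine?": "1"}
result = run_survey(scripted.get)
```

In a supervised-learning version, the `FOLLOW_UPS` table would be replaced by a classifier that detects the “tendencies” Hülsmeier mentions and scores which unanswered topics are worth raising next.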

We were fascinated by the idea of building an autonomous factory that could turn our craziest design ideas into physical objects.

CoPlannery is one of a handful of start-ups using AI to eliminate those necessary tasks that architects find wearing or dull. Hülsmeier mentions Spacemaker, a Norwegian firm whose AI, according to its website, generates site proposals in the “billions”. Meanwhile in the US, Autodesk Research has rolled out ‘Project Discover’, a learning algorithm that adapts old blueprints for newer projects. Its creator, David Benjamin, is something of an evangelist for generative design, imagining that the logical path of AI in this area is to facilitate “a kind of co-design between human and computer that could not be possible by human alone or computer alone”.

Hülsmeier is more reluctant to emphasise the capabilities of CoPlannery’s algorithm. “We’ve kind of stopped communicating the AI aspect too much, because it makes people afraid,” she says. The kinds of clients CoPlannery will pursue, after all, are often keenly aware of the amount of money they will have to repay on their construction loan or mortgage. “They still need personal interaction when it comes to that much money.”

Building blocks

In Hülsmeier’s opinion, any talk of robo-architects is rash. “If you work for an insurance company, yeah, chances are high that you’ll lose your job,” she says. “But for architects, their cognitive abilities are nothing that AI can automate in the next decade.”

Daghan Cam might beg to differ. Formerly of Zaha Hadid Architects, Cam is one of the co-founders of Ai Build, a company housed inside an East London warehouse that produces autonomous construction systems.

“We were fascinated by the idea of building an autonomous factory that could turn our craziest design ideas into physical objects without human intervention,” says Cam, whose enthusiasm quickly transformed what was initially an expensive hobby involving algorithms and robotic arms into a viable business proposition. After working in what the architect calls ‘stealth mode’ for over a year, Ai Build made its public debut in 2016, at the first European GPU Technology Conference.

The result was the Daedalus Pavilion, a 5×5m latticework structure resembling a butterfly in flight. Built out of biodegradable filaments, each of its 48 parts was 3D printed in just over two weeks. Kuka construction robots were then wheeled into the conference hall to assemble them in less than 24 hours. Fitted with video cameras, the robots used computer vision and machine learning algorithms to analyse any mistakes they made during construction and improvise solutions.

“There is a misconception about industrial robots that they are great at doing repetitive tasks, but that they cannot adapt to complex situations,” explains Cam. “We believe in the opposite. We think the real potential of robotics in manufacturing is to make the robots more autonomous and responsive to their environment by using sensor data and artificial intelligence.”

Since last year’s conference, Cam and his colleagues at Ai Build have dabbled in printing furniture, sculptures, and construction components like building panels and formworks for concrete surfaces. More recently, the company has collaborated with Krause Architects and 3D printing company Reflow to build an entire shop front on Regent Street with Kuka robots, using recycled plastic waste imported from India and Africa. “We are very proud to be working with like-minded partners to turn plastic waste into a luxurious construction, by using robotics and artificial intelligence,” Cam says.

Coded muse

The innovations under way at Ai Build have obvious implications for the construction sector, not least in the rise of a mechanical workforce that eschews payment for services rendered, not to mention sleep. Cam has speculated about whole cities being constructed by autonomous robots and drones in our near future. There seem to be subtle advantages to this arrangement for architects, too.

“They will no longer be limited by the constraints of conventional fabrication methods and mass production,” says Cam. Architects, he predicts, will “be challenged to approach design with a different mindset, where complexity is for free”.

This, they might argue, is all to the good in the creation of built space. Even so, there’s a finality to the proposition that, for some architects, does not stretch the potential of AI in architecture far enough. In the mind of Behnaz Farahi, the confluence of AI and architecture is likely to persist in the lives of those who use and inhabit these structures long after their construction.

“I’m more in favour of IA versus AI: intelligence augmentation versus artificial intelligence,” she says. Although trained as an architect, Farahi prefers to think of herself as a designer and a creative technologist. Her interests are catholic, citing Rodney Brooks and Antonio Damasio – a roboticist and a neuroscientist respectively – as key influences, in addition to Brunelleschi and Gaudí. Above all, Farahi believes that the future of design, in any discipline, relies on making it as responsive to its users as they are to it.

“It’s about how we can integrate intelligence between the material,” she explains. One of Farahi’s first designs to incorporate this principle was the ‘Hylomorphic Canopy’ (2012), a structure erected at outdoor events that would react to the movements of visitors beneath. “Taking cues from biological organisms, it is driven by four major spines that move with mathematically determined frequency,” Farahi said in her brief. Powered by the sun, the entire canopy would slowly follow the procession of revellers down the city street, like a curious, oversized caterpillar.

‘Hylomorphic Canopy’ would remain purely speculative, but Farahi would soon have the chance to bring her designs into being on a smaller scale. That same year, she debuted ‘Alloplastic Architecture’, an adaptive tensegrity structure that, when hooked up to a Kinect motion sensor, changed its shape in response to the movements of a dancer. This was followed in 2013 by ‘Living, Breathing Wall’ and culminated three years later in ‘Aurora,’ perhaps the world’s first autonomous ceiling.

“[When] we use a space…we move, we walk, we move our hands, we move our body,” explains Farahi. “For me, it was [about] how we can actually move beyond having one element – the space – and that informed Aurora.”

As in ‘Alloplastic Architecture,’ the installation translated read-outs from a Kinect camera into movements of its component parts. “Based on different activities happening in this space, different patterns of behaviour triggered different behaviours in the ceiling. In this work, the intention was to create a kind of reciprocal relationship between the human body and the shape of the environment.”
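Farahi has not published Aurora’s code, but the mapping she describes can be sketched in a few lines: skeletal joint positions from a depth camera are classified into a coarse activity, and each activity drives a different motion pattern in the ceiling’s actuators. Everything below – the joint names, the activity labels, the actuator count – is an invented illustration, not her implementation:

```python
# Hypothetical sketch of the mapping at the heart of an installation
# like Aurora: body-tracking data in, per-actuator motion targets out.

import math

def classify_activity(joints):
    """Very coarse gesture recognition on (x, y) joint positions."""
    hand_y = joints["right_hand"][1]
    head_y = joints["head"][1]
    if hand_y > head_y:
        return "reaching"  # a raised hand
    return "walking"

def actuator_targets(activity, n_actuators=8, t=0.0):
    """Translate an activity label into per-actuator extension targets
    in the range [0, 1], where 0 is retracted and 1 fully extended."""
    if activity == "reaching":
        # Ripple outwards from the centre of the ceiling.
        return [0.5 + 0.5 * math.sin(t + abs(i - n_actuators / 2))
                for i in range(n_actuators)]
    # Default: a gentle, uniform breathing motion.
    return [0.5 + 0.2 * math.sin(t)] * n_actuators

# A figure with a hand raised above their head triggers the ripple.
joints = {"head": (0.0, 1.7), "right_hand": (0.3, 2.0)}
targets = actuator_targets(classify_activity(joints))
```

In the installation itself this loop would run continuously, with the time parameter `t` advancing each frame so the ceiling’s response unfolds as motion rather than a single pose.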

Aurora is on permanent display at the MEML Lab at the University of Southern California. Like Walking City, it remains a tantalising glimpse of a future where artificial intelligence is permitted not only to influence the course of design, but also become an essential component of it. In a world where children of two or three are becoming rapidly accustomed to gestural interfaces and tablets, it’s perhaps a more plausible vision of the future than, say, the city as an itinerant lumbering insectoid. Instead, the future of artificial intelligence in architecture may lie less in the enhancement of design on the page than in the very walls themselves, built to gaze back, as well as be gazed upon.

Inside the Bottletop 3D-printed shop.
Jakob Pupke, Nadir Benkhellouf, Viviane Hülsmeier and Christian Andersch from CoPlannery.
Aurora, an autonomous ceiling.

