More and more presentations at architecture and engineering conferences look like this these days: the use of parametric modeling, genetic algorithms, and feedback loops promises to revolutionize the problem-solving end of the design endeavor.
For those readers not in the loop (see what I did there?), these programs generate and evaluate forms based on a defined set of parameters. At its simplest, you can throw in a number of criteria for, say, a structural element, hit ‘run,’ and the program will generate semi-random schemes, test them, rate them based on fitness for purpose, eliminate the underperformers, and cross-fertilize the more successful schemes with one another to see if combined traits perform better. It’s a neat technology transfer from nature, borrowing evolutionary biology to evaluate complex spatial and formal problems. The results are often surprising, or at least far more nuanced than those of human labor–by running hundreds or thousands of iterations in a day, instead of the one or two that a designer might sketch out, the “design space,” or range of solutions on offer for consideration, becomes both broader and better informed.
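The loop described above can be sketched in a few dozen lines. This is a minimal, hypothetical illustration, not Galapagos itself: the candidate here is just a (depth, width) pair for a beam section, and the fitness function is a toy stand-in for real structural analysis.

```python
import random

POP_SIZE = 20
GENERATIONS = 50

def random_candidate():
    """Generate a semi-random scheme within allowed parameter ranges."""
    return (random.uniform(0.1, 1.0),   # depth (m), range assumed
            random.uniform(0.05, 0.5))  # width (m), range assumed

def fitness(candidate):
    """Rate a scheme: stiffness per unit material (toy stand-in)."""
    depth, width = candidate
    stiffness = width * depth ** 3 / 12  # second moment of area, rectangle
    material = width * depth             # cross-sectional area as cost proxy
    return stiffness / material

def crossover(a, b):
    """Cross-fertilize two successful schemes, trait by trait."""
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [random_candidate() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Test and rank; eliminate the underperforming half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Refill the pool by breeding survivors, with a little mutation.
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    children = [(d * random.uniform(0.95, 1.05), w * random.uniform(0.95, 1.05))
                for d, w in children]
    population = survivors + children

best = max(population, key=fitness)
```

Even at this toy scale the shape of the process is visible: generate, test, cull, breed, repeat, and the population drifts toward fitter sections over a few dozen generations.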
This year’s IASS conference has, if anything, been the year of Galapagos, the genetic algorithm software that pushes and pulls the parameters in Grasshopper to produce formal results in Rhinoceros. The Structural Morphology working group in particular has presented a half-dozen or so case studies in how these programs can be used to very quickly produce and evaluate designs for structural elements and systems, and the results are impressive–and a bit disappointing. I’m interested in how process and product determine one another, often iteratively, often recursively–Nervi’s whole career can be seen as an evolutionary process in which four basic techniques get refined and tested in subtly varying circumstances, improving and becoming more efficient by small but crucial steps each time. So I’m fascinated by Galapagos in particular, and truly excited to see what it’s capable of. The first glimpse I had of this was in 2010, when I was a visiting faculty member at Northwestern, and one of SOM’s engineers came in to lecture on their use of proprietary genetic algorithm software to find ideal structural forms. The potential is incredible.
But the potential is also still way out there. One of the things that became clear as paper after paper presented the results of doctoral work in this area was that design, like biology, is pretty complicated. The number of variables in determining the most fit shape even for a simple structural element is deceptively large. Sure, there’s an ideal structural shape for a beam, say, but as any SCI-TECH alum knows, the cost of making an ideal shape might outweigh the cost of extra material in an almost ideal shape. The labor market may further add costs to one material or one method of connection. And the building type might suggest a further set of variables in how the shape integrates with other systems. An open-web joist, for instance, might be better in a laboratory that’s heavily serviced by ductwork, since it’s permeable. Or, as I found out in my days in practice, a dumb one-way concrete slab might work better for vibration control than any steel structure. Quantifying all of that starts to increase the time required for these genetic algorithms to run, and pretty soon you run up against the limits of what your machine can do. As one presenter put it, “laptops start smoking after a while.”
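To see why the variables pile up, consider what a fitness function would have to score once fabrication cost, labor, and system integration all enter the trade-off. Everything below is an assumption for illustration: each sub-model is a crude placeholder for what would, in practice, be a simulation in its own right, and the weights are uncalibrated guesses.

```python
def structural_efficiency(candidate):
    """Stiffness per unit material for a rectangular section (toy model)."""
    depth, width = candidate
    return (width * depth ** 3 / 12) / (width * depth)

def fabrication_cost(candidate):
    """Placeholder: penalize non-standard proportions as harder to make."""
    depth, width = candidate
    return abs(depth / width - 2.0) * 0.1

def connection_labor(candidate):
    """Placeholder: flat cost, though really it depends on the labor market."""
    return 0.02

def services_penalty(candidate):
    """Placeholder: deeper members crowd out ductwork (toy assumption)."""
    depth, _ = candidate
    return 0.05 * depth

def fitness(candidate):
    # Weights are assumptions for illustration, not calibrated values.
    # Each added term means another model to evaluate per candidate, per
    # generation -- which is what makes "laptops start smoking."
    return (structural_efficiency(candidate)
            - 0.5 * fabrication_cost(candidate)
            - 0.3 * connection_labor(candidate)
            - 0.2 * services_penalty(candidate))

score = fitness((0.6, 0.3))  # depth 600mm, width 300mm
```

The point isn’t the numbers; it’s that every extra consideration multiplies the work the algorithm must do on every candidate in every generation.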
This became even clearer in some of the more ambitious projects that tried applying Galapagos in particular. The program seems to be very good at finding shapes or forms that involve two or three variables, but in the case of a double-skin facade, for example, or even a fairly simple braced frame, it becomes apparent that the “design space” is a lot larger than it might appear at first. The double-skin facade project found a structurally efficient pattern, for example, but stopped short of even considering solar gain, ostensibly the rationale behind such a system in the first place. And the braced frame project, while it produced a really elegant profile, didn’t go the extra mile of feeding that profile back into the wind-loading analysis to fire off another round of digital selection.
Moore’s law being what it is, computing power will eventually catch up with these problems, and the days of hundreds or thousands of iterations will seem much like the days of drafting on an IBM 486 (remember watching the line draw from A to B slowly across the screen? Mesmerizing). But that power might very well run up against other limits that we don’t quite realize yet, and once again we’ll be left with intuition to tell us a) when to stop, and b) what to do with what the outputs tell us. At the moment, these tools seem most useful as suggestions–things to look at as we contemplate a design space that’s more intuitive, not quite as large or refined, but more easily retrievable. And that’s probably the takeaway–there are amazing things out there now, being played around with by clever grad students, that will in fact revolutionize our problem-solving abilities. Like any tool, though, they’re not likely to take over the world, and they still seem best placed as adjuncts to an engaged, nimble mind.
Which, if anything, is even more promising.