grasshopper, galapagos, and smoking laptops

More and more presentations at architecture and engineering conferences look like this these days–the use of parametric modeling, genetic algorithms, and feedback loops promises to revolutionize the problem-solving end of the design endeavor.

For those readers not in the loop (see what I did there?), these programs generate and evaluate forms based on a defined set of parameters.  At its simplest, you can throw in a number of criteria for, say, a structural element, hit ‘run,’ and the programs will generate semi-random schemes, test them, rate them based on fitness for purpose, eliminate the underperformers, and cross-fertilize the more successful schemes with one another to see if combined traits perform better.  It’s a neat technology transfer from nature, borrowing evolutionary biology to evaluate complex spatial and formal problems.  The results are often surprising, or at least far more nuanced than those of human labor–by running hundreds or thousands of iterations in a day, instead of the one or two that a designer might sketch out, the “design space,” or range of solutions on offer for consideration, is both broader and better informed.
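That generate–test–select–recombine loop can be sketched in a few lines of Python. To be clear, this is a toy, not Galapagos: the beam dimensions, the target stiffness, and the fitness function are all invented for illustration.

```python
import random

random.seed(42)  # reproducible toy run

# Evolve the (depth, width) of a hypothetical rectangular beam toward a
# target stiffness at minimum material. All numbers are made up.
POP_SIZE = 30
GENERATIONS = 40
TARGET_STIFFNESS = 500.0  # arbitrary units

def random_scheme():
    # a "scheme" is just (depth, width) in arbitrary units
    return (random.uniform(1, 20), random.uniform(1, 20))

def fitness(scheme):
    depth, width = scheme
    stiffness = width * depth**3 / 12  # second moment of a rectangle
    material = width * depth           # cross-sectional area as a cost proxy
    if stiffness < TARGET_STIFFNESS:
        return 0.0                     # fails the performance criterion
    return 1.0 / material              # among passing schemes, lighter is fitter

def crossover(a, b):
    # cross-fertilize two successful schemes, trait by trait
    return (random.choice((a[0], b[0])), random.choice((a[1], b[1])))

def mutate(scheme, rate=0.2):
    # occasionally nudge a trait, keeping dimensions positive
    return tuple(max(1.0, g + random.gauss(0, 1)) if random.random() < rate else g
                 for g in scheme)

population = [random_scheme() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # rate by fitness, eliminate the underperformers, breed replacements
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best scheme:", best, "fitness:", fitness(best))
```

Even a toy like this shows where the real work hides: the loop itself is trivial, and all the engineering judgment lives inside the fitness function.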

This year’s IASS conference has, if anything, been the year of Galapagos, the genetic algorithm software that pushes and pulls the parameters in Grasshopper to produce formal results in Rhinoceros.   The Structural Morphology working group in particular has presented a half-dozen or so case studies in how these programs can be used to very quickly produce and evaluate designs for structural elements and systems, and the results are impressive–and a bit disappointing.  I’m interested in how process and product determine one another, often iteratively, often recursively–Nervi’s whole career can be seen as an evolutionary process in which four basic techniques get refined and tested in subtly varying circumstances, improving and becoming more efficient by small but crucial steps each time.  So I’m fascinated by Galapagos in particular, and truly excited to see what it’s capable of.  The first glimpse I had of this was in 2010, when I was a visiting faculty member at Northwestern, and one of SOM’s engineers came in to lecture on their use of proprietary genetic algorithm software to find ideal structural forms.  The potential is incredible.

But the potential is also still way out there.  One of the things that became clear as paper after paper presented the results of doctoral work in this area was that design, like biology, is pretty complicated.  The number of variables in determining the most fit shape even for a simple structural element is deceptively large.  Sure, there’s an ideal structural shape for a beam, say, but as any SCI-TECH alum knows, the cost of making an ideal shape might outweigh the cost of extra material in an almost-ideal shape.  The labor market may further add costs to one material or one method of connection.  And the building type might suggest a further set of variables in how the shape integrates with other systems.  An open-web joist, for instance, might be better in a laboratory that’s heavily serviced by ductwork, since it’s permeable.  Or, as I found out in my days in practice, a dumb one-way concrete slab might work better for vibration control than any steel structure.  Quantifying all of that starts to increase the time required for these genetic algorithms to run, and pretty soon you run up against the limits of what your machine can do.  As one presenter put it, “laptops start smoking after a while.”
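To make that concrete, here’s what folding several of those competing criteria into a single fitness score might look like. Every scoring function and weight here is an invented placeholder; a real project would have to substitute a proper model for each, and each added model means more computation on every one of those thousands of iterations.

```python
# Hypothetical multi-criterion fitness for a beam scheme, given as
# (depth, width) in arbitrary units. All three scoring functions below
# are placeholders standing in for real structural, cost, and systems-
# integration models.

def structural_score(scheme):
    # placeholder: stiffness per unit of material, roughly normalized
    depth, width = scheme
    return (width * depth**3) / (width * depth * 1000.0)

def fabrication_score(scheme):
    # placeholder: shallower sections are assumed cheaper to fabricate
    depth, _width = scheme
    return 1.0 / depth

def integration_score(scheme):
    # placeholder: deeper, open sections leave more room for ductwork
    depth, _width = scheme
    return depth / 20.0

def fitness(scheme, weights=(0.5, 0.3, 0.2)):
    # fold the competing criteria into one score; choosing the weights
    # is itself a design judgment the algorithm can't make for you
    w_struct, w_fab, w_integ = weights
    return (w_struct * structural_score(scheme)
            + w_fab * fabrication_score(scheme)
            + w_integ * integration_score(scheme))

print(fitness((10.0, 5.0)))  # score for one candidate scheme
```

Note that the weights are doing a lot of quiet work here–shift them and the “optimal” scheme shifts with them, which is one reason the outputs read better as suggestions than as answers.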

This became even clearer in some of the more ambitious attempts to apply Galapagos in particular.  The program seems to be very good at finding shapes or forms that involve two or three variables, but in the case of a double-skin facade, for example, or even a fairly simple braced frame, it becomes apparent that the “design space” is a lot larger than it might appear at first.  The double-skin facade project found a structurally efficient pattern, for example, but stopped short of even considering solar gain, ostensibly the rationale behind such a system in the first place.  And the braced frame project, while it produced a really elegant profile, didn’t go the extra mile to find out what would happen if that profile were then used to recalculate the wind loading, firing off another round of digital selection.

Moore’s law being what it is, computing power will eventually catch up with these problems, and the days of hundreds or thousands of iterations will seem much like the days of drafting on an IBM 486 (remember watching the line draw from A to B slowly across the screen?  Mesmerizing).  But that power might very well run up against other limits that we don’t quite realize yet, and once again we’ll be left with intuition to tell us a) when to stop, and b) what to do with what the outputs tell us.  At the moment, these tools seem most useful as suggestions–things to look at as we contemplate a design space that’s more intuitive, not quite as large or refined, but more easily retrievable.  And that’s probably the takeaway–there are amazing things out there now, being played around with by clever grad students, that will in fact revolutionize our problem-solving abilities.  Like any tool, though, they’re not likely to take over the world, and they still seem best placed as adjuncts to an engaged, nimble mind.

Which, if anything, is even more promising.

5 thoughts on “grasshopper, galapagos, and smoking laptops”

  1. Tom,

    First, this conference sounds pretty awesome.

    I’d have to agree on the somewhat limited capabilities of Galapagos when it comes to giving you actual solutions. There’s a lot of overhead, tweaking and consideration to be taken before trying to use it. The number of possibilities has to be great enough that you want to take the time to set up the Grasshopper definition vs modeling different options by hand (or mouse), but not so great that you spend more time waiting for the optimized result than actually designing.

    I’ve found Galapagos to be most useful at finding problems in how you set up your range of possibilities. For example, I did a short workshop for ISU students on utilizing Galapagos with DIVA to try and minimize solar heat gain while maximizing views (or window size, more accurately). In the process, Galapagos spit out a 4′ x 4′ x 20′ tower of a house as the ideal option. But the value comes in attempting to quantify your goal in a computational way, which not only helps us as designers wrap our heads around certain rules of thumb or general design goals, but allows you to see what an unbiased machine will trend towards, even if it ends up with a godawful result.

    There are a few other evolutionary solvers out there for Grasshopper that I’m aware of, but CORE Studio at Thornton Tomasetti came up with a brute-force solver that will (over many, many hours) solve every possibility for a potential design space and save the outputs. This seems to allow designers to balance a design’s aesthetic and other hard-to-quantify qualities with the measurable results, without leaving it up to the computer to decide.

    When it comes to the intuition aspect, the trend I’m starting to see is the development of tools that allow designers to make more informed decisions early in the design process. For example, tools like Sefaira and the various environmental analysis plugins for Grasshopper can give instant visual daylighting analysis results as you tweak even a schematic model. This makes the feedback loop between design and analysis much less daunting and, hopefully, more intuitive.

    Apologies for the wall of text–this is really interesting to me, and I agree that it’s very promising stuff.


    • “Walls of text” always welcome, especially when they’re this insightful. I think you’ve nailed it with the observation that these tools may be at their best where they give feedback and make the design process more nimble and better informed, as opposed to “solving” anything. Sefaira sounds like something I should have a look at…in addition to a dozen or so other digital tools I’ve got on my list from this conference. (Yes, awesome, especially the stuff in the Structural Morphology sessions, all of which made most similar presentations at architectural conferences seem pretty lame). Thanks…


      • I can second the praise for Sefaira. It’s pretty close to the ‘Holy Grail’ of timely feedback and suppression of input/output ‘noise.’ It is speedy & easy enough for an average-ish practitioner to use in everyday practice. That can’t be said for most programs out there.

        Within a short period of use, I found myself able to confirm intuition and visualize key variables in optimizing energy & daylight performance. Once Sefaira completes (they said Q3 2015!) tweaks to their daylighting tool, it’ll be really quite useful.

        The other half of the battle, though, is extracting sufficiently clear design criteria from hydra-headed client entities and design teams–and doing that early enough to influence the design. I have actually found that side of the problem to be more difficult. People ask, “If we could land a man on the moon, why can’t we solve [intractable social problem X]?” Of course it’s because landing on the moon was a goal with a finite, clear “fitness function.”

        On the larger question:
        Shortening & clarifying the loop between ’cause and effect’ is key. As Tom said, we want “adjuncts to an engaged, nimble mind.” Minds have limited capacity to tackle complexity (see ‘Notes on the Synthesis of Form’) so tools that extend our capacity must be very thoughtfully restrained (for lack of a better word) in the bells-and-whistles offered. The sheer burden of learning and using some software is enough to preclude their use in actual practice, to say nothing of intuitive feedback-generation. Good design leverages people’s limited intelligence. 😛

        Thanks for putting your thoughts out there, guys – and sorry if this is obnoxious. It’s fascinating stuff!


  2. Tom –

    Another problem is that there is no single ideal shape for a beam. There is an ideal shape for a given load, but in the real world (as opposed to the model world) any structural member will be subjected to multiple loadings. The best shape for a beam is therefore a compromise between the various ideals for different loads…and the more load cases you examine, the more the beam tends towards the basic I shape we already use.

    The kind of genetic algorithm you’re talking about works best at the largest scale, in determining overall form, assuming that (a) the architects are willing to let a genetic algorithm determine form and (b) that the building’s parameters are such that one load dominates and so leads to a clear result. (Tall buildings, wind or quake loads dominate. Short buildings, gravity dominates. Mid-rise, no clear dominant load.)


    • yep–this. It was pretty clear that the most useful applications of the genetic stuff were those that were most tightly delimited. As soon as you had any diversity in terms of variables, or competing systems, the complexity got overwhelming. Basic shapes still predominate at smaller scales, of course…

