Processing and Optimization Solvers

These days, a lot of professions are overwhelmed by what seems like excessive amounts of data. Architects are in this boat as well, or at least we should be. With the simulation tools available to architects, we can take a napkin sketch and work towards validating the design’s performative capabilities in a short amount of time, all while generating an entire database of information at our disposal.

But one can get lost in all this information if it’s not organized and presented legibly. The data doesn’t speak for itself, and designers are responsible for its interpretation and representation. How can a simulation inform our next round of sketches? How do we process this data and present it meaningfully? With these questions in mind, this post will focus on one particular data hog in our process: the optimization solver.

In short, an evolutionary optimization algorithm takes a collection of parameters (genes) and evolves them toward a target optimization value (the fitness). If a particular genome improves the fitness value, the algorithm favors that neighborhood of the fitness landscape. In other words, if a set of parameters is close to a “good” result, the solver concentrates its search on parameter ranges near those values. For more background, this article by David Rutten is definitely worth a read.
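To make that loop concrete, here is a deliberately stripped-down sketch of how such a solver might work, written in Processing’s Java syntax. It is not how Galapagos is actually implemented; the population size, mutation range, and the stand-in fitness() function are placeholders for illustration only.

```java
// A much-simplified evolutionary loop (illustration only, not Galapagos itself).
// Each genome is an array of gene values; fitness() stands in for the real simulation.

int POP = 50;           // population size (placeholder)
int GENES = 5;          // number of genes / sliders (placeholder)
int GENERATIONS = 100;  // how many breeding cycles to run

float[][] population = new float[POP][GENES];

void setup() {
  // start with random genomes spread across the whole parameter domain (normalized 0..1)
  for (int i = 0; i < POP; i++) {
    for (int g = 0; g < GENES; g++) {
      population[i][g] = random(0, 1);
    }
  }

  for (int gen = 0; gen < GENERATIONS; gen++) {
    // score every genome in the current generation
    float[] scores = new float[POP];
    for (int i = 0; i < POP; i++) scores[i] = fitness(population[i]);

    // find the fittest genome
    int best = 0;
    for (int i = 1; i < POP; i++) if (scores[i] > scores[best]) best = i;

    // breed the next generation near the best genome (the "favor that neighborhood" step)
    float[][] next = new float[POP][GENES];
    for (int i = 0; i < POP; i++) {
      for (int g = 0; g < GENES; g++) {
        next[i][g] = constrain(population[best][g] + random(-0.05, 0.05), 0, 1);
      }
    }
    population = next;
  }
  println("done: best fitness " + fitness(population[0]));
}

// placeholder fitness: a real model would run the daylight simulation here
float fitness(float[] genome) {
  float sum = 0;
  for (int g = 0; g < GENES; g++) sum += genome[g];
  return sum;
}
```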


the default interface for the Galapagos Evolutionary Solver.

 
When developing a parametric model for optimization, we isolate parameters into specified domains and simulate a wide range of iterations within those domains. The Galapagos solver runs through these iterations, which are discarded when the solver is closed. And while it allows one to reinstate genomes, it doesn’t provide an interactive way of adjusting or organizing the genes. To remedy this, we can export the genes and fitness value to an external database (Excel or SQL) so those values can be recalled after the full study has been run.

After creating this database, the biggest challenge lies in its interpretation. We’ve created an entire landscape of possibilities for a design scheme, but that landscape is often vast and difficult to navigate. And the way we interface with these databases is particularly important for designers. An optimized value does not necessarily agree with design intent; in fact, it often doesn’t, so running a solver can only go so far in the process. We therefore need the flexibility to consider a range of these optimized values, and to study our data at both an analytical and an abstract level. The interface needs to be both data-driven and visual, which sounds like a job for Processing.

a spider graph for visualizing Galapagos results. Click here for a more comprehensive interface.

 

Processing is a programming language developed at the MIT Media Lab by Casey Reas and Ben Fry. It’s visually oriented and was created with designers and artists in mind. There are many reasons why Processing is useful in the architectural design process, and you can find a more detailed list at the bottom of this post. In this case, Processing allows us to create a simplified interface for interpreting the results of an optimization run.
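As a small illustration of that kind of interface, the sketch below assumes the solver results have been exported to a file called results.csv, with one row per iteration and columns named gene1 through gene5 plus fitness (the file and column names are placeholders, not the actual export from our definition). It loads the table and lets you scrub through the iterations with the mouse.

```java
// Browse an exported table of solver results (assumed format: gene1..gene5, fitness).
Table results;

void setup() {
  size(600, 240);
  results = loadTable("results.csv", "header");  // place the file in the sketch's data folder
}

void draw() {
  background(255);

  // pick an iteration by scrubbing the mouse across the canvas
  int count = results.getRowCount();
  int index = constrain(int(map(mouseX, 0, width, 0, count - 1)), 0, count - 1);
  TableRow row = results.getRow(index);

  fill(0);
  text("iteration " + index + " of " + count, 20, 30);
  text("fitness: " + row.getFloat("fitness"), 20, 50);

  // draw the five gene values as simple bars (assumes genes are normalized 0..1)
  for (int g = 1; g <= 5; g++) {
    float value = row.getFloat("gene" + g);
    float h = value * 120;
    rect(20 + (g - 1) * 60, 200 - h, 40, h);
  }
}
```

From here it’s a short step to swapping the bars for a spider graph or pulling in the screen captures saved with each iteration.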

The demo we’ll use here shows the Galapagos Evolutionary Solver run on a light shelf study. Our office building is a 1950s International Style project with floor-to-ceiling glass, so daylight control is a major part of our upcoming office remodel.
 

Genome breakdown

 
In these early studies, we’ve set up five parameters to drive the shape and position of the shelf. These parameters represent the set of genes that make up the genome, and the reflected area on the ceiling represents the fitness value.
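For reference, here is a minimal sketch of the kind of spider graph shown earlier, assuming the five gene values have already been normalized to a 0–1 range; the numbers below are hard-coded stand-ins for a single genome rather than real results.

```java
// Five normalized gene values for one genome (stand-in numbers, not real results).
float[] genes = { 0.8, 0.35, 0.6, 0.9, 0.45 };

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  translate(width / 2, height / 2);
  float radius = 150;
  int n = genes.length;

  // reference web: outer ring and one axis per gene
  stroke(200);
  noFill();
  ellipse(0, 0, radius * 2, radius * 2);
  for (int i = 0; i < n; i++) {
    float a = TWO_PI * i / n - HALF_PI;
    line(0, 0, cos(a) * radius, sin(a) * radius);
  }

  // the genome plotted as a closed polygon
  stroke(0);
  fill(0, 40);
  beginShape();
  for (int i = 0; i < n; i++) {
    float a = TWO_PI * i / n - HALF_PI;
    vertex(cos(a) * genes[i] * radius, sin(a) * genes[i] * radius);
  }
  endShape(CLOSE);
}
```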

The video below shows a walkthrough of how we’re working with the tool. In addition to capturing the parameter values and the fitness value, we’re adding a screen capture to each iteration so that a view of the model can be shown in Processing. This helps with navigating a large collection of images quickly. And reinstating the genome with GHowl’s UDP tools makes the tool even more promising. If we internalize a Grasshopper file and add it to a sketch folder, we can export it along with a Processing sketch; this way the two platforms can communicate.
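The sending side of that UDP link can be sketched with plain Java sockets from inside Processing. The port number, the local address, and the comma-separated message format below are assumptions for illustration; the exact string depends on how the GHowl UDP receiver and the Grasshopper definition are set up to parse it.

```java
import java.net.DatagramSocket;
import java.net.DatagramPacket;
import java.net.InetAddress;

// Send one genome back to Grasshopper as a comma-separated string over UDP.
// GHowl's UDP receiver listens on whatever port is set in the definition; 6400 is just an example.
void sendGenome(float[] genes) {
  try {
    DatagramSocket socket = new DatagramSocket();
    String message = join(nf(genes, 1, 4), ",");   // e.g. "0.8000,0.3500,0.6000,..."
    byte[] data = message.getBytes();
    DatagramPacket packet = new DatagramPacket(
      data, data.length, InetAddress.getByName("127.0.0.1"), 6400);
    socket.send(packet);
    socket.close();
  }
  catch (Exception e) {
    println("UDP send failed: " + e.getMessage());
  }
}
```

Calling something like sendGenome() when an iteration is clicked in the Processing interface would push those slider values back toward the Grasshopper definition, provided the definition is listening and knows how to split the incoming string.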

This is in its early stages, but it serves as an example of how we’re exploring Processing in our workflow. With a little up-front work, Processing can allow any user (even clients) to flex a model parametrically without being familiar with parametric modeling interfaces. Not to mention, it’ll help us make sense of all this data.

Why Processing?
  • Processing is based in Java (and more recently JavaScript), so each script can be exported as an independent program. It can also be easily uploaded and embedded in a website for anyone to use (the sketch above is embedded from OpenProcessing.org, a great site with all sorts of Processing sketches available for download).
  • The language has a lot of power (more so than Grasshopper) for complex agent-based modeling (think people as particles, but with millions of people driven by algorithms that adjust over time intervals and user interactions). This is particularly helpful for urban planning, or for architectural designs that study swarm intelligence.
  • It hosts a wide range of libraries connecting to other databases, software, and hardware (SQL, Arduino, Kinect, etc.).
  • Since the Arduino IDE is based on Processing, we can use the program to create a live interface for interactive models.
  • Processing lends itself well to database and interaction plugins for Grasshopper, namely Firefly, Slingshot!, and GHowl. The program can also be used to export images from its canvas, as well as vector-based files and 3D modeling geometry (3DM, DWG, and OBJ using piGeon); a quick sketch of the image and PDF export follows this list.
  • Processing.js (Processing based in JavaScript) is a newer project with some issues at this point (previous Processing libraries are not compatible). But since it’s based in JavaScript, sketches are HTML5-friendly and do not require plugins.
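As a quick example of the export point above: Processing’s built-in save() and saveFrame() write the current canvas to disk as raster images, and the PDF library that ships with Processing can record a frame as vector artwork. The drawing here is just a stand-in.

```java
import processing.pdf.*;   // PDF library that ships with Processing, for vector export

boolean recordPDF = false;

void setup() {
  size(400, 400);
}

void draw() {
  if (recordPDF) beginRecord(PDF, "frame.pdf");   // capture this frame as vector geometry

  background(255);
  ellipse(width / 2, height / 2, 200, 200);       // stand-in drawing

  if (recordPDF) {
    endRecord();
    recordPDF = false;
  }
}

void keyPressed() {
  if (key == 's') saveFrame("frame-####.png");    // raster snapshot of the canvas
  if (key == 'p') recordPDF = true;               // vector snapshot on the next frame
}
```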