As we described in our intro post, our goal for the Med Mart facade design was to develop a robust yet adaptive system that could easily respond to the building’s program and context while uniting the building as a continuous whole. In this post we’ll share some of the working process behind the final solution, along with a few of the many iterations it took to get there.
From the beginning of our design work on the facade, we wanted a system that would give the building multiple layers of varied yet unifying texture, legible at multiple scales.
Initial explorations of this concept focused on the textural capacities of precast concrete panels. We developed a prototypical textured panel type and used a grayscale image to control the texture’s depth and intensity across the extents of the building. A black value in the image placed a panel with the deepest possible texture, a white value placed a panel with no texture, and the gray values were assigned to texture depths between the two extremes. The image below shows two different studies using this method.
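The grayscale-to-depth mapping can be sketched in a few lines. This is an illustrative stand-in for the Grasshopper definition, not the actual one; the maximum relief depth and function names here are assumptions.

```python
# Illustrative sketch: map grayscale samples (0 = black, 255 = white)
# to panel texture depths, as the control image did in the definition.
# MAX_DEPTH_IN is a hypothetical value chosen for this example.

MAX_DEPTH_IN = 3.0  # assumed deepest texture relief, in inches


def texture_depth(gray: int, max_depth: float = MAX_DEPTH_IN) -> float:
    """Black yields the deepest texture, white yields a flat panel,
    and grays interpolate linearly between the two extremes."""
    if not 0 <= gray <= 255:
        raise ValueError("grayscale sample must be in 0..255")
    return max_depth * (1.0 - gray / 255.0)


# One depth per panel, sampled from a row of the control image:
row = [0, 64, 128, 192, 255]
depths = [texture_depth(g) for g in row]
```

In the real workflow each panel location samples the pixel of the mapped image nearest to it, so repainting the image in Photoshop re-drives the whole facade.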
These early explorations yielded some interesting results, but we decided to change the initial logic of the definition to see what else we could achieve.
We wanted the supergraphic effect we were achieving with textural depth to go further and include the glazed portions of the facade. To do this, we built a simple catalog of six basic components that could mimic the graphic effect of the grayscale image. By altering the sizes of the glass and precast elements within a standardized 8′-tall module, each module could be assigned a different value correlated to a percentage of gray: a black value in the mapped image specified a mostly glass panel, a white value a mostly precast panel, and the grays fell somewhere in between.
After early explorations testing the limits of this definition, we incorporated programmatic constraints into the grayscale image, such as the locations of fire stairs, mechanical spaces, and critical vision areas. The mapped image worked well for quickly exploring many different ideas, but later in the process it proved cumbersome for making fine adjustments.
It only took a few attempts at changing individual panel types with the brush tool in Photoshop for us to realize we needed a second way of manipulating the panel layout. The solution was to bake the Grasshopper surfaces onto Rhino layers corresponding to the different panel sizes (for example, a 5′ panel was placed on the layer “PanelType5.0”). We wrote a VB.NET component to import the surfaces from these layers, with each layer assigned to a separate path branch in Grasshopper. We were still able to use the grayscale image for the overall layout of the panels, while the Rhino layers allowed for finer manipulation of individual panels.
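The bookkeeping behind that round-trip is simple grouping: derive a layer name from each panel’s size, then collect the baked surfaces back into one branch per layer. The sketch below models this in plain Python (the actual component was VB.NET using the Rhino and Grasshopper SDKs); the panel records here are stand-ins for Rhino surface objects.

```python
# Illustrative sketch of the layer-to-branch routing. Layer names follow
# the "PanelType5.0" pattern from the post; the (width, id) tuples are
# hypothetical stand-ins for baked Rhino surfaces.

from collections import defaultdict


def layer_name(width_ft: float) -> str:
    """e.g. a 5' panel lands on the layer 'PanelType5.0'."""
    return f"PanelType{width_ft:.1f}"


def branches_by_layer(panels):
    """Group (width_ft, surface_id) records into one list per layer,
    analogous to giving each layer its own path branch in Grasshopper."""
    branches = defaultdict(list)
    for width_ft, surface_id in panels:
        branches[layer_name(width_ft)].append(surface_id)
    return dict(branches)


panels = [(5.0, "srf_a"), (2.5, "srf_b"), (5.0, "srf_c")]
branches = branches_by_layer(panels)
```

Moving a surface to a different `PanelType` layer in Rhino then changes which branch (and hence which panel type) it rejoins on the next import, which is what made the fine manual edits practical.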
The advantage of a parametric modeling process was that we could explore many iterations in a short period of time, giving us a much faster feedback loop. Design options could be tested and compared quickly, and as a result we had much less attachment to any individual solution. We could evaluate each response based on how well it worked while keeping the rigor of the system completely intact; the final design was simply the system’s reaction to each parameter as we clarified it.
In the next post, we’ll discuss the generation of the concrete panel texture. We’ll also discuss how the close collaboration with our precast fabricators and consultants informed both our design and technical processes. Stay tuned…