Med Mart 1: Generation of Facade Geometry

As we described in our intro post, our goal for the Med Mart facade design was to develop a robust yet adaptive system that could easily respond to the building’s program and context while uniting the building as a continuous whole. In this post we are going to share a bit of the working process we explored to develop the final solution, and show just a few of the many iterations it took us to get there.

From the beginning of our design work on the facade system, we wanted to develop a system that would give the building multiple layers of varied, unifying texture, legible at multiple scales.

Initial explorations of this concept focused on the textural capacities of precast concrete panels. We developed a prototypical textured panel type and used a grayscale image to control the texture depth and intensity across the extents of the building. A black value in the image placed a panel with the deepest possible texture, a white value placed a panel with no texture, and the gray values were assigned to texture depths between the two extremes. The image below shows two different studies using this method.
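
The sampling logic is simple enough to sketch outside of Grasshopper. Here is a minimal Python illustration of the idea (not our actual definition; the file name, panel grid, and maximum relief depth are all hypothetical), using Pillow to read the control image:

```python
from PIL import Image  # Pillow, for reading the grayscale control image

MAX_DEPTH_INCHES = 2.0  # hypothetical maximum texture relief

def texture_depth(img, u, v):
    """Sample the control image at normalized facade coordinates (u, v)
    and map brightness to a texture depth: black -> deepest, white -> flat."""
    x = int(u * (img.width - 1))
    y = int(v * (img.height - 1))
    brightness = img.getpixel((x, y)) / 255.0  # 0.0 = black, 1.0 = white
    return (1.0 - brightness) * MAX_DEPTH_INCHES

img = Image.open("facade_control.png").convert("L")  # "L" = 8-bit grayscale
# Sample a hypothetical 10-wide by 4-tall grid of panel locations:
depths = [[texture_depth(img, col / 9, row / 3) for col in range(10)]
          for row in range(4)]
```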

There were some interesting results from these early explorations, but we decided to make some changes to the initial logic of the definition to see what else we could achieve.

We wanted the supergraphic effect we were achieving with the textural depth to go further and include the glazed portions of the facade. To do this, we built a simple catalog of six basic components that could mimic the graphic effect of the grayscale image. By altering the size of the glass and precast elements within a standardized 8′ tall module, each module could be assigned a different value correlated to a percentage of gray. A black value in the mapped image specified a panel that was mostly glass, a white value one that was mostly precast, and the grays fell somewhere in between.
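
In code terms, this amounts to an equal-interval binning of the sampled brightness. A minimal sketch, assuming six hypothetical glazing ratios for the catalog (the values shown are illustrative, not our actual component proportions):

```python
# Six hypothetical module types, ordered from mostly glass to mostly precast;
# each value is the fraction of the 8' module that is glazed (illustrative).
PANEL_TYPES = [0.90, 0.74, 0.58, 0.42, 0.26, 0.10]

def panel_type(brightness):
    """Map a 0-1 brightness value to one of the six catalog types:
    black (0.0) -> mostly glass, white (1.0) -> mostly precast."""
    return min(int(brightness * len(PANEL_TYPES)), len(PANEL_TYPES) - 1)

# e.g. brightness 0.05 -> type 0 (mostly glass); 0.5 -> type 3;
#      brightness 0.97 -> type 5 (mostly precast)
```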

After early explorations testing the limits of this definition, we incorporated programmatic constraints into the grayscale image such as the location of fire stairs, mechanical spaces, and critical vision areas.  The mapped image worked great for quickly exploring a lot of different ideas, but later in the process it proved difficult for making fine adjustments. 
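
Folding a constraint into the control image amounts to painting a fixed value over the affected region. Something like the following sketch captures the idea (the pixel coordinates and regions are hypothetical):

```python
from PIL import Image, ImageDraw

img = Image.open("facade_control.png").convert("L")
draw = ImageDraw.Draw(img)

# Fire stair / mechanical zone: force white (mostly precast, no vision glass).
draw.rectangle([40, 0, 80, 120], fill=255)
# Critical vision area: force black (mostly glass).
draw.rectangle([200, 90, 320, 120], fill=0)

img.save("facade_control_constrained.png")
```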

It only took a few attempts at changing individual panel types with the brush tool in Photoshop for us to realize we needed a second way of manipulating the panel layout. The solution was to bake the Grasshopper surfaces onto Rhino layers that corresponded to the different panel sizes (for example, a 5′ panel was placed on the layer “PanelType5.0”). A VB.net component was written to import surfaces from the previously created layers, and each layer was assigned to a separate path branch in Grasshopper. We were still able to use the grayscale image for the overall layout of the panels, but the Rhino layers allowed for finer manipulation of the panel layout.
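
The grouping logic behind that component is straightforward. Here is a minimal sketch of it in plain Python (the actual component was written in VB.net inside Grasshopper, and the panel list below is hypothetical):

```python
from collections import defaultdict

# Hypothetical (panel_id, height_in_feet) pairs from the grayscale mapping step.
panels = [("P001", 5.0), ("P002", 8.0), ("P003", 5.0), ("P004", 2.5)]

# Group panels into sublists keyed by the layer-name convention above,
# e.g. a 5' panel lands on layer "PanelType5.0".
layers = defaultdict(list)
for panel_id, height in panels:
    layers[f"PanelType{height}"].append(panel_id)

for layer_name, members in sorted(layers.items()):
    print(layer_name, members)  # each layer maps to its own Grasshopper branch
```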

The advantage of using a parametric modeling process was that we were able to explore a lot of iterations in a rather short period of time, giving us a much faster feedback loop. Design options could be tested and compared quickly, and as a result we had much less attachment to any individual solution. We could validate each response based on how well it worked, while keeping the rigor of the system completely intact. The final design was simply a reaction to each parameter we clarified along the way.

In the next post, we’ll discuss the generation of the concrete panel texture.  We’ll also discuss how the close collaboration with our precast fabricators and consultants informed both our design and technical processes.  Stay tuned…

Related Posts:

Med Mart: Introduction

Med Mart 2: Panel Texture and Geometry

Med Mart 3: Daylighting the Atrium

Med Mart 4: Facade Design Coordination

Med Mart 5: Panel Fabrication

9 Comments

  1. Jon says:

    This is such an interesting and well-executed approach to facade design with budget-friendly modular systems that produce an interesting aesthetic while allowing for a performance-based outcome. I’m curious how you were able to derive a number value from the gradient image sampler you used. And from there, does each panel type in your VB component orient the panel based on the culled layer? Do you think this could be done as a random reduce based on the assigned gradient value?

    Thank you and as always excellent work!

    -J

  2. scrawford says:

    Hi Jon,

    The number from the gradient image is the brightness value (black = 0, white = 1). The range of values between 0 and 1 is then assigned to a particular panel size. For instance, if we had 4 panel sizes, then brightness values from 0 to .25 would be assigned to the first panel, .25 to .5 to the second, .5 to .75 to the third, and .75 to 1 to the fourth. Once a type is assigned to each panel, the list of panels is reorganized into sublists that represent the different panel types. That allows us to then assign them to the desired layers. I’m not sure I understand your question about the random reduce based on gradient value.

  3. Jon says:

    Hey Sean,

    That makes sense, so the range of values averages out as a panel type out of the 4 predefined types. I was taking a different approach, thinking that a gradient image could drive a random reduce, but now I see the power of the components that Nate Miller’s LunchBox offers with bake-to-file.
    It would be interesting to tie the culled value range you spoke of to a Revit family, to be updated as design options in the project for the project team, and for ease of concept imagery/concept documentation, at least in my current workflow.

    Thanks again!
