
Renovating Legacy Data Centers


AFTER: UCSB’s North Hall Data Center underwent renovation and expansion while the existing facility remained operational.

“It’s like changing a tire on a moving car.” That’s how one data center owner described renovating his data center while it remained fully operational. The situation is more widespread than one might think. Whether in an institutional or corporate enterprise data center, the problem is common and growing: How do facilities and IT staff renew an outdated facility, built in the last century without provisions for required maintenance, while keeping it in service? The answer begins with planning. In the deliberate world of the university, it also requires a culture of collaboration from the entire team, including senior administration, IT staff and facilities professionals, as well as careful constructors and a knowledgeable, integrated team of architects and engineers who can plan and execute the design.

Years ago, university data centers were built to very different criteria than today’s facilities, which must support email, online coursework, electronic admissions and research computing. Ever-widening requirements make future use difficult to predict: rising power demands are coupled with more efficient equipment and virtualization, constantly changing the design equation. Because of this, changes in data center layout and infrastructure must provide flexibility for unanticipated needs in the years to come. Determining how to incorporate that adaptability, while remaining economical and mindful of existing conditions, is what makes upgrading an operational data center such a challenge.

A common issue found in college and university data centers across the country is that they are running out of power, and therefore cooling capacity. Aging infrastructure equipment is nearing the end of its life, and repairs are difficult because they cannot be made without shutting down the facility. In addition, finishes are old and decrepit, especially access flooring, which in many cases cannot be replaced because the required type is no longer manufactured.

Another major component of these projects is the coordination needed to make them successful for the university communities, where a long list of stakeholders has input, commitment and responsibility. Team collaboration is paramount in data center design. The following project examples show how a comprehensive and holistic approach, where architects and engineers participate together from the beginning, allowed three universities to renovate, upgrade and refresh their facilities to meet today’s high-tech requirements.


BEFORE (below) and AFTER (above): A recent renovation of Tufts University’s primary data center, located in the Tufts Administration Building (TAB), is revitalizing a facility that received its last major renovation in 1988. The two-phase plan allows Tufts to improve the facility while keeping IT services running with as few application and service interruptions as possible. The result will be a primary data center that is more resilient, more energy efficient and sized to support projected growth over the next 20 years.

BACKGROUND

When new IT administration staff inherited Tufts’ existing 5,000-square-foot data center, it had reached its apparent design capacity. The new staff was reluctant to add load for fear of bringing down the rest of the facility. Because the facility lacked redundant components, individual elements could not be shut down for maintenance, and their actual capacity could never be tested. The existing mechanical system had no spare cooling capacity. And while there was plenty of overhead height, the under-floor dimension was restricted: the raised floor sat less than 18 inches above a sub-floor, which in turn had an excess of inaccessible space below it. There was little to no room for additional distribution under the floor and no capacity to hang cable tray from the existing roof. Built within the space of a former school gymnasium, the data center had limited to nonexistent as-built documentation.

SOLUTION

Integrated Design Group helped the university define its project goals and constraints and determined how the space could be used more effectively, replacing older equipment with upgraded technology to increase capacity while improving energy efficiency. It was important to define what the university hoped to achieve and what it would consider a successful project. Determining the right questions to ask was integral to the design and construction process. The result was that Tufts obtained solutions it hadn’t originally considered.

As the program for the renovation was discussed, power capacity was determined and a phasing approach devised, balancing the placement of new infrastructure outside and inside the building. The work was divided into two major phases. In the first, IT equipment was consolidated and migrated into about half of the original data center area. In the emptied half, a new roof was put in place, a new raised floor was laid, and new mechanical and electrical equipment was installed. Once this was complete, the IT equipment was migrated back and the other half underwent its own reconstruction. The complicated migration of equipment within the spaces and on the roof was strictly coordinated by the construction manager, in close collaboration with the university’s IT staff.

In the end, the existing 5,000-square-foot data center was completely renovated while remaining operational, and all mechanical, electrical and architectural systems were upgraded to increase capacity and resilience and to accommodate a range of computing functions, including high availability and research computing configured to work well far into the future.


BEFORE: Prior to renovation, UCSB’s North Hall Data Center was in need of, among other improvements, upgrades to its power and cooling capacities.

BACKGROUND

UCSB’s outdated data center needed to fulfill a new function while maintaining the vital network connectivity, housed within the same space, that serves the entire campus. The university’s need for high-performance computing (on the order of 12 to 20 kW per rack) was anticipated to require a substantial footprint increase, as well as upgrades to power and cooling capacity within the existing ground-floor space. A complete renovation of the existing 5,000-square-foot space, half of which had been data center and half support space, was undertaken to provide a site for high-performance research computing, demand for which was growing.

SOLUTION

The primary purpose of the facility was to provide infrastructure for high-performance computing, which requires a great deal of power, and therefore cooling, but not a tremendous amount of resiliency or redundancy. Given the constraints of the shallow raised floor and the low floor-to-floor height, rear-door heat exchangers were determined to be the best cooling solution. All systems, mechanical, electrical and architectural, were shaped by those constraints.
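A rough sensible-heat check helps show why an air-only approach was a poor fit for these rack densities. The sketch below is purely illustrative and not drawn from the project documents; the 20°F temperature rise across the servers is an assumed value, and the per-rack loads are simply the figures cited above.

    # Illustrative only (not project data): how much supply air would be needed
    # to carry away 12-20 kW per rack if cooling relied on air alone? The 20 degF
    # temperature rise across the servers is an assumed value.

    BTU_PER_HR_PER_KW = 3412       # 1 kW of heat = 3,412 BTU/hr
    SENSIBLE_HEAT_FACTOR = 1.08    # BTU/hr per CFM per degF, for standard air

    def required_cfm(rack_kw, delta_t_f=20.0):
        """Airflow (CFM) needed to remove rack_kw of heat at a given air temperature rise."""
        return rack_kw * BTU_PER_HR_PER_KW / (SENSIBLE_HEAT_FACTOR * delta_t_f)

    for kw in (12, 20):
        print(f"{kw} kW rack -> roughly {required_cfm(kw):,.0f} CFM of supply air")

    # Roughly 1,900-3,200 CFM per rack, far more than a shallow underfloor plenum
    # can comfortably deliver, which is why chilled-water rear-door heat exchangers
    # at the rack made sense for this space.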

The rear-door heat exchangers utilize chilled water piping that could fit within the portions of the raised floor that were only eight inches high. With allowances provided for future connections, additional rear-door units could be added as the need for cooling increased. The existing under-floor concrete ductwork was repurposed as a pipe chase in order to keep as much clear space in the supply air plenum as possible. To reduce the quantity of power wiring required, and therefore the cost, a 400V distribution system, at the time rarely used in the U.S., was recommended and provided. These collaborative efforts resulted in innovative solutions, giving UCSB a facility it is proud to use and one that has proved attractive to the research community on campus.
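The benefit of the higher distribution voltage comes down to simple arithmetic: for the same load, higher voltage means lower current, and lower current means smaller conductors and less copper run to each rack. The comparison below is a simplified sketch assuming a balanced three-phase feed at unity power factor; the 208V baseline is an assumed point of comparison, not a description of UCSB’s previous system.

    # Simplified sketch (not project data): line current for the same rack load
    # at two distribution voltages, assuming a balanced three-phase feed at
    # unity power factor. The 208 V baseline is an assumption for comparison only.
    import math

    def line_current_amps(load_kw, line_to_line_volts):
        """Line current (A) for a balanced three-phase load at unity power factor."""
        return load_kw * 1000 / (math.sqrt(3) * line_to_line_volts)

    load_kw = 15  # mid-range of the 12-20 kW per rack cited for UCSB
    for volts in (208, 400):
        print(f"{load_kw} kW rack at {volts} V: about {line_current_amps(load_kw, volts):.0f} A per phase")

    # About 42 A at 208 V versus 22 A at 400 V: roughly half the current, which
    # translates directly into smaller feeders and lower power wiring cost.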


BEFORE: Brown University was working out of a 20-year-old data center in urgent need of an upgrade. A vital part of the effort was devising a plan that could be implemented in affordable steps while the data center remained fully operational.

BACKGROUND

Brown University’s primary data center was in dire need of an upgrade to respond to the growing demand for capacity to support research computing, email, data storage and other services essential to the institution. A previous proposal had recommended the complete replacement of the existing facility, but that price tag was not sustainable. Therefore, a vital part of the work was to devise a plan that used the existing facility as a base, could be accomplished in affordable steps and could be carried out while the data center remained fully operational.

SOLUTION

The solution was a 10-year master plan with detailed steps to incrementally replace all equipment, provide a system that is concurrently maintainable and completely reconfigure the space.

The first step was executed through systematic moves over the course of a year to replace electrical, mechanical and fire-protection equipment and related distribution, with no unplanned shutdowns, minimizing risk to servers. While the data center remained fully operational, the project included the installation of cable trays (in a redundant configuration) so that wiring could be moved from under the floor to overhead; new generator backup; a new UPS module; new computer room air conditioners; and new offices, a console area, finishes, lighting, raised floor tiles and acoustical ceiling tiles.


AFTER: The results for Brown University include a 10-year plan to meet the university’s goals for a more robust and scalable data center.

Extensive upfront planning and ongoing communication among Brown’s users, including IT and project management staff, were required to respect the university’s “blackout periods,” during which no construction could occur. The collaboration between the university’s IT and project management staff, the contractors and the design team made the project a success. A total renovation and reconfiguration of the existing 7,000-square-foot operational data center was completed successfully, including a full upgrade to all mechanical, electrical and architectural systems, to accommodate a wide range of computing: some with conventional hot aisle/cold aisle organization and some using in-row-cooled high-performance cabinets similar to those at UCSB (described above).

It is hard, if not impossible, for IT professionals to predict growth, which makes it critical for a design to accommodate unforeseen circumstances. With a carefully crafted 10-year master plan, the flexible design was able, in year three of the original plan, to accommodate a new need for in-row cooling of the high-performance machines required by the growth in research computing. As with the earlier phases of construction, this was achieved while maintaining uninterrupted operations.

IT STARTS AND ENDS WITH THE TEAM

When working in an active legacy data center, it is impossible to investigate all existing systems, since many components are hidden or inaccessible. For any such complex technology renovation, the team needs to carry a contingency budget to cover the unforeseen conditions that inevitably arise during construction.

Renovating a legacy data center at a college or university takes a tremendous amount of teamwork and project management horsepower, both from the institution and from the design firm. It begins with helping the college or university define its needs through careful planning and a review of existing conditions. It proceeds with designing and thoroughly coordinating the architecture and engineering to produce a sophisticated design that is scalable, buildable within budget and mindful of scheduling requirements. All of this must be accomplished while interfacing with university stakeholders.

During construction, the design, construction, project management and facilities staff must communicate constantly and consistently in order to perform renovations and upgrades within the university’s strict schedule, respecting ongoing campus programs and specific activities such as admissions, graduation and exams.

A successful project requires that the many team members within the university communicate frequently with one another and with their outside consultants throughout the life of the project, from inception through occupancy. Successful team collaboration can translate into a totally new data center, delivered in a reasonable time period at an affordable cost.

