HP High Performance Clusters LC Series Design Considerations
March 2004 (First Edition) Part Number 366713-001
Introduction
Migration to Industry Standard Clusters for HPC
Building your Cluster
    Control Node
    Compute Nodes
    Cluster Interconnect
Application and Interconnect Considerations
    Application Granularity
Cluster Size and Interconnect Considerations
Storage Considerations
Introduction

There are three LC Series solution offerings, each based on a different 1U densely packaged server used as the compute node. The HPC LC 1000 Series solution is based on ProLiant DL140 compute nodes. The HPC LC 2000 Series solution is based on ProLiant DL360 compute nodes. The HPC LC 3000 Series solution is based on ProLiant DL145 compute nodes.
Each solution offers a unique set of features and performance capabilities. This document will assist you in determining what size of cluster you need, and which interconnects are best for the applications you intend to operate. It helps you prepare to use the LC Series Design and Configuration Guides to configure an LC Series Cluster.
Migration to Industry Standard Clusters for HPC
The use of industry standard servers in high performance compute clusters for serial, parallel, and message passing applications has grown rapidly in the last few years. Several developments have contributed to the more extensive use of these servers in high performance clusters:

- The creation of the casual computing market (office automation, home computing, games and entertainment) has provided system designers with new types of cost-effective components.
- The COTS (Commodity Off The Shelf) industry has provided fully assembled subsystems (microprocessors, motherboards, disks and network interface cards), and mass-market competition has driven prices down and reliability up for these subsystems.
- The latest implementation of the Intel Xeon DP and the emergence of the AMD Opteron processor offer price/performance capability that was previously available only in expensive RISC processors.
- The development of publicly available software such as the Linux operating system, GNU compilers, programming tools, and the MPI and PVM message passing libraries provides hardware-independent software.
- The emergence of the Open Source community, as well as university programs for advanced information technology, has spawned huge numbers of libraries and algorithms that have either been accepted in the industry or extended and productized by hundreds of independent software vendors.
- Programs like the High Performance Computing Cluster (HPCC) program have produced many years of experience working with parallel algorithms.
- An increased reliance on computational science, and therefore an increased need for high performance computing, has turned more researchers and developers toward making these systems perform and work better.
The combination of these conditions (hardware, software, experience, and expectation) has provided the environment needed for the LC Series High Performance Compute cluster, which is based on industry standard servers. The LC Series clusters are based on the Beowulf concept, one approach to clustering commodity hardware components to form a parallel virtual computer. Such a system usually consists of one control node and one or more compute nodes connected via Ethernet or some other network or system area network interconnect such as Myrinet. LC Series clusters are ideal for tackling very complex problems that can be split up and run in parallel on separate computers. Not every problem can be approached in parallel, however. The LC Series can also be used for consolidated serial applications and complex message passing applications.

For consolidated serial applications, many independent jobs can be allocated and managed within the confines of the cluster. In this form of cluster computing, multiple independent jobs, all running on their own machines with different data inputs and outputs, can execute and save administrative time and resources compared with a series of independent machines. In this case, a higher latency interconnect is acceptable because there is little to no interaction between jobs operating across the nodes in the cluster. Platform Computing, Altair's PBS Pro, and other job management systems can provide very efficient and controlled job processing, allowing the customer to get the most out of the cluster when it is used in this fashion.

In the message passing case, which is by far the most complex, the cluster interconnect is critical. Programs must interoperate between compute nodes, transferring commands and data to complete their routines. If there is a slowdown due to latency or collisions, the application can stall, abort, or in some extreme cases produce an incorrect result.

Parallel industry standard clusters are replacing Massively Parallel Processor (MPP) systems except in the most extreme application cases. An MPP system is typically larger, proprietary, and has a lower latency and higher bandwidth system interconnect network than a parallel industry standard cluster. These MPP, or vector, computers are needed for highly critical or classified applications where performance is the only concern. As industry standard processors have improved, the need for these MPP machines has diminished.

Cluster programmers need to consider locality, load balancing, granularity, and communication overhead in order to obtain the best performance. Even on shared-memory machines, many programmers develop their programs in a message-passing style. Programs that do not require fine-grain computation and communication can usually be ported and run effectively on Beowulf clusters.

An industry standard class cluster computer is distinguished from a GRID or NOW (Network of Workstations or servers) by several subtle but significant characteristics. First, the nodes in the cluster are dedicated to the cluster. This helps ease load balancing problems because the performance of individual nodes is not subject to external factors. Also, since the interconnect network is isolated from the external network, the network load is determined only by the application being run on the cluster. This eliminates one of the key flaws of GRID or NOW systems: unpredictable network latency.
All the nodes in the cluster are within the administrative jurisdiction of the cluster. For example, the interconnect network for the cluster is not visible from the outside world, so the only authentication needed between processors is for system integrity. On a GRID or NOW, you must also be concerned with network security.
Building your Cluster
To build an industry standard cluster you need the following components:

- Control node
- Compute nodes
- Cluster interconnect
Design considerations for each of these components are addressed in the following sections.
Control Node

In the LC Series clusters, a single control node typically handles the user interaction and control needs of the cluster. The control node is the basis for all application interface and administration in the cluster. The ProLiant DL380 is the preferred server for the control node because it offers the most operational features in a small 2U space. The DL380 offers up to 12 GB of memory, dual processors for 2P performance, and up to 700 GB of onboard SCSI storage for data staging and parsing to the compute nodes. The DL380 also has three PCI slots for added adapters such as a Fibre Channel HBA for SAN extensions, although HP recommends that large storage be an independent subsystem due to the complexity of cluster compute operations. The DL380 has two onboard 10/100/1000 Ethernet NICs to connect to the In Band (IB) management network and to the external LAN for the user interface to submit and retrieve jobs allocated to the cluster. The DL380 also supports the Integrated Lights-Out (iLO) service processor for Out of Band (OOB) management and remote administrative functions, which are critical to cluster health and operations. In addition, the DL380 offers multiple enterprise class features for system reliability, with optional redundant power, cooling, and hot plug disk storage.

The DL145 server can take the place of the DL380 as the control node in Opteron based clusters if desired. Using this 1U server as the control node keeps the processor type consistent between the control and compute nodes but eliminates some high availability, expandability, and management features.
Compute Nodes

Three 1U server offerings are available as compute nodes in LC Series clusters. Each solution offers a unique set of features and performance metrics. The compute node is the most critical component of the cluster configuration. When selecting the compute nodes, you must consider not only the performance of the application, but also the communication between compute nodes and the communication to storage. Other important considerations concerning the compute nodes are management, monitoring, and administration of the cluster itself.

In order to determine the number and type of compute nodes, you need to ascertain the performance requirements for the application that is to operate on the cluster. This requires the use of some common metrics and the ability to understand those metrics in terms of peak and sustained performance. Don't let peak performance, which might never be attainable by a real application, sway a decision when sustained performance is what really counts. Any of the decisions made when designing and configuring an LC Series cluster should rely on a comprehensive understanding of the application to be run. Metrics to consider include, but are not limited to:

- Floating Point Operations per Second (FLOPS)
- SPECfp performance
- SPECint performance
- Cache size
- Cache bandwidth
- Total memory
- Memory bandwidth
- Storage bandwidth
- System interconnect throughput
- System interconnect latency
- Bisection bandwidth
The following tables show performance summaries for the LC 1000 Series and LC 2000 Series clusters. Updated information will periodically be released to the HP website at www.hp.com. LC 1000 Series Model Matrix
Cluster Model   Interconnect           Compute Nodes   Processors/Node   Memory/Node   GFLOPS/Cluster (1P)   GFLOPS/Cluster (2P)
LC1016 F        10/100 Fast Ethernet   16              1-2               1-4 GB        35.7                  65.0
LC1016 G        Gigabit Ethernet       16              1-2               1-4 GB        44.9                  82.2
LC1016 M        Myrinet                16              1-2               1-4 GB        51.1                  95.2
LC1032 F        10/100 Fast Ethernet   32              1-2               1-4 GB        68.6                  123.9
LC1032 G        Gigabit Ethernet       32              1-2               1-4 GB        87.1                  159.3
LC1032 M        Myrinet                32              1-2               1-4 GB        102.8                 192.1
LC1064 F        10/100 Fast Ethernet   64              1-2               1-4 GB        131.6                 237.5
LC1064 G        Gigabit Ethernet       64              1-2               1-4 GB        164.6                 298.4
LC1064 M        Myrinet                64              1-2               1-4 GB        205.7                 384.1
LC1128 F        10/100 Fast Ethernet   128             1-2               1-4 GB        246.8                 444.6
LC1128 G        Gigabit Ethernet       128             1-2               1-4 GB        309.1                 558.5
LC1128 M        Myrinet                128             1-2               1-4 GB        427.8                 798.3

Each model includes one control node in addition to the compute nodes listed.
LC 2000 Series Model Matrix
Cluster Model   Interconnect           Compute Nodes   Processors/Node   Memory/Node   GFLOPS/Cluster (1P)   GFLOPS/Cluster (2P)
LC2016 F        10/100 Fast Ethernet   16              1-2               1-5 GB        39.7                  72.3
LC2016 G        Gigabit Ethernet       16              1-2               1-5 GB        49.9                  91.4
LC2016 M        Myrinet                16              1-2               1-5 GB        56.8                  105.7
LC2032 F        10/100 Fast Ethernet   32              1-2               1-5 GB        76.2                  137.6
LC2032 G        Gigabit Ethernet       32              1-2               1-5 GB        96.8                  177.0
LC2032 M        Myrinet                32              1-2               1-5 GB        114.3                 213.4
LC2064 F        10/100 Fast Ethernet   64              1-2               1-5 GB        146.3                 263.8
LC2064 G        Gigabit Ethernet       64              1-2               1-5 GB        182.8                 331.3
LC2064 M        Myrinet                64              1-2               1-5 GB        228.5                 426.8
LC2128 F        10/100 Fast Ethernet   128             1-2               1-5 GB        274.3                 494.0
LC2128 G        Gigabit Ethernet       128             1-2               1-5 GB        347.2                 614.4
LC2128 M        Myrinet                128             1-2               1-5 GB        475.4                 887.0

Each model includes one control node in addition to the compute nodes listed. Each DL360 compute node ships with a 36.4 GB disk.
The LC 1000 Series uses DL140 servers and the LC 2000 Series uses DL360 servers. 2.4 GHz processors were used in obtaining the performance data in these tables. One reason that the performance numbers for the DL140 based systems differ from those of the DL360 based systems, even though the same processor speed was used, is that the DL360 systems use interleaved memory. This accounts for an 8% to 14% performance difference between these two Xeon DP systems, based on multiple execution runs of many types of performance benchmarks, since a single run will not give a good estimate. This performance difference can be very significant depending on your applications and cluster size. You must weigh performance needs against expense and administrative costs in making your design decisions. A speed calculation based on processor clock rates and peak GFLOPS may not give you the result you expect. Apply a 30% to 35% degradation factor when estimating sustained GFLOPS, which is a more accurate measure of performance.
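As a rough illustration of this peak-versus-sustained arithmetic, the sketch below derates a theoretical peak figure. The assumption of 2 floating point operations per cycle per processor and the exact derating value are illustrative assumptions, not HP specifications; substitute measured benchmark data for your own application where possible.

```python
# Rough peak-vs-sustained GFLOPS estimate for a cluster, as described above.
# The 2 FLOPs/cycle figure for a 2.4 GHz Xeon DP and the 30-35% derating
# are assumptions for illustration only.

def peak_gflops(nodes, cpus_per_node, clock_ghz, flops_per_cycle=2):
    """Theoretical peak GFLOPS for the whole cluster."""
    return nodes * cpus_per_node * clock_ghz * flops_per_cycle

def sustained_gflops(peak, degradation=0.35):
    """Apply the 30% to 35% degradation factor suggested in the text."""
    return peak * (1.0 - degradation)

if __name__ == "__main__":
    for nodes in (16, 32, 64, 128):
        peak = peak_gflops(nodes, cpus_per_node=2, clock_ghz=2.4)
        print(f"{nodes:4d} nodes: peak {peak:7.1f} GFLOPS, "
              f"sustained ~{sustained_gflops(peak):7.1f} GFLOPS")
```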
Cluster Interconnect

The cluster interconnect is by far the most important factor for applications that rely heavily on message passing. Cluster control messages and data both move over the cluster interconnect. Any network interconnect, whether a local area network (LAN) or a system area network (SAN), can be used to connect the cluster nodes to each other, although some interconnects are faster than others.
The following interconnects are currently supported on High Performance Clusters LC Series:

- Myrinet
- Gigabit Ethernet
- Fast Ethernet
Two very important measures are used to characterize system interconnects. The first is speed: how fast does data pass through the network? Choices of speed run from Fast Ethernet at 10/100 megabits per second to specialized network technologies clocking 1 gigabyte per second or higher. The other measurement used to characterize system interconnects is latency: how long does it take to prepare a packet of information for transmission and get it onto the network, and how long does it take to propagate through the network and all its various potential stages or hierarchies? Latencies can range from milliseconds in some older technologies to just a few microseconds in newer ones. The two measurements of speed and latency taken together define throughput: the total amount of useable aggregate data that can be moved from one system to the other. Clearly, throughput can greatly affect the overall performance of a cluster.

As you review the matrices, you will notice how slow the 10/100 Fast Ethernet clusters are compared to the Gigabit and Myrinet clusters. This is due to the protocol speed and latency issues discussed above. If the application target is heavily message passing, such as Fluent or LS-DYNA, then configure the cluster using the Myrinet cluster interconnect. Although expensive, the Myrinet cluster interconnect provides the best bandwidth and latency in this price class of cluster.

You can start with a small cluster. When you need to support more users or need more nodes to operate on a problem, you can simply add them to the existing cluster. Although there may be some small amount of degradation in communications efficiency as more nodes are added to the cluster, scaling is essentially linear: if you double the number of nodes in your cluster, you basically double the performance.

Another aspect of scalability involves interconnecting clusters. System interconnects not only connect the nodes within a cluster, but they can also connect small clusters together to make larger ones. This needs to be a consideration when determining the size of the cluster interconnect switch. The LC Series Design and Configuration Guide reference designs specify the maximum number of compute nodes allowed in the design without having to add another interconnect switch. Using the Design and Configuration Guide reference designs you can easily design a cluster of any size up to 128 nodes with Gigabit Ethernet or Myrinet interconnect, or 192 nodes with Fast Ethernet interconnect. Scaling beyond these limits to sizes like 256, 512, or 1024 nodes requires additional engineering and hardware in the form of multiple switches with extensive cabling topologies. Also, additional or different software may be required depending on the problem being solved. HP Consulting and Integration Services have implemented ProLiant clusters in the 1000+ node category and can assist you with designing these complex clusters.
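To make the speed and latency trade-off concrete, the small model below estimates message transfer time as latency plus size divided by bandwidth. The latency and bandwidth numbers are rough, assumed values chosen only to show why small messages are latency-bound and large messages are bandwidth-bound; they are not vendor specifications.

```python
# Simple message-transfer model: time = latency + size / bandwidth.
# The latency/bandwidth figures below are assumed, illustrative values only.

INTERCONNECTS = {
    # name: (one-way latency in seconds, usable bandwidth in bytes/second)
    "Fast Ethernet":    (70e-6, 12.5e6),   # ~100 Mbit/s class
    "Gigabit Ethernet": (50e-6, 125e6),    # ~1 Gbit/s class
    "Myrinet":          (7e-6,  250e6),    # assumed low-latency SAN class
}

def transfer_time(msg_bytes, latency_s, bandwidth_bps):
    """Estimated time to move one message across the interconnect."""
    return latency_s + msg_bytes / bandwidth_bps

for size in (1_000, 1_000_000):  # a 1 KB control message and a 1 MB data block
    for name, (lat, bw) in INTERCONNECTS.items():
        t = transfer_time(size, lat, bw)
        print(f"{name:17s} {size:9d} bytes: {t * 1e6:9.1f} microseconds")
```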
Application and Interconnect Considerations
Knowing the types of applications to be run on an HPC cluster will help you decide what type of cluster interconnect is needed in the cluster. Clusters can be used to solve one big problem at a time or to solve multiple problems at the same time:

- Solving ONE BIG problem at a time is capability.
- Solving MULTIPLE problems at the same time is throughput.
Most clusters are used both ways. Customers with big problems to solve have to buy a cluster big enough to solve their biggest capability problem. They generally have plenty of smaller problems, however, which get run on the cluster when it is not in use solving the big problem. Throughput usage is more common, but capability usage is more challenging for system designers.

Application Granularity

Some applications will work better with different types of cluster interconnect depending on their granularity. A key to efficient distributed computing is data locality: calculations on local data are often orders of magnitude more efficient than calculations that require data communication. Granularity is the ratio between computation and communication.

Coarse Grain

Algorithms with a high computation to communication ratio are said to have coarse-grain parallelism. Coarse grain algorithms often offer better scalability even though load balancing may be more difficult. Algorithms with an exceedingly low amount of communication are said to be embarrassingly parallel. Examples of highly parallel, coarse grain applications include image or frame processing, sequence research in life science, and parameter studies (stochastic crash tests).

Fine Grain

Algorithms with a low computation to communication ratio are said to have fine-grain parallelism. Examples of fine grain applications include finite element analysis or computational chemistry.
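As a worked instance of this ratio, the snippet below classifies a unit of work from assumed, illustrative timings; the numbers are not measurements from any LC Series system.

```python
# Granularity = computation time / communication time for one unit of work.
# The timings below are assumed, illustrative values only.

def granularity(compute_seconds, comm_seconds):
    return compute_seconds / comm_seconds

# Seconds of computation between millisecond-scale exchanges: coarse grain.
print(granularity(compute_seconds=5.0, comm_seconds=0.002))     # 2500.0
# Sub-millisecond computation between the same exchanges: fine grain.
print(granularity(compute_seconds=0.0005, comm_seconds=0.002))  # 0.25
```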
Knowing whether your applications are fine grained or coarse grained will dictate what type of cluster interconnect is best suited for your cluster. In most groups there is a mix of application types, but knowing what the majority is will help you make the right determination for the cluster interconnect, which is an expensive component of the cluster. In summary, when reviewing your application mix, keep these tips in mind (a short message passing sketch follows the list):

- Serial applications (LC Series used as a compute farm): independent processes; performance characterized by CPU performance; the system is used for throughput.
- Multithreaded applications (LC Series used as a parallel machine): Pthreads, OpenMP; ideally one thread per CPU working on shared memory; performance characterized by node performance.
- Distributed memory applications (LC Series used as a message passing (MSG) machine): cooperating processes, typically message passing applications (MPI); application throughput characterized by both node and interconnect performance; the entire system is used by the application.
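As an illustration of the distributed memory category, the sketch below uses the mpi4py Python bindings to MPI; the same pattern applies to MPI codes written in C or Fortran. It assumes an MPI implementation, a launcher such as mpirun, and mpi4py are installed across the compute nodes, which is outside the scope of this document.

```python
# Minimal distributed-memory (MPI) sketch: each rank works on its own slice
# of the data, then the partial results are combined with one reduction.
# Per-job communication is tiny, so this example is coarse-grained; a
# fine-grained code would exchange data far more often.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's id within the cluster job
size = comm.Get_size()          # total number of MPI processes

N = 10_000_000
# Each rank sums a disjoint slice of 1..N (embarrassingly parallel compute).
local_sum = sum(range(rank + 1, N + 1, size))

# One collective communication step: combine partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum(1..{N}) = {total} computed on {size} processes")
```

Launched, for example, with a command such as `mpirun -np 16 python sum.py`, one process per compute node.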
Cluster Size and Interconnect Considerations
The following table will assist you in making the decision on interconnect and cluster size.
Cluster Interconnect and Cluster Sizing Recommendation Matrix

Job Mix             16 Nodes   32 Nodes   48 Nodes   64 Nodes   96 Nodes   128 Nodes
Serial Only         Fast E     Gig E      Gig E      Gig E      Gig E      Gig E
Serial / Parallel   Fast E     Fast E     Gig E      Gig E      Gig E      Gig E
Serial / MSG        Gig E      Gig E      Myrinet    Myrinet    Myrinet    Myrinet
Parallel Only       Fast E     Fast E     Fast E     Gig E      Gig E      Gig E
Parallel / Serial   Fast E     Gig E      Gig E      Gig E      Gig E      Gig E
Parallel / MSG      Gig E      Gig E      Myrinet    Myrinet    Myrinet    Myrinet
MSG Only            Myrinet    Myrinet    Myrinet    Myrinet    Myrinet    Myrinet
MSG / Serial        Myrinet    Myrinet    Myrinet    Myrinet    Myrinet    Myrinet
MSG / Parallel      Myrinet    Myrinet    Myrinet    Myrinet    Myrinet    Myrinet
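For convenience, the matrix can be transcribed into a small lookup table. The sketch below is only a restatement of the recommendations above; the dictionary keys and the round-up rule for in-between sizes are assumptions about how you might encode it.

```python
# Interconnect recommendation lookup, transcribed from the matrix above.
SIZES = (16, 32, 48, 64, 96, 128)

RECOMMENDATION = {
    "Serial Only":       ("Fast E", "Gig E", "Gig E", "Gig E", "Gig E", "Gig E"),
    "Serial / Parallel": ("Fast E", "Fast E", "Gig E", "Gig E", "Gig E", "Gig E"),
    "Serial / MSG":      ("Gig E", "Gig E", "Myrinet", "Myrinet", "Myrinet", "Myrinet"),
    "Parallel Only":     ("Fast E", "Fast E", "Fast E", "Gig E", "Gig E", "Gig E"),
    "Parallel / Serial": ("Fast E", "Gig E", "Gig E", "Gig E", "Gig E", "Gig E"),
    "Parallel / MSG":    ("Gig E", "Gig E", "Myrinet", "Myrinet", "Myrinet", "Myrinet"),
    "MSG Only":          ("Myrinet",) * 6,
    "MSG / Serial":      ("Myrinet",) * 6,
    "MSG / Parallel":    ("Myrinet",) * 6,
}

def recommend(job_mix, nodes):
    """Return the suggested interconnect, rounding up to the next size class."""
    for size, fabric in zip(SIZES, RECOMMENDATION[job_mix]):
        if nodes <= size:
            return fabric
    return RECOMMENDATION[job_mix][-1]

print(recommend("Parallel / MSG", 40))   # Myrinet (48-node column)
```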
Storage Considerations

The LC Series design point was to build a computing machine and allow local storage to be an option based on the needs of the operating system, cluster manager, and applications. In the LC Series, the DL380 and the DL360 nodes come without storage, which must be ordered as an option. In the DL140 and DL145 nodes, ATA drives are included. The following are some rules of thumb to follow:

- If using Linux, drives are required on the control node but may not be needed on compute nodes, depending upon the cluster management and applications to be used. It is recommended that all compute nodes have at least one drive for operating system needs or as scratch space for applications.
- If using Windows, drives are required on all nodes.
A number of storage subsystems have been defined for use in LC Series solutions to meet customer needs. Divided into a range of subsystem sizes and capacities, these storage options include entry and low-end NFS subsystem options and entry to high-end Global File System solutions. HP software partners have tested their application codes on the storage subsystems and have given server recommendations for the best data I/O rates. These storage subsystems are designed to link to the cluster through the Gigabit Ethernet In Band Management network. This provides the application full access to the cluster interconnect if needed as in message passing and mixed use clusters. The In Band Management network has been designed in the LC Series for both compute node expansion and storage interconnect.
This document has provided some information to consider when determining the number of nodes and interconnect type to build into an HPC cluster. The next step is to use the Design and Configuration Guide for the compute node of your choice, DL140, DL360 or DL145, to design a cluster to meet your needs. The Design and Configuration Guides allow you to specify the number of compute nodes initially needed in your cluster, the amount of expansion you may need in the future, and the type of interconnect you need. Using the process in the guide you will be able to generate a detailed list of components needed to build the cluster, right down to the numbers of cables and cable lengths needed. Additionally, you will be able to specify storage components and software components that may be needed in the cluster.
Copyright 2004 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Intel, Pentium, and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a U.S. registered trademark of Linus Torvalds.
high availability systems engineering
technical white paper
HP High Performance Clusters LC Series setup and installation guide
table of contents
abstract
introduction
safety information
HPC LC Series hardware solutions
HPC LC Series solution components
    control node
    compute nodes
    HPC networks
    additional components
hp services and care packs
receiving the HPC cluster
installing switches
physical planning
    positioning the racks
    rack cooling
cabling the configuration
powering on the equipment
factory system settings
setting up the control node
setting up the compute nodes
summary
for more information
feedback
abstract

This setup and installation guide provides the customer with the information required to take the HP High Performance Clusters LC Series (HPC LC Series) product offering from delivery of order to a fully cabled system, ready for operating system and application installation. This document supplements the information found in the user guides for the servers and switches used in your HPC LC Series solution.
introduction

This guide provides the customer with instructions and the necessary reference information required for setting up and installing the HP High Performance Clusters LC Series solutions. The reader gains knowledge of the various solution components and of how to receive equipment, position racks, cable the configuration, power on the equipment, and begin the initial setup process. This paper provides information on each of the following topics:

- HPC LC Series hardware solutions
- HPC LC Series solution components
- HP Services and Care Packs
- Receiving the HPC cluster
- Installing switches
- Physical planning
- Cabling the configuration
- Powering on the equipment
- Default factory settings
- Setting up the control node
- Setting up the compute nodes
This paper assumes that the reader has advanced technical skills and extensive knowledge of the following topics and products:

- High performance computing concepts
- Linux operating system installation knowledge and experience
- High performance computing system software installation experience
IMPORTANT SAFETY INFORMATION
Before installation, read the Important Safety Information document included with the product. Also, read the safety information details of the documentation included for each component.
HPC LC Series hardware solutions
The HPC LC Series is a range of pre-configured hardware configurations that are integrated, tested, and shipped assembled in racks ready for customer use. The solutions utilize the ProLiant DL380 G3 server as the control node (service node) and ProLiant DL360 G3 servers as the compute nodes (application nodes). The customer simply selects the appropriate HPC LC Series configuration that represents the required number of compute nodes and the desired speed of the cluster interconnect. There are four (4) cluster node size categories:

- 16-node
- 32-node
- 64-node
- 128-node
There are three (3) cluster interconnect categories:

- 10/100 Fast Ethernet
- Gigabit Ethernet
- Myrinet

For each region, there are a total of eleven (11) HPC LC Series solutions to choose from based on possible combinations of the cluster node size and the type of cluster interconnect. Table 1 below lists all eleven part numbers for North America. Table 2 below lists all eleven part numbers for Europe, Middle East, and Africa (EMEA).
table 1. North America: HPC LC Series hardware solutions
Cluster Node Size   10/100 Fast Ethernet   Gigabit Ethernet           Myrinet switch
16-Node             322939-001             322939-002                 322939-003
32-Node             322940-001             322940-002                 322940-003
64-Node             322941-001             322941-002                 322941-003
128-Node            322942-001             Not offered at this time   322942-003
table 2. EMEA: HPC LC Series hardware solutions
Cluster Node Size   10/100 Fast Ethernet   Gigabit Ethernet           Myrinet switch
16-Node             322939-421             322939-422                 322939-423
32-Node             322940-421             322940-422                 322940-423
64-Node             322941-421             322941-422                 322941-423
128-Node            322942-421             Not offered at this time   322942-423
HPC LC Series solution components
Each HPC LC Series cluster contains one control node, several compute nodes, interconnects, rack(s), and rack infrastructure. The next few sections describe each of these components in more detail.
control node

One ProLiant DL380 G3 server functions as the control node (service node) and is equipped with one Intel Pentium 4 3.06 GHz processor, 1 GB of DDR memory, and one 36 GB Ultra320 10K SCSI disk as the base node configuration. This server also comes with a redundant power supply, a redundant fan kit, and a dual port PCI Gigabit NIC adapter. The control node can be extended up to two processors, 12 GB of DDR memory, and six Ultra320 SCSI hard drives. The server may also have a PCI Myrinet adapter installed if it is part of an HPC Myrinet solution. The control node is used as the interface to the user community for job dispatch, control, monitoring, and job completion within the cluster.

compute nodes

Depending on which configuration you ordered, you will have from 15 to 127 ProLiant DL360 G3 servers that function as compute nodes (application nodes). Each compute node is equipped with one Intel Pentium 4 3.06 GHz processor and 1 GB of DDR memory. The compute node systems can be extended up to two processors, 8 GB of DDR memory, and two SCSI hard drives. Each compute node may also have a PCI Myrinet adapter installed if the node is part of an HPC Myrinet solution.

HPC networks

Each HPC LC Series cluster is made up of multiple networks: the iLO network, the cluster interconnect network, and the management network. Each of these networks is described in more detail below.

- iLO network (Out of Band Management): This network is used for Integrated Lights-Out (iLO) connections to all of the ProLiant servers that make up the HPC configuration. This network uses the HP ProCurve 2650 switch.
- cluster interconnect network: This is the main data network that connects all of the compute nodes together. It can be a 10/100 Fast Ethernet, Gigabit Ethernet, or Myrinet network and utilizes various types of ProCurve or Myrinet switches depending on the overall type and size of the HPC solution.
- management network (In Band Management): This network is used for overall cluster management using a standard Ethernet connection. If using the 10/100 Fast Ethernet or Gigabit Ethernet HPC solutions, you have the option to put the management network on a separate hardware switch. If using the Myrinet HPC solutions, the management network is on a separate hardware switch. This network uses the HP ProCurve 2650 switch.
additional components

Each HPC LC Series solution also comes equipped with an HP TFT 5600 RKM, Modular 24A PDU(s), and extra network cables for external network connectivity. All components are integrated and pre-cabled into the 42U rack(s). All of the internal rack network cables are labeled with a descriptive cable label to facilitate identification of each cable connection. The HP High Performance Clusters LC Series Cabling Guide illustrates the point-to-point connections of each network cable and describes the cable label nomenclature in detail.
hp services and care packs
To facilitate the setup and installation process of your HPC LC Series solution, HP offers a variety of installation and professional services. HP's highly skilled professionals can help you manage information resources, provide consistent performance, and deliver secure access with our comprehensive suite of proactive services. HP Care Pack services offer upgraded service levels to extend and expand your standard product warranty with easy-to-buy and easy-to-use support packages that help you make the most of your hardware and software investments. To learn more about what services are available for your HPC LC Series solution, please visit the following:

hp services: http://www.hp.com/hps/
hp care pack services: http://www.hp.com/hps/carepack
receiving the HPC cluster
Since the HPC LC Series cluster sizes range from 16 nodes to 128 nodes, the cluster size will determine whether one, two, or four 42U racks are delivered. The 16-node and 32-node solutions ship in one 42U rack. The 64-node solutions ship in two 42U racks. The 128-node solutions ship in four 42U racks. Every configuration is shipped fully integrated with easy to read cable labels to facilitate the cabling process. Depending on which HPC LC Series solution was ordered, the cluster interconnect switch may be shipped separately instead of being integrated and shipped in the rack. It is necessary to ship some of the cluster interconnect switches separately to prevent damage to the unit during the shipping process. Refer to the next section to see which HPC LC Series solutions may require a switch to be shipped separately and installed at the customer site.
installing switches

Some of the HPC LC Series solutions may contain a switch that cannot be shipped in the rack. In these cases, the switch is shipped in a separate box or boxes. The following HPC solutions may contain a switch that is shipped separately:

- 16-Node Myrinet Solution: the Myrinet (3-slot frame) switch may ship separately*
- 32-Node Myrinet Solution: the Myrinet (5-slot frame) switch may ship separately*
- 64-Node Myrinet Solution: the Myrinet (9-slot frame) switch may ship separately*
- 64-Node Gigabit Solution: the ProCurve 9308m switch may ship separately*
- 128-Node Myrinet Solution: the Myrinet (17-slot frame) switch may ship separately*
* This solution may contain a switch that is shipped separately. If you received a switch that was shipped separately, then the switch must be installed in the pre-defined space within the rack at the customer site. Refer to the HP High Performance Clusters LC Series Cabling Guide for information on switch placement within the rack. In the future, the switch may be shipped in the rack, instead of being shipped separately.
physical planning

Physical planning for your HPC LC Series deployment is one of the first things that must be considered before beginning the installation. You must ensure that you have enough physical space, adequate power, and adequate ventilation. You should also provide backup power such as an Uninterruptible Power Supply (UPS). A properly designed computer room has adequate ventilation and cooling for racks with servers and storage devices and has the appropriate high-line power feeds installed. For more information on datacenter design and planning, please refer to Technology Brief TC030203TB at the link below. This technology brief describes trends affecting datacenter design, explains how to determine power and cooling needs, and describes methods for cost-effective cooling. Technology Brief TC030203TB can be downloaded from the following link: http://wwss1pro.compaq.com/support/reference_library/viewdocument.asp?countrycode=1000&prodid=137&source=tc030203tb.xml&dt=21&docid=15719
positioning the racks
Upon receipt of the HPC LC Series solution, the racks will need to be placed in the appropriate positions within the customer data center. The 16-node solutions ship in a single 42U rack and require sufficient power for two (2) Power Distribution Units (PDUs). The 32-node solutions also ship in a single 42U rack and require sufficient power for three (3) PDUs. The 64-node solutions ship in two 42U racks and require sufficient power for six (6) PDUs. The 128-node solutions ship in four racks and require sufficient power for eleven (11) PDUs. When positioning the racks, be sure to place them in sequential order because the cable lengths are designed for that relative position. For example, figure 1 below illustrates the proper rack positioning of a 128-node cluster solution.
figure 1. rack positioning for a HPC LC Series solution
The following spatial needs should be considered when deciding where to physically place the HPC LC Series cluster solutions:

- Clearance in front of the rack unit should be a minimum of 25 inches for the front doors to open completely and for adequate airflow.
- Clearance behind the rack unit should be a minimum of 30 inches to allow for servicing and for adequate airflow.
rack cooling

The racks in each HPC LC Series solution draw cool air in through the front and exhaust warm air out of the rear. To ensure continued safe and reliable operation of the equipment, place the system in a well-ventilated, climate-controlled environment. The HPC LC Series solutions should be placed in data centers with an air-conditioning system adequate to handle continuous operation of this solution. To prevent component overheating and thermal shutdowns, please review the documentation for each of the components within your HPC LC Series solution to learn more about the recommended ambient (inlet) temperatures and the allowable maximum ambient operating temperatures within your data center.
cabling the configuration
All of the network cables within each rack are labeled for easy identification. The HP High Performance Clusters LC Series Cabling Guide explains and illustrates the cabling requirements for each of the HPC LC Series solutions in detail. Also, if your HPC solution is comprised of multiple racks, then there will be some inter-rack cabling. That is, some of the cables from one rack will be connected to a switch in another rack. Furthermore, each HPC solution will have some network cables that need to be connected outside of the rack to the corporate network (for example, the DHCP connection). The network cables that need to be connected outside of the rack are not labeled but are included with the product. Refer to figures 2 and 3 for a high level overview of the network cabling requirements for the various HPC LC Series solutions.
IMPORTANT: Refer to the HP High Performance Clusters LC Series Cabling Guide for details on the power and network cabling requirements, cable label nomenclature, and cabling illustrations for each HPC LC Series solution.
Figure 2 below provides a high level overview of the network cabling requirements for the 10/100 Fast Ethernet and Gigabit Ethernet HPC cluster solutions only.
[Figure 2 diagram: the DL380 server's iLO port and NIC1 connect to the iLO switch (ISW), NIC2 to the corporate network, NIC3 to the interconnect switch (CSW), and NIC4 to the optional management switch (MSW), using internal, external, and optional CAT5E cables.]
figure 2. network overview of 10/100 Fast Ethernet and Gigabit Ethernet HPC solutions
In figure 2 above, the DL380 server has one (1) iLO port and four (4) NIC ports. Both the iLO port and the NIC1 port connect to the iLO switch (ISW). NIC2 is used to connect to an external network. NIC3 is used to connect to the cluster interconnect switch (CSW). In this configuration, the cluster interconnect switch can be either a 10/100 Fast Ethernet or Gigabit Ethernet switch. NIC4 is used to connect to the optional management switch (MSW). If the management switch is not part of the solution, then NIC4 will not be connected.

The DL360 servers have one (1) iLO port and two (2) NIC ports. The iLO port connects to the iLO switch (ISW). NIC1 connects to the cluster interconnect switch (CSW). NIC2 connects to the optional management switch (MSW). If the management switch is not part of the solution, then NIC2 will not be connected.

Note: The management switch (MSW) is an optional component for the 10/100 Fast Ethernet and Gigabit Ethernet solutions.

Figure 2 above assumes you will be connecting your HPC LC Series solution to an external DHCP server. The DHCP server must be provided by the customer and be made available to the system before proceeding to use iLO. If you plan on using the control node and HPC cluster software to assign DHCP addresses, then you will need to cable the system accordingly.
Figure 3 below provides a high level overview of the network cabling requirements for the Myrinet HPC cluster solutions only.
[Figure 3 diagram: the DL380 server's iLO port and NIC1 connect to the iLO switch (ISW), NIC2 to an external network, NIC4 to the management switch (MSW), and the Myrinet adapter (MYR) to the Myrinet interconnect switch (CSW), using CAT5E and fibre optic cables.]
figure 3. network overview of Myrinet HPC cluster solutions
In figure 3 above, the DL380 server has one (1) iLO port, four (4) NIC ports, and one (1) Myrinet adapter connection. Both the iLO port and the NIC1 port connect to the iLO switch (ISW). NIC2 is used to connect to an external network. NIC3 is not connected but may be utilized to connect to an external network. NIC4 is used to connect to the management switch (MSW). The Myrinet adapter connection (MYR) is used to connect to the cluster interconnect switch (CSW). In this configuration, the cluster interconnect switch is a Myrinet switch and may vary in size depending on how many compute nodes the HPC solution contains.

The DL360 servers have one (1) iLO port, two (2) NIC ports, and one (1) Myrinet adapter connection. The iLO port connects to the iLO switch (ISW). NIC1 is not connected. NIC2 is used to connect to the management switch (MSW). The Myrinet adapter connection (MYR) is used to connect to the cluster interconnect switch (CSW).

Figure 3 above assumes you will be connecting your HPC LC Series solution to an external DHCP server. The DHCP server must be provided by the customer and be made available to the system before proceeding to use iLO. If you plan on using the control node and HPC cluster software to assign DHCP addresses, then you will need to cable the system accordingly.
powering on the equipment
The equipment should be completely installed into the rack(s) and cabled before powering on any hardware components. First, ensure the PDUs have been plugged into appropriate power receptacles and then powered on. Second, power on the monitor and all switch devices. Ensure all switches have initialized correctly before continuing. Before powering on the servers, please review the next section to learn about the factory system settings for each machine. Then proceed to the following sections for powering on and setting up the servers.
factory system settings
There are several factory system settings that are pre-configured before the HPC cluster is shipped to the customer. The pre-configured settings of the ProLiant DL380 server and the ProLiant DL360 servers include the following:

ROM-Based Setup Utility (RBSU) settings:
- The operating system setting is set to Linux.
- Hyperthreading is disabled.

iLO settings:
- The iLO firmware is updated to version 1.40 or later.
- The iLO Advanced License is installed on all nodes.
- The iLO DNS name is set to match the iLO cable label for each node. For example, if the iLO cable label reads R1-A0-iLO, then the iLO DNS name will be set to R1-A0-iLO for that node.
- The iLO tags for each machine have been removed.
- The iLO username and password for each machine are pre-defined to facilitate the setup process, as follows:
  - username = Administrator
  - password = Administrator

Note: By default, iLO is set to obtain an IP address from a DHCP server. The DHCP server must be provided by the customer and be made available to the systems before proceeding to use iLO for initial setup. For detailed instructions and help in configuring DHCP for your network, please refer to the DHCP HowTo included with your Linux distribution and on the Linux Documentation Project web site.

RAID settings:
- No RAID settings are configured.
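If you choose to give each iLO a fixed address, one common approach is an ISC dhcpd "host" entry per iLO, keyed on its MAC address. The sketch below only generates example entries: the host-entry syntax is standard ISC dhcpd, but the MAC addresses, the 192.168.2.0/24 subnet, and the two-node list are placeholder assumptions; take the real iLO MACs from the servers themselves and see the DHCP HowTo referenced above for the full server configuration.

```python
# Sketch: generate ISC dhcpd.conf "host" entries giving each node's iLO a
# fixed address that matches its cable-label name (for example R1-A0-iLO).
# The MAC addresses and the 192.168.2.x addresses are placeholders.

ilo_macs = {
    "R1-A0-iLO": "00:0e:7f:00:00:01",   # placeholder MAC
    "R1-A1-iLO": "00:0e:7f:00:00:02",   # placeholder MAC
}

def host_entry(name, mac, ip):
    """Return one dhcpd.conf host block for an iLO interface."""
    return (f"host {name} {{\n"
            f"    hardware ethernet {mac};\n"
            f"    fixed-address {ip};\n"
            f"    option host-name \"{name}\";\n"
            f"}}\n")

for i, (name, mac) in enumerate(sorted(ilo_macs.items()), start=10):
    print(host_entry(name, mac, f"192.168.2.{i}"))
```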
setting up the control node
First, power on and set up the ProLiant DL380 server control node. This server is used as the interface to the user community for job dispatch, control, monitoring, and job completion within the HPC cluster. Please refer to the software vendor's installation instructions for setting up the control node for your specific operating system. Note: During the installation process you may be required to assign each NIC a specific function within the HPC cluster.
setting up the compute nodes
The ProLiant DL360 compute nodes can be powered on in any order. The compute nodes are not connected to a keyboard, monitor, or public LAN. Therefore, you must use the Integrated Lights-Out (iLO) feature of the ProLiant servers in order to remotely manage each compute node.

Each ProLiant machine has the iLO Advanced license installed. The iLO Advanced license (or feature suite) offers sophisticated virtual administration features for full control of servers in dynamic data centers and remote locations. The iLO Advanced feature suite includes Virtual Graphical Console and Virtual Floppy Drive, which provide significant cost savings by removing any advantage of being physically present in front of the server for routine access and maintenance. Another feature of iLO Advanced allows a local client CD-ROM to be connected to a remote host server as a USB device, removing the need to visit the host server to insert and use a CD-ROM.

The control node can be used to establish a remote iLO session to each compute node. Furthermore, you can connect additional iLO client machines to the iLO switch, or you can connect the iLO switch to a corporate network if you want more than one iLO client machine.

Note: Default iLO settings have been configured in the factory as specified above. Upon receipt of the HPC cluster solution, the customer may choose to change these default settings. Please refer to the iLO User Guide for details on changing any of these default settings.

Please refer to the software vendor's installation instructions for setting up the compute nodes for your specific operating system. Note: During the installation process you may be required to assign each NIC a specific function within the HPC cluster.
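Before starting remote setup from the control node, it can help to confirm that each compute node's iLO name resolves through your DHCP/DNS setup and answers on its web port. The sketch below assumes the R1-A<n>-iLO naming convention described above and a 16-node rack; adjust the name list for your own configuration.

```python
# Sketch: verify that each compute node's iLO resolves and answers on HTTPS
# (port 443, the iLO web interface). Node names follow the cable-label
# convention described above and are assumptions; adjust as needed.
import socket

NODES = [f"R1-A{n}-iLO" for n in range(1, 17)]   # 16 compute-node iLOs, rack 1

for name in NODES:
    try:
        ip = socket.gethostbyname(name)
        with socket.create_connection((ip, 443), timeout=5):
            print(f"{name:12s} {ip:15s} iLO web interface reachable")
    except OSError as err:
        print(f"{name:12s} NOT reachable: {err}")
```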
summary

The HP High Performance Clusters LC Series is a range of pre-configured hardware configurations that are integrated, tested, and shipped assembled in racks ready for customer use. This paper has covered the overall setup and installation process for the HP High Performance Clusters LC Series solutions. This guide provided the customer with the information required to take the product from delivery of order to a fully cabled system, ready for operating system and application installation.
for more information
To learn more about HP High Availability and ProLiant Clusters visit the following Web site: http://www.hp.com/servers/proliant/highavailability. To learn more about HP High Performance Computing visit the following Web sites:
http://www.hp.com/techservers http://www.hp.com/techservers/clusters/ http://www.hp.com/techservers/resources/example_customers.html http://www.hp.com/techservers/support/developers.html
feedback

Help us improve our technical communication. Let us know what you think about the technical information in this document. Your feedback is valuable and helps us structure future communications. Please send your comments to: firstname.lastname@example.org
Copyright 2003 Hewlett-Packard Development Company L.P. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation. Intel and Pentium are registered trademarks of Intel Corporation. All brand names are trademarks of their respective owners. The technical information in this document is subject to change without notice. 07/2003 P/N 341524-001