HP Cisco Gigabit Ethernet Switch FOR Bl P-class
Part Numbers: 371098-001, 371098001
HP BladeSystem p-Class System Overview and Planning
Contents

Introduction
Executive summary
HP BladeSystem modular architecture key benefits
HP BladeSystem manageability key benefits
HP BladeSystem overview
Hardware components
  ProLiant BL20p and BL25p series server blades
  ProLiant BL30p and BL35p series server blades
  ProLiant BL40p and BL45p Server Blades
  Integrity BL60p Server Blade
ProLiant and Integrity BL p-Class server blade SAN connectivity
  FC connectivity with ProLiant BL20p, BL25p, BL30p, BL35p, BL45p series and Integrity BL60p server blades
  FC connectivity with ProLiant BL40p Server Blades
  Specific requirements for attaching ProLiant BL30p and ProLiant BL35p Server Blade to FC SANs
HP BladeSystem p-Class Server Blade Enclosure
HP BladeSystem p-Class Blade Sleeve
HP BladeSystem p-Class network interconnect options
  Cisco Gigabit Ethernet Switch Module for HP BladeSystem p-Class
  ProLiant BL p-Class GbE2 Interconnect Switch
  ProLiant BL p-Class GbE Interconnect Switch
  ProLiant BL p-Class RJ-45 Patch Panel
  ProLiant BL p-Class RJ-45 Patch Panel 2
HP BladeSystem p-Class power subsystem
  Enclosure-based power
  Rack-centralized power
  Power supplies
  Power distribution
  HP BladeSystem p-Class 1U and 3U power subsystem features
HP BladeSystem p-Class Diagnostic Station
HP BladeSystem p-Class diagnostic and local I/O cables
HP BladeSystem Management Software overview
HP BladeSystem p-Class operating system installation options
  ProLiant BL20p, BL25p, BL30p, BL35p, and BL45p series server blades
  Integrity BL60p Server Blades
  Operating system support
HP ProLiant Essentials Rapid Deployment Pack
HP Systems Insight Manager
Integrated Lights-Out Advanced Edition
Smart Array RAID controllers
HP BladeSystem p-Class Interconnect Switch Management
Planning for a HP BladeSystem p-Class installation
  HP BladeSystem p-Class Sizing Utility
  Required input power
  Facility DC power connection
  Power phases and 3U power supply enclosures
  AC connectors for the 3U power enclosure
  Deployment considerations: HP BladeSystem p-Class network interconnects
  Deployment considerations: ProLiant BL p-Class RJ-45 Patch Panel and Patch Panel 2
  Deployment considerations: ProLiant BL p-Class GbE2 Interconnect Switches
  Deployment considerations: ProLiant BL p-Class GbE Interconnect Switches
HP BladeSystem rack specifications
  Server Blade Quantity
  Configuring server blade options
  HP BladeSystem server blade enclosures
  3U power distribution
Site recommendations
  Power requirements
  Cooling and airflow
  Total weight
  Total floor space
System installation planning guides
For more information
This white paper provides an overview of the HP BladeSystem p-Class solution, which includes:
- Server blades
- Server blade enclosures
- Network interconnect options
- Power subsystem components
- Management tools
The HP BladeSystem p-Class solution consists of server blades, server blade enclosures, network interconnect options, a power subsystem, and management tools that enable adaptive computing, optimized for rapid deployment. HP BladeSystem server blades are designed for the high performance and high availability that you have come to expect from HP ProLiant industry-standard servers. The HP BladeSystem solution protects your investment with a modular portfolio that supports many different environments and workloads:

ProLiant BL20p and BL25p Server Blades: Ideal for multi-tiered enterprise data centers. These server blades feature a dual-processor-capable design, high-performance memory, an integrated Smart Array RAID controller, universal hot-plug SCSI hard drives, Integrated Lights-Out (iLO) Advanced functionality, up to four general-purpose Gigabit Ethernet network controllers, and optional Fibre Channel (FC) SAN connectivity.

ProLiant BL30p and BL35p Server Blades: Ideal for high-performance technical computing and enterprise data center environments that use external storage. These server blades feature a dual-processor-capable design optimized for maximum server density, high-performance memory, iLO Advanced functionality, two general-purpose Gigabit Ethernet network controllers, and optional FC SAN connectivity.

ProLiant BL40p and BL45p Server Blades: Designed to power back-end and mission-critical applications. These server blades support up to four processors, maximum-performance DDR memory, an integrated Smart Array RAID controller, universal hot-plug SCSI hard drives, iLO Advanced functionality, four (BL45p) or five (BL40p) general-purpose Gigabit Ethernet network controllers, and optional FC SAN connectivity.

Integrity BL60p Server Blades: The HP Integrity BL60p is the first Itanium 2 server blade for the HP BladeSystem p-Class family.
When combined with the robust and secure HP-UX 11i v2 operating environment, the Integrity BL60p offers dramatically improved application deployment, resource utilization, capacity management, reliability and security - all for a low total cost of ownership. In addition, Integrity BL60p blades hosting HP-UX 11i are designed to function side by side with Opteron and Xeon server blades hosting Windows and Linux applications in the same p-Class enclosure.
Server blade enclosure
Both patch panels and interconnect switches are available with or without FC pass-through capability.

Power enclosure with power supplies (not needed if using facility -48 VDC ±10%)

The HP BladeSystem p-Class system offers two power enclosure options:
- The 1U power enclosure provides redundant power for a single server blade enclosure. It is ideal for remote offices, small businesses, or environments that do not have three-phase power available.
- The 3U power enclosure and power distribution components provide redundant power for multiple server blade enclosures. This solution is ideal for data center rack deployment. The 3U power enclosures are available in single-phase and three-phase models.
Power distribution (used only with 3U power enclosures)
Power is carried from the 3U power enclosure(s) to the server blade enclosure(s) through bus bars. Bus bars are available in mini and scalable versions, depending on the number of server blade enclosures being deployed.
Note: Power requirements for an HP BladeSystem p-Class solution: 200 V to 240 VAC at 30 A, or facility -48 VDC ±10%.
ProLiant BL20p and BL25p series server blades
The ProLiant BL20p and BL25p series Server Blades are ideal for infrastructure and enterprise applications, including:
- Web
- E-commerce
- Server-based computing
- AV and streaming media
- Messaging front-end and mobility
- Small databases
Figure 5. ProLiant BL20p G3 Server Blade
Figure 6. ProLiant BL20p G4 Server Blade
Table 6. Features of the ProLiant BL20p G3 and ProLiant BL20p G4 Server Blades

ProLiant BL20p G3 Server Blade:
- Processor: Up to two Intel Xeon processors
- Internal storage: Up to two universal hot-plug SCSI hard drives, connected to the server through a Smart Array 6i Controller, provide up to 600 GB of capacity
- Memory: Four DIMM slots enable installation of up to 8 GB of PC3200 DDR2, ECC, Registered SDRAM. The memory is 2x1 interleaved for added performance.

ProLiant BL20p G4 Server Blade:
- Processor: Up to two dual-core Intel Xeon 5100 sequence processors
- Internal storage: Up to two SFF SAS hot-plug hard drives
- Memory: Eight FBDIMM slots enable installation of up to 32 GB of PC2-5300 DDR, ECC, Registered SDRAM. The memory is 2x1 interleaved for added performance.
Four general-purpose Gigabit PCI-X 10/100/1000T NICs with Wake-on-LAN (WOL), plus one 10/100T NIC dedicated to iLO. The four general-purpose NC-Series NICs support PXE and HP NIC teaming. LEDs indicate the following:
- Power
- NIC link and activity
- Server blade health
- Unit identification
Up to eight server blades fit in a 6U server blade enclosure.
Up to two server blades fit in a 6U server blade enclosure.
Integrity BL60p Server Blade
The Integrity BL60p Server Blade is ideal for the following applications:
- HP-UX 11i applications
- HP-UX legacy application consolidation
- HP-UX test and development, particularly PA-RISC to Itanium migration
- Database tier in applications requiring enterprise-class HP-UX Unix
Figure 11. Integrity BL60p Server Blade
Table 10. Features of the Integrity BL60p Server Blade

- Processor: Up to two Intel Itanium 2 1.6-GHz processors with 3-MB cache
- Internal storage: Up to two universal hot-plug SCSI hard drives, connected to the server through a Smart Array 6i Controller, provide up to 600 GB of capacity
- Memory: Four DIMM slots enable installation of up to 8 GB of PC2100 DDR SDRAM running at 133 MHz with 266 megatransfers/second. Memory must be installed in pairs; Chip Sparing is enabled if all four DIMMs are the same size/density.
- Network: Six network connectors total: four PCI-X Gigabit embedded Ethernet connectors consisting of two Broadcom 5704s, with one 10/100T NIC dedicated to management, plus two embedded Fibre Channel connectors (2 Gb per connector)
Up to eight server blades fit in a 6U server blade enclosure
ProLiant and Integrity BL p-Class server blade SAN connectivity
The ProLiant and Integrity BL p-Class server blades are optimized for HP StorageWorks arrays and can attach to select third-party SAN solutions. In addition, the server blades can integrate with "fused" NAS and SAN configurations, providing the ability to work seamlessly in file and block environments. HP StorageWorks arrays include:
- StorageWorks MSA1000
- StorageWorks Enterprise Virtual Array (EVA)
- StorageWorks EMA/MA arrays
- StorageWorks XP

Select Hitachi and EMC models are also compatible. All StorageWorks models listed support Secure Path for multi-path functionality. All ProLiant and Integrity server blades support redundant FC SAN connections. With the exception of the ProLiant BL40p Server Blade, all BL p-Class server blades support dual-port FC adapter options. The ProLiant BL40p Server Blade has two 64-bit, 100-MHz PCI-X slots that enable redundant FC SAN connectivity using standard host bus adapter cards. The Integrity BL60p has dual embedded 2-Gb Fibre Channel connectors.

FC connectivity with ProLiant BL20p, BL25p, BL30p, BL35p, BL45p series and Integrity BL60p server blades

FC signals are routed from the configured server blade through the server blade enclosure backplane to the interconnect modules. Optical transceivers added to the interconnect modules provide connectivity to the external fabric. For FC SAN connectivity with ProLiant BL20p, BL25p, BL30p, BL35p, and BL45p series server blades, an interconnect kit option with FC pass-through capability is required. Both the ProLiant BL p-Class RJ-45 Patch Panel 2 and the ProLiant BL p-Class GbE2 Interconnect Switch with the GbE2 Storage Connectivity Kit option provide FC SAN pass-through capability. The dual-port FC adapter option kits each include two SFF transceivers with LC connectors. These SFF transceivers are installed in the RJ-45 Patch Panel 2 transceiver slots.
The SFF transceivers are universal and can be used with the RJ-45 Patch Panel 2, the GbE2 Interconnect Switch, or Cisco Gigabit Ethernet Switch Module (CGESM) (with GbE2 Storage Connectivity Kit). Refer to the QuickSpecs for your specific model of server blade to ensure that you are using the correct dual port FC adapter option kit. Each server blade model has a unique FC adapter option kit. For the server blade QuickSpecs, visit the HP website at http://www.hp.com/go/bladesystem.
Figure 12. BL20p G3 Dual Port FC Mezzanine Card (installed)
Figure 13. BL25p/BL45p Dual Port FC Adapter
Figure 14. The BL30p/BL35p Dual Port FC Adapter (installed)
FC connectivity with ProLiant BL40p Server Blades ProLiant BL40p Server Blades have two external PCI-X slots for use with standard FC HBAs. When configuring ProLiant BL40p Server Blades with FC HBAs, the FC signals are not routed through the signal backplane. Refer to the documentation included with the ProLiant BL40p Server Blade for details.
Figure 15. RJ-45 Patch Panel 2 installed in a server blade enclosure with ProLiant BL40p Server Blades and FC option
Table 11. Enclosure Components
- RJ-45 Patch Panel 2
- SFF transceiver
Specific requirements for attaching ProLiant BL30p and ProLiant BL35p Server Blade to FC SANs

The ProLiant BL30p/BL35p Dual Port FC Adapter Option is based on the QLogic ISP2312 chipset. This chipset carries forward all the features of the ProLiant BL20p, ProLiant BL25p, and ProLiant BL45p Dual Port FC Mezzanine Cards and is an industry-standard solution. The features of the Dual Port FC Adapter include:
- RDP scripted installation for Microsoft Windows and Linux
- Boot capability from SAN disk or LUN
- Blade bay to FC switch compatibility established by the server blade
- High availability through redundant paths

The ProLiant BL30p/BL35p FC Adapter has a different subvendor ID than the ProLiant BL20p, ProLiant BL25p, and ProLiant BL45p Dual Port FC Mezzanine Cards. Because the Windows driver is subvendor ID sensitive, a new backward-compatible driver was introduced with the ProLiant BL30p and ProLiant BL35p Server Blades. Linux drivers are not subvendor ID sensitive, so the currently available Linux drivers are compatible.
FC port aggregation is required to accommodate the increased number of server FC HBA ports and to maintain compatibility with the available enclosure backplane signals and interconnect ports. The p-Class sleeve aggregates the four paths from two ProLiant BL30p or ProLiant BL35p Server Blades into two physical paths. This innovative port aggregation technology enables up to 16 physical FC ports from the Patch Panel 2, GbE2 Interconnect Switch, or Cisco Gigabit Ethernet Switch Module to connect directly to the customer external FC SAN switch. ProLiant BL30p and ProLiant BL35p FC implementations require the FC SAN switch to support FC-AL public loop login. With few exceptions, notably McData core switches, most FC switches provide this support. All Brocade SAN switches and most Cisco SAN devices support this feature. NOTE: The FC LED on the Patch Panel 2 or GbE2 Interconnect Switch does not display a live link when using the enhanced server blade enclosure. Port activity information can be obtained from the FC SAN switch or by using QLogic SANsurfer Blade Management software.
HP BladeSystem p-Class Server Blade Enclosure
ProLiant BL p-Class server blades and network interconnects are housed in a 6U server blade enclosure. The blades slide into the blade enclosure backplanes for power and network connections. Each blade enclosure has eight server blade bays in the center of the enclosure and two interconnect bays, one at each end. The two interconnect bays are populated with either patch panel interconnects (for direct signal pass-through) or interconnect switches (for network cable reduction). The middle eight bays support server blades.

The two types of server blade enclosures are standard server blade enclosures and enhanced server blade enclosures. Some server blade models are supported only in enhanced server blade enclosures. For details, refer to the enclosure compatibility matrix at http://www.hp.com/go/bladesystem/enclosure/compatability

The enhanced server blade enclosure provides the following:
- A server blade management module that simplifies setup and management through a single physical iLO port for all installed server blades (up to 16:1 management cabling consolidation) and Static IP Bay Configuration for automated configuration of iLO addresses
- Support for all ProLiant BL30p and ProLiant BL35p Server Blades, as well as all current and future ProLiant BL20p, ProLiant BL25p, ProLiant BL40p, and ProLiant BL45p series server blades
- Support for all current and future network interconnect options, including the new CGESMs
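Static IP Bay Configuration automates iLO addressing by bay. Purely as an illustration of bay-indexed addressing (the base-address-plus-sequential-offset scheme below is an assumption for this sketch, not iLO's actual algorithm):

```python
import ipaddress

def ilo_bay_addresses(base_ip: str, bay_count: int = 8) -> dict:
    """Assign one iLO address per server blade bay, offset from a base.

    The real Static IP Bay Configuration is set through the enclosure's
    management module; the base address and per-bay offset used here are
    illustrative assumptions only.
    """
    base = ipaddress.IPv4Address(base_ip)
    # Bay 1 gets base+1, bay 2 gets base+2, and so on.
    return {bay: str(base + bay) for bay in range(1, bay_count + 1)}
```

With a base of 10.0.0.100, bays 1-8 would map to 10.0.0.101 through 10.0.0.108 under this scheme.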
HP BladeSystem p-Class power subsystem
HP provides two power subsystem alternatives to accommodate various environments and customer needs. These two options are enclosure-based power and rack-centralized power. All server blades, interconnect options, and management tools are fully compatible with either power subsystem.
The new HP BladeSystem p-Class 1U Power Enclosure provides hot-plug, fully redundant power for a single server blade enclosure containing any mix of server blades and interconnects. This option is ideal for small blade deployments such as in remote or branch offices or small and mid-size businesses.
Figure 25. HP BladeSystem 1U Power Enclosure
The HP BladeSystem p-Class 3U power subsystem provides hot-plug, fully redundant power for multiple enclosures that would typically be deployed in racks in a data center environment. Rack-centralized power provides an efficient way to support a large number of blades while eliminating power cables and the clutter and cost of PDUs associated with traditional power schemes. The 3U rack-centralized power subsystem includes the power supplies, the 3U power enclosure, and a power distribution option. The HP BladeSystem p-Class system can be powered from single-phase or three-phase AC power or from -48 VDC power sources.

Power supplies

The power supplies for the 3U solution convert 200-240 VAC to -48 VDC to power server blades and interconnect switches, and are housed in a 3U power enclosure. The power supplies are front-accessible, hot-pluggable, and can be configured redundantly. The power enclosures are rack-mounted below the server blade enclosures that they support.
HP offers two models of 3U power enclosure that are designed to meet installation power demand and redundancy requirements, depending on the number and type of server blades you plan to deploy:
- Single-phase HP BladeSystem 3U p-Class Power Enclosure (holds up to four power supplies)
- Three-phase HP BladeSystem 3U p-Class Power Enclosure (holds up to six power supplies)

Because the three-phase power enclosure holds up to six power supplies, it supports more server blades and interconnect switches than the single-phase power enclosure, and it is generally recommended for the HP BladeSystem p-Class solution. For more detailed information about the specific power enclosure options and power planning tools, refer to the HP BladeSystem p-Class website: http://www.hp.com/go/bladesystem
Integrated Lights-Out (iLO)

For ProLiant BL p-Class server blades, iLO provides advanced levels of remote manageability. This guide details iLO functions in various steps of initial configuration, as well as for common operational tasks. For more information, refer to http://www.hp.com/servers/ilo/

Array Configuration Utility (ACU)

ACU is used to set up local drive controllers and RAID environments for ProLiant BL20p, BL25p, BL40p, and BL45p Server Blades. ACU is also used with the HP StorageWorks Modular SAN Array 1000 (MSA1000) storage system to set up the SAN drive controller, RAID environment, and logical drives for connection to ProLiant BL server blades. This guide provides instructions for the use of this tool during initial server setup and ProLiant BL system SAN setup. ACU is located in the HP ProLiant Essentials Foundation Pack, shipped with HP BladeSystem enclosures, and is available for download at http://h18004.www1.hp.com/products/servers/proliantstorage/softwaremanagement/acumatrix/
Table 20: Key Management Components (continued)

HP BladeSystem Interconnect Switch configuration and management software

ProLiant BL Interconnect Switches provide both command-line and web-based interfaces for configuration and management of interconnect switches within server blade enclosures. For more information, refer to the HP BladeSystem p-Class GbE and GbE2 Interconnect Switch documentation. This software ships with HP BladeSystem p-Class Interconnect Switch Kits.
F5 Networks Big-IP Blade Controller software

The F5 Big-IP Blade Controller software provides load balancing and L3-L7 traffic management functions for the ProLiant BL system environment. Once installed on a server blade, the software converts the server blade into an F5 Big-IP appliance. The HP BladeSystem System Common Procedures Guide provides setup instructions and uses the F5 software in several common operational tasks. This optional software is available from F5 Networks, Inc. A license from F5 Networks is required.
HP BladeSystem p-Class operating system installation options
ProLiant BL20p, BL25p, BL30p, BL35p, and BL45p series server blades

The operating system for a server blade may be deployed using one of the following options:
- RDP, for installation of the operating system on one or many blades simultaneously from a centralized deployment console
- iLO with Advanced Pack features, which enables installation of an operating system using the iLO remote console and virtual floppy or virtual CD-ROM features
- Directly cabled KVM and removable media devices attached to a server blade using the local I/O cable (not supported on all server blade models)

Integrity BL60p Server Blades

The Integrity BL60p server blade supports only the HP-UX 11i operating system. There are management differences for this server. See http://www.hp.com/products1/servers/integrity/index.html for access to complete BL60p support documentation.
Smart Array RAID controllers
The Smart Array 6i Controller is a hardware-based, cost-effective RAID solution used in the ProLiant BL20p G3, ProLiant BL25p, and ProLiant BL45p Server Blades. It is an intelligent array controller for entry-level, hardware-based fault tolerance, with support for Ultra3 SCSI technology and an improved maximum data transfer rate of 160 MB/s per channel. Embedded into the server blade, the Smart Array 6i Controller provides worry-free data protection for all server blade internal storage needs.

The Smart Array 5i Controller is the RAID solution used in the ProLiant BL40p Server Blade. The Smart Array E200i Controller is a hardware-based, cost-effective RAID solution used in the ProLiant BL20p G4 Server Blade; it is an intelligent array controller for entry-level, hardware-based fault tolerance with support for SAS technology.

ProLiant BL20p G3, ProLiant BL20p G4, ProLiant BL25p, ProLiant BL40p, and ProLiant BL45p Server Blades support drive mirroring (RAID 1) and drive striping (RAID 0). In addition, the ProLiant BL40p Server Blade supports RAID 5. These server blades also offer a Battery-Backed Write Cache option to prevent data loss during power interruptions.

The Integrity BL60p supports disk mirroring via the HP-UX EOE operating environment. Smart Array RAID controllers are not supported in the BL60p.
For more information on the Smart Array 6i Controller, refer to http://h18006.www1.hp.com/products/servers/proliantstorage/arraycontrollers
HP BladeSystem p-Class Interconnect Switch Management
The HP BladeSystem p-Class GbE and GbE2 Interconnect Switches are industry-standard managed Ethernet switches that customers configure and manage in the same manner as other industry-standard Ethernet switches. To aid users during initial deployment, the interconnect switch includes a default configuration that is fully operational at initial boot. A web browser-based interface and a command-line interface with scripting capability are preinstalled in the switch firmware to configure, manage, and monitor the interconnect switches. Telnet access is also supported. Any combination of the switch ports can be disabled, enabled, configured, and monitored on a per-port basis. Out-of-band and in-band access to the switch management interfaces is supported locally and remotely from anywhere on the network. Administration of the pair of interconnect switches in the server blade enclosure is possible through any uplink port, the serial port, or the two Ethernet ports conveniently located on the front panel of each switch.

The interconnect switch supports industry-standard SNMP management information bases (MIBs), HP enterprise switch MIBs, and environmental traps. The SNMP agents are preinstalled in the interconnect switch firmware. This capability allows the interconnect switch to be monitored remotely from an SNMP network management station such as HP SIM or HP OpenView. The interconnect switch may also be configured through HP OpenView Network Node Manager.

For rapid deployment of one to many interconnect switches, RDP for Windows includes server-side scripting. With server-side scripting, interconnect switch scripts can be integrated into an RDP for Windows job to deploy both server blades and switches. This is ideal for using RDP for Windows to deploy a server blade and then configure the associated switch VLANs, although any scriptable interconnect switch parameter can be integrated.
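A switch script embedded in an RDP job is ultimately just an ordered list of CLI commands. The sketch below builds such a list for a set of VLANs; the command syntax is a generic, Cisco-like placeholder rather than the actual GbE/GbE2 CLI, and the uplink port naming is assumed:

```python
def vlan_config_script(vlans: dict, uplink_port: str = "1") -> list:
    """Build a list of CLI commands to create VLANs on an interconnect
    switch, suitable for embedding in an RDP server-side scripting job.

    vlans maps VLAN ID -> VLAN name. The command syntax is a generic
    sketch; consult the GbE/GbE2 switch CLI reference for real commands.
    """
    cmds = ["configure terminal"]
    for vid, name in vlans.items():
        cmds += [f"vlan {vid}", f" name {name}", "exit"]
    # Trunk the assumed uplink so all configured VLANs pass upstream.
    cmds += [f"interface {uplink_port}", " switchport mode trunk", "end"]
    return cmds
```

The resulting list can be written to a file and pushed to each switch as part of the same job that deploys the blade's operating system.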
The interconnect switch supports trivial file transfer protocol (TFTP) allowing a copy of the interconnect switch configuration file to be saved and downloaded either to the original switch or to a different interconnect switch. This provides another method to rapidly deploy multiple systems with similar configurations and to provide backup and restore capabilities. Configuration settings may be modified through the user interfaces or directly within the configuration file. The configuration file has a text-based format, which allows it to be directly viewed, printed, and edited. Users with Windows or Linux-based deployment stations can perform interconnect switch firmware upgrades by using TFTP through the Ethernet port after boot-up, and by using ZModem (for GbE Interconnect Switch) or XModem (for GbE2 Interconnect Switch) through the serial interface during boot-up. The interconnect switch simplifies system upgrades by retaining its configuration after a firmware upgrade and by supporting the HP Support Paq automated firmware upgrade process for Windows deployment stations. For more information about HP BladeSystem p-Class Interconnect Switches, refer to http://h18000.www1.hp.com/products/servers/proliant-bl/p-class/bl-p-interconnect-switch.html
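The configuration backup and restore described above rides on TFTP's very simple wire format (RFC 1350). As a minimal illustration, this sketch builds the read or write request packet that starts a transfer; a real client would follow it with the DATA/ACK exchange over UDP port 69:

```python
import struct

def tftp_request(filename: str, write: bool = False,
                 mode: str = "octet") -> bytes:
    """Build a TFTP read (RRQ) or write (WRQ) request packet per RFC 1350.

    Packet layout: 2-byte opcode, filename, NUL, transfer mode, NUL.
    Opcode 1 = RRQ (e.g. download a switch config), 2 = WRQ (upload one).
    This shows only the wire format, not a full transfer.
    """
    opcode = 2 if write else 1
    return (struct.pack("!H", opcode)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")
```

For example, `tftp_request("switch.cfg")` yields the RRQ a TFTP client would send to fetch a saved configuration file named `switch.cfg`.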
Planning for a HP BladeSystem p-Class installation
The HP BladeSystem p-Class Sizing Utility is a free, flexible, graphical tool that provides valuable information necessary to help plan and prepare a site for delivery and installation of HP BladeSystem p-Class solutions and order the necessary components for the installation. Site planning information, such as power requirements and environmental specifications, is generated based on user-defined system configuration criteria. Simply configure each server blade and blade enclosure with appropriate options, choose interconnects for each server blade enclosure, and enter data center power information.
HP BladeSystem p-Class Sizing Utility
Figure 30. HP BladeSystem p-Class Sizing Utility
Once configuration information is entered, the tool calculates:
- Power specifications
- Heat generation and cooling requirements
- A summary table of server blade components in the rack (server blades, memory, processors, etc.)
- The number of power supplies and power enclosures needed for the configuration entered
- System weight
- An equipment list (refer to Figure 31)
Figure 31. HP BladeSystem Equipment List Output Example
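The power and cooling outputs above can be approximated with a few lines of arithmetic. In this sketch the per-blade wattages and per-supply capacity are placeholder assumptions (take real figures from the Sizing Utility and Figure 32); the cooling load uses the standard conversion of roughly 3.412 BTU/hr per watt, and N+N supply redundancy is assumed:

```python
import math

def sizing_summary(blade_watts, supply_capacity_w=2000.0, redundant=True):
    """Rough power/cooling estimate in the spirit of the Sizing Utility.

    blade_watts: iterable of per-blade draws in watts (assumed figures).
    supply_capacity_w: one supply's output at the local input voltage
    (assumed; see Figure 32 for actual values).
    Returns (total watts, cooling load in BTU/hr, supply count).
    """
    total_w = sum(blade_watts)
    btu_hr = total_w * 3.412          # 1 W of load ~= 3.412 BTU/hr of heat
    supplies = math.ceil(total_w / supply_capacity_w)
    if redundant:
        supplies *= 2                 # assumed N+N redundancy scheme
    return total_w, btu_hr, supplies
```

This is a planning sketch only; the Sizing Utility also accounts for enclosure overhead, interconnects, and input-voltage derating.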
The Equipment List can be copied to an Excel worksheet or Word document: select the table with the mouse, copy it using the Copy command from the Edit menu, then paste it into the destination document.
ProLiant BL20p server blade with ONE Pentium III P1400-512K, 512 MB RAM, no drives
ProLiant BL40p server blade with ONE Xeon MP 1.5 GHz-1 MB cache, 512 MB RAM (2 x 256 MB), no drives — 293461-B21
ProLiant BL40p server blade with TWO Xeon MP 2.0 GHz-2 MB cache, 1 GB RAM (2 x 512 MB), no drives (not available in NA) — 293462-B21
ProLiant BL20p G2 server blade with ONE Xeon 2.8 GHz, 512 MB RAM (2 x 256 MB), no drives — 300876-B21
ProLiant BL20p G2 server blade with TWO Xeon 2.8 GHz, 1 GB RAM (2 x 512 MB), no drives (not available in NA) — 300877-B21
ProLiant BL20p G2 server blade with ONE Xeon 2.8 GHz, 512 MB RAM (2 x 256 MB), no drives, with FC Mezzanine Card — 300980-B21
ProLiant BL20p G2 server blade with TWO Xeon 2.8 GHz, 1 GB RAM (2 x 512 MB), no drives, with FC Mezzanine Card (not available in NA) — 300981-B21
Additional 2 GB (2 x 1 GB) Memory Kit(s) — 201695-B21
Additional DDR 512 MB (2 x 256 MB) Memory Kit(s) — 300678-B21
Additional DDR 1 GB (2 x 512 MB) Memory Kit(s) — 300679-B21
Additional DDR 2 GB (2 x 1 GB) Memory Kit(s) — 300680-B21
Additional DDR 4 GB (2 x 2 GB) Memory Kit(s) — 300682-B21
Additional Pentium III 1.4 GHz — 234277-B21
Additional Xeon MP 1.5 GHz — 309330-B21
For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/go/bladesystem/sizingutility
Required input power
The HP BladeSystem p-Class 3U power subsystem has specific AC power requirements; however, DC power may be used instead if the facility has DC power available. When using an AC power source, the HP BladeSystem requires single-phase (1U or 3U power enclosures) or three-phase (3U power enclosures) 200-240 VAC input power. The 3U power enclosure requires two 30 A power sources. The capacity of an HP BladeSystem p-Class power supply varies with the voltage level of the local AC power source. Maximum capacity can be achieved only with a nominally rated 240 V power source; lower voltages may result in lower server blade capacity. The HP BladeSystem p-Class Sizing Utility takes this into account. Refer to Figure 32 for the maximum output power capacity at input voltages between 200 and 240 VAC.
Figure 32: Maximum output power capacity
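One way to model the voltage dependence shown in Figure 32 is linear interpolation between a supply's 200 V and 240 V output ratings. The two capacity figures below are placeholders, not HP specifications, and the true curve may not be perfectly linear:

```python
def supply_capacity(vac: float,
                    cap_200: float = 1750.0,
                    cap_240: float = 2100.0) -> float:
    """Estimate a power supply's output capacity at a given input voltage
    by linear interpolation between its 200 V and 240 V ratings.

    cap_200/cap_240 are placeholder assumptions; substitute the actual
    values from Figure 32. Inputs outside 200-240 VAC are rejected.
    """
    if not 200 <= vac <= 240:
        raise ValueError("input must be 200-240 VAC")
    frac = (vac - 200) / 40.0          # position within the 200-240 V span
    return cap_200 + frac * (cap_240 - cap_200)
```

At 220 VAC, this model lands exactly halfway between the two ratings, which matches the observation that lower input voltages yield lower usable blade capacity.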
For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/go/bladesystem/sizingutility

Facility DC power connection

Facility DC power requires a Facility DC Power Connection Option Kit to distribute the current through the mini or scalable bus bars to the server blade enclosures. The HP BladeSystem p-Class system requires -48 VDC with no more than ±10% voltage variance. If facility DC power is used, power supplies and power enclosures are not needed for operation, because the DC Power Connection Option Kit provides power through a direct connection to the bus bars.

Power phases and 3U power supply enclosures

The HP BladeSystem p-Class solution is designed for AC input power from either single-phase or three-phase power sources. Three-phase power supports maximum-density configurations and is highly recommended. Geography and the number of AC phases dictate the appropriate model of 3U power enclosure for the data center. Each model of 3U power enclosure uses a different connector, as detailed in the following chart. Additionally, each 3U power enclosure requires two separate 30 A power feeds.
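The facility DC requirement of -48 VDC with no more than ±10% variance translates into an acceptance window of roughly 43.2 V to 52.8 V in magnitude. A trivial check, as a sketch:

```python
def dc_input_ok(volts: float, nominal: float = 48.0,
                tolerance: float = 0.10) -> bool:
    """Check a facility DC feed against the -48 VDC +/-10% requirement.

    Operates on the magnitude of the feed, so a -48 V (negative-rail)
    supply is passed as either 48.0 or -48.0.
    """
    lo = nominal * (1 - tolerance)   # ~43.2 V
    hi = nominal * (1 + tolerance)   # ~52.8 V
    return lo <= abs(volts) <= hi
```

A feed measuring -43.5 V passes; one sagging to -42 V or surging past -53 V is out of tolerance.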
Figure 33. 3U power enclosure connectors
AC connectors for the 3U power enclosure Table 21 shows the four models of power enclosure connectors that are available.
The optional GbE2 Storage Connectivity Kit provides pass-through of ProLiant BL20p, BL25p, BL30p, BL35p, and BL45p FC signals. The kit includes two OctalFC interconnect modules, each with eight SFP slots. The SFP transceivers with LC connectors shipped with each BL20p G3, BL25p, or BL45p Dual Port FC Mezzanine Card are installed in the OctalFC SFP slots. Two optical cables with LC connectors are required for each ProLiant BL20p, BL25p, or BL45p Server Blade with the Dual Port FC Mezzanine Card installed, or for each pair of ProLiant BL30p or BL35p Server Blades with the Dual-Port FC Adapter installed. LC-to-SC optical connector converters can be used if SC connectors are preferred. For more specific information on the ProLiant BL30p or BL35p Server Blades and FC, see Specific requirements for attaching ProLiant BL30p and ProLiant BL35p Server Blades to FC SANs in this document. Table 23 lists the Ethernet and FC cable requirements.
Table 23. Ethernet and FC cable requirements

RJ-45 Patch Panel
Ethernet: 1 to 3 cables per ProLiant BL20p, BL25p, or BL45p Server Blade; 1 to 2 cables per ProLiant BL30p or BL35p Server Blade; 1 to 4 cables per ProLiant BL40p Server Blade; 1 cable for the centralized server blade management module on enclosures with enhanced backplane components
FC: N/A

RJ-45 Patch Panel 2
Ethernet: same as RJ-45 Patch Panel
FC: 2 optical cables with LC connectors for each ProLiant BL20p or BL25p Server Blade with the Dual Port FC Mezzanine Card; 2 optical cables with LC connectors for each pair of ProLiant BL30p or BL35p Server Blades with the Dual-Port FC Adapter

C-GbE Interconnect Kit
Ethernet: 1 to 12 cables per server blade enclosure; 1 cable for the centralized server blade management module on enclosures with enhanced backplane components
FC: N/A

F-GbE Interconnect Kit
Ethernet: 1 to 4 optical cables with LC connectors per server blade enclosure; 1 to 8 cables per server blade enclosure; 1 cable for the centralized server blade management module on enclosures with enhanced backplane components
FC: N/A

C-GbE2 Interconnect Kit
Ethernet: 1 to 12 cables per server blade enclosure; 1 cable for the centralized server blade management module on enclosures with enhanced backplane components
FC: requires the Storage Connectivity Kit

F-GbE2 Interconnect Kit
Ethernet: 1 to 8 optical cables with LC connectors per server blade enclosure; 1 to 4 cables per server blade enclosure; 1 cable for the centralized server blade management module on enclosures with enhanced backplane components
FC: requires the Storage Connectivity Kit

Storage Connectivity Kit
Ethernet: N/A
FC: same as RJ-45 Patch Panel 2
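As a planning aid, the per-blade Ethernet cable counts from Table 23 for the RJ-45 Patch Panel option can be turned into a small calculator. This is a sketch based on the table above, not a replacement for the HP Sizing Utility; the dictionary keys are shorthand for the blade series names.

```python
# Per-blade Ethernet cable ranges (min, max) for the RJ-45 Patch Panel,
# taken from Table 23. Illustrative planning helper only.
ETH_CABLES_PER_BLADE = {
    "BL20p": (1, 3), "BL25p": (1, 3), "BL45p": (1, 3),
    "BL30p": (1, 2), "BL35p": (1, 2),
    "BL40p": (1, 4),
}

def patch_panel_cable_range(blades, enhanced_backplane=True):
    """Return (min, max) Ethernet cables for a dict of {series: count}."""
    lo = sum(ETH_CABLES_PER_BLADE[s][0] * n for s, n in blades.items())
    hi = sum(ETH_CABLES_PER_BLADE[s][1] * n for s, n in blades.items())
    # One extra cable for the centralized management module on enclosures
    # with enhanced backplane components.
    mgmt = 1 if enhanced_backplane else 0
    return lo + mgmt, hi + mgmt

print(patch_panel_cable_range({"BL20p": 8}))  # (9, 25)
```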
Configuring server blade options
For accurate site planning, server blade option entries should be the aggregate total number of all options to be installed in the server blades over the life of the installation. Options such as processors and hard drives can have a significant effect on power consumption, heat generation, and system weight. For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/go/bladesystem/sizingutility
HP BladeSystem server blade enclosures
The number of server blades determines the required quantity of enclosures. Each enclosure has ten slots; two of these are reserved for interconnects and eight are designated for server blades. Table 24 shows the capacity of a server blade enclosure:
Table 24. Capacity of a server blade enclosure

Server Blade Series | Maximum Number of Server Blades per Enclosure
ProLiant BL20p, ProLiant BL25p, and Integrity BL60p series | 8
ProLiant BL30p and ProLiant BL35p series | 16
ProLiant BL40p series | 2
ProLiant BL45p series | 4
To plan for future growth, additional server blade enclosures can be installed in advance, enabling rapid server deployments as needed. The HP BladeSystem p-Class Sizing Utility summary page indicates the appropriate number of enclosures and the required power for the configuration specified by the customer in the tool. For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/go/bladesystem/sizingutility
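The enclosure arithmetic described above is straightforward to sketch. The per-enclosure capacities below are assumptions consistent with the eight blade bays per enclosure noted earlier (half-height BL30p/BL35p blades mount two per bay via a sleeve, while wider blades occupy multiple bays); confirm final counts with the Sizing Utility.

```python
import math

# Assumed per-enclosure blade capacities (see Table 24); treat as
# illustrative planning values, not authoritative specifications.
BLADES_PER_ENCLOSURE = {
    "BL20p/BL25p/BL60p": 8,   # one bay each
    "BL30p/BL35p": 16,        # two half-height blades per sleeve
    "BL40p": 2,
    "BL45p": 4,
}

def enclosures_needed(series, blade_count):
    """Round up: a partially filled enclosure still counts as one."""
    return math.ceil(blade_count / BLADES_PER_ENCLOSURE[series])

print(enclosures_needed("BL20p/BL25p/BL60p", 20))  # 3
```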
3U power distribution
The HP BladeSystem p-Class Sizing Utility suggests the optimum power distribution method for both redundant and non-redundant power configurations. For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/products/servers/proliant-bl/p-class/info
The HP BladeSystem p-Class Sizing Utility provides environmental load estimates (total DC and AC power consumption, generated heat in BTU, weight and floor space requirements) based on the configuration. This information can be useful when planning and managing the data center environment. For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/go/bladesystem/sizingutility
The installation of this equipment shall be in accordance with local and regional electrical regulations governing the installation of information technology equipment by licensed electricians. This equipment is designed to operate in installations covered by NFPA 70, 1999 Edition (National Electrical Code) and NFPA 75, 1992 Edition (Code for the Protection of Electronic Computer/Data Processing Equipment). For electrical power ratings on options, refer to the rating label of the product or the user documentation supplied with that option. When installing the HP BladeSystem, observe the following guidelines: the power load must be balanced between available supply branch circuits, and the overall system current load must not exceed 80 percent of the branch circuit current rating. For DC systems, HP BladeSystem p-Class 3U rack-centralized solutions run on 48 VDC ±10%. When power supplies are included in the HP BladeSystem p-Class solution, they require 230 VAC (international) or 208 VAC (US).
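The 80 percent branch-circuit rule above can be expressed as a simple check. The amperages in the example are illustrative, and any real installation must be validated by a licensed electrician.

```python
# Simple expression of the branch-circuit guideline: continuous load
# must not exceed 80% of the branch circuit current rating.
def branch_circuit_ok(load_amps, breaker_rating_amps):
    """True if the load stays within 80% of the branch rating."""
    return load_amps <= 0.80 * breaker_rating_amps

assert branch_circuit_ok(24, 30)       # 24 A on a 30 A feed: at the limit
assert not branch_circuit_ok(25, 30)   # exceeds the 80% ceiling
```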
Cooling and airflow
The server blades use front-to-back ambient airflow for cooling. Therefore, the front rack door must be adequately ventilated to allow ambient room air to enter the cabinet, and the rear door must be adequately ventilated to allow the warm air to escape from the cabinet. When server blades or rack components do not fill the entire vertical space in the rack, the gaps between the components change the airflow through the rack and across the server blades. Cover all gaps in the rack with blanking panels, and fill all open bays in the server blade enclosure with blanks, to maintain proper airflow. HP 10000 and Compaq 9000 Series Racks provide proper server blade cooling through flow-through perforations in the front and rear doors that provide 65% open area for ventilation.
Data on the dimensions and weights of HP BladeSystem p-Class components can be found in the HP BladeSystem p-Class System Maintenance and Service Guide. The same data can be determined by using the online HP BladeSystem p-Class Sizing Utility. In general, the raised floor must be capable of withstanding a uniform load of 1,220 kg/m² (250 lb/ft²) or a load of 454 kg (1,000 lb) on any 6.5 cm² (1.0 in²) surface, with a maximum deflection of 2.5 mm (0.1 in). For more information about the HP BladeSystem p-Class Sizing Utility, refer to http://www.hp.com/go/bladesystem/sizingutility
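The uniform floor-load figure quoted above lends itself to a quick sanity check. The rack weight and footprint in the example are hypothetical, not HP data.

```python
# Floor-load sanity check against the uniform-load limit quoted above
# (1,220 kg/m^2). Rack weight and footprint below are illustrative.
UNIFORM_LIMIT_KG_PER_M2 = 1220

def floor_load_ok(total_weight_kg, footprint_m2):
    return total_weight_kg / footprint_m2 <= UNIFORM_LIMIT_KG_PER_M2

# A hypothetical 850 kg loaded rack on a 0.6 m x 1.2 m footprint:
print(floor_load_ok(850, 0.6 * 1.2))  # True
```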
Total floor space
To enable servicing and adequate airflow, observe the following spatial requirements when deciding where to install an HP, Compaq, telco, or third-party rack:
Leave a minimum clearance of 63.5 cm (25 in) in front of the rack.
Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack or row of racks.
System installation planning guides
When planning for an HP BladeSystem installation and setup, HP recommends that you reference the HP BladeSystem System Common Procedures Guide and the HP BladeSystem System Best Practices Guide. These guides provide critical information, including best practices, helpful hints, and suggestions for:
Setting up and configuring an HP BladeSystem, including server blade enclosures, the power subsystem, networks, server blades, and storage connectivity
Setting up and configuring the ProLiant tools needed for common system management tasks, such as deployment, configuration, and monitoring
Planning and building the management environment
Planning and building the blade system environment
Building the HP BladeSystem infrastructure
Configuring the enclosure IP address
Configuring the blade network environment
Configuring the switches and VLANs
Setting up and configuring the SAN environment (optional)
Setting up and configuring the first server blade
Installing the operating system
Installing and setting up additional server blades
Configuring the connection for each server blade to SAN virtual drives (optional)
For more information on site planning, or to view the guides referenced in this topic, refer to the HP website, http://www.hp.com/go/bladesystem
ProLiant BL p-Class GbE2 Interconnect Switch Compatibility with Cisco-based Networks
Abstract..... 2 Introduction..... 2 Terminology..... 3 Same technology, different form factor.... 3 VLANs and VLAN tagging.... 4 Spanning tree..... 4 Multi-link trunking..... 5 Security..... 6 Management..... 6 Port mirroring.... 7 Multicast traffic.... 7 Network time.... 7 Conclusion..... 7 For more information.... 8
This white paper describes the interoperability of the ProLiant BL p-Class GbE2 Interconnect Switch with Cisco-based Ethernet networks consisting of Catalyst switches. This document is not intended to be a guide for deploying a GbE2 Interconnect Switch within a Cisco-based network; for that information, see the Deploying the ProLiant BL p-Class GbE2 Interconnect Switch into a Cisco-based Network white paper [1]. The intended audience for this paper includes engineers and system administrators familiar with the HP ProLiant BL p-Class system. For readers not familiar with the HP ProLiant BL p-Class system, more information is available at http://h18004.www1.hp.com/products/servers/platforms/index-bl.html. For general information about the p-Class GbE2 Interconnect Switch options, see the ProLiant BL p-Class GbE2 Interconnect Switch Overview white paper [2].
The ProLiant BL p-Class system consists of ProLiant BL server blades, the 6U (10.5 inch) BL p-Class server blade enclosure, network and power infrastructure components, and software that enables adaptive computing optimized for rapid deployment. The p-Class server blade enclosure holds the server blades and two interconnects. Each server contains multiple network interface controllers (NICs). The enclosure has a signal backplane that routes the server blade NIC signals to the interconnects in a redundant, highly available architecture. HP offers a family of interconnect options for a choice of how the Ethernet, as well as Fibre Channel, signals exit the server blade enclosure. Available interconnects include two patch panel pass-through kits and two integrated Ethernet switch kits (interconnect switch kits). The two patch panel options allow all Ethernet network signals to pass through to third-party LAN devices, thus giving customers flexibility in choosing their own switches. The interconnect switch kits provide up to 32-to-1 Ethernet cable consolidation reducing the time to deploy and manage ProLiant BL p-Class systems. The ProLiant BL GbE2 Interconnect Switch is the newest interconnect option available for p-Class systems. 
The ProLiant BL GbE2 Interconnect Switch is an industry-standard, 24-port, all-Gigabit Ethernet switch intended for:
Applications that require up to 1000 megabits per second (Mb/s)
NIC consolidation
Connectivity to copper-based 10/100/1000T or fiber-based 1000SX Ethernet networks
Fibre Channel storage signal pass-through for the ProLiant BL20p and BL30p series servers
Advanced network feature support (including planned future options for layers 3 through 7)
Planned future upgradeability to 10 Gigabit Ethernet bandwidth connectivity to the network
For more information about the GbE2 Interconnect Switch, see the ProLiant BL p-Class GbE2 Interconnect Switch Overview white paper [1]. In a typical application, the GbE2 Interconnect Switches act as a redundant access switch layer that is in turn connected to the core network, often consisting of Catalyst switches from Cisco Systems (Cisco). This white paper identifies the ProLiant BL p-Class GbE2 Interconnect Switch interoperability within a Cisco-based Catalyst switch Ethernet network. Topics discussed include VLANs, spanning tree, multi-link trunking, security, management, and more.
1. Available at http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-p-interconnect-switch2.html.
2. Available at http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-p-interconnect-switch2.html.
Terminology used in this document that differs between Cisco Catalyst switches and the GbE2 Interconnect Switch is identified in Table 1.
Table 1. Network terminology cross-reference

HP ProLiant GbE2 Interconnect Switch | Cisco Catalyst switches
VLAN tagging, 802.1Q tagging | trunking, VLAN or 802.1Q encapsulation
port VLAN identification (PVID) | VLAN identification (VLAN ID)
link aggregation, multi-link trunking (MLT) | EtherChannel, channeling
spanning tree protocol group (STG) | spanning tree instance
IEEE 802.1s, multiple spanning tree | per-VLAN spanning tree (PVST), PVST+
port mirroring | SPAN, RSPAN
Same technology, different form factor
In a typical tiered server network configuration designed with redundancy, two or more network interface controllers (NIC) are used per server. The Ethernet signals from these NICs are routed to two separate access switches that are in turn connected to the core network. One or more crosslink connections are commonly made between the access switches for added availability. The access switch downlink ports are used to collect NIC signals from the servers for aggregation to the network backbone via one or more uplink ports. The GbE2 Interconnect Switch and p-Class blade architecture uses the same technology to provide this function, but in a different form factor (Figure 1).
Figure 1. Typical redundant network configuration
Layer 2 Switch
ProLiant BL server blade enclosure with interconnect switch
The access switches and connections have been moved inside the BL p-Class server blade enclosure. The GbE2 Interconnect Switches become the access switch layer that is in turn connected to the core switch layer. The same network technology is used, and the tiered network configuration remains unchanged. Because the interconnect switch is an industry-standard managed layer 2 switch, it is compatible with other industry-standard switches, including Catalyst switches from Cisco. The remainder of this paper discusses GbE2 Interconnect Switch interoperability with Cisco Catalyst switches in the following areas:
VLANs and VLAN tagging (VLAN trunking)
spanning tree
multi-link trunking (EtherChannel)
security
management
port mirroring (SPAN, RSPAN)
multicast traffic
VLANs and VLAN tagging
Each GbE2 Interconnect Switch provides 255 port-based IEEE 802.1Q virtual local area networks (VLANs), compatible with Catalyst switches that support this industry standard. Both the Catalyst switches and the GbE2 Interconnect Switches use VLAN 1 as the default VLAN, which permits immediate out-of-the-box passing of Ethernet traffic. To create VLANs across the network, the GbE2 Interconnect Switch supports the IEEE 802.3ac VLAN Ethernet frame extensions for 802.1Q tagging [3]. Each switch port may be individually configured as tagged or untagged. Therefore, GbE2 Interconnect Switch VLANs may span Cisco switches that support the 802.1Q tagging methodology. Although Cisco typically refers to 802.1Q VLAN tagging as VLAN trunking or dot1q trunking, the technologies are the same and, therefore, completely interoperable. The key is to ensure that ports on both ends of the tagged link (or dot1q trunk) are assigned to the same VLANs. Cisco's proprietary Inter-Switch Link (ISL) is an alternative VLAN tagging method that predates the IEEE 802.1Q standard. The GbE2 Interconnect Switch does not support ISL. Cisco recommends that new implementations follow the IEEE 802.1Q standard and that older networks gradually migrate from ISL, to allow multi-vendor interoperability, greater field exposure, greater third-party support, and, to a lesser degree, 802.1Q's lower encapsulation overhead [4]. Lastly, the GbE2 Interconnect Switch cannot be used as a participating node with Cisco's VLAN Trunk Protocol (VTP); however, the interconnect switch may operate in VTP transparent mode to forward VTP information.
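As an illustration, the Catalyst side of an 802.1Q tagged link to a GbE2 uplink port might look like the following IOS configuration fragment. The interface name and VLAN IDs are hypothetical examples; the essential point is that the same VLANs must be configured as tagged on the corresponding GbE2 port.

```
! Hypothetical Catalyst (IOS) side of an 802.1Q tagged link to a GbE2 uplink.
! Interface name and VLAN IDs are examples only.
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 1,10,20
```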
Spanning tree

Spanning tree is enabled by default on the GbE2 Interconnect Switch to ensure that any existing layer 2 loops in the network are blocked. The GbE2 Interconnect Switch meets the IEEE 802.1D standard and is compatible with Cisco switches that are 802.1D compliant. Bridge priorities, port costs, and port priorities may be manually assigned on the GbE2 Interconnect Switch, which allows the core or other Catalyst switches to be the root bridge.
The IEEE 802.3 standards have been merged into a single standard defined as IEEE 802.3-2002. IEEE 802.3-2002, section 3.5 (Elements of the Tagged MAC Frame) now contains the specifications previously defined in IEEE 802.3ac. Best Practices for Catalyst 4000, 5000, and 6000 Series Switch Configuration and Management, Cisco Systems, Document 13414, October 1, 2003; available at http://www.cisco.com/en/US/products/hw/switches/ps663/products_tech_note09186a0080094713.shtml.
The GbE2 Interconnect Switch further provides interoperability with Cisco's Per-VLAN Spanning Tree Plus (PVST+), a proprietary protocol based on 802.1Q tagging, via the use of spanning tree groups (STGs). In the GbE2 implementation, an administrator creates an STG and then assigns a VLAN to it. This differs from the Cisco implementation, where an administrator creates a VLAN and a spanning tree instance (that is, an STG) is automatically created and assigned to the VLAN. The PVST+ interoperability feature on the GbE2 Interconnect Switch includes the following: Tagged ports may belong to more than one STG, but untagged ports can belong to only one STG. When a tagged port belongs to more than one STG, egress BPDUs are tagged to identify their STG membership. An untagged port cannot span multiple STGs. Sixteen STGs operating simultaneously are supported per GbE2 Interconnect Switch. The default STG 1 can hold multiple VLANs; all other STGs (groups 2-16) can hold one VLAN each. The GbE2 Interconnect Switch provides two methods to interoperate with PVST+: 1. All GbE2 Interconnect Switch VLANs configured on the ports connected to the Catalyst switches may be added to the default STG (STG 1). 2. A unique GbE2 Interconnect Switch STG may be created for each of the configured VLANs connecting to the Catalyst switches. For rapid spanning tree convergence, many Catalyst switches support Cisco's proprietary PortFast, UplinkFast, and BackboneFast features, as well as the industry-standard IEEE 802.1w. The 802.1w extension is an enhancement to the original 802.1D standard. As noted by Cisco, 802.1w provides convergence time improvements similar to the Cisco methods, with the added benefit of interoperability between vendors. Support for the 802.1w standard is planned for a future GbE2 Interconnect Switch software release. In the meantime, the GbE2 Interconnect Switch does allow spanning tree to be disabled on a per-switch or per-port basis.
This capability is ideal for networks designed without loops or individual switch ports connected to server blades or other devices where a loop does not exist.
Multi-link trunking

Multi-link trunking (MLT), also known as link aggregation, port trunking, and (in Cisco terms) EtherChannel, combines multiple physical switch ports into a single logical port called a trunk. The bandwidth of the trunk is the aggregate of the bandwidth of the individual links. The industry standard for multi-link trunking is IEEE 802.3ad [5]. Cisco has developed a similar multi-link trunking method known as EtherChannel. The GbE2 Interconnect Switch supports IEEE 802.3ad (802.3-2002) without LACP [6], which is compatible with EtherChannel. The GbE2 Interconnect Switch interoperates with both Fast EtherChannel, providing link aggregation for Fast Ethernet (100 Mb/s) ports, and Gigabit EtherChannel, which aggregates Gigabit Ethernet (1000 Mb/s) links. The GbE2 Interconnect Switch supports twelve trunks per switch. Each trunk may contain two to six ports, providing up to 12 Gb/s of aggregate full-duplex throughput (six 1 Gb/s links in each direction). An algorithm automatically applies load balancing to the ports in the trunk. A port failure within the group causes the network traffic to be redirected to the remaining ports, and load balancing is restored whenever a link in a trunk is lost or returned to service. This provides flexible and scalable bandwidth, with resiliency and load sharing across the links, between the GbE2 Interconnect Switch and Cisco
The IEEE 802.3 standards have been merged into a single standard defined as IEEE 802.3-2002. IEEE 802.3-2002, section 43 (Link Aggregation) defines the standards previously specified in IEEE 802.3ad. Link Aggregation Control Protocol (LACP) is an enhancement over EtherChannel and other static multi-link trunking methods. LACP dynamically learns link status and decides which links to use for load balancing and failover in case of link failure. As a result, IEEE 802.3ad with LACP is often called dynamic trunking.
devices. Varying methods are used to make load balancing decisions. Catalyst switches may use the packet's source MAC (SMAC) address, destination MAC (DMAC) address, source IP (SIP) address, destination IP (DIP) address, or a combination of these. The GbE2 Interconnect Switch uses a combination of the SMAC and DMAC addresses to make the load balancing decision.
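On the Catalyst side, a static EtherChannel toward a pair of GbE2 uplinks might be sketched as follows. Port names and the channel-group number are hypothetical; "mode on" (static) is used because the GbE2 implements 802.3ad without LACP, and the load-balance method shown matches the GbE2's SMAC/DMAC hashing on platforms that support that option.

```
! Hypothetical Catalyst (IOS) static EtherChannel toward GbE2 uplinks.
! "mode on" is required: the GbE2 does not negotiate via LACP or PAgP.
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on
! Optional: align hashing with the GbE2's SMAC+DMAC method.
port-channel load-balance src-dst-mac
```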
Security

The GbE2 Interconnect Switch supports a remote authentication dial-in user service (RADIUS) client, communicating with the network RADIUS server to authenticate and authorize a remote administrator using the protocol definitions specified in RFC 2138 and RFC 2866. The GbE2 Interconnect Switch will integrate into an existing Cisco network that uses this industry-standard authentication and authorization protocol. As on the Catalyst switches, the RADIUS configuration on the GbE2 Interconnect Switch requires the user to specify the IP address of the RADIUS server and the RADIUS secret. For enhanced security, the GbE2 Interconnect Switch permits modification of the RADIUS application port, user-configurable RADIUS server retry and time-out values, and support for SecurID if the RADIUS server can perform an ACE/Server client proxy. Both a primary and a secondary RADIUS server may be configured. The industry-standard RADIUS protocol is an alternative to Cisco's proprietary Terminal Access Controller Access Control System Plus (TACACS+) method. Unfortunately, RADIUS and TACACS+ are not compatible. TACACS+ interoperability is planned for a future GbE2 Interconnect Switch firmware upgrade.
Management

The operating system (OS) of the GbE2 Interconnect Switch provides multiple industry-standard methods to easily configure and manage the switch. As with many Catalyst switches, the GbE2 Interconnect Switch can store redundant OS images and configuration files in memory. The GbE2 Interconnect Switch may be managed and configured via: 1. Command line interface (CLI) 2. Browser-based interface (BBI) 3. Simple Network Management Protocol (SNMP) The GbE2 Interconnect Switch CLI is a hierarchical menu/command-based hybrid interface with a Linux/UNIX-type look and feel. The hybrid approach lets new users see the available parameters for each command and walks them through command parameters one by one, while allowing advanced users to perform command stacking and abbreviation similar to Cisco devices. Industry-standard scripting capabilities are supported for simplified configuration management and switch deployment. The web console, or BBI, can be used via Internet Explorer or Netscape Navigator over a TCP/IP network, so access is possible throughout the Cisco-based network. Like the CLI, the BBI provides the ability to view and alter GbE2 Interconnect Switch information and settings. The GbE2 Interconnect Switch supports industry-standard SNMP management information bases (MIBs), HP enterprise switch MIBs, and environmental traps. The SNMP agents are preinstalled in the interconnect switch firmware. Redundant community strings and SNMP trap manager hosts can be configured per switch. This capability allows the interconnect switch to be monitored remotely from an SNMP network management station such as HP Systems Insight Manager [7] or HP OpenView [8]. Additionally, any SNMP-based manager within CiscoWorks or another third-party offering may also be used, provided it can read industry-standard MIBs and process industry-standard traps.
Available at http://h18000.www1.hp.com/products/servers/management/hpsim/index.html. Available at http://www.hp.com/products1/softwareproducts/software/openview/index.html.
The GbE2 Interconnect Switch provides other familiar management capabilities consistent with Catalyst switches. These include a local console port with XModem support, access through Telnet and secure shell (SSH), and deployment, back-up, and restore capabilities using trivial file transfer protocol (TFTP) and secure copy protocol (SCP).
Port mirroring

The GbE2 Interconnect Switch port mirroring feature provides the ability to send a copy of any network traffic that enters or leaves the switch to a designated (monitor) port for examination by a network analyzer. Traffic ingressing the port, egressing the port, or both may be monitored. GbE2 port mirroring provides functionality similar to the switched port analyzer (SPAN) feature on Catalyst switches. The GbE2 Interconnect Switch also interoperates with the Cisco Remote SPAN (RSPAN) feature: by targeting mirrored GbE2 data to a port connected to a Catalyst switch utilizing RSPAN, the traffic can be captured for analysis on the designated Catalyst monitoring port.
Multicast traffic

Multicasting reduces network traffic and congestion. The GbE2 Interconnect Switch can pass IP multicast traffic that is forwarded to it from Catalyst switches. Active participation in Internet Group Management Protocol (IGMP) multicasting is not provided on the GbE2 Interconnect Switch at this time, but this support is planned for a future release. Meanwhile, provided the VLANs between the GbE2 Interconnect and Catalyst switches are correctly configured, the GbE2 Interconnect Switch will automatically forward IP multicast traffic out all ports on the VLAN from which the multicast traffic was received.
Network time

The industry-standard Network Time Protocol (NTP) synchronizes timekeeping among a set of distributed network devices and time servers. This synchronization allows events to be correlated when system logs are created and other time-specific events occur. As with Catalyst switches, the GbE2 Interconnect Switch provides NTP support. On the GbE2 Interconnect Switch, users can specify the NTP server IP address, update interval, and time zone, after which the Cisco and GbE2 switches are synchronized to the same network time. The GbE2 Interconnect Switch includes a battery-backed real-time clock that maintains the time in the event the NTP server is unavailable.
Conclusion

With the introduction of industry-standard blade servers, the number of Ethernet connections and cables within a rack can quickly become overwhelming. To consolidate these cables, blade manufacturers introduced the concept of integrated Ethernet switches. For network administrators to successfully deploy these new blade switches within their existing networks, interoperability with existing devices and compliance with network industry standards is a must. Available as one of several interconnect options for ProLiant BL p-Class systems, the GbE2 Interconnect Switch is ideal for reducing Ethernet network cabling and the time required to deploy, manage, and service ProLiant BL p-Class systems. Its advanced feature support and compliance with IEEE and other Ethernet standards permit interoperability with networks based on Cisco Catalyst switches and devices from other common vendors found in today's datacenter.
For more information
For additional information, refer to the resources detailed below.
Resource description | Web address
ProLiant BL p-Class system home page | http://h18004.www1.hp.com/products/servers/proliant-bl/pclass/index.html
ProLiant BL p-Class GbE2 Interconnect Switch home page | http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-pinterconnect-switch2.html
ProLiant BL p-Class Networking Overview white paper | http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-pinterconnect-switch2.html
ProLiant BL p-Class GbE2 Interconnect Switch Overview white paper | http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-pinterconnect-switch2.html
Deploying the ProLiant BL p-Class GbE2 Interconnect Switch into a Cisco-based Network white paper | http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-pinterconnect-switch2.html
ProLiant BL p-Class GbE2 Interconnect Switch user guides | http://h18004.www1.hp.com/products/servers/proliant-bl/p-class/bl-pinterconnect-switch2.html
© 2004 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Catalyst, Cisco, Cisco Systems, and EtherChannel are registered trademarks of Cisco Systems, Inc. All other trademarks or registered trademarks mentioned in this document are the property of their respective owners. 5982-5037EN, 05/2004