Thursday 1 September 2011

What is Fabric Security – Zoning ?


Zoning is a switch function that allows devices within the fabric to be logically segmented into groups that can communicate with each other. When a device logs into a fabric, it is registered by the name server. When a port logs into the fabric, it goes through a device discovery process with other devices registered as SCSI FCP in the name server. The zoning function controls this process by only letting ports in the same zone establish these link level services. A collection of zones is called a zone set. The zone set can be active or inactive. An active zone set is the collection of zones currently being used by the switched fabric to manage data traffic.

Single HBA zoning consists of a single HBA port and one or more storage ports. A port can reside in multiple zones. This provides the ability to map a single Storage port to multiple host ports. For example, a Symmetrix FA port or a CLARiiON SP port can be mapped to multiple single HBA zones. This allows multiple hosts to share a single storage port.

The type of zoning to be used depends on the type of devices in the zone and site policies :-

1) In port zoning, only the ports listed in the zone are allowed to send Fibre Channel frames to each other. The switch software examines each frame for the Domain ID of the switch and the port number of the node, to ensure it is allowed to pass to another node connected to the switch. Moving a node that is zoned by port to a different switch port may effectively isolate it. Conversely, if a node is inadvertently plugged into a port that belongs to a port zone, that node gains access to the other members of the zone.

2) WWN zoning creates zones by using the WWNs of the attached nodes (HBA and storage ports). WWN zoning restricts zone membership to devices identified by their WWPNs. This is more flexible, as moving a device to another physical port within the fabric does not cause it to lose access to the other zone members, as illustrated in the sketch below.
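
To make the difference concrete, here is a minimal sketch in Python (illustrative only - the zone names, WWPNs, and port numbers are invented, and real switches implement this in firmware) of how a fabric could decide whether two logins may communicate under port zoning versus WWN zoning:

# Illustrative only: a toy model of zone membership checks.
# Port zones are sets of (domain_id, port_number); WWN zones are sets of WWPNs.
port_zones = {"zone_hostA_sym": {(1, 4), (1, 12)}}          # (domain, port)
wwn_zones  = {"zone_hostB_clar": {"10:00:00:00:c9:2b:1a:01",
                                  "50:06:01:60:10:60:08:22"}}

def can_talk_port_zoned(a, b, zones=port_zones):
    """a, b are (domain_id, port_number) tuples; moving a node changes the answer."""
    return any(a in members and b in members for members in zones.values())

def can_talk_wwn_zoned(a_wwpn, b_wwpn, zones=wwn_zones):
    """a_wwpn, b_wwpn are WWPN strings; zoning follows the device, not the port."""
    return any(a_wwpn in members and b_wwpn in members for members in zones.values())

# Moving a WWN-zoned HBA to a different switch port does not change the result:
print(can_talk_wwn_zoned("10:00:00:00:c9:2b:1a:01", "50:06:01:60:10:60:08:22"))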

Understand EMC Navisphere Management Suite.


EMC Navisphere Management software is a suite of tools that allows centralized management of CLARiiON storage systems. Navisphere provides a centralized tool to monitor, configure, and analyze performance. Navisphere can be launched from EMC ControlCenter; if you already have a Symmetrix and ControlCenter, ControlCenter lets you manage all your information from one central location. Navisphere stores initiator and host information on the array. It is used to create the relationships used for access control, as well as the information presented in Navisphere. It can also automatically set any unique features required to support an operating system, enabling it to support a heterogeneous environment on a single port. Navisphere Manager provides speed and flexibility using the familiar Microsoft Windows interface. It lowers the cost of management and ownership, including training and administrative costs, and has a proactive focus on addressing potential problems. It reduces personnel requirements by enabling staff to manage larger amounts of storage with fewer resources.

The Navisphere Suite includes:-
1) Navisphere Manager: Allows graphical user interface (GUI) management and configuration of single or multiple systems, and is also the center for management and configuration of system-based access and protection software including Access Logix, SnapView, and MirrorView applications.
2) Analyzer: A performance analysis tool for CLARiiON hardware components.
3) Agent: Provides the management communication path to the system. Enables CLI access.
Salient Features :-

# Centralized management for CLARiiON storage throughout the enterprise
– Centralized management means a more effective staff
# Allows user to quickly adapt to business changes
# Keeps business-critical applications available
# Key features
– Java based interface has familiar look and feel
– Multiple server support
– EMC ControlCenter integration
– Management framework integration
# Navisphere Software Suite
– Navisphere Manager
– Navisphere Analyzer
– Navisphere CLI
# Discover
– Discovers all managed CLARiiON systems
# Monitor
– Show status of storage systems, Storage Processors, disks, snapshots, remote mirrors, and other components
– Centralized alerting
# Apply and provision
– Configure volumes and assign storage to hosts
– Configure snapshots and remote mirrors
– Set system parameters
– Customize views via Navisphere Organizer
# Report
– Provide extensive performance statistics via Navisphere Analyzer

What is EMC Clariion Navisphere Analyzer.


Analyzer was designed and developed specifically to identify bottlenecks and hotspots in CLARiiON storage systems. Data may be collected by the storage system, or by a Windows host, running the appropriate software, in the environment. Parameters are displayed in graphical form using the familiar Navisphere Manager GUI. Data may be displayed in real-time, or, for later analysis, saved as a .nar (Navisphere Archive) file. Two display modes show data in a slightly different way – Performance Summary is useful when the display should show minima and maxima, but where the time that those values were reached is not important. Performance Detail is a time-related display. Time is shown along the x-axis, and various values may be simultaneously displayed along the y-axis. 

In addition to those views, a Performance Survey chart shows up to 5 commonly used performance parameters, with clear visual indication if they have exceeded user-defined thresholds. Sometimes the sizes of transfers from the host can be a factor in performance issues, and Analyzer displays I/O size information in 2 different ways. Data may be exported to text as a comma-separated value (.csv) file; graphics are exported as .jpg. Analyzer is added to the Tools menu of Navisphere Manager when installed.

Salient Features :-

1) Bottleneck and Hotspot Finder.
2) Examine present and past performance of CLARiiON storage systems.
3) Performance Survey allows “at a glance” view of parameters that are over a preset threshold.
4) Focused performance charts.
Detailed SP, LUN, and Disk performance
IO Size distribution
5) Export data as .csv files
6) Export chart as .jpg files
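
Because Analyzer can export its statistics as .csv files, the data is easy to post-process with a short script. The sketch below is only an illustration - the column names "Object Name" and "Utilization (%)" are assumptions, so check the headers of your own export before using it:

# Hypothetical post-processing of an Analyzer .csv export.
# The column names below are assumptions for illustration only.
import csv

def busiest_objects(path, threshold=70.0):
    over = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                util = float(row["Utilization (%)"])
            except (KeyError, ValueError):
                continue
            if util >= threshold:
                over.append((row.get("Object Name", "?"), util))
    return sorted(over, key=lambda x: x[1], reverse=True)

for name, util in busiest_objects("analyzer_export.csv"):
    print(f"{name}: {util:.1f}% utilized")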

What is Volume Access Control – LUN Masking ?


Device (LUN) Masking ensures that volume access to servers is controlled appropriately. This prevents unauthorized or accidental use in a distributed environment. A zone set can have multiple host HBAs and a common storage port. LUN Masking prevents multiple hosts from trying to access the same volume presented on the common storage port. LUN Masking is a feature offered by EMC Symmetrix and CLARiiON arrays. LUNs can be masked through the use of bundled tools. For EMC platforms these include ControlCenter; Navisphere or Navicli for CLARiiON; and Solutions Enabler (SYMCLI) for a Symmetrix.

When servers log into the switched fabric, the WWNs of their Host Bus Adapters (HBAs) are passed to the storage fibre adapter ports that are in their respective zones. The storage system records the connection and builds a filter listing the storage devices (LUNs) available to that WWN, through the storage fibre adapter port. The HBA port then sends I/O requests directed at a particular LUN to the storage fibre adapter. Each request includes the identity of the requesting HBA (from which its WWN can be determined) and the identity of the requested storage device, with its storage fibre adapter and logical unit number (LUN). The storage array processes each request to verify that the HBA is allowed to access that LUN on the specified port. Any request for a LUN that an HBA does not have access to returns an error to the server.
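
Conceptually, the filter the array builds can be pictured as a small lookup table. The Python sketch below is illustrative only (ports, WWPNs, and LUN numbers are invented) and simply shows the access check described above:

# Toy illustration of a LUN masking filter kept by the array: for each
# (storage port, initiator WWPN) pair, the set of LUNs that may be accessed.
masking_db = {
    ("SPA-0", "10:00:00:00:c9:2b:1a:01"): {0, 1, 2},
    ("SPA-0", "10:00:00:00:c9:2b:1a:02"): {3, 4},
}

def check_io(port, initiator_wwpn, lun):
    allowed = masking_db.get((port, initiator_wwpn), set())
    if lun in allowed:
        return "OK"
    return "ERROR: LUN not accessible to this initiator on this port"

print(check_io("SPA-0", "10:00:00:00:c9:2b:1a:02", 1))   # rejected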

Salient Features :-

1) Restricts volume access to specific hosts and/or host clusters.
2) Policies set based on functions performed by the host
3) Servers can only access volumes that they are permitted to access
4) Access controlled in the Storage Array - not in the fabric.
Makes distributed administration secure
5) Tools to manage masking
GUI & Command Line

What is EMC Clariion SnapView Clones ?


EMC Clariion SnapView Clones are full copies of the Source LUN, built on a LUN which has already been bound. That LUN must therefore be the same size as the Source. Clones provide users with the ability to create fully populated point-in-time copies of LUNs within a single array. Clones are packaged with SnapView, as of Version 2.0, and expand SnapView functionality by providing the option to have fully-populated copies (as well as the pointer-based copies of SnapView). For users familiar with MirrorView, Clones can be thought of as mirrors within arrays, as opposed to across arrays. Clones have additional functionality, however, in that they offer the ability to choose which direction the synchronization is to go between source and clone; and also, Clones are available for read and write access (when fractured), unlike secondary mirrors, which have to be promoted (or made accessible via a Snapshot) to allow for data access.

For users familiar with the Symmetrix product line, Clones can be thought of as Business Continuation Volumes (BCVs). Because Clones are fully-populated copies of data, they are highly available, and can withstand SP or array reboots or failures, as well as path failures (provided PowerPath is installed and properly configured). It should be noted that Clones are designed for users who want the added functionality of being able to periodically fracture the LUN copy, and then to sync or reverse sync the copies back together. Users who simply want a mirrored copy would likely implement RAID 1 LUNs. Clones are full copies of the data. This implies that we need 100% of the disk space of the Source, and that a Clone is independent of the Source once synchronization is complete.
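
A rough way to picture why a fractured Clone can be incrementally synchronized in either direction is that the array only needs to remember which extents changed on each side after the fracture. The sketch below is conceptual only, not EMC code:

# Conceptual sketch: track which extents diverged on each side of a fracture,
# then copy only those extents in whichever direction is requested.
source_dirty, clone_dirty = set(), set()

def write(side, extent):
    (source_dirty if side == "source" else clone_dirty).add(extent)

def synchronize():            # source -> clone
    to_copy = source_dirty | clone_dirty   # overwrite anything that diverged
    source_dirty.clear(); clone_dirty.clear()
    return sorted(to_copy)

def reverse_synchronize():    # clone -> source, same idea in the other direction
    to_copy = source_dirty | clone_dirty
    source_dirty.clear(); clone_dirty.clear()
    return sorted(to_copy)

write("source", 17); write("clone", 42)
print(synchronize())          # extents 17 and 42 are copied source -> clone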

What is EMC Clariion SnapView Snapshots.


SnapView was designed to allow system backups to run concurrently with application processing. From your study of the prerequisite material, you’ll know how it produces point-in-time copies of LUNs, and allows secondary or backup hosts to access those snapshots. Even though backups are the primary use of SnapView, it is versatile enough to be used in other ways. For critical applications, snapshots may be taken every hour – this allows easy recovery from corrupted files. Decision support systems may also use the snapshots, and thus real data, with minimal effect on the application. For backup purposes, then, we need to ensure that good, consistent data was written to the backup medium. This will require host buffers to be flushed, and I/O to be halted until such time as the backup is completed. Without the use of snapshots, that down time could be several hours.

If the application uses several LUNs for storage – say, tables and logs are all on separate LUNs – then we need to ensure that all those LUNs are in the same state when we perform the backup. SnapView can take care of this as well – Release 19 added consistency support for SnapView Snapshots. SnapView Snapshots are a view of the data, not the actual data. As a result, creating a Snapshot and starting a Session is a very quick process, requiring only a few seconds. The view that is then presented to the secondary host is writable by that host, but will be a frozen copy of the Source LUN as the primary host saw it at a given time.

The mechanism uses pointers to track whether data is on the Source LUN, or in the Reserved LUN Pool. These pointers are kept in SP memory, which is volatile, and may therefore be lost, along with the Session information, if the SP should fail or the LUN be trespassed. An optional SnapView feature, therefore, is persistence for Sessions – this stores the pointers on disk, so Sessions can then survive SP failures, trespass operations, or power failures.
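
The pointer mechanism can be illustrated with a few lines of Python. This is a conceptual sketch only - chunk numbers, data, and names are invented - but it shows why a Snapshot is a view of the data rather than a copy:

# Minimal copy-on-first-write sketch: the snapshot is only a map of pointers.
# Chunks still live on the Source LUN until the first write touches them,
# at which point the original chunk is saved to the Reserved LUN Pool.
source = {0: "A", 1: "B", 2: "C"}        # chunk number -> data
reserved_pool = {}                        # chunk number -> preserved original

def host_write(chunk, data):
    if chunk not in reserved_pool:        # first write since the Session started
        reserved_pool[chunk] = source[chunk]
    source[chunk] = data

def snapshot_read(chunk):
    # Pointer logic: read from the pool if preserved, else from the source.
    return reserved_pool.get(chunk, source[chunk])

host_write(1, "B'")
print(snapshot_read(1))   # still "B" - the point-in-time view is preserved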

What is Incremental SAN Copy in EMC Clariion ?


Incremental SAN Copy copies only changes to the destination(s). Changes are tracked by means of bitmaps, and, if required, a COFW process is performed (both of which require that a Reserved LUN is assigned to the Session). Networks with speeds starting at T1 connections are supported. SAN Copy will perform a certain amount of optimization, depending on network speed. SAN Copy shares the ‘chunk’ size with SnapView - 64 KB - so that a COFW copies 64 KB of data to the Reserved LUN. When the changed data must be copied to the secondary, only the 2 KB ‘sub-chunks’ that have changed are actually copied – this makes the link utilization much more efficient. The mark and unmark processes start and end the point-in-time copy process; the tracking of changes continues, though, and the ISC SnapView Session continues to run. SAN Copy is aware of which copies have succeeded, and can resume to failed destinations only. SAN Copy can recover from SP reboots and LUN trespasses.
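
The granularity described above can be sketched as follows. This is an illustration only (offsets and sizes are invented): changes are tracked per 64 KB chunk, but only the touched 2 KB sub-chunks are transferred:

# Sketch of the granularity described above: changes are *tracked* per 64 KB
# chunk, but only the 2 KB sub-chunks that actually changed are *transferred*.
CHUNK, SUB = 64 * 1024, 2 * 1024
changed = {}                              # chunk index -> set of sub-chunk indexes

def record_write(offset, length):
    first, last = offset // SUB, (offset + length - 1) // SUB
    for sub in range(first, last + 1):    # every sub-chunk the write touches
        changed.setdefault(sub * SUB // CHUNK, set()).add(sub % (CHUNK // SUB))

def bytes_to_send():
    return sum(len(subs) * SUB for subs in changed.values())

record_write(offset=130_000, length=4_096)
print(bytes_to_send())    # 6144 bytes, far less than the two full 64 KB chunks touched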

Salient Features:-

1) Incremental copy
Changed data tracked at 64 KB granularity.
Data transferred at 2 KB granularity.
Map of tracked changes stored persistently.

2) Mark and Unmark operation
Affects state of ISC SnapView Session.

3) Modify Incremental Copy Session Properties
Turn on/off incremental tracking.
Link Bandwidth & Latency.
Sync Required (full copy).

4) Resume to only failed destinations

5) ISC-specific statistics
Time last marked.
Time copy started.
Blocks to copy in next incremental update.

6) Auto-Recovery (all SAN Copy Sessions)
SP Reboot.
LUN trespass.

Know EMC Clariion SAN Copy Software.


SAN Copy allows fast bulk transfer of data between CLARiiON and Symmetrix systems. One of the key benefits is the off-load of host traffic, with an associated increase in copy performance. Data is copied directly from one storage system to the other, with no host involvement in the copy process. Copies can be performed without regard to the host operating system, again because no hosts are involved in the copy. Because ownership of the logical units or volumes does not have to be shared, a level of security can be maintained. SAN Copy requires no special software to be loaded on the destination storage systems. It does not use any special protocol, which makes SAN Copy efficient and fast. Either the source logical unit, the destination logical units, or both must reside on a SAN Copy storage system – a CLARiiON running the SAN Copy software.

In order for Navisphere Manager to provide the drive letter/file system mapping of participating Symmetrix volumes, the Navisphere Host Agent must be installed on the hosts that own the volumes, and, if that host is not also connected to a CLARiiON in the domain, it must be in a portal configuration. You must make logical units participating in a SAN Copy session accessible to the participating SAN Copy port; this will involve zoning as well as the use of LUN masking software. For SAN Copy to operate correctly, the Storage Processor port must become an initiator and register with the non-SAN Copy storage system.

Salient Features :-

1) Concurrent Copy sessions (up to preset limit)
2) Queued Copy sessions
3) Create/Modify Copy Sessions
4) Multiple Destinations per session
5) Checkpoint at user-specified intervals
6) Pause/Resume/Abort
7) Throttle

Define EMC Clariion PowerPath Failover Software.


PowerPath is EMC’s latest path failover software for CLARiiON. Like its predecessors, it is host-based software. There is a specific version for each supported operating system. PowerPath revision 3.x and later supports both CLARiiON and Symmetrix systems on the same host, using native drivers for the HBAs. PowerPath is currently available for a number of different platforms including Windows 2000, Windows 2003, HP-UX, and Solaris. It provides path failover as well as sophisticated load balancing. It can handle multiple paths from host to storage systems. Up to 16 paths from a host to a LUN are allowed, of which 8 may be active and 8 must be passive. Minimum requirements are at least one path from each HBA to each SP, as well as paths to both SPs in a storage system. Volumes may also be prioritized, so that LUNs that need fast access, like swap areas, will be handled more quickly than other LUNs, where access times are not as critical.
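
As a rough illustration of path failover and load balancing (this is not PowerPath code - path names are invented, and the sketch ignores the active/passive path distinction the real product honors), consider:

# Toy illustration of path failover and round-robin load balancing across the
# paths from one host to one LUN.
import itertools

class MultipathDevice:
    def __init__(self, paths):
        self.paths = {p: "alive" for p in paths}
        self._rr = itertools.cycle(paths)

    def mark_dead(self, path):
        self.paths[path] = "dead"

    def next_path(self):
        # Keep rotating until a live path is found; fail only if all are dead.
        for _ in range(len(self.paths)):
            p = next(self._rr)
            if self.paths[p] == "alive":
                return p
        raise IOError("all paths to the LUN have failed")

dev = MultipathDevice(["hba0->SPA-0", "hba0->SPB-0", "hba1->SPA-1", "hba1->SPB-1"])
dev.mark_dead("hba0->SPA-0")
print([dev.next_path() for _ in range(3)])   # I/O continues on surviving paths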

Salient Features:-

1) Host Based Software
Specific version for each supported OS.
3.x and above supports CLARiiON systems.
2) Currently Available for various platforms:
Windows NT, Windows 2000, Windows 2003, Solaris, …
3) Provides Path Failover
Allows servers and applications to access LUNs in the event of a path failure.
Requires at least one path from each HBA to each SP, and paths to both SPs in a storage system.
4) Provides configurable load balancing.
5) Allows LUN prioritization.

What is EMC Clariion MirrorView ?


MirrorView, also known as MirrorView/S because of its synchronous nature, is a storage-based application that resides on the CLARiiON. It provides an online, host-independent, mirrored data storage and protection solution that duplicates production site data (primary) to one or two secondary sites (secondary/secondaries) in a campus environment. The mirroring is synchronous, meaning that every time a host writes to the primary array, the secondary array mirrors the write before an acknowledgement is returned to the host. MirrorView ensures that there is an exact byte-for-byte copy at both the local CLARiiON and the remote CLARiiON. Since MirrorView is storage-based software, no host CPU cycles are used. This allows MirrorView to operate in the background, transparent to any hosts or applications, and to provide the same information protection services to all server platforms and operating systems that connect to the CLARiiON.
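
The synchronous write sequence can be sketched in a few lines (conceptual only, names invented): the host is not acknowledged until the secondary array holds the write.

# Sketch of the synchronous write sequence described above.
def sync_mirrored_write(primary, secondary, block, data):
    primary[block] = data            # 1. write lands on the primary array
    secondary[block] = data          # 2. primary forwards it to the secondary
    return "ACK"                     # 3. only now is the host acknowledged

primary_lun, secondary_lun = {}, {}
print(sync_mirrored_write(primary_lun, secondary_lun, 7, b"payload"))
print(primary_lun == secondary_lun)  # True: byte-for-byte identical copies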

Salient Features:-

1) Independent of server, operating system, network, applications, and database.
2) Centralized, simplified management via EMC Navisphere.
3) Concurrent information access when used with SnapView.
4) Synchronous Remote Mirroring Between Two CLARiiON Systems.



MirrorView is fully integrated with EMC SnapView, the CLARiiON storage-system-based software that creates consistent point-in-time copies for remote-location snapshots. For simplified management and staff training, both MirrorView and SnapView are managed from within CLARiiON’s Navisphere Management software. That means that the same user-friendly Windows-like interface is common among all the CLARiiON software products, which minimizes learning curves and reduces training costs. The MirrorView software must be loaded on both arrays, regardless of whether the customer wants to implement bi-directional mirroring or not.

If only synchronous mirroring is required, then only MirrorView needs to be active on the local and remote CLARiiON(s). The secondary LUN must be the same size, though not necessarily the same RAID type, as the primary LUN. The host cannot attach to an active secondary LUN as long as it is configured as a secondary mirror, unless you promote the secondary mirror to be the primary mirror (as in a disaster recovery scenario), or you remove the secondary LUN as a secondary copy. Once this is done, a full resynchronization of the LUN would have to be performed.

Know EMC Clariion Navisphere Event Monitor.


EMC CLARiiON Event Monitor is tightly integrated with Navisphere Manager version 6.x. Event Monitor is designed to run in the background, without permanent supervision by the operator. It runs on one or more hosts or SPs and watches over the storage systems. It offers versatile reporting of errors in a number of ways. Although defaults exist for which errors and event classes will be reported, the user may modify those defaults at will. The user may also choose the notification method – this ranges from dial-home/dial-back to the EMC Support Center, to the use of SNMP traps for notification. A custom option exists, so that the user may use an existing utility, or a custom-crafted one, to report the errors. Navisphere Integrator allows Event Monitor to integrate with industry-leading Enterprise Management Platform software, such as CA-Unicenter. The reporting of events is controlled by a Navisphere agent – the host agent on supported platforms, or the SP agent on FC4700 and CX-series storage systems.
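
The filter-and-notify idea can be pictured with a short sketch. The severities, messages, and handlers below are invented for illustration; the real product lets you configure which events are reported and how:

# Hypothetical sketch of filtering events and dispatching notifications.
def notify_email(event):  print("EMAIL:", event["msg"])
def notify_snmp(event):   print("SNMP TRAP:", event["msg"])
def notify_custom(event): print("CUSTOM SCRIPT:", event["msg"])

responses = {               # user-chosen notification methods per severity
    "critical": [notify_email, notify_snmp, notify_custom],
    "warning":  [notify_snmp],
    "info":     [],         # logged but not reported
}

def handle(event):
    for responder in responses.get(event["severity"], []):
        responder(event)

handle({"severity": "critical", "msg": "SP B write cache disabled"})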

Salient Features :-

1) The Event Monitor GUI is integrated with Manager.
2) Event Monitor is part of the Navisphere Agent.
3) Monitors for user-configurable events.
4) Reports those events in user-configurable ways.
5) May launch other utilities/applications.
6) Can send SNMP traps to Enterprise Management Platforms.

What is EMC Clariion CLARalert ?


CLARalert software monitors your storage system’s operation for error events and automatically notifies your service provider of error events. You install CLARalert using the InstallAnywhere wizard, which guides you through the process and, if required, provides context-sensitive help on how to perform a particular step. CLARalert does not support upgrades to distributed (non-centralized) monitoring environments. If you have an existing CLARalert configuration that uses a distributed monitoring environment, the upgrade will create a centralized monitoring environment. The existing distributed monitoring environment and the newly created centralized monitoring environment may result in double notifications to your service provider. Your existing distributed monitoring environment will not be upgraded.

For a new CLARalert installation, the wizard automatically installs the email notification service, which sends event notification to your service provider via email (SMTP). CLARalert integrates with WebEx, allowing you to initiate real-time customer support sessions over the web. CLARalert requires a centralized monitoring environment. In a centralized monitoring environment you designate a monitor station to monitor the events of storage systems that you specify. The centralized monitoring environment is a monitoring environment option for Navisphere® Event Monitor, which is a feature of the Navisphere Manager application. During CLARalert installation, you designate the monitor station and portal system to configure your centralized monitoring environment. The storage system that you designate as a portal system is automatically added to the list of monitored storage systems. You can later add storage systems to your centralized monitoring environment using Navisphere Manager.

What is the Role of EMC Clariion Access Logix ?


On a CLARiiON storage system without Access Logix installed, or on a storage system where Access Logix is installed but not yet enabled, all CLARiiON LUNs are presented to all storage system ports. Any host that connects to the storage system will then have access to all of the LUNs on that storage system. In environments where multiple hosts attach to the storage system, this may cause problems. Windows systems may attempt to take ownership of LUNs belonging to other Windows systems, and Unix systems may try to mount Windows LUNs, for example. In addition to this, security is compromised because of the lack of access control. Access Logix solves these problems by performing LUN masking – it masks certain LUNs from hosts that are not authorized to see them, and presents those LUNs only to the server(s) which are authorized to see them. In effect, it presents a ‘virtual storage system’ to each host – the host sees the equivalent of a storage system dedicated to it alone, with only its own LUNs visible to it.

Another task which Access Logix performs is the mapping of CLARiiON LUNs, often called FLARE LUNs or FLUs, to host LUNs. It will determine which physical addresses, in this case the device numbers, each attached host will use for its LUNs. Note that this feature is configurable by the user through the CLI and the GUI. Access to LUNs is controlled by information stored in the Access Logix database, which is resident in a reserved area of CLARiiON disk - the PSM LUN. The Access Logix software manages this database. When host agents in the CLARiiON environment start up, typically shortly after host boot time, they send initiator information to all storage systems they are connected to. This initiator information is stored in the Access Logix database.
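
A Storage Group and its FLU-to-host-LUN mapping can be pictured roughly as follows (a conceptual sketch only - group names, hosts, and LUN numbers are invented):

# Toy model of a Storage Group: the array-side LUN number (FLARE LUN / FLU)
# is mapped to the LUN number each host actually sees (host LUN).
storage_groups = {
    "SG_oracle01": {"hosts": {"oracle01"}, "lun_map": {23: 0, 24: 1}},  # FLU -> host LUN
    "SG_web01":    {"hosts": {"web01"},    "lun_map": {57: 0}},
}

def luns_visible_to(host):
    for sg in storage_groups.values():
        if host in sg["hosts"]:
            return sg["lun_map"]
    return {}          # a host outside any Storage Group sees nothing

print(luns_visible_to("oracle01"))   # {23: 0, 24: 1} - its own 'virtual storage system'
print(luns_visible_to("intruder"))   # {} - masked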

Salient Features:-

1) LUN masking.
2) Presents a virtual storage system.
3) Maps CLARiiON LUNs (FLARE LUNs) to host LUNs.
4) Manages the Access Control List.
5) Manages Initiator Registration Records.
Access Logix database entries.

What is EMC Clariion Access Logix.


Access Logix is CLARiiON-specific software that runs on all CX-series storage systems with the exception of the CX200LC. It enhances the functionality of FLARE software, and allows the storage system to be shared by multiple hosts. Access Logix is a licensed software package that runs on each storage processor (SP) in supported storage systems. Access Logix software lets multiple hosts share the storage system. It implements this storage sharing using Storage Groups. A Storage Group is one or more LUNs within a storage system that is reserved for one or more hosts and is inaccessible to other hosts. Access Logix software enforces the host-to-Storage Group permissions. The Access Logix software is preinstalled on all CX300, CX500 and CX700 arrays at the factory. Access Logix runs within the FLARE Operating Environment and resides with FLARE software. Disks 0_0 and 0_2 store mirrored copies of the software for SP A, and disks 0_1 and 0_3 store mirrored copies of the software for SP B. When you power up the storage system, each SP boots and enables the Access Logix capability within FLARE.

Access Logix is enabled within the FLARE Operating Environment and is no longer a separate binary image of the software. Release 14 introduced a software packaging model that bundles software together for ease of installation and upgrade. Please refer to the FLARE Operating Environment Release Notes for details on this software packaging.

Salient Features:-

1) Storage system-based licensed software
2) Enhances the FLARE operating environment
3) Installed from a management host if required
Factory preinstalled on CX300, 500, and 700 arrays
Enabler (for Release 13 and later)

Role of Cache Memory in EMC Clariion.


The EMC CLARiiON uses cache in a manner similar to how cache memory is used in traditional workstations and servers. With servers and workstations, application code and application data locality and immediacy are the focus of the design. A characteristic of many applications (including file systems) is to cache new data locally, then periodically flush the data to the actual storage device. This “lazy write” approach would result in large bursts of large I/Os to the storage systems - a perfect fit with the “burst smoothing” benefit of CLARiiON’s caching. Cache page size is perhaps the most influential parameter on cache performance. The division between staging and storage memory is not formally defined by addresses, but more by functionality - as data arrives, it is always staged, and as needed, it may be marked for storage as well. The Partition Memory dialog in Navisphere Manager enables part of the cache to be used for storage operations. Unlike on the Symmetrix, certain writes on the CLARiiON may bypass cache memory.
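
The two caching benefits named in the list that follows - burst smoothing and merging of writes to the same area - can be pictured with a small sketch (conceptual only, not FLARE code):

# Conceptual sketch: a burst of host writes is absorbed in cache, rewrites of
# the same page are merged, and the disks see fewer, later operations.
pending = {}                         # cache page address -> latest data

def cached_write(page, data):
    pending[page] = data             # acknowledge immediately; merge rewrites

def flush_to_disk():
    ops = sorted(pending.items())    # one back-end operation per dirty page
    pending.clear()
    return ops

for i in (5, 6, 5, 7, 5):            # burst of 5 host writes, page 5 rewritten
    cached_write(i, f"data{i}")
print(len(flush_to_disk()))          # only 3 disk operations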

Salient Features :-

1) Cache memory on an SP performs two tasks.
Staging: Temporary buffering of current read and write data.
Always performed on each I/O.
Storage: Repository for frequently accessed data.
Maintaining copies of read and write data.
User must explicitly enable this (for both read and write).

2) Benefits of caching
Burst Smoothing - Absorb bursts of writes without becoming “disk bound”.
Write cache optimization.
Locality - Merge several writes to the same area into a single operation.
Increases write performance.
Immediacy - Satisfy user requests without going to the disks.
Read cache optimization prefetching of data for sequential reads.

Know the Difference Between SAN and DAS.


SANs make effective use of Fibre Channel networks and IP networks to solve the distance and connectivity problems associated with traditional DAS solutions such as parallel SCSI. In a SAN, a device can be added or removed without any impact on I/O traffic between hosts that do not participate in the configuration change. A host can reboot or disconnect from the SAN without affecting storage accessibility from other hosts. New arrays can be added to the SAN, and storage from them can be deployed selectively on some hosts only, without any impact on other hosts. Thus, SANs enable dynamic, non-disruptive provisioning of storage resources.

SAN architecture allows for multiple servers to easily share access to a single storage array port. This is technically possible with parallel SCSI too, via the use of daisy-chained cables. However, the setup is static, physically cumbersome, subject to practical constraints from requirements on signaling integrity, and difficult to establish and maintain. 

SAN architecture also allows for a single host to easily connect to a storage frame via multiple physical and logical paths. In a multipathed configuration, and with the use of multipathing software such as PowerPath, the host experiences I/O failures only if every one of its logical paths to the storage array fails. Multipathing software can also help balance the host’s I/O load over all available paths. Multipathing capability thus allows for the design of a high performance, highly available, redundant host system.

SANs make it simple to consolidate multiple storage resources – such as disk arrays and tape libraries - within a single physical or logical infrastructure. These resources can be selectively shared across host computers. This approach can greatly simplify storage management, when compared to DAS solutions.

Know EMC FLARE Operating Environment.


FLARE software manages all functions of the CLARiiON storage system. Each storage system ships with a complete copy of FLARE software installed. When you power up the storage system, each SP boots and executes FLARE software. Access Logix software is optional software that runs within the FLARE operating environment on each storage processor (SP). Access Logix provides access control and allows multiple hosts to share the storage system. This “LUN Masking” functionality is implemented using Storage Groups. A Storage Group is one or more LUNs within a storage system that are reserved for one or more hosts and are inaccessible to other hosts. When you power up the storage system, each SP boots and executes its Access Logix software. Navisphere Management software is a suite of tools that allows centralized management of CLARiiON storage systems. Navisphere provides a centralized tool to monitor, configure, and analyze performance. CLARiiON can also be managed as part of EMC ControlCenter, allowing full end-to-end management.
Salient Features:-
1) FLARE Operating Environment runs in the CLARiiON Storage Processor.
– I/O handling, RAID algorithms.
– End-to-end data protection.
– Cache implementation.
2) Access Logix provides LUN masking that allows sharing of storage system.
3) Navisphere middleware provides common interface for managing CLARiiON.
4) CLARiiON optional software including
– Access Logix.
– MirrorView, SnapView, SAN Copy.
5) EMC ControlCenter provides end-to-end management of a CLARiiON.
6) FLARE performs provisioning and resource allocation.
7) Memory budgets for caching and for snap sessions, mirrors, clones, copies.
8) Process Scheduling.
9) Boot Management.

What is a Storage Processor ?


The main component in all CLARiiON series arrays is the Storage Processor. Storage Processors (SPs) are configured in pairs for maximum availability and are Field Replaceable Units (FRUs). SPs provide both front-end connectivity to the hosts and back-end connectivity to the physical disks. Each Storage Processor also includes up to 4 GB of memory, most of which is used for cache. Cache memory is segmented into read cache memory and write cache memory. Read cache memory is used for staging and prefetching read requests from the host. Write cache is used to accelerate host writes to the storage system. 

With write cache enabled, writes are mirrored to the write cache memory in the other storage processor over the CLARiiON Messaging Interface (CMI). The CMI is a Fibre Channel based link and operates at either 100MB/sec on FC series or 200MB/sec on CX series systems. Each storage processor also includes a TCP/IP connection that is used for configuration and management of the storage system. Each storage system ships with a complete copy of FLARE software installed on the first four disks on back-end loop 0. Disks 0_0 and 0_2 store mirrored copies of the software for SP A, and disks 0_1 and 0_3 store mirrored copies of the software for SP B. When you power up the storage system, each SP boots and executes FLARE software.


Salient Features :-
# Storage processors are configured in pairs for maximum availability.

# One or two processors per Storage Processor board.

# Two or four Fibre Channel front-end ports for host connectivity
– 1Gb or 2Gb.
– Arbitrated loop or switched fabric.

# Dual-ported Fibre Channel Disk drives at the back-end.
– Two or Four Arbitrated Loop connections.

# Maximum of 4GB of memory per SP.
– Write Cache is mirrored between Storage Processors for availability using the CMI (CLARiiON Messaging Interface).
– Write Caching accelerates host writes.

# Ethernet connection for management.