Direct Attached Storage (DAS)
Today, more than 95% of all computer storage devices, such as disk drives, disk arrays and RAID systems, are directly attached to a client computer through various adapters with standardized software protocols such as SCSI, Fibre Channel and others. This type of storage is alternatively called captive storage, server attached storage or direct attached storage (DAS), as illustrated in Figure 1.
The committees that established these standards, however, allowed such wide latitude in interoperability that there are many variations of SCSI and Fibre Channel (FC) across the many available UNIX and Windows NT systems. For example, there are seven variations of SCSI, and most UNIX vendors implement FC differently. This is because storage was local to a specific server when these standards were defined, and server vendors implemented variations that were not compatible with one another. Storage standards are therefore weak standards, driven by component considerations. In other words, the problem with storage standards is that there are so many of them.
As a result of weak storage standards, third-party DAS vendors such as EMC and Compaq need to re-qualify their products with each revision of a server's operating system software. This often leads to long lists of supported operating systems for SCSI or FC interconnects to different hosts. Each interconnect often requires special host software, special firmware and complicated installation procedures.
Network Attached Storage (NAS)
In contrast, network standards are strong standards that are driven by system considerations. There are two true network standards for accessing remote data, NFS and CIFS, that have been broadly implemented by virtually all UNIX and Windows NT system vendors.
Networks are now faster than storage channels
During the past five years the transfer rate of leading edge Direct Attached Storage (DAS) interconnects has increased fivefold, from 20MB per second for F/W SCSI-2 to 100MB per second for Fibre Channel. Over the same period, however, the transfer rate of leading edge networking interconnects has increased tenfold, from 12.5MB per second for 100BaseT Ethernet to 125MB per second for Gigabit Ethernet. In other words, network data rates have not only caught up with direct attached storage (DAS) but surpassed it; five years ago the network was the slower path. This has shifted the bottleneck from the network to the server and its direct attached storage.
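The growth ratios above can be checked with a quick calculation, using 125MB per second for Gigabit Ethernet (1Gbit per second divided by 8 bits per byte):

```python
# Transfer rates in MB/s, as cited in the text.
das_then, das_now = 20.0, 100.0   # F/W SCSI-2 -> Fibre Channel
net_then, net_now = 12.5, 125.0   # 100BaseT -> Gigabit Ethernet

print(das_now / das_then)  # fivefold increase for DAS: 5.0
print(net_now / net_then)  # tenfold increase for networking: 10.0
print(net_now > das_now)   # the network is now the faster path: True
```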
Storage Area Network (SAN)
Storage networks are distinguished from other forms of network storage by the low-level access method they use. Data traffic on these networks is very similar to that of internal disk drive interfaces such as ATA and SCSI.
In a storage network, a server issues a request for specific blocks, or data segments, from specific disk drives. This method is known as block storage. The device acts in a similar fashion to an internal drive, accessing the specified block, and sending the response across the network.
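A minimal sketch of block-level access, using an ordinary file to stand in for a disk and Python's `os.pread` to fetch one block by number. The file name and 512-byte block size are illustrative assumptions, not part of the original text:

```python
import os

BLOCK_SIZE = 512  # a common disk sector size

# Stand-in for a disk: a file holding four zero-filled blocks,
# with block 2 overwritten so we can recognize it on read-back.
fd = os.open("fake_disk.img", os.O_RDWR | os.O_CREAT, 0o600)
os.pwrite(fd, b"\x00" * (BLOCK_SIZE * 4), 0)
os.pwrite(fd, b"B2".ljust(BLOCK_SIZE, b"\x00"), 2 * BLOCK_SIZE)

def read_block(fd, block_number):
    """Fetch one block by number, as a block-storage request would:
    the requester names blocks, not files."""
    return os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)

data = read_block(fd, 2)
print(data[:2])  # b'B2'
os.close(fd)
os.remove("fake_disk.img")
```

The point of the sketch is that the requester addresses raw block numbers directly; no file system or intermediary interprets the request.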
In more traditional file storage access methods, like SMB/CIFS or NFS, a server issues a request for an abstract file as a component of a larger file system, managed by an intermediary computer. The intermediary then determines the physical location of the abstract resource, accesses it on one of its internal drives, and sends the complete file across the network.
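By contrast, file-level access can be sketched as follows: the client names an abstract file, and the intermediary resolves that name to a physical location on its own disk and returns the complete file. The directory and file names here are hypothetical:

```python
import os
import tempfile

# Stand-in for the file server's internal disk: a temp directory
# holding one exported file.
export = tempfile.mkdtemp()
with open(os.path.join(export, "report.txt"), "w") as f:
    f.write("quarterly numbers")

def serve_file(name):
    """The intermediary maps the abstract file name to a physical
    location on its own storage and returns the whole file."""
    path = os.path.join(export, name)  # name -> physical location
    with open(path, "rb") as f:
        return f.read()

print(serve_file("report.txt"))  # b'quarterly numbers'
```

Here the client never sees block numbers at all; the server owns the mapping from names to disk locations, which is the essential difference from the block-storage sketch.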
Most storage networks use the SCSI protocol for communication between servers and devices, though they do not use its low-level physical interface. Typical SAN physical interfaces include 1Gbit, 2Gbit and 4Gbit Fibre Channel and, in limited cases, iSCSI over Gigabit Ethernet. The SCSI protocol information is carried over the lower-level protocol via a mapping layer. For example, most SANs in production today use some form of SCSI over Fibre Channel, as defined by the "FCP" mapping standard. iSCSI is a similar mapping method designed to carry SCSI information over IP.
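The mapping-layer idea can be sketched by building a SCSI READ(10) command descriptor block (CDB) and wrapping it in a made-up length-prefixed transport header. The CDB layout follows the SCSI standard, but the wrapper here is purely illustrative; real mappings such as FCP and iSCSI define their own, much richer framing:

```python
import struct

def read10_cdb(lba, num_blocks):
    """Build a 10-byte SCSI READ(10) CDB: opcode 0x28, a 32-bit
    logical block address and a 16-bit transfer length, with the
    flag, group and control bytes left zero."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, num_blocks, 0)

def encapsulate(cdb):
    """Wrap the CDB in a toy transport header (a 2-byte length
    prefix), standing in for the framing a mapping layer adds."""
    return struct.pack(">H", len(cdb)) + cdb

cdb = read10_cdb(lba=2048, num_blocks=8)
frame = encapsulate(cdb)
print(len(cdb), frame[2] == 0x28)  # 10 True
```

The same CDB could be carried unchanged over Fibre Channel or IP; only the encapsulation function would differ, which is exactly what the mapping standards specify.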