The Global Database of Data Center Industry Expertise
Added by roberto sanchez, RCDD on April 30, 2011 at 22:29 — No Comments
Every rack in a data center needs PDUs to deliver multiple power outlets. The number of outlets depends on how many servers, switches, and similar devices are installed in each rack; typically this is 16 to 24 devices, so the most common PDU size is 24 outlet ports. The topic of recent interest regarding these power distribution devices is the level of intelligence each customer actually requires. Some customers treat these as…Continue
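The outlet math above can be sketched with a small sizing helper; the headroom factor and the dual-cord option are illustrative assumptions, not from the post:

```python
import math

def pdu_outlets_needed(devices_per_rack, feeds_per_device=1, spare_fraction=0.2):
    """Estimate the PDU outlet count for one rack.

    devices_per_rack: servers, switches, and similar devices in the rack
    feeds_per_device: power cords per device (2 for dual-corded gear)
    spare_fraction: growth headroom (an assumption, not from the post)
    """
    needed = devices_per_rack * feeds_per_device
    return math.ceil(needed * (1 + spare_fraction))

# A rack of 20 single-corded devices with 20% headroom:
print(pdu_outlets_needed(20))  # 24, matching the common 24-outlet PDU
```

Dual-corded equipment doubles the count, which is why dense racks often carry two PDUs rather than one larger unit.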
Added by Chris MacKinnon on April 29, 2011 at 14:47 — No Comments
Data is constantly at risk from hackers who launch advanced, automated, large-scale attacks, as well as from malicious or privileged insiders who may abuse their access for economic or personal gain. Data security has also become subject to intense regulatory scrutiny, so much so that any viable data security solution must be able to address the requirements imposed by auditors and regulators. Organizations need to protect data that lives on file servers and databases, and is…Continue
Added by Chris MacKinnon on April 28, 2011 at 15:12 — No Comments
Network performance and security monitoring is not just helpful; it is required in today’s enterprise data centers to ensure the reliability and performance of complex applications. Data center and IT managers should be focused on efficient application delivery, and network performance and security are two big areas that affect overall application reliability and performance. While most enterprises monitor their data center ingress and egress, and possibly some critical…Continue
Added by Chris MacKinnon on April 27, 2011 at 11:39 — No Comments
Application-layer DDoS attacks have quickly become the most significant threat to the availability of Internet data center and cloud-based services. These attacks are low-bandwidth and difficult to detect, and they target both end customers and network operators’ own ancillary supporting services, such as HTTP web services and the domain name system (DNS).
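As a rough illustration of why these attacks cannot be spotted on bandwidth alone, here is a minimal per-client sliding-window request-rate check of the kind an application-layer detector might apply. The thresholds and the `RateLimiter` name are assumptions for the sketch, not a description of any vendor's product:

```python
from collections import defaultdict, deque
import time

class RateLimiter:
    """Flag clients whose request rate exceeds a threshold within a
    sliding window. A crude heuristic for low-bandwidth application-layer
    floods: each request is cheap, so only the *rate* gives it away."""

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client -> timestamps of recent requests

    def is_suspicious(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        q.append(now)
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A real detector would also track per-URL cost and behavioral baselines, since a slow attack on an expensive endpoint can stay under any simple rate threshold.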
Added by Chris MacKinnon on April 26, 2011 at 11:42 — No Comments
When implementing removable disk backup to replace a legacy tape backup system, the best advice we have is not to overcomplicate things. Remember, removable disk backup is in essence the same as backing up to tape; you are just using larger, faster, and more reliable backup media. If you use a GFS rotation with tape, you can do the same with disk. If you use Backup Exec with tape, you can use it with disk.
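The point that a GFS rotation carries over unchanged from tape to disk can be sketched as a small labeling function; the exact schedule (monthly on the 1st, weekly on Fridays) is one common variant, assumed here for illustration:

```python
import datetime

def gfs_media_label(day: datetime.date) -> str:
    """Classify a backup date under a simple grandfather-father-son
    rotation: monthly ('grandfather') on the first of the month,
    weekly ('father') on Fridays, daily ('son') otherwise.
    The schedule is an assumption; GFS variants differ."""
    if day.day == 1:
        return "grandfather-monthly"
    if day.weekday() == 4:  # Friday
        return "father-weekly"
    return "son-daily"
```

Whether the label ends up on a tape cartridge or a removable disk, the rotation and retention logic is identical, which is exactly why the migration need not be complicated.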
Added by Chris MacKinnon on April 21, 2011 at 18:23 — No Comments
APC by Schneider Electric has launched the data center industry's most innovative, energy-efficient, and cost-effective approach to free cooling. This video shows the functional principle of EcoBreeze: http://www.youtube.com/watch?v=aEoAZLEn8UM
Added by Norbert Keil on April 21, 2011 at 1:07 — No Comments
FCoE (Fibre Channel over Ethernet) is a storage networking protocol that carries Fibre Channel natively over Ethernet. Because it encapsulates Fibre Channel frames inside Ethernet frames, Fibre Channel traffic can run alongside traditional Internet Protocol (IP) traffic on the same network.
With Fibre Channel over Ethernet, Fibre Channel becomes simply another network protocol running on Ethernet,…
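The encapsulation can be sketched minimally: an FCoE frame is an ordinary Ethernet frame whose EtherType is 0x8906 and whose payload is the Fibre Channel frame. This is a simplified sketch that omits the FCoE version field, SOF/EOF delimiters, and padding defined by the FC-BB-5 standard:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame.

    Simplified: the real FCoE header also carries a version field,
    SOF/EOF delimiters, and padding (FC-BB-5), omitted here for clarity.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame
```

Because the EtherType distinguishes FCoE from IP at layer 2, switches can carry both on the same links, which is the consolidation benefit the post describes.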
Added by Chris MacKinnon on April 20, 2011 at 17:25 — No Comments
Aging data centers are forcing many organizations into a corner. Many are running out of space, have insufficient power or operate in environmental conditions that threaten business continuity.
Clients considering their data center strategy may be unsure whether to retrofit their existing data centers or to build new from the ground up. The decision to build or retrofit is not a simple choice, and it will have major financial and operational repercussions.
Added by Chris MacKinnon on April 19, 2011 at 17:31 — No Comments
Many IT budgets focus on near-term CapEx and the twelve-month OpEx. This often creates a budget process in which new business initiatives are vetted but past business strategies are never revisited through a needs analysis, so they simply become part of the ongoing OpEx. Yet IT leaders are then held accountable for the long-term costs associated with those past business strategies.
Here is a simple example to illustrate the challenge introduced by the process:
Added by Chris MacKinnon on April 18, 2011 at 15:14 — No Comments
USB storage devices make it far easier to handle the large amounts of data that people need on a day-to-day basis. The benefit comes from not having to rely on network services, which often impose technical and policy limitations on how much data can be moved easily. For instance, for as long as I can remember, it has been fairly common to impose a 10 megabyte limit on email attachment size. Yet over the years, that size limit has been growing tighter and…Continue
Added by Chris MacKinnon on April 15, 2011 at 15:12 — No Comments
Today, enterprises and service providers interested in launching cloud computing services face the difficult task of integrating complex software and hardware components from multiple vendors. The resulting system can end up being expensive to build and hard to operate, undermining the original motives and benefits of moving to this new model.
Added by Chris MacKinnon on April 14, 2011 at 15:20 — No Comments
Please review whether everything is covered in this white paper, or whether something is missing.
Your opinions are valuable to my work.
roberto sanchez, RCDD
Added by roberto sanchez, RCDD on April 14, 2011 at 1:52 — No Comments
Given the dispersed nature of today’s organizations, with mobile workers and regional offices, the data center and IT infrastructure in reality extends beyond the boundaries of one or more centralized physical locations. What this means is that the operations team will be required to monitor, from a central NOC location, the performance of core IT infrastructure at remote sites and offices.…Continue
Added by Chris MacKinnon on April 13, 2011 at 13:38 — No Comments
The data center model is constantly evolving. A few years ago it was good enough to take daily tape backups of your critical information and send them to offsite storage. Often the off-site storage was at a data center, and the service included a “cold backup.” This meant that, in the event of a disaster on your primary server, the data center would provide a backup server, restore your latest tape saves, and effectively rebuild your environment from scratch.
Added by Chris MacKinnon on April 12, 2011 at 16:24 — No Comments
Data center cleaning is useful in today’s enterprise data centers for several reasons. Decontaminating a data center in accordance with ISO Standard 14644 (Class 100,000: no more than 100,000 particles of 0.5 micron or larger per cubic foot of air) has the following benefits:
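For reference, ISO 14644-1 defines its class limits by formula rather than by a fixed count; the sketch below evaluates it and shows that ISO Class 8 at 0.5 micron works out to roughly the old Federal Standard "Class 100,000" figure per cubic foot:

```python
def iso14644_limit(iso_class: float, particle_size_um: float) -> float:
    """Maximum particle concentration in particles per cubic metre for
    a given ISO 14644-1 class N and particle size D in micrometres:
        Cn = 10**N * (0.1 / D)**2.08
    """
    return 10 ** iso_class * (0.1 / particle_size_um) ** 2.08

# ISO Class 8 at 0.5 um: about 3.5 million particles per cubic metre,
# i.e. roughly 100,000 per cubic foot (1 m^3 is about 35.31 ft^3).
per_m3 = iso14644_limit(8, 0.5)
per_ft3 = per_m3 / 35.3147
```

This is why ISO Class 8 is commonly cited as the equivalent of the legacy Class 100,000 cleanliness level.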
Added by Chris MacKinnon on April 11, 2011 at 15:59 — No Comments
It is challenging enough to manage a slew of disparate resources without the additional noise from cloud vendors offering overly complex products that do not provide sufficient capability or performance for the average organization, or that lock it into a specific platform that does not integrate with its existing IT infrastructure.
Added by Chris MacKinnon on April 8, 2011 at 15:43 — No Comments
A data center is built with a fixed available capacity covering space, power, and cooling. However, owner/operators rarely, if ever, come anywhere near full utilization of that capacity. The cause is directly tied to the dynamic nature of the facility: the IT assets, and the types of equipment housed, are in a constant state of flux.
Added by Chris MacKinnon on April 8, 2011 at 14:34 — No Comments
Whether an organization has an e-commerce site relying on web applications for revenue, or is a service organization dependent on information delivered through web applications, constant and continuous availability is a major concern.
Added by Chris MacKinnon on April 8, 2011 at 14:30 — No Comments
The explosion in data volumes and the increasing complexity of new data types have put added stress on traditional data warehouses and raised the cost of storing years of data in expensive hardware environments that require specialist DBA resources. Information lifecycle management initiatives, while attractive as an architectural concept, have proven difficult to implement, as organizations struggle to define the business rules for which data classes require disk versus lower-cost storage…Continue
Added by Chris MacKinnon on April 8, 2011 at 14:27 — No Comments