Another year, another need to define the 'modern' data center. We have seen so many innovations in this space recently that I wonder if this should be a quarterly redefinition. The cloud craze is over, at least in terms of everyone racing to the cloud as fast as they can, though the public cloud players continue to grow their revenues even as many organizations retreat from the cloud or pursue hybrid solutions.
So where do we stand today? "I don't want to be in the data center business" is no longer a design-driving criterion: IDC reports that 87% of cloud-enabled organizations are keeping some portion of their workloads on-premises. Cloud is a reality, but so is the need to keep a sophisticated data center online. The talent gap only continues to grow as manual processes give way to automation, but even automation requires expertise that many organizations lack.
In this five-part series, we're going to talk about what the modern data center looks like and the next-gen technology being implemented today.
So let's start with a clean slate and lay out some goals for today's data center design:
Simplicity must be at the core of the data center. I love all the nerd knobs that my traditional network/storage/compute products give me, but that comes at a cost organizations can no longer afford. Don't get me wrong: as an engineer, simplicity also comes at the cost of granular control. However, what is the greater cost - a minor inefficiency or a data center no one fully understands? Trends like hyper-convergence and even hyper-converged backup systems are giving us a data center we can grasp and troubleshoot.
Visibility is no longer a luxury. For far too long, we've had no clue what's happening in our data centers. Which servers talk to which servers on a daily basis? Which are my most loaded connections? What is at the greatest risk of failure? We used to be able to live without these answers, but that reality is no more. Visibility tools can now visualize this data, and machine learning can deliver proactive analysis of our data centers. The best part is that access to these tools is no longer limited to the largest of data centers - more and more mid-sized and even small environments are able to subscribe to these services.
We have to do more with less. Even as we strive to simplify the data center, systems are becoming more mission-critical, and the expectation is that these systems will stay online. Human error is the largest cause of failure, so automation and programmability will be key in reducing our self-inflicted wounds. This also has the benefit of freeing us engineers to do higher-level tasks, such as future planning and engaging in business-relevant conversations. If you fear the robot takeover of our industry, then make yourself irreplaceable to management by becoming an expert at something other than the CLI.
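As a toy illustration of the kind of automation that cuts down on self-inflicted wounds, here is a minimal sketch (all device names, interfaces, and template fields are hypothetical) that renders switch configurations from a single template instead of hand-editing each one - the template is typed once, so a typo can't creep into device number forty-seven:

```python
from string import Template

# One template for every leaf switch; hand-editing per-device configs
# invites typos, while rendering from a template keeps them consistent.
CONFIG_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface $uplink\n"
    "  description uplink-to-$core\n"
    "  mtu 9000\n"
)

# Hypothetical inventory; in practice this would come from a source
# of truth such as an IPAM or CMDB export.
switches = [
    {"hostname": "leaf01", "uplink": "Ethernet49", "core": "spine01"},
    {"hostname": "leaf02", "uplink": "Ethernet49", "core": "spine01"},
]

# Render a config per device from the shared template.
configs = {sw["hostname"]: CONFIG_TEMPLATE.substitute(sw) for sw in switches}
print(configs["leaf01"])
```

Real deployments would push these rendered configs through a tool like Ansible or a vendor API, but the error-reduction principle is the same: humans edit the template and the inventory, not forty individual device configs.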
Disaster Recovery (DR) is no longer an option. I recently attended a conference, and the #1 pain point I heard at our booth was DR. As discussed above, the impetus from executives is to keep systems online. Sometimes this prompts a move to the cloud, but even clouds have their failures, sometimes due to nothing more than a WAN link or VPN circuit. Backup systems must be modernized to be simple, effective, and able to leverage cheap cloud storage for offsite archival.
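To make the "cheap cloud storage for offsite archival" idea concrete, here is a small sketch (function name and paths are illustrative, standard library only) that bundles a backup set into a compressed archive and records a checksum, so the copy landing in object storage can later be verified against the original. The actual upload call is deliberately omitted, since it depends on which provider's API you use:

```python
import hashlib
import tarfile
from pathlib import Path

def archive_for_offsite(source_dir: str, archive_path: str) -> str:
    """Bundle a directory into a gzip tarball and return its SHA-256
    digest, which should be stored alongside the offsite copy so a
    restore can be verified before it is trusted."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    # Checksum the finished archive for integrity verification.
    digest = hashlib.sha256(Path(archive_path).read_bytes()).hexdigest()
    # Uploading to a cloud object store (S3, Azure Blob, GCS, ...)
    # would happen here; the call is vendor-specific, so it is left out.
    return digest
```

In practice a modern backup product handles the scheduling, deduplication, and lifecycle tiering for you; the point of the sketch is simply that cheap object storage plus an integrity check is all the offsite tier fundamentally requires.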
Everyone belongs in the cloud. This isn't to say that every workload belongs there, but the cloud must be appropriately leveraged for your specific environment. The cloud must also be secured at the layers that remain within our control, such as DNS security and account access with Cloud Access Security Brokers (CASBs).
I'm going to unpack each one of these elements with an individual post, so stay tuned and/or click ahead as we explore these in more detail!