Automation and connectivity will enable the modern data center to extend to many more remote locations

That’s the message from a broad new survey of 150 senior IT executives and data center managers on the future of the data center. IT leaders and engineers say they must transform their data centers to leverage the explosive growth of data coming from nearly every direction.

Here are some excerpts:

Gardner: Martin, what’s driving this movement away from mostly centralized IT infrastructure to a much more diverse topology and architecture?

Olsen: It’s an interesting question. The way I look at it is it’s about the cloud coming to you. It certainly seems that we are moving away from centralized IT or centralized locations where we process data. It’s now more about the cloud moving beyond that model.

We are on the front steps of a profound re-architecting of the Internet. Interestingly, there’s no finish line or prescribed recipe at this point. But we need to look at processing data very, very differently.

Over the past decade or more, IT has become an integral part of our businesses. And it’s more than just back-end applications like customer relationship management (CRM), enterprise resource planning (ERP), and material requirements planning (MRP) systems that service the organization. It’s also become an integrated fabric to how we conduct our businesses.

Olsen: Yes, that's right, and the starting point is well beyond just content distribution networks (CDNs). It's also about home automation: accessing your home security cameras, adjusting the temperature, and similar tasks around the home.

That’s now moving to business automation, where we use compute and generate data to develop, design, manufacture, deploy, and operate our offerings to customers in a much better and differentiated fashion.

In the past, we pushed that data out for consumption, but now it's much more about data meeting people: data interacting with people in a distributed IT environment. And then, going beyond that, there is 5G.

We see a paradigm shift in the way we use IT. Take, for example, the amount of tech that goes into a manufacturing facility, especially high-tech manufacturing. It’s exploding, with tens of thousands of sensors deployed in just one facility to help dramatically improve productivity, differentiate, and drive efficiency into the business.

Retail operations, from a compute standpoint, now require location services to offer a personalized experience in the pre-shop phase, in the store itself, and potentially in the post-shop, or follow-up, phase.

Gardner: The complexity over the past 10 years about factoring cloud, hybrid cloud, private cloud, and multi-cloud is now expanding back down into the organization — whether it’s an environment for retail, home and consumer, and undoubtedly industrial and business-to-business. How are IT leaders and engineers going to update their data centers to exploit 5G and edge computing opportunities despite this complexity?

Olsen: You have to think about it differently around your physical infrastructure. You have the data aspect of where data moves and how you process it. That’s going to sit on physical infrastructure somewhere, and it’s going to need to be managed somehow.

And so, the reliance on onsite technical and operational expertise has to evolve, too. You won’t necessarily have that technical support, a data center engineer walking the halls of a massive data center all day, for example. You are going to be in places like some backroom of a retail store, a manufacturing facility, or the base of a cell tower. It could be highly inaccessible.

You should also consider where the applications are going to be hosted, and that’s not clear now. How much bandwidth is needed? It’s not clear. The demand is not clear at this point. As I said in the beginning, there is no finish line. There’s nothing that we can draw up and say, “This is what it’s going to be.” There is a version of it out there that’s currently focused around home automation and content distribution, and that’s just now moving to business automation, but again, not in any prescribed way yet.

So how do you adopt the "right" technologies now? That becomes a real concern for your ability to compete over time, because you can outdate yourself really, really quickly if you don't make the right choices.

Gardner: When you face such change in your architecture and potential decentralization of micro data centers, you still need to focus on security, backup and recovery, and contingency plans for emergencies. We still need to be mission-critical, even though we are distributed. And, as you point out, many of these systems are going to be self-healing and self-configuring, which requires a different set of skills.

Olsen: We wanted to gauge the thinking and gain a sense of what the C-suite, the data center engineers, and the data center community were thinking as we face this new world of edge computing, 5G, and Internet of things (IoT). The top findings show a need for fundamental strategic change. We face a new mixture of architectures that is far more decentralized and with much more modularity, and that will mean a new way to manage and operate these data centers, too.

Based on the survey, only 11 percent of C-suite executives believe their data centers are currently updated ahead of current needs. They certainly don't have the infrastructure ready for what's needed in the future. It's even lower among the data center engineers we polled: only 1 percent of them believe they are ready, meaning the vast majority, 99 percent, don't believe they have the right infrastructure.

There is also broad agreement that security and bandwidth need to be updated. Concern about security is a big thing. We know from experience that security concerns have stunted remote monitoring adoption. But the sheer quantity of disparate sites required for edge computing makes it a necessity to access, assess, and potentially reconfigure and remotely fix problems through remote monitoring and access.

Vertiv is driving a high level of configurability into its offerings, so you can take our components and products and put them together in a multitude of different ways to provide the utmost flexibility when you deploy. We are driving modularized solutions, both in terms of modular data centers and in terms of how it all goes together onsite. And we are adding much more intelligence into our offerings for the remote sites, as well as the connectivity to be able to access, assess, and optimize these systems remotely.

Gardner: Martin, did the survey indicate whether the IT leaders in the field are anticipating or demanding such self-configuration technologies?

Olsen: Some 24 percent of the executives reported that they expect more than 50 percent of data centers will be self-configuring or have zero-touch provisioning by 2025. And about one-third of them say that more than 50 percent of their data centers will be self-healing by then, too.

That’s not to say that they have all of the answers. That’s their prediction and their responses to what’s going to be needed to solve their needs. So, 29 percent of engineers say they don’t know what percentage of the data centers will be self-configuring and self-healing, but there is an overwhelming agreement that it is a capability they need to be thinking about. Vertiv will develop and engineer our offerings going forward based on what’s going to be put in place out there.

Gardner: So there may be more potential points of failure, but there is going to be a whole new set of technologies designed to ameliorate problems, automate, and allow the remote capability to fix things as needed. Tell us about the proper balance between automation and remote servicing. How might they work together?

Olsen: First of all, it’s not just a physical infrastructure problem. It has everything to do with the data and workloads as well. They go hand-in-hand; it certainly requires a partnership, a team of people and organizations that come together and help.

We are trying to alleviate that challenge by making our offerings more intelligent and offering up actionable alarms, warnings, and recommendations to weigh choices across an overall platform. Again, it takes a partnership with the other vendors and services companies. It’s not just from a physical infrastructure standpoint.

Gardner: And when that ecosystem comes together, you can provide a constellation of data centers working in harmony to deliver services from the edge to the consumer and back to the data centers. And when you can do that around and around, like a circuit, great things can happen.

So let’s ground this, if we can, to the business reality. We are going to enable entirely new business models, with entirely new capabilities. Are there examples of how this might work across different verticals? Can you illustrate — when you have constructed decentralized data centers properly — the business payoffs?

Olsen: As you point out, it’s all about the business outcomes we can deliver in the field. Take healthcare. There is a shortage of healthcare expertise in rural areas. Being able to offer specialized doctors and advanced healthcare in places that you wouldn’t imagine today requires a new level of compute and network that delivers low latency all the way to the endpoints.

Imagine a truck fitted with a medical imaging suite. That's going to have to operate somewhat autonomously. The 5G connectivity becomes essential as you process those images. They have to be uploaded into a central repository to be accessed by specialists around the world who read the images.

That requires two-way connectivity. A huge amount of data from these images needs to move to provide that higher level of healthcare and a better patient experience in places where we couldn’t do it before.

So 5G plays into that, but it also means being able to process and analyze some of the data locally. There need to be aggregation points throughout the network. You will need compute to reside at multiple levels of the infrastructure. Places like the base of a cell tower could become a focal point for this.

You can imagine having four, five, six times as much compute power sitting in these places along a remote highway that is not easily accessible. So, enabling technical staff to troubleshoot those sites remotely becomes vital.

Again, that requires compute right at the endpoint device. It requires aggregation points and connectivity all the way back to the cloud. So, it requires a complex network working together. The more advanced these use cases become, and the more remote locations we have to think through, the more infrastructure we are going to have to deploy and access as well.

Gardner: Martin, when I listen to you describe these different types of data centers with increased complexity and capabilities in the networks, it sounds expensive. But are there efficiencies you gain when you have a comprehensive design across all of the parts of the ecosystem? Are there mitigating factors that help with the total cost?

Olsen: Yes, as the net footprint of compute increases, I don’t think the cost is linear with that. We have proven that with the Vertiv technologies we have developed and already deployed. As the compute footprint increases, there is a fundamental need for driving energy efficiency into the infrastructure. That comes in the form of using more efficient ways of cooling the IT infrastructure, and we have several options around that.

So, the amount of infrastructure that’s going to go out there will certainly increase. We don’t think it’s necessarily going to be linear in terms of the cost when you pay close attention to how, as an organization, you deploy edge computing. By considering these new technologies, that’s going to help drive energy efficiency, for example.

Gardner: Were there any insights from the Forbes survey that went to the cost equation? How do the IT executives expect this to shake out?

Olsen: We found that 71 percent of the C-suite executives said that future data centers will reduce costs. That speaks to both the fact that there will be more infrastructure out there, but that it will be more energy efficient in how it’s run.

It’s also going to reduce the cost of the overall business. Going back to the original discussion around the business outcomes, deploying infrastructure in all these different places will help drive down the overall cost of doing business.

It’s an energy efficiency play both from a very fundamental standpoint in the way you simply power and cool the equipment, and overall, as a business, in the way you deliver improved customer experience and how you deliver products and services for your customers.

Gardner: How do organizations prepare themselves to get out in front of this? As we indicated from the survey findings, not that many say they are prepared. What should they be doing now to change that?

Olsen: Yes, most organizations are unprepared for the future, and they are not necessarily even in agreement on the challenges. A very small percentage of respondents (11 percent of executives) believe that their data centers are ahead of current needs, and even fewer of the data center engineers do. Only 44 percent of them say that their data centers are updated regularly, and only 29 percent say their data centers even meet current needs.

To prepare going forward, they should seek partnerships. Get the data centers upgraded, but also think through and understand how organizations like Vertiv have decades of experience in designing, deploying, and operating large data centers from a physical infrastructure standpoint. We use that experience and knowledge base for the data center of tomorrow. It can be a single IT rack or two going to any location.

Very few organizations can do this on their own. It’s about the ecosystem, working with companies like Vertiv, working closely with our strategic partners on the IT side, storage networks, and all the way through to the applications that make it all work in unison.

Think through how to efficiently add compute capacity across all of these new locations, what those new locations should look like, and what the requirements are from a security standpoint.

There is a resiliency aspect to it as well. In harsh environments such as high-tech manufacturing, you need to ensure the infrastructure is scalable and minimizes capital expenditure. The modular approach allows you to build for a future that may be somewhat unknown at this point. Deploying modular systems that you can easily augment with more capacity or redundancy over time, and that operate via robust remote management platforms, is one of the things you want to be thinking about.
