
Dell EMC pursues 'post flash' storage

By BNamericas Monday, November 19, 2018


BNamericas: It's been two years since the merger of Dell EMC. How successful has that been?

Srinivasan: Michael Dell saw storage as just a piece of the puzzle. People are really thinking about applications, the whole stack, and there were a lot of synergies across that stack: servers, networking, storage. That has worked out very well. If you throw VMware into the mix, it becomes an incredibly powerful story.

On top of that is Pivotal Cloud Foundry. Now we're not just talking about infrastructure but about management of cloud infrastructure as well as building cloud-native applications. Now we have a holistic story to sell to our customers.

Internally, there's been a lot of breaking down of silos, making everything operate like one team, and strategically aligning the seven businesses. Within Dell EMC there is now incredible cooperation between servers, storage and software-defined storage. Things like that would have been very difficult to do pre-merger.

BNamericas: Storage is all about data. But with artificial intelligence and the internet of things there is the need to access that data much more quickly and to create value with it.

Srinivasan: From a storage perspective, flash is definitely sweeping through. A couple of years ago flash was only in the high end; now it is all the way into the mid-range products, and I think that in a few years it will go all the way into the low-end products, including cold, unstructured data products like ECS and Isilon. So in a sense, flash is in the mainstream cycle.

But we're now also looking at the 'post-flash' era. Storage class memory is coming, and even beyond that there is persistent memory, where you're getting memory speeds. We're talking less than a microsecond of latency.

With regular memory, you turn off the power and the memory goes away. Persistent memory stays, but to the application it feels like writing to memory. So for compute-intensive, low-latency workloads like artificial intelligence that require very fast operation, you no longer have to convert from the memory to the storage device; it's all in one. Those lines will be blurred in the next two to three years.
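The programming model described here, writes that feel like memory but survive power loss, can be sketched with a memory-mapped file. This is only an analogy: real persistent memory is accessed with load/store instructions rather than through a file, and the file name below is a hypothetical stand-in for a pmem region.

```python
import mmap
import os

def write_then_reload(path: str) -> bytes:
    """Write through a memory-mapped region, then re-read it from disk,
    simulating a 'power loss' between the write and the read."""
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)          # reserve a 4 KiB region
    with open(path, "r+b") as f:
        region = mmap.mmap(f.fileno(), 4096)
        region[0:5] = b"hello"           # feels like an ordinary memory write
        region.flush()                   # make it durable, like a pmem flush
        region.close()
    with open(path, "rb") as f:          # a fresh open stands in for a reboot
        data = f.read(5)
    os.remove(path)
    return data
```

The point of the sketch is the absence of a serialization step: the application never converts an in-memory structure into a storage format; the bytes it wrote through "memory" are the bytes that persist.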

BNamericas: What sort of business applications do you think this next generation of storage will serve?

Srinivasan: Mostly data analytics, like machine learning, and blockchain. The Chilean stock exchange, for example, is talking about going down to milliseconds. A trading transaction that normally takes two days to settle will be completed in four minutes or so.

From a storage perspective we're focused on how to make the media faster. The other big focus is software-defined storage. Everything is moving toward industry-standard servers, which is where Dell Technologies is so powerful. Today, building software that can run on anybody's server is doable, but it is an expensive proposition. Our approach is that we own the servers as well. Our software will be open so it can work on any generic industry-standard server, but we'll also engineer it with our own servers.

So, for example, our software-defined products are sold as software only, but also as ready nodes, pre-loaded onto our PowerEdge servers. That is what is possible with the end-to-end stack.

BNamericas: There has been a lot of talk this year about artificial intelligence. 

Srinivasan: We see three areas we're investing in to support use cases. The first is user experience, like Siri or Alexa: the human-machine interaction part, which has changed a lot with AI algorithms. The second is applying machine learning to improve your business processes, productivity and product quality. The third is making your infrastructure smart. Not only are we supporting AI applications for our customers, we are actually building AI into our products.

For the first and second use cases we offer solutions like Isilon and ECS; a lot of AI works with unstructured data, which is files and objects. For the third bucket we are applying machine learning to our own products: our PowerMax product has a machine learning engine inside.

The analogy I use is that the PowerMax storage array is like a self-driving car. If you can have a self-driving car, why can't you have a self-driving storage system? Why does it take one, two or three humans to manage that thing? We want to automate storage, so our mission is to build self-driving infrastructure.

BNamericas: How does this fit with edge computing where you bring everything closer to the device?

Srinivasan: Deep learning is happening in the cloud, and that learning impacts the things that are out in the field, at the edge. The next level is to do that out at the true edge; here we're talking about the internet of things.

Devices everywhere will have inference engines, or learning models, embedded. The learning happens either in the data center or in the cloud, and those models are then pushed out periodically to the edge devices.

We're investing in hardware, like GPU-enabled devices that can do real-time analytics. With traditional big data, you collect the data for a month and then you analyze it. That is no good for real-time processing. When you see Twitter feeds, stock trades or shares coming in, you want to be able to optimize your behavior as that is happening.
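The batch-versus-streaming contrast above can be sketched with a sliding-window aggregate that updates on every event instead of waiting for a monthly batch job. This is a minimal illustration of the streaming idea, not any Dell EMC product code; the class name and window size are made up for the example.

```python
from collections import deque

class SlidingWindowMean:
    """Running mean over the last `window_s` seconds of events, updated
    per event so behavior can be adjusted while data is still arriving."""

    def __init__(self, window_s: float):
        self.window_s = window_s
        self.events = deque()   # (timestamp, value) pairs, oldest first
        self.total = 0.0

    def add(self, value: float, now: float) -> float:
        """Ingest one event (e.g. a trade price) and return the fresh mean."""
        self.events.append((now, value))
        self.total += value
        # Evict events that have fallen out of the time window.
        while self.events and now - self.events[0][0] > self.window_s:
            _, old = self.events.popleft()
            self.total -= old
        return self.total / len(self.events)
```

A batch pipeline would answer the same question only after the collection period ends; here each `add` call reflects the current window immediately, which is the property real-time feeds like trades require.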

BNamericas: How much faster will this new generation of storage be than flash memory?

Srinivasan: Flash took latency down to 100-plus microseconds, a factor of ten below spinning disk. Storage class memory will take speeds down by another factor of ten. Persistent memory takes it even further, to sub-microsecond latency.

The key insight here is that not everything needs sub-microsecond latency; many applications cannot take advantage of it. Where it is going to be powerful is in being able to mix and match these different types of memories and storage media into a system that automatically knows where to put what.

In the old days we had memory and hard drives and flash. Now we'll have memory, persistent memory, storage class memory and flash.
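The "knows where to put what" idea amounts to a placement policy: pick the cheapest tier that still meets a workload's latency requirement. A minimal sketch follows; the latency figures are the rough orders of magnitude quoted in the interview, not measured product specifications, and the tier names are illustrative.

```python
# Tiers ordered cheapest-first. Latencies are rough figures implied by
# the interview (disk ~1 ms, flash ~100 us, SCM ~10 us, pmem sub-us).
TIERS = [
    ("spinning-disk",       1e-3),
    ("flash",             100e-6),
    ("storage-class-mem",  10e-6),
    ("persistent-memory",   1e-6),
    ("dram",              0.1e-6),
]

def place(required_latency_s: float) -> str:
    """Return the cheapest tier whose latency meets the requirement."""
    for name, latency in TIERS:
        if latency <= required_latency_s:
            return name
    return TIERS[-1][0]  # nothing cheap enough: fall back to the fastest tier
```

Cold compliance data lands on spinning disk, ordinary transactional work on flash, and only the genuinely latency-critical workloads consume the scarce persistent-memory tier, which is the economic argument the interview makes for mixing media.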

BNamericas: Will spinning disk eventually disappear?

Srinivasan: I don't think so; it will take a long time. There is a lot of cold data that will be stored in the cloud, and you don't want to use up a lot of precious data center space and cooling for that. Certain data that doesn't need to be retrieved rapidly, or that is kept for compliance reasons, will be stored on spinning disk.

Most of the hard drive business is going to go either to long-term retention or backup. But also remember that the economics of hard drives are still improving. It's down to a penny a gigabyte, which is still very cheap, and you're talking about exabytes of information to store.

BNamericas: What are the main challenges for the industry?

Srinivasan: It's all about execution. There is so much innovation going on; you have to keep innovating while keeping your business running and bringing your customers along with you. Sometimes customers tell us that the rate of change is too much for them to consume, and they're keeping their assets longer, for five years or more. We have assets out in the field that are beyond maintenance because they're so old, but they keep running. Sometimes you have to convince your customers to accelerate and move to new generations of technology.
