HPE Nimble Storage – Snapshots, Clones & Copy Data Management Lightboard

Hi, I’m Nigel Williams. And hello, I’m Nick Dyer.
And we’re part of the HPE Storage Team. Now, we’ve been doing [unintelligible] videos for a while now, haven’t we Nick? And as part of that we’ve had customer submissions: they know about snapshots, they hear about them all the time, and they want to know how they work specifically on the Nimble platform. So I thought it would be good to bring you in, and we’re gonna have a quick run-through and give them a rundown of how it all operates. Yeah, absolutely. And as we all know, the data services, data technologies and application features that you see from storage platform to platform are not created equal. Some are great, some are not so great. So it’s worthwhile talking through how we do this technology and why it might be worth using in your business. Exactly. Now, let’s get the
initial stuff straight off the bat. People who are having a look at Nimble are already aware, to a point, of what snapshots are, so they’re gonna have a few set questions. Let’s deal with those first. Yes, we can do application-consistent snapshots. Here we’ve got VMware with its vCenter integration, Linux on physical hosts, Windows with VSS, and from all of those we can prompt an application-consistent snapshot straight off the bat. So that’s one thing you can tick off.
Yeah, and absolutely, on the Windows side that also covers Hyper-V. So it can be virtualization with VMware or Hyper-V-consistent snapshots. Now, why would I take an application-consistent snapshot? Well, at the end of the day you want that snapshot to be application-aware, so that in the case of a rollback we know there’s not gonna be a problem with it. Ideally, the IO has already been coalesced at that point.
Absolutely. So let’s talk a bit deeper about why all this is important. So
we have a block storage array. So, the HPE Nimble Storage array is a
block based storage platform. It’s not file, it is not object, it is block. So, therefore,
the data sets that we’re presenting to your applications and your hosts are block
based. Therefore, when you take snapshots, a snapshot is essentially a point-in-time reference to what your block-based volume looked like. So if you’re doing what we call crash-consistent snapshots,
you’re going to have IO coming from an application down to your storage
volumes. If you take a snapshot with no application consistency, what you’re
going to find is that you’re gonna have data in transit, and it’s gonna look like you
pulled the plug out of the back of the platform, ultimately, when you try and restore it.
And that’s not to say that they’re unusable, but they’re really a fallback option if you’ve got nothing. Absolutely. Database administrators, Oracle administrators, lots of technologies can easily roll back from crash-consistent snapshots. They’re a really good way to complement application-consistent snapshots, and that is what you do as part of your
backup window. Your backup window essentially puts everything into hot backup
mode and then takes it off every night to a bucket repository of some sort, right?
Now, we know, of course, if you’re going to do a VMware backup, or if you have that evil of all worlds, which is, of course, the VM [unintelligible]. Correct. VM [unintelligible] and VMware consolidation of snapshots is not something very pretty. Backups themselves are not pretty.
Typically, you take them every 24 hours. To restore them back, it’s not much fun.
And the challenge with that is you’ve got a 24-hour period of time where you’re
going to take your next backup, and during that window, something could go
wrong and you’re going to have data loss from the point in time you have a problem,
to your most recent fall back point, which could be yesterday, which isn’t
so great. Snapshots are a really good way to complement this. So, on the Nimble array you can take up to 190,000 snapshots. It is a huge number. Not many other arrays in the world can do this. And the reason is that the Nimble array does something called redirect-on-write snapshots. Now, just to cover off what the
difference is between different snapshot technologies, I’ve
got a couple of volumes here. Let’s say this volume is a terabyte in size. If
I do a traditional snapshot, which is based on copy on write technologies, I normally
have another snapshot volume, maybe on a different tier technology, it might be
slower. And that also needs to be the same size. So you end up allocating
more capacity. And as you make changes to this data set from your application,
that’s going to do copies of the data from here to here. So, if you’re doing 1,000 IOPS to a volume and you’re copying data around for snapshots, that’s typically three or four copies of data for every one host write. That could mean three or four thousand IOPS at the back end. So essentially, for every write that you have coming into that volume, there has to be another write before it gets acknowledged back up to the host. Now, you could have a wonderful tier of storage here that’s doing all the work. But if your snapshots go to a lower tier of storage, then straight away you’ve just introduced a bottleneck. The volume is useless. It’s no longer the fast thing it used to be.
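That copy-out penalty is easy to see in a toy model. This is an illustrative sketch only (the class and counters are hypothetical, not any vendor's actual implementation): with copy-on-write, the first host write to each snapped block forces an extra backend I/O to preserve the old data before the new write can land.

```python
# Toy model of copy-on-write snapshot overhead (illustrative, not a real array).
class CowVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)       # live data: address -> value
        self.snapshot_area = None        # separate (often slower) reserve space
        self.backend_ios = 0             # extra I/Os caused by the snapshot

    def snapshot(self):
        self.snapshot_area = {}          # reserve allocated up front in real CoW

    def write(self, addr, data):
        if self.snapshot_area is not None and addr not in self.snapshot_area:
            # copy the old block out BEFORE overwriting it: extra backend I/O
            # that must complete before the host write is acknowledged
            self.snapshot_area[addr] = self.blocks[addr]
            self.backend_ios += 1
        self.blocks[addr] = data

vol = CowVolume({"A": 0, "B": 0, "C": 0, "D": 0})
vol.snapshot()
for addr in ("A", "B", "C"):             # three host writes...
    vol.write(addr, 1)
print(vol.backend_ios)                   # ...cost three extra copy-out I/Os
```

Three host writes cost three extra backend copies here; scale that to thousands of IOPS against a slower snapshot tier and the back-end IO quickly becomes the bottleneck.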
So you might have heard vendors talk about front-end IO and back-end IO. The back-end IO is the bottleneck. This is how it works. This is why storage vendors tend not to say keep lots of snaps, and why they don’t tend to say keep them for a long time. Because the longer you keep them, the more data change you’re going to have. On Nimble, you can take them every
5 minutes, 10 minutes, half an hour, a day, a week, a month. It is up to you.
You can keep them for years. We allow you to take 190,000 on the whole array.
Now, why is this? It is something called redirect-on-write snapshots. And just to walk you through how that works, I just want to draw your eye over here. So, I’ve got a volume here, again some terabytes in size. And we’ve got four blocks inside of the volume: A, B, C and D. I take a snapshot at 10 am, which is a metadata copy of the volume. We’re not copying blocks, we’re not allocating space, we’re just copying the metadata. Essentially, nothing’s changed. So how big is the snapshot? It’s gonna be effectively zero.
Just to take a step back to how customers are going to work with this: they take that snap, they’ve got that point in time. So, as far as they’re concerned, that is now a rollback point that they can go to at any time.
And, it is instantaneous. It takes no time to take a snapshot, it is
less than a second. So it’s instantaneous, it’s very efficient, it has no performance
overhead because we’re not copying data, and you’re not storing more blocks
of data on the storage array. Bonus. If I then make a change. So
let’s change B to B1 and write some new data to the volume. As part of our CASL file system we’re gonna append that as a new stripe write. So we don’t overwrite data sets, and we don’t get [unknown word] with how that works. It is very efficient. If I take a snapshot at 10:30, well, the only thing that’s changed now is B1, so the snapshot will now reference B, and it will own B. So your snapshot is the size of what B is, but it can also see A, C and D. So it can restore that data. You’ve got a rollback point for that data, but it doesn’t own it. So you’ve got a single notion of ownership of data. The benefit of that
as well is that you’re not duplicating the data. So, we don’t have to dedup it. So,
we’re not burning cycles in the array to be more efficient.
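The A/B/C/D walkthrough above can be modelled in a few lines. This is a toy sketch of redirect-on-write bookkeeping, not the actual CASL implementation: a snapshot copies only the address-to-block metadata map, new writes append new blocks rather than overwriting, and every snapshot is an independent rollback point.

```python
# Toy model of redirect-on-write snapshots (illustrative sketch, not CASL).
class RowVolume:
    def __init__(self, blocks):
        self.store = {}                  # block-id -> data (append-only)
        self.next_id = 0
        self.map = {addr: self._append(d) for addr, d in blocks.items()}
        self.snapshots = {}              # name -> frozen metadata map

    def _append(self, data):
        bid, self.next_id = self.next_id, self.next_id + 1
        self.store[bid] = data           # new data lands in a NEW block
        return bid

    def snapshot(self, name):
        self.snapshots[name] = dict(self.map)  # metadata only: zero data copied

    def write(self, addr, data):
        self.map[addr] = self._append(data)    # redirect: old block untouched

    def read(self, addr):
        return self.store[self.map[addr]]

    def rollback(self, name):
        self.map = dict(self.snapshots[name])  # any snapshot, in any order

vol = RowVolume({"A": "a", "B": "b", "C": "c", "D": "d"})
vol.snapshot("10:00")
vol.write("B", "b1")                 # B1 appended; the snapshot still owns B
vol.snapshot("10:30")                # only the size of B's change
vol.rollback("10:00")
print(vol.read("B"))                 # prints: b
```

Because the 10:00 and 10:30 snapshots are both just frozen maps, rolling back to 10:00 does not destroy the 10:30 snapshot; you can roll forward to it again.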
It’s efficient by design. So wonderfully, we can go to very large numbers of snapshots, and this is why: we can effectively pick the relevant blocks on the array and build the snapshot at that point in time out of those blocks. One of the benefits of that is that we don’t end up with this multitude of copies running through the array. On the old copy-on-write design, you started piling up multiple copies with even 10 or 20 snapshots. All of a sudden, you had a large chunk
of that array that was being used up. So, the great thing about
this, as you rightly say, every snapshot is independent. Yes,
they are metadata reference points, but they’re not parent and child. So
you can roll back. You can roll forwards. If you roll back, you don’t lose the
one that you created after the one you rolled back to. You don’t have that
because each one is completely independent. So we’ve taken some snaps. That’s
great. How do I recover from these? Well, in essence, we have a number
of options in what we want to do. And this is where the power really comes
through. The fact that this snapshot technology is integrated really well means that we don’t have to plug anything else onto it. So, I’ll steal this volume, because you’ve taken those two. If I come over here, I’ve taken a number of snapshots; I’ll use the same kind of times as you. Let’s say we’ve taken them at 10, 11 and 12. I’ve decided there’s been an incident, whether it be ransomware,
whether it be anything else that’s causing me to want to roll back to a particular point in time. I can take this snapshot and, with the option that you just mentioned before, I can roll back to that point.
It will rebuild that volume out of reference blocks, and therefore present it out,
and I can choose to present that back to the original host, or I can present that to
a new host, if I wish. Or, if I want to see what the contents of that volume were before, I can use what we call a zero-copy clone. So, in essence, I can take that snapshot and create a brand new logical volume out of it. Now, even though it is still built from the snapshot’s data set, I want to be able to see it as a new logical unit. So how much space do you think that will actually take? Well, as the name suggests, zero.
Yes. So, what I’m doing is taking the blocks that I need, building a new logical unit out of them, and presenting it back up wherever I want. Now, I can either present it back up to the original host or I can present it up to some new host. I can even, as you’ve mentioned and we discussed before, go through and take snapshots of that if I want to. I can treat it as a new, individual entity. And that gives a whole new realm of opportunities for people like dev and SQL teams to do reporting and things like this,
without impacting the original volume. Right. So this is a first class volume.
It’s not a second class volume. It’s not a second tier of storage.
It’s not like it gets the worst performance. It’s a first-class volume; you get the same performance as you do from the original production volume. You can take clones of a clone. You can take a snapshot of a clone. You can go as deep as you like with this. Very efficient. And this is how you can start to use it for test/dev, for UAT, and for single object restores,
because the really cool thing that pulls all of this together
is the application consistency. We’ve got the plug ins natively in the array
to do app consistency for VMware, Windows, Hyper-V and Linux applications.
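A zero-copy clone falls out of the same metadata trick. Here is a hypothetical sketch (the class and names are illustrative, not Nimble's API): the clone starts life as nothing but a copy of the parent's address map, so it consumes no new data blocks until it is written to, and its writes never touch the parent's blocks.

```python
# Toy model of a zero-copy clone over a shared block store (illustrative only).
class Volume:
    store = {}               # shared block store: block-id -> data
    next_id = 0

    def __init__(self, mapping=None):
        self.map = dict(mapping or {})   # address -> block-id (metadata only)

    def write(self, addr, data):
        Volume.store[Volume.next_id] = data   # append a new block
        self.map[addr] = Volume.next_id
        Volume.next_id += 1

    def read(self, addr):
        return Volume.store[self.map[addr]]

    def clone(self):
        return Volume(self.map)          # zero-copy: shares every existing block

prod = Volume()
prod.write("A", "rows-v1")
prod.write("B", "index-v1")

dev = prod.clone()                       # instant; consumes no data capacity
dev.write("B", "index-test")             # clone diverges via a new block only
print(prod.read("B"), dev.read("B"))     # prints: index-v1 index-test
```

The clone is a first-class volume in this model: it reads, writes, and could itself be snapshotted or cloned again, all without ever copying the parent's data.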
Fantastic. But backup providers are where it gets really interesting.
And this is the bit that I really like. I’m a bit of a backup guy at heart and, you know, we always have the debate out there: are snapshots backups, or are they not? The honest answer here is that we can do both. You mentioned the pain that you
can have in a backup environment. We can use these to supplement
that and gain a better result, either to accompany it or to maybe tackle
those jobs or those workloads that the existing backup application is just
struggling with. Now there’s a number of backup products out there
that Nimble natively just plugs into. If you want to talk about Commvault, you
want to talk about Veeam, you want to talk about RMC, you want to talk about
Data Protector. We can go in there and we can drop that array into the GUI of the backup application, and at that point it has visibility of the snapshots that have been taken. It can even orchestrate and tell the Nimble to take snapshots on a schedule. That means you can then have, in a single pane of glass, your existing backup operations and the snapshot
operations from the Nimble and be able to recover from any of those points.
All of a sudden, the options open up: if you want to use a snapshot as a source to do backups from, or if you want to move a workload to perhaps a different array, you can really start to build a complex workflow.
So, consider this: you’ve got a backup application that backs up every 24 hours, going to a repository somewhere else. Well, that’s great for every 24 hours, but you do see VM stuns, and you’ve gotta handle application consistency there. With this, you drop the array into a backup app. You can start using Nimble snapshots and
clones to complement that backup strategy to take hourlies or half-hourlies
or five-minute snapshots that the backup technology can now refer to for single object restores, or even clone out into bubble-wrapped, isolated test environments. You’re getting the best of both worlds
by using Nimble snapshots and clones with your backup technology.
And the best part of all this is that it’s bundled in with the array,
there’s no license cost. Right. Just like everything that we’ve ever done at Nimble Storage, everything’s complimentary. If you’re not using snapshots today, I highly recommend you take a look at them. If you are using snapshots today, I’m sure that you’ll definitely be a fan of them. If you’ve got any comments
about them, please do leave them in the Comments
pane below. So thank you very much for watching this
video today. I do hope you find it useful. If you’d like to learn more,
we’ve put some links in the description below that you can go and follow.
Wonderful. And if you need to find out anything else, just reach out to your local HPE Account Representative, go to the website, or get in touch directly.
Thank you.
