The Uncast Show

Unraid 7 Full Walkthrough & New Features Breakdown

• Unraid

Grab yourself a coffee, tea, or beer 🍻 and get ready to dive deep into everything new in Unraid 7! This update brings massive changes, including no more mandatory Unraid array, ZFS improvements, a redesigned dashboard and other UI enhancements, and big upgrades to Docker & VM support! Whether you're upgrading or setting up fresh, this guide has you covered.

Note: Portions of this guide are best viewed on YouTube!


📌 Chapters
00:00 Intro
02:50 OS Upgrade Tips and Best Practices
06:00 ReiserFS is no more. Learn how to move to other file systems like XFS
08:10 Use mover to empty and move data so that disks can be reformatted
13:50 No 'Unraid Array' necessary now in version 7 -- perfect for SSDs/NVMes
19:10 A look at different ZFS RAID levels and what they all mean
24:25 Advantages of Compression in a Zpool
25:25 Unraid ZFS subpools, aka auxiliary vdevs: What they all mean and how to use them properly
35:30 Setting up a fresh install on Unraid 7
39:10 Unraid 7's brand new Dashboard, UI tweaks, and new settings
49:12 New Unraid 7 Docker and VM features
1:11:55 Next video tease

🔗 Resources & Links:
📥 Get Started with Unraid in 15 minutes or less

📖 Unraid Docs


💬 Join the Community


Other Ways to Connect with the Uncast Show


Speaker 1:

Hi there, everyone, and welcome to another episode of the Uncast Show. I want to wish you a good morning, good evening or good afternoon, wherever in the world you're watching this from. Now, today's video is the official Uncast video about Unraid 7, so this video is going to be absolutely great. In this video, we're going to be talking not only about all of the new features. We're going to be talking about things such as Unraid subpools, or auxiliary vdevs. We're going to be talking about the fact that we don't need an Unraid array anymore, and we're going to be looking at how we can actually remove it, and at setting up a server from scratch without an array as well. And as we'll be setting up a server from scratch, of course we're going to be talking about zpools. Now, have you ever wondered what the difference is when you've got four drives, between putting them into a RAID-Z2, so you have two drives of redundancy, or choosing two sets of two mirrored drives? Well, you're going to have the same amount of redundancy, but is there any difference? We'll be talking about that later. Now, as you all probably know, there are loads of cool new Docker features, and we've talked about them before in previous Uncast videos, but we'll be looking at those as well. There are also loads of genuinely awesome VM features, and we're going to be taking a bit of a deep dive into VMs. We're going to be looking at all of the new features that the VM manager offers, and we're also going to be looking at snapshots and how we actually restore them. Now, there's going to be this and much, much more in this video.

Speaker 1:

Now, the first thing we're going to be talking about in this video is ReiserFS, or, in Unraid 7, you could say, the lack of ReiserFS, as it's being phased out. So we're going to start with that now. It's a pretty long video, guys, so grab yourself a coffee or a beer, sit back and relax, and I'm going to go over to my desk and let's make a start. So Unraid 6.12 is the last version of Unraid that allows us to actually format disks in ReiserFS. And even if we choose ReiserFS in 6.12, we do get a warning that ReiserFS is deprecated and we should use another file system, and the file system to use in the array, in my opinion, is XFS. Now, I don't want to actually format this drive, so I'm just going to remove it and start back up the array.

Speaker 1:

So let's upgrade this server here to Unraid 7, and what I like to do before any upgrades is go to the plugins page, check for updates and make sure all of my plugins are up to date. As well as upgrading the plugins, I think it's a good idea to also check for any Docker container updates that we might have. And the final thing I like to do, with the Fix Common Problems plugin installed, is to just check my server for any errors before I do any upgrades. Okay, so we can see there's a few errors and warnings here. Now, there's the ReiserFS warning on my disk 2; Fix Common Problems recommends that I migrate this to XFS, and we'll do that in a moment, after we've upgraded to Unraid 7. Now, there's a new plugin here, the Unraid Patch plugin, which I don't have installed on this server, so let's install that now. Basically, this plugin just keeps the Unraid server up to date with all the latest patches for the version of the OS that you're running. Okay, so other warnings I've got here: there are two plugins that are not compatible with the system. The Dynamix Cache Directories plugin and the Wake on LAN plugin are no longer supported, so I'm going to go across and remove those. And now, with a last check on Fix Common Problems, everything but the ReiserFS error has got a clean bill of health.

Speaker 1:

So before I go and update, let's just have a look at the kernel that Unraid 6.12 is currently using. So it's using Linux kernel 6.1.126. And also, whilst we're here, let's have a look at the Docker version that 6.12 is using, and the Docker version is 24.0.9. And finally, let's go to the VM manager and have a look at the libvirt version, which is 8.7.0, and the QEMU version 6.12 is running, which is 7.2.0.
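
For reference, if you'd rather pull those same numbers from a terminal, these standard commands report them (nothing Unraid-specific here):

    # Kernel version
    uname -r

    # Docker engine version
    docker version --format '{{.Server.Version}}'

    # libvirt and QEMU versions
    virsh version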

Speaker 1:

Okay, anyway, it's ready for us to upgrade now. So to do that we need to go to Tools here, and under About here we can see the Update OS button, so I'm going to click onto that. And here I can see there's one stable version of Unraid, which is Unraid 7.0.0, which was released on the 9th of January this year. So I'm going to click on to view the changelog here, and here we can view the changelog of everything that's new in Unraid 7. But we're not going to read that, because that's what this video is all about. So I'm going to click on here to continue the update on Serenity, which is the name of this server, and so the update's now being downloaded. Okay, so everything's downloaded, so I'm going to click onto done. Okay, so at the top here we can see the last thing we need to do is to reboot the server. So I'm going to click on here to reboot, and rebooting the server will bring us straight into Unraid 7. Okay, so let's log in, and here we can see we're straight into Unraid 7.

Speaker 1:

Now, like I was saying earlier when we were looking at Unraid 6.12, if I was to add this disk now, we can see here that the option to actually format in ReiserFS has been removed from Unraid totally. Now, the reason being is that ReiserFS has actually been deleted from the kernel as of Linux kernel 6.13, and for those of you who don't really know about ReiserFS, it's actually got quite a checkered history. The person who made ReiserFS actually ended up murdering his wife, so it has quite a dark history to it, to be honest. Anyway, that's not what we're here to talk about, but let's go back to the Unraid server here, and, as I don't want to add any more drives to this server, I'm just going to start up the array. Okay, so here we are in Unraid 7. So we can see, although we can't actually format a disk in ReiserFS, well, we can still actually use disks that we already have in the server that may already be ReiserFS, and that's because we're not actually on the kernel that's dropped support yet. This version of Unraid, Unraid 7.0.0, is using Linux kernel 6.6.68. Now, a great thing about this kernel for everyone out there who's using Intel Arc GPUs: well, this kernel gives full support for those. So that's really cool. Even though this kernel version here does support ReiserFS, it's a good idea, in my opinion, to make preparations for when we do actually go on to Linux kernel 6.13 or above, because when we do that, well, the ReiserFS disks are not going to work in the array anymore. So it's a good idea to be able to reformat these as XFS. And it's actually really easy now to clear one of these drives using mover. Now, we can't actually use mover from the GUI to do this, but it really is super easy to do.

Speaker 1:

Now, if I go here and we look at some of the notes about this release, we can see we can use mover to empty an array disk. We use this command here: mover, space, start, space, hyphen e, then disk and the number of the disk, and then we pipe that into logger, and at the end of the command here we can see an ampersand, which means this runs in the background. So what I need to do, because it's my disk 2 here that's ReiserFS, is clear off everything here and have it moved onto another disk somewhere in my array. Obviously for me that's going to be disk 1, because there's only two drives in this array. But if you had multiple disks, say around ten of them, it will just find somewhere to keep the data and clear off that disk. So what I'm going to do is open up a terminal window and type mover, space, start, space, hyphen e, space, and disk2 - that's the disk that I want to clear - then a space, then a pipe, if I can find that on my keyboard here, then logger, and then a space and an ampersand. Now, this ampersand at the end is pretty important, because what it does is just run this in the background. So by having that on the end, it means we can close this window and the command will still be running. So let's hit enter, and we can see here we've got a response. So I can close this now.
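
For reference, here's that command written out, plus one way to keep an eye on it (disk2 is just the disk being cleared here; swap in your own disk number):

    # Empty array disk 2, sending mover's output to the syslog;
    # the trailing & runs it in the background so the terminal can be closed
    mover start -e disk2 | logger &

    # Watch progress in the system log
    tail -f /var/log/syslog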

Speaker 1:

And if we scroll down, we can see here that mover is now running. So we just need to wait for mover to finish, and what it will do is move the files and folders off my disk 2, and because I've only got two drives in this array, it's going to put them onto disk 1. But if you had more drives, it may well actually scatter them across multiple drives, especially if you didn't have enough space on the first drive. And we can see here the reads and writes, the activity on this drive. So we can see things are moving. So we just need to wait for this to finish and then we can move to the next stage.

Speaker 1:

Okay, so it looks like the disk activity is finished here. So let's scroll down and see if mover is still running. No, mover is finished now. So if I scroll up here and look at this disk, we can see that this disk is now empty and the contents have been put onto my disk 1 here. So now this disk is empty, what I need to do is stop the array and reformat it in XFS. So let's stop the array, and now all we need to do is to actually select this disk here and change the file system from ReiserFS to one of these other file systems here.

Speaker 1:

Now, if we wanted to as well, we could actually encrypt this drive, and we could use the same system to encrypt any drive that we want on the array: we could just clear a drive and then reformat it encrypted, if we wanted to go through and encrypt some disks. So why not encrypt this one? I'm going to click on to apply here and done, and I'm going to scroll down to the bottom here and click on to start. Now, here we can see that it says unmountable and it's the wrong file system. This is perfectly normal, because this hasn't been formatted as yet. So I'm going to scroll back down to the bottom here, and I'm going to check this box here and click on to format.

Speaker 1:

Now, you may be wondering, if I'm formatting this drive encrypted, why it's not asking for a passphrase. Well, there are these other drives that I have in the server that are already encrypted, and so Unraid's going to use the same passphrase as those. But if I didn't have any encrypted drives at all in the server, it would in fact ask for a passphrase before formatting. And now we can see that the former ReiserFS disk is all formatted correctly in XFS, and also it hasn't affected parity at all. Just because we've reformatted the drive, it doesn't actually break parity, so the array is going to be working absolutely fine.

Speaker 1:

Now, I'm sure most of us out there don't have ReiserFS drives in our server, but I just wanted to go over that first before moving on to what I'd call the more exciting things about Unraid 7. Now, as I said earlier, I'd really recommend doing this sooner rather than later. Yes, on Unraid 7 currently we can still read ReiserFS disks, and that's because Unraid 7, like I said earlier, is on kernel 6.6.68. But what I want to do is show you what happens when we actually go to a kernel above this. So let me show you that now. On this Unraid server, I've updated here to Linux kernel 6.14 RC2, which is the newest kernel as of making this video. Now, this isn't actually officially supported by Unraid, so I don't suggest you upgrade your kernel, but I just wanted to show you that when it is upgraded to this kernel - well, any kernel above 6.13 - ReiserFS is not going to work, and we can see here it says that it's unmountable and there's no file system. So that's why it's a good idea to do it now, whilst it's still easy and you're not going to have to downgrade to an earlier version of Unraid to come off your ReiserFS disks. I really suggest doing it now if you've got any in your system. Okay.

Speaker 1:

So, in my opinion, one of the biggest things in Unraid 7 is the fact that we don't actually need to have an Unraid array anymore. If we go across to my server here, Battlestar, we can see an example of this. Now, this server has only two zpools, both made up of SSDs. This top one here has a usable space of about 10 terabytes, and it consists of 2-terabyte SATA SSDs. Now, this faster pool beneath here, this is made up of three 4-terabyte NVMe drives and, as you can see, here there is no Unraid array. Oh, and there's a third pool here I forgot about, which I call zrust, basically because this drive here is a 12-terabyte regular hard drive. Hence I call this one zrust, as in ZFS rust pool.

Speaker 1:

Okay, so one thing I want to mention about NVMes and SSDs in general is that it's highly recommended not to actually use them in a regular Unraid array, and the reason for that is because inside of the Unraid array, TRIM, or discard, as it's called, is not actually supported on SSDs. So if we use SSDs in our server, we should always use them in a pool. Now, before we actually go ahead and look at how we can have our Unraid server without an array, I just want to say that I really do love the Unraid array. If we go to my main server here, Basestar, we can see here that I still have an Unraid array. I think the Unraid array is absolutely excellent. I think the Unraid array is unbeatable for media duties, for storing your media, for things like Plex, Jellyfin or Emby. You can't actually beat the Unraid array, because whenever you need more space you can just add another drive really easily without any problem at all. And my favorite file system for the Unraid array, as I said earlier, is XFS. But you'll notice that I do actually have a ZFS drive in my Unraid array, and the reason I have one drive as ZFS is because I like to have some of the other zpools in this server be able to replicate ZFS data into the array, onto this particular drive. But we're not going to talk about ZFS replication today. We can have a look at that in another video sometime in the future.

Speaker 1:

Now, let's go back to the server that we were on at the beginning of this video. We can see here we've just got this small array of three drives, one parity and two data. So say I decide that I don't want to have an Unraid array anymore: I still want to use these drives, but I want to use them in a pool. Now, obviously, one thing to note: if you ever get rid of your Unraid array and you want to make its drives into a pool, you're going to have to move the data from your Unraid array to somewhere else first, and we can't actually do it in the same way as what we did a moment ago using mover. We'd have to manually move all of the data somewhere else first if we wanted to convert our Unraid array into a pool. Okay, so I don't have any data on this array that I actually care about. So I'm going to stop the array here, and what we need to do is unassign all of these drives and then, with all of the drives unassigned, we go to slots here and we set the slots to none. So here we can see there's currently no Unraid array. Okay. So now, with these drives removed from the array, I can set them, if I want to, to be in their own pool.

Speaker 1:

Now, if I scroll down here, we can see I've got five drives I can choose from. There are four 4-terabyte drives here, which are spinning rust drives - they're regular hard drives - and also I've got a 1-terabyte SSD. So what I'm going to do is make a new pool. I'm going to click on add pool here and give it a name - let's call it rusty - and for slots, because I've got four 4-terabyte drives, I think I'm going to choose four, and I'm going to click on to add. Okay. So now we can see here that another pool has been created. So now I just need to select the disks which I want to have this pool made from. So here I've selected all of my 4-terabyte drives. Now, one thing I like to do before I actually start is I come along and click onto the pool here, and before I do anything, I always click erase pool, and to do that I just need to confirm the name of the pool. So I'm going to type in rusty, and now that's erased the pool, getting rid of any data that I might have had on those drives.

Speaker 1:

For pools made up of multiple disks, we've got the choice of either ZFS or Btrfs. Now, if we were only to have one drive in the pool, we would also have the choice of XFS. But for multiple drives it's either ZFS, Btrfs or the encrypted LUKS versions of those file systems. Now, in my opinion, ZFS wins hands down. Although there are use cases for Btrfs, I do prefer ZFS. If we're using a RAID 5 or equivalent, well, Btrfs isn't the best for that; RAID 5 in Btrfs, in fact, is still not officially marked as stable. So I'm going to use ZFS here, and now I get the choice of how I want to use these disks in my zpool.

Speaker 1:

So what have I got here? I've got the choice of stripe, mirror, RAID-Z1, RAID-Z2 and RAID-Z3. So a stripe - what's that? Well, if I was to use a stripe, this would put all of my four disks together and stripe all the data across them, but there'd be no redundancy at all. This would give me the best speed, but it would be quite a risky thing to do.

Speaker 1:

So the next thing here is a mirror. With a mirror, it's exactly what it sounds like: it mirrors the data on different drives. So here I can have two vdevs of two devices, so that's basically two groups of two drives that would be mirrored. Or I can have one vdev of four drives, so that would just give me four copies of the data, each drive being a copy of the others. Now, obviously, when we're using mirrors, we are losing a significant amount of space.

Speaker 1:

So, in my opinion, using RAID-Z is a good compromise between the two. So what are RAID-Z levels? Well, if you're familiar with regular RAID levels, RAID 5 is basically what RAID-Z1 is. Now, the properties of RAID-Z1 mean we stripe the data across all four drives, but we lose the space of one drive to the parity calculations that are actually written to these drives. Unlike what we're used to in a regular Unraid array, where one drive holds all of the parity, in RAID-Z1 the parity information is striped across all four drives. In fact, this is one reason why Unraid arrays are really easy to add disks to, because the parity is kept separately. And in fact, if we look at my Unraid array here, you can see that I've got one drive of parity making up my array. So basically, a one-disk-parity array is very similar to RAID-Z1, because we lose one disk's worth of available space in the pool or the array due to parity.

Speaker 1:

Using RAID-Z1, I would have three of the 4-terabyte drives available for data, so I'd have a usable space of 12 terabytes. Now, if I wanted to be a bit more cautious, I could choose RAID-Z2, and this would give me two drives out of the four that would be used for parity. But because I've only got four drives that make up the pool, I would be losing 50% of usable space. So RAID-Z2 would allow me to lose any two drives in my pool and still not lose any data, but because I've only got four drives in this pool, I would be losing two drives' worth of space. So it would give me very similar protection and usable space as if I chose a mirror and had two vdevs - a vdev being a group of drives that are mirrored. So again, I would lose 50% of the space, but I could lose two drives and still actually be okay.

Speaker 1:

Now, I want to actually talk about, when we come to choices like this, what actually is the difference. So if I'm going to lose, say, 50% of the space of these drives, what would be the advantage of having a mirror of two groups of two devices, or choosing RAID-Z2 and having one vdev of four devices? Well, if I chose the mirror here, in my opinion this is actually going to give me slightly better performance, having two mirrored vdevs, than if I used RAID-Z2. But with RAID-Z2 I can lose any two of these drives and not lose data, whereas with a mirror of two groups of two devices - let's say we've got group A and group B - if I lost one drive from group A and one drive from group B, everything's cool, I'm not going to lose any data. But if the two drives that failed were both from group A, well, I'm going to lose the whole pool and I'll lose all of my data. So if I want to be super safe, I would use RAID-Z2: any two drives can fail, and it doesn't matter which two, because I've only got one group of four devices. But if I wanted to go for a little bit more performance, I would probably use the mirror and the two groups of two devices.

Speaker 1:

Okay, so let's move on and think about RAID-Z3. Now, again, this just wouldn't really be used if you've only got four drives in your pool; it's really pointless. But what this allows us to do is have any three drives fail and we don't lose any data. Now, if I had, say, 12 drives in this pool, RAID-Z3 might be a good fit. But for my little four-by-4-terabyte pool I've got here, RAID-Z1 is going to give me the best bang for buck. I can lose any one drive in the pool, and my performance is going to be pretty good. So that's what I'm going to choose.
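
To sum up the options just discussed: Unraid builds the pool for you from the GUI, but in plain ZFS terms the four layouts would look something like this (a sketch - pick one; the pool and device names are just examples):

    # Four 4TB disks, four ways (usable capacities are approximate)
    zpool create rusty sdb sdc sdd sde                # stripe: ~16TB, no redundancy
    zpool create rusty mirror sdb sdc mirror sdd sde  # two 2-way mirrors: ~8TB, one failure per mirror
    zpool create rusty raidz1 sdb sdc sdd sde         # RAID-Z1: ~12TB, any one disk can fail
    zpool create rusty raidz2 sdb sdc sdd sde         # RAID-Z2: ~8TB, any two disks can fail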

Speaker 1:

Now, you can see here that I've got compression turned on. A lot of people worry about turning on compression. They think if they turn compression on, well, it's got to slow down the server. Well, in fact, in 99% of cases it's the reverse of that. It's quicker to read compressed data, and less of it, off the drives than it is to read more data that's uncompressed off the drives. Modern CPUs are so fast that it doesn't really touch the sides when decompressing compressed data. So the read and write speeds are just faster from having the compression, and plus we get to fit more data on our drives. So, in my opinion, it's a win-win. Okay, so that's how I'm setting up my pool. So with that done, I'm going to click on to apply and then done. Okay, so now we can see here are my four drives in the pool.
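
If you ever want to check what compression is actually doing on a pool, a couple of standard ZFS commands (again, rusty is just the example pool name here):

    # See whether compression is on and how well data is compressing
    zfs get compression,compressratio rusty

    # lz4 is the usual low-cost choice (zstd is another option)
    zfs set compression=lz4 rusty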

Speaker 1:

Okay, so now let's take a look at here, where it says add subpool. Now, these are basically auxiliary vdevs. Unraid calls them subpools, and that's because they're additional devices that can be added to the zpool to provide specific performance or functionality benefits. So, unlike the main pool, these subpools serve specialized purposes that can help improve performance, but only in certain workloads. However, there's one critical thing that we need to remember: if some of these auxiliary vdevs fail, we can end up losing all of the data on a pool. So that's really something important to keep in mind. And what I like to say is, if you're not sure whether you need a subpool, well, you probably don't. But anyway, let's go through each type, what they do and whether, in fact, they might be useful for you. Now, I think for most Unraid home labbers, the only two of these really worth considering are the SLOG and the L2ARC. The others, like special metadata and deduplication, in my opinion, are often more trouble than they're worth.

Speaker 1:

So, first up, we've got the SLOG, which stands for Separate Log Device. This is for what are called synchronous writes. Now, most everyday writes are actually not synchronous at all; they're asynchronous and don't use this. But some applications, such as certain databases or NFS shares, do actually require sync writes to guarantee data integrity. So what does this SLOG actually do? What it does is give a fast location for these writes to be committed to before they're written to the main pool. So it basically makes the synchronous writes faster, because they're going to a fast SSD first, and so it can improve reliability and consistency for these types of workloads. Now, what a SLOG doesn't do: it won't make your gaming VM run faster, and it won't speed up your file transfers unless they rely on synchronous writes.

Speaker 1:

I like to think of a SLOG a bit like a car spoiler. It looks fancy on supercars and things like that, and, yes, it does provide aerodynamic benefits for those types of cars. But if we've just got a little family car and we only drive it at 30 miles an hour, well, we could stick a spoiler on it, but besides looking really cool - and that's debatable - most home users just don't need one. So unless you know your workload relies heavily on sync writes, you don't really need a SLOG. Okay, next, the L2ARC, which stands for Level 2 Adaptive Replacement Cache, and this acts as a secondary cache for ZFS, which primarily uses RAM for caching recently accessed files.

Speaker 1:

So when data is frequently read, ZFS tries to keep a copy of the most recent data in its RAM, or, if you have an L2ARC, when the RAM's maxed out, it can keep it there as well. So what does this do? Well, it speeds up reads for data we constantly access, but it only does that if our RAM becomes full. If it doesn't, then this won't actually do anything at all. So, in my opinion, if you've got a choice between an L2ARC and adding more RAM to your server, add more RAM to your server. ZFS loves RAM, it's super fast, and more RAM in your server benefits the server all round. Now, one thing I think a lot of people sometimes forget with an L2ARC device is that it must be significantly faster than the actual main pool. Okay, so I think it's time for another analogy now.

Speaker 1:

I really do love my analogies. Now, if you like coffee like me - and I've got mine here - imagine you go to work and there's a Starbucks across the street from your office. You go in there, you pick up a cup of coffee and you put it onto your desk. Now, the fastest way to get your caffeine hit is obviously to reach across your desk, grab the coffee and take a sip. But what happens when you run out of coffee? Well, you're going to have to walk all the way back to Starbucks and get another cup. So imagine the Starbucks is like the pool, and the cup of coffee on your desk is like the RAM, or just the regular ARC cache. Now, an L2ARC is a bit like you put a coffee machine down the hall a bit, in the break room, so you don't have to go all the way to Starbucks to get a coffee; you can more quickly get one by going to the break room. But if you put that same coffee machine up on, say, the third or fourth floor, then you might have to go all the way up the stairs. It's going to be really tiring, and it'll take you longer than it would just to pop out the door, pop across the street to Starbucks and grab a coffee. So, in my roundabout way with this crazy silly story, what I'm trying to say is: you always have to make sure with your L2ARC device that it is actually faster than the pool you're pulling the data from. If it isn't, then you're really wasting your time having an L2ARC, because it would be as fast or faster just to pull the data straight from the pool anyway.
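
For completeness, in plain ZFS terms adding these two subpools looks roughly like this (Unraid's GUI handles it for you; the device names are just examples):

    # SLOG: a small, fast, ideally mirrored device for synchronous writes
    zpool add rusty log mirror nvme0n1 nvme1n1

    # L2ARC: a read cache that only helps once RAM (the primary ARC) is full
    zpool add rusty cache nvme2n1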

Speaker 1:

Okay, so let's move on to my least favorite auxiliary vdevs, and first I'm going to talk about the special metadata vdev. Now, this auxiliary vdev can store metadata, like file structure and small files, on a fast device, basically to improve access time. So what does this drive actually do? Well, it speeds up operations that involve lots of metadata lookups, like directories with thousands and thousands of files.

Speaker 1:

Now, there is a big downside. The downside is, if this vdev is lost - like you don't have it mirrored, you only had one - you would actually lose the whole pool. So imagine that you had, say, ten 20-terabyte drives in your zpool and you just had one 1-terabyte NVMe that you used as a special metadata vdev, and that one NVMe fails. Well, you're going to lose the data on the whole of the pool. So unless you've really got a special reason to use a special vdev, then I would advise not to, and personally, if I was going to use a special vdev, I would probably have it triple mirrored. You really don't want that to fail, because you would lose all of the data on the pool.
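
If you did have a genuine use case, the safer shape is a heavily mirrored special vdev, something like this sketch (hypothetical device names):

    # A special vdev holds the pool's metadata; lose it and the pool is gone,
    # so give it at least as much redundancy as the data vdevs
    zpool add rusty special mirror nvme0n1 nvme1n1 nvme2n1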

Speaker 1:

Okay, I think it's time for another analogy. So imagine a pool without a special metadata vdev. It's basically like a delivery driver who uses a map. He gets around: he looks up the address, gets his paper map out, has a look and gets to the destination. So having a special metadata vdev is a bit like he's got a GPS. It's great; he can easily get around much quicker than using the map. But as soon as that GPS fails and he doesn't have it anymore, well, he can't actually do any deliveries at all. He knows the addresses, but he's got no maps and he can't see where anything is. So, in my opinion, for us home labbers, we probably don't really have the use case where we'd need to have a special metadata vdev. So I think it's probably, in my opinion, best avoided, and the same I'd say about the deduplication vdevs as well.

Speaker 1:

So how does deduplication work in ZFS? Well, basically, we don't actually need to have a dedicated deduplication vdev in order to use deduplication. It will normally just use RAM, but it does use huge amounts of RAM; I think it's about five gigs per terabyte of deduplicated data that we need for deduplication to work properly. And so what the deduplication vdev does is let us add an additional drive to store the deduplication tables on when we don't have enough RAM. But most of us probably don't have loads of duplicated data. Now, if we had millions of, say, Windows 10 install ISOs, yes, we'd probably save a lot of space. But for people who use media servers and that kind of thing, I doubt you're going to have hundreds of copies of the same movie.
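
As a rough worked example of that five-gigs-per-terabyte figure: deduplicating a 20-terabyte pool would want on the order of 100 gigs of RAM just for the dedup tables. You can also sanity-check whether dedup would even pay off before enabling it, using standard tools:

    # Simulate dedup on an existing pool: prints a histogram and an
    # estimated dedup ratio without switching anything on
    zdb -S rusty

    # Check the live dedup ratio on pools where it's already enabled
    zpool list -o name,size,alloc,dedupratio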

Speaker 1:

Now, an analogy for deduplication. Imagine that I hire someone to stay at home, and I say: when I go to the shop, I'm going to call you; look in my cupboards and check I don't buy the same groceries. So I give them a call and I say, hey, pasta, do I have pasta? And they say, no, you've got no pasta. I think, well, yeah, that's because I ate it all last week, and I wouldn't be going shopping if my cupboard was full of food. So what I'm trying to say here with my analogy is that it's costing me more money to pay that person to be at home looking in my cupboards, and my shopping trip's taking me twice as long, because I'm having to call them and ask, is that food in the cupboard, yes or no, and then they have to tell me. And the same risk applies to deduplication as well: if that vdev actually failed, I would lose access to the data on that pool, just the same as I was talking about with the special vdevs earlier. So I think most people probably don't need to use a dedup vdev. That's just my opinion.

Speaker 1:

Now, the last auxiliary vdevs I'm going to speak about. They're not fully supported in Unraid yet, but they are coming soon. They're spares, and they are exactly what they sound like: basically a spare drive you can add to a pool. So should one of the drives fail, it will automatically use that spare drive, pop it into the pool, resilver the data, and you don't actually have to swap out the drive yourself. So really, really useful, in my opinion.
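
In plain ZFS terms, a hot spare is just this (a sketch with an example device name; as mentioned, the Unraid GUI doesn't fully expose this yet):

    # Attach a hot spare; ZFS resilvers onto it automatically if a disk fails
    zpool add rusty spare sdf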

Speaker 1:

Right, okay, so anyway, that's enough about the subpools; let's move on and have a look at some other things. Okay, well, that took me a bit longer to talk about the auxiliary vdevs than I actually thought it would. So now here I'm on a different server. This is a fresh install of Unraid, because I wanted to show you what a new install of Unraid 7 would look like. So with Unraid 7, by default, the slots for the array devices are automatically set to none, and there are no pool devices. So if I wanted to add an array, well, I would choose the number of slots here. Now, the minimum is three, and the maximum here we can have is up to 30. So that would be two parity drives and up to 28 data drives. Like I said, I do love the Unraid array. It is so versatile - one of my favorite things about the whole OS, to be quite honest.

Speaker 1:

Now, I'm going to say I'm going to use three drives here. No, in fact, I'm going to use four, because two slots will always be taken by parity. So I'm going to put one parity drive in here and two data drives. Now, I've still got two drives left here, so I could add them to the array, or I could add them into a separate pool. I think what I'm going to do is add these into a separate pool. I'm going to have two slots, add those drives in here, and use my favorite file system for pools: I'm going to choose ZFS, and I'm going to mirror the two drives and put compression on.

Speaker 1:

Okay, so when you've got your storage set up how you want, all we need to do then is just click start, and what this will do is start up the array, after which we can format all of these drives and start building parity. Now, as we can see here, all of the drives are currently not formatted, so I have to come here and choose to format all of the unmountable disks. So I'm going to do that and click format. Okay, so we just need to wait for these drives to format and then, after that, for the parity sync to be completed. Okay, so everything's done now. So we can see here a fresh server on Unraid 7 is all set up. So it's slightly different how we set up a server from scratch on Unraid 7, the changes being just how we actually select the drives and pools. Okay, so I'm going to pop across now to another server - this one here that's also running Unraid 7 - and let's check out on this server all of the cool new features that we can find in Unraid 7.

Speaker 1:

Okay, so now, in Unraid 7.0, let's get a little bit of insight based on the numbers. So, if you remember, earlier we had a look in Unraid 6.12 at what kernel, Docker, libvirt and QEMU versions we had. So let's do the same here. Okay, so, kernel version: I think we've probably mentioned this multiple times today, but anyway, let's mention it one more time. The Linux kernel version in 7.0 is 6.6.68. And let's have a look at Docker, and the Docker version we can see here is 27.0.3. Now let's have a look at the hypervisor; let's click on the VM manager here. So here we can see the libvirt version is 10.7.0, and QEMU, we're running 9.1.0. Okay, so that's the boring numbers out of the way.

Speaker 1:

Now, let's go across and have a look at the dashboard. Okay, so here we are on the dashboard in Unraid 7. But I've also got another server up that's running Unraid 6.12, so we can compare the dashboards on each. Now, this is obviously 6.12, and if we look at this tile here, it gives the name of the server, the model, the license type and the uptime. Well, if we go across to Unraid 7, we can see this tile's been greatly improved. I really like the fact that it's got a clock, and clicking onto the clock brings us through to the date and time settings, if we need to make any changes.

Speaker 1:

Now, the next thing that is greatly improved, in my opinion, on the dashboard is this system tab here, which shows our RAM usage, flash drive usage, log usage and Docker image usage. Now, if we compare it to the earlier version of Unraid, we actually had five things that we could see here - RAM, ZFS, flash, log and Docker - and, as we can see, it was a basic bar graph. Now let's pop back to Unraid 7 and look at the RAM here. There's not much RAM being used on this server for me, but the RAM usage is actually split up into different colors, as well as it being written down here what it consists of. So my system RAM's using four gigs, ZFS only 625 megs, Docker 633 megs, so I've got 120 gigs free. So this is a really nice, presentable way to show everything really easily. It's much less confusing, I think, than the original, where it was just these bar graphs. Okay, so other things that are new: the Docker container part here, which shows the Docker containers on our server, where we can toggle from all of the containers to just the started ones. This has remained, well, exactly the same.

Speaker 1:

Looking here, we can see nothing additional has actually been added. But let's look at the virtual machine one underneath here. Well, nothing's actually changed in Unraid 7 for that particular piece either, but if I start up a virtual machine here - and we can see this one's running now - underneath here we can see some stats for the virtual machine usage. So we can see that the guest CPU is using 1.6% of what I've assigned, and the host CPU, well, it's only using 1.5% as well, and we also get some nice metrics here for memory, disk and network. So that is a really cool thing that's been added, I think. It's nice to be able to see that on the dashboard. And whilst we're just looking at this virtual machine usage, why not go over and look at the VMs tab here on 7? And we can see that same information is presented at the top, above all of my VMs here. So two places to see it: on the dashboard, or on the VM tab itself. So other than that, I don't think there's really much different on the dashboard than there was in Unraid 6, but there are definitely some really nice improvements that make it look a lot better.

Speaker 1:

Okay, let's move on to the main tab now. Well, obviously, as we pointed out before, we don't need to have an Unraid array anymore. And if I go across to my other server here, we can see that there we actually have to have an array; hence I've got one here. Now, I don't have a parity drive on this array, basically because this server is not on all the time, and all it does is back up my main server's array. So that's why you don't see a parity disk. So, going back across onto Unraid 7, onto the main tab here, one other thing that's slightly different that you may not notice: if we scroll right down to the bottom here, well, there is something different, very minor, that you won't really see. Obviously, because this Unraid 7 server doesn't have an array - this would be present if it did have an array - there's no history button here. If I go on to a server with an array, this Unraid 6 one here, well, here's a history button, and if I go on to my main server, we can see the parity check history displayed at the bottom. So obviously: no Unraid array, no parity check history.

Speaker 1:

Okay, so now let's look at what's new in shares here. But before looking at what's new, I think it makes sense to go back to the Unraid 6 server here, and we can see I've got the array, one cache pool here and a second here. And for those of you who are really paying attention: yes, this second pool wasn't here a moment ago. I've just added it to demonstrate this next part.

Speaker 1:

So if I go to the shares tab on Unraid 6 and I click add share, well, let's make a share and call it test, and we can choose the primary storage from any one of the pools in the server. So if I wanted the primary storage to be the regular cache here, I could choose that. And then for my secondary storage, I've only got the option of having the array, and the mover action can move it from the cache to the array. So the first place the files are written can be on some fast storage on the SSD cache, with mover later on moving them across to the array. But there's no option here to have any other pool as my secondary storage, and that's the big change in Unraid 7.

Speaker 1:

If we go back to Unraid 7 now, and here we add a share and call it test the same, then for my primary storage, again, I've got three pools here. So if I added it to, say, warp speed, all of my primary files will go here first. But for my secondary storage, I can choose any pool or the array; I can have it go anywhere I want. So if I want it to go to zrust, I can add that as my secondary location, and for the mover action, just as before, we can have it move either way: the standard way, where we write the files to the fast pool first and then later on move them to the slower pool, but also the other way around.

Speaker 1:

So you might think, well, if we're writing to the primary storage, which is warp speed, why would you want mover to move things from zrust to warp speed? Well, there's a very good reason for that: my primary storage might become full. What this share will do then is continue writing the data onto the secondary storage, zrust here, and then what mover will do, instead of moving everything from the primary to the secondary, is this: if it ever sees anything on the secondary and there's enough free space on the primary, it will move it back. So setting mover to this direction basically makes your primary storage the preferred location for data when space allows, and that's another really good reason to put a minimum free space amount in. So if I put 100 gigs here for this share, if there's ever less than 100 gigs free, it will start putting data on zrust, but otherwise everything will always stay on warp speed. Okay, so that's all that's new with shares, but it's something I think is really, really useful. Okay, for the next new feature in Unraid 7 that wasn't in 6, I'm going to go across to my main server, I think.

Speaker 1:

And here at the top you can see there's a tab called Favorites. If I click onto that, you can see here I've got Docker, Fix Common Problems, Samba, System Devices, System Drivers, Tailscale, USB Settings, Update OS and VM Manager. So how did these all get here? Well, let's go back to this server here, and if we go to the settings tab and hover over any of these, if we click the little heart button here, it adds it to favorites. And we can do exactly the same from the tools menu here as well. I find this really useful, because sometimes, to be honest, I tend to forget whether an item is in tools or settings, because they're both quite similar. So once we add things into our favorites here, we can just put the things here that we use a lot, and we don't really have to remember where they are. And if you don't want one anymore, well, you just click on the little trash can here to remove it, and if you remove them all, your favorites bar will disappear. Okay, now let's move on to settings here, because we have got a few new settings in Unraid 7.

Speaker 1:

The first thing here that's new is this power mode, which basically allows us to run our server in different ways. We can run it with best performance, balanced, if that's available, or best power efficiency. Now, you might wonder why this one is grayed out: basically, my CPU doesn't have that option, but some CPUs will, and on the ones that do, obviously, well, it's not going to be grayed out. So I've got the choice of best power efficiency or best performance. Now, this server I have running as best performance, because this is my most powerful server for running VMs, but on my main server here, you'll notice the power mode is set to best power efficiency, because this one's on 24/7.

Speaker 1:

Okay, so another new feature, under the network services here, is this outgoing proxy manager. So, simply put, an outgoing proxy is a bit like a middleman that sits between your Unraid server and the internet, and what it does is route certain types of network traffic through another server. Now, this can be useful in workplaces or places where you need to route all of the internet traffic through a specific system for things like security and monitoring. So in Unraid this will be used by the web interface and some processes, but it won't be used by Docker containers and VMs. So that's something to keep in mind. But basically, the key point, I think, is this: if your server's already up and running smoothly and you're thinking, what even is an outgoing proxy? Well, don't bother about it, because you probably just don't need it. If you're going to use this feature, you probably already know that you need it. It can be used in some work environments, or just by users who want to control their network traffic flow more carefully. Okay, so I think everything else in the settings is just the same.
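
As a purely illustrative aside, an outgoing proxy on Linux usually boils down to the conventional environment variables that command-line tools respect (the proxy address here is made up):

    # Conventional proxy variables many Linux tools honour
    export http_proxy="http://proxy.example.lan:3128"
    export https_proxy="http://proxy.example.lan:3128"
    export no_proxy="localhost,127.0.0.1"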

Speaker 1:

Now, plugins: again, nothing new on the plugins tab. But if we come across to Docker, well, there is a whole load of new things here. Now, I have spoken in detail about the Tailscale integration into Unraid Docker containers before, so I'm not really going to go into it that much in this video. If you want to know all about Tailscale and Unraid, then I really suggest you watch the Uncast video where we deep dive into that, looking at all of it in detail. But for those of you who haven't seen it, we'll quickly talk about it. What we can do is take a container - so let's take this Firefox container here - and if I go to edit and scroll down here, you can see this button here called use Tailscale. If I toggle that on, what this allows me to do is install Tailscale directly into this Docker container. So what I will do is I'll call it firefox and then scroll down. I'm not going to set anything else at all here; I'm just going to click on to apply.

Speaker 1:

And now at the bottom, you'll notice on Docker pages we've got a little button here that says view container log. Now, this is quite useful generally, nothing to do with Tailscale; I find it quite useful to be able to view the log straight away after installing a container. But as we're setting up Tailscale for the first time, we do actually have to click this button - well, we have to view the log anyway - and we can see here that the server's installed Tailscale into this container. So all I need to do now is authenticate it by signing into my Tailscale account, which will then add this container straight into my tailnet. So why is that useful? Well, it means we can access the container from anywhere we like, but also we can share it with other people. You just pop their email address in and click on to share, and that means anyone I want can share this Firefox container, totally privately; it's not exposed on the internet. So a really useful feature in Unraid 7. And now, if we look back at the installed container here, we can see there's a little icon that tells us that Tailscale is installed inside of this container. And now, clicking on the container here, you can see we've got an additional web GUI button which uses the Tailscale address, so if I click onto that, it allows us, in this case, to log into Firefox through our Tailscale address. So definitely a very useful feature.
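
If you prefer a terminal, the same first-start log can be followed like this (the container name is whatever you called yours; the aim is to spot the Tailscale login link in the output):

    # Follow the container log and pick out the Tailscale login URL
    docker logs -f firefox 2>&1 | grep -i "login.tailscale.com"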

Speaker 1:

Okay, so moving on to the VMs tab now. There's a lot of changes here. We've already seen at the top here where we can see various metrics about the running VMs, and if I was to start up another one, we can see metrics for each VM that's running. Now, one very nice feature as well is that we can now create snapshots of our VMs. So here, if I click create snapshot, I can actually create a snapshot of this running VM. And when I have this little button checked here, memory dump, it will even dump the whole RAM of this VM into the snapshot as well. So what that means is, when I start back up the VM, it will be in the exact same position as it is now. So if I had an Excel spreadsheet open, for instance, that same spreadsheet would be open, or if I was halfway through watching a video, it would continue from the point in time where the snapshot was taken.

Speaker 1:

So here we can see we've got a snapshot name, and we can give it a description. For the description, I'm just going to call it Uncast. Now, you can see here that the native file system that I'm using is ZFS, so if I check this, it would use ZFS to make the snapshot; if it's unchecked, it will use QEMU to make the snapshot. I'm going to use QEMU myself here.

Speaker 1:

Now, for this VM, I'm not going to do a memory dump, and I'll explain why in a moment. So I'm going to click on proceed and take the snapshot. And now, if we look at this VM, we can see it has one snapshot, and if we look at the others here, we can see it says they all have none. Well, in fact, this NixOS one, it looks like I've made a snapshot of before. Now, I did say I wasn't going to take a snapshot of this VM and dump the memory at the same time, and I'm going to show you why. If we look at this VM here, I've got an RTX 3080 passed through to it. So if I try and take a snapshot now - let's call this Uncast 2 in the description - and I leave memory dump enabled and click proceed, I'm going to get an error. You can see the passed-through devices are mentioned here, saying that VFIO migration is not supported in the kernel. Now, basically, this is because I'm trying to take a snapshot of the RAM in the VM, and passed-through hardware does complicate matters. So if you get that error, just keep that in mind. You won't get it if you take a snapshot of a VM with no hardware passthrough.
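
For reference, Unraid's snapshot buttons sit on top of the usual libvirt tooling, so the command-line equivalent is roughly this (the VM and snapshot names are just my examples):

    # Snapshot the PopOS VM (for a running VM on qcow2 this is an internal
    # snapshot, and it can include the guest's RAM state)
    virsh snapshot-create-as "PopOS" uncast --description "Uncast demo"

    # List snapshots and roll back
    virsh snapshot-list "PopOS"
    virsh snapshot-revert "PopOS" uncast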

Speaker 1:

Okay, so let's create a brand new VM, and we'll take a snapshot of that in a moment, because it will give me a good opportunity to show you some of the cool new features in Unraid 7 regarding VMs. So, obviously, I'm going to click on to add VM here, and I'm going to choose a template. I'm going to install a Linux VM, so I'm going to choose Linux, and I'm going to give it a name: I'm going to call it PopOS. Just as we used to, we can put in a description here, so let's call it test1vm. But underneath here we can see there's something different. Now, I'm going to go back across, I think, to Unraid 6 quickly so we can see the VM template there. So here's the exact same template on Unraid 6, and we can see here we've only got description, then we come to CPU mode and then our logical CPUs. So let's go back to 7.

Speaker 1:

And here we have a web UI field, and clicking on the little question mark here expands information about the various sections. Under web UI, we can see that we can actually set the web UI we want the VM to go to when we click on it. And a great example of this: if we look on my other VM here, where I run Home Assistant, clicking onto here and then going open web UI - here I've specified the web UI of the Home Assistant VM - then clicking here goes straight into Home Assistant. But one thing to notice is that we can still also just go into the normal VNC console as well. So, in my opinion, a really useful feature for VMs that may be running various services, because we can just click on the open web UI and go straight in. So basically, for any OS that's managed by a web UI, well, we can specify it here, and it makes it super easy for us to access and manage it.

Speaker 1:

Okay, so next we can choose the host CPU or an emulated QEMU CPU. But if we see here, there's this extra part where it says migratable, and this can be turned on and off. Okay, so just what is this migratable? Well, in QEMU, migratable is actually turned on by default, and what it's used for is live migration. It lets you move a virtual machine between two different physical servers without actually shutting it down. Now, here's the thing, though: Unraid doesn't actually support live migration, so we don't need this feature, and actually leaving it on can limit the CPU's performance in VMs, because it hides some advanced CPU features to keep things compatible across different hardware. So, since we don't need it, my advice is to turn it off, and you're going to get better performance, because your VM can then take advantage of the full power and all of the features of your CPU.
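
In the VM's XML, that toggle corresponds to a single attribute on the CPU element, something like this sketch:

    <!-- host-passthrough exposes the real CPU to the guest; migratable='off'
         stops QEMU hiding features for live-migration compatibility -->
    <cpu mode='host-passthrough' check='none' migratable='off'/>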

Speaker 1:

Okay, so, obviously, I'm going to leave this turned off. Now, I'm going to choose some logical virtual CPUs here; I think that's going to be enough. Personally, I never have the first CPU pinned for my VMs, because Unraid tends to want to use that one, so I think it's best to leave that one unpinned. Okay, so memory: I think eight gigs is going to be enough.

Speaker 1:

And machine type: here we can choose Q35 or i440fx, and we can see we can go up to 9.1 here. That's related to what we saw earlier when we looked at the QEMU version: whatever the QEMU version is, that's the highest machine type version we're going to have available to choose in our VMs here. So I'm going to use Q35 here. Now, everything else here is the same as it was in Unraid 6.

Speaker 1:

But there's one thing I'm going to show you in a moment, which is connecting an install ISO to the VM. Now, of course, we can do it as we used to, by clicking here, and that brings up the ISO share on the server and we can choose our ISOs this way. Now, I'm actually going to choose the wrong ISO: I'm going to choose Bazzite here, and I'm not actually going to be installing Bazzite, because this is Pop OS, and I'm going to come back and show you how we can easily change the ISO in a moment. So let's scroll down now and add a virtual disk. Let's make this 50 gigs, and I'm going to select a QCOW2 disk, because I prefer QCOW2 personally. Now, there's one new thing here we can see: discard. We can basically turn TRIM on and off for the VM if the vdisk is on an SSD. We didn't have this in Unraid 6, as you can see here, and by default this is actually on. Now, just as in Unraid 6, we can specify a serial number for the vdisk if we want to.
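
In the XML, that discard toggle ends up as an attribute on the disk's driver line, roughly like this (a sketch, with other attributes omitted):

    <!-- discard='unmap' passes the guest's TRIM commands through to the vdisk -->
    <driver name='qemu' type='qcow2' discard='unmap'/>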

Speaker 1:

But let's move down to the next new feature here, which is if we pass through a GPU. So here, say I was going to pass through my RTX 3080 - and obviously, passing through a GPU, you should always pass through the sound counterpart too - what we can do now in Unraid 7, when we're passing through a GPU, is enable multifunction here. What this does is basically make the GPU appear like a real piece of hardware, because obviously with a GPU, the VGA part and the sound part are on the same piece of silicon that's plugged into your PCIe slot. So if we don't enable multifunction, it can make the GPU look like it's two separate pieces of hardware. By enabling multifunction, we get better performance in the VM when we're passing through a GPU. Now, the only way to do that in Unraid 6 was to actually edit the XML, which could be rather tricky, whereas now we've got the option to turn this on and off from inside the GUI. So I think that's a really, really nice improvement.
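
For reference, this is roughly what the multifunction toggle writes into the XML: the GPU's video and audio functions share one virtual bus and slot, with the first flagged multifunction (the PCI addresses here are made up):

    <!-- GPU video function at function 0x0 of the virtual slot -->
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>

    <!-- GPU audio function at function 0x1 of the same bus and slot -->
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>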

Speaker 1:

But I'm not actually going to be using a passed-through GPU for this VM. I'm going to be using a virtual GPU, and let's get rid of the sound as well. So here, nothing new has really been added. So let's scroll down to the next new feature. Now, in Unraid 6 this would have been the end of the template, but can you see here it says advanced tuning options? If we go to Unraid 6 and scroll to the bottom, we don't see that at all. So if I scroll down even further, we can see there are some more advanced features that most users probably won't need, but they can be really, really cool. Now, something I really like here is this QEMU command line part, where we can put in extra XML that allows us to do fancy things with our VM.

Speaker 1:

Now I'm going to show you one thing on another VM where I use this, and that's this VM here, which is called Razer Laptop. Now, I never actually created this as a VM specifically. It used to be my real Razer laptop, until one day I was drinking a cup of tea and managed to drop it on the laptop and destroy the machine. It was a really sad day for me. So what I did is I took out the NVMe, popped it into the Unraid server, and copied it. That's why we can see here it's about a terabyte.

Speaker 1:

Now, I had a lot of software on there, including various licensed software that I wanted to make sure would keep running properly. So if we look at the template of this one and scroll down, you can see I've put in some extra QEMU command line arguments. What I've done is specify various things to make this VM look like it was my Razer laptop, and so this is actually that VM running now. The Razer laptop had a 3080 inside it, which was quite lucky, because it was the same GPU I had inside my server, and we can see that's here. But if I open up the settings and look at the About section, we can see the system thinks it's a Razer Blade 15 Advanced Model (Mid 2021). I also made sure that the device ID and product ID matched for this Windows VM so that all of my software would work.
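To give you an idea, faking the SMBIOS strings like that looks roughly like this. A sketch: the manufacturer and product values are illustrative stand-ins for whatever your real machine reports, and the qemu XML namespace has to be declared on the domain element for this to work:

```xml
<!-- qemu:commandline sketch. Requires this namespace on the root element:
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
     The SMBIOS values below are illustrative, not my real laptop's. -->
<qemu:commandline>
  <qemu:arg value='-smbios'/>
  <qemu:arg value='type=1,manufacturer=Razer,product=Blade 15 Advanced Model (Mid 2021)'/>
</qemu:commandline>
```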

Speaker 1:

Now, something else that's pretty cool: if we check what the UUID of this VM is, that's the UUID here. I've specified this inside the QEMU command line as well, and putting it there overrides what went into the XML when the VM was first created. Now I'm going to show you something pretty cool which I didn't show you earlier: in Unraid 7 we've also got this little button here where we can show inline XML. What this does is show us the XML alongside the graphical GUI display of the VM, so I think it's really cool that we can see both together. So here you can see the UUID that the hypervisor gave this VM; well, I've overridden it with what the UUID really was on my Razer laptop at the time. What this did is allow me to basically keep running my laptop, and Windows wasn't any the wiser. Now, using this in combination with other techniques can be a good way of making an OS think it's not actually running as a VM. I might do a detailed video on that at some point, so if enough people ask, then we'll make one. Okay, so that's what the QEMU command line part of the VM template is for. Also here we can change the clock offset and the timer source.
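The UUID override is done the same way; in practice the extra args would just sit alongside the SMBIOS ones in the same qemu:commandline block. A sketch with a made-up UUID:

```xml
<!-- Overriding the machine UUID (sketch; this UUID is made up).
     Passing -uuid here takes precedence over the <uuid> element that
     libvirt generated when the VM was created. -->
<qemu:commandline>
  <qemu:arg value='-uuid'/>
  <qemu:arg value='c0ffee00-1234-4abc-9def-000000000001'/>
</qemu:commandline>
```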

Speaker 1:

And a very interesting thing at the bottom here is something called evdev. Now, this is another way of passing through things like keyboards and mice. I don't actually have anything plugged into my server at the moment, but what it does is allow you to pass input devices through in a different way than plugging a USB keyboard or mouse into your system and passing it through as a USB device. Evdev has two advantages. First, it's got slightly better performance and lower latency than passing through your USB keyboard and mouse as a USB device here. Second, it allows us to pass keyboards and mice through to really old systems, things like Windows 95 that don't even have USB drivers, and still be able to use them. Okay, so that is all of the new things in VMs in Unraid 7.
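Under the hood this maps to an input element of type evdev in the domain XML. A minimal sketch; the device path is an example, and yours will be whatever appears under /dev/input/by-id on your server:

```xml
<!-- evdev passthrough sketch (example device path). grab='all' sends all
     input to the guest, and grabToggle sets the key combo that switches
     the keyboard back and forth between host and guest. -->
<input type='evdev'>
  <source dev='/dev/input/by-id/usb-Example_Keyboard-event-kbd'
          grab='all' grabToggle='ctrl-ctrl' repeat='on'/>
</input>
```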

Speaker 1:

So I'd better not click on Create because, remember, I've got the Bazzite ISO in here. So I'm going to untick Start VM After Creation and click on Create. Okay, so here's the VM. Now, if we move across here, we can see this column where it says vdisks and VCDs. Vdisks, well, we all know what those are: virtual disks. And VCDs are basically ISO images: virtual CDs and DVDs.

Speaker 1:

Now, remember, I had the Bazzite image in, and if I click here we can actually see that. Notice this little eject button; I can remove it this way, and now we can see no CD is inserted. So I could click either here or here now, and select an ISO image. The good thing is we can select it from anywhere on the server. When we're doing it in the template, it only allows us to select from the ISO share. So say you've just downloaded an ISO and it's sitting in your downloads share on the server: well, this would be the way you want to add it, because here, you see, I can browse through all of my shares and add an ISO image from anywhere I want.
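For the curious, that eject/insert button is just manipulating a cdrom device in the domain XML, something like this sketch (the ISO path is illustrative):

```xml
<!-- Virtual CD/DVD (VCD) sketch. Ejecting clears the <source>; inserting
     points it at any ISO on the server. Path is an example. -->
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/downloads/pop-os_22.04_amd64.iso'/>
  <target dev='sda' bus='sata'/>
  <readonly/>
</disk>
```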

Speaker 1:

So anyway, I'm installing Pop!_OS, so let's insert this one here. I'm going to click on Insert, and now you see that we've swapped over to the correct image. So now I can actually start up the VM and do the install. Okay, so it's all installed. Let's log in. Okay, so I'm going to open up YouTube here. Oh no, I'm not a robot. Right, let's type youtube.com rather than searching. Now let's put on a cat video. Okay, as this is a Short, it should just keep replaying and replaying. So I'm going to minimize this now.

Speaker 1:

So what I'm going to do now is try to take a snapshot, but I'm actually going to hit another error. I wanted to show you this so you can understand why the error happens and not worry if you run into it yourself. So I'm going to click on Create Snapshot and, if you remember, I said that we can dump the memory when there's no hardware being passed through. I'm just going to call this "first install", I'm going to choose memory dump, and I'm not going to check this box because I want to use a QEMU snapshot. And I'm going to click Proceed. So three, two, one... error.

Speaker 1:

So what we can see here is it cannot migrate the domain. Remember when I was talking about the migratable setting in the VM template? I said Unraid doesn't actually use this feature to migrate a VM from one server to another, but if we want to make snapshots and dump the memory, we do kind of need it. So it's a choice you have to make: do you want slightly better performance with migratable turned off, or do you want to be able to take snapshots of the VM and dump the memory at the same time? Okay, so let's turn off this VM and edit it, change the CPU to be migratable, start it back up, go straight back to YouTube, and play our cat video. Let's minimize this now, and now let's try to take our snapshot. So, third time lucky, hey? I'm going to click on Create Snapshot again, call it "first install", dump the memory, and click Proceed. Okay, so finally we've actually got our snapshot, along with the memory dump. Okay, so let's go back, and the video is still playing. So let's close this, okay, that's all closed, and let's shut this down.
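Just so you can picture what that button is asking libvirt for, a snapshot with a memory dump corresponds roughly to a domainsnapshot request like this. A sketch, assuming an external snapshot; the paths are illustrative:

```xml
<!-- domainsnapshot sketch for a snapshot-with-memory request
     (e.g. fed to virsh snapshot-create). Paths are examples. -->
<domainsnapshot>
  <name>first install</name>
  <memory snapshot='external' file='/mnt/user/domains/PopOS/first-install.mem'/>
  <disks>
    <disk name='vda' snapshot='external'/>
  </disks>
</domainsnapshot>
```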

Speaker 1:

Okay, so how do we actually restore a snapshot? Well, what we do is go to our VM and click onto it. Here we can see Snapshots at the bottom: here's the description I gave it, saying "first install", the date and time the snapshot was made, and the type of snapshot it was. It also tells us the parent, which is the base install from when I first installed the VM. And we can see here that the vdisk is no longer vdisk1 like we had earlier. So let's start the VM back up, go in here, and just delete a whole bunch of files. I'm going to delete these folders here, move them to trash, and empty the trash, and we'll imagine those folders were all full. So let's shut down the VM.
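That change of vdisk is because an external snapshot turns the original image into a read-only backing file and points the VM at a new overlay. In the running domain XML it looks something like this sketch (file names illustrative):

```xml
<!-- Disk chain after an external snapshot (sketch). New writes go to the
     overlay; the original vdisk1.qcow2 becomes its read-only backing file.
     Reverting deletes the overlay and points the VM back at vdisk1.qcow2. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/user/domains/PopOS/vdisk1.first-install.qcow2'/>
  <backingStore type='file'>
    <format type='qcow2'/>
    <source file='/mnt/user/domains/PopOS/vdisk1.qcow2'/>
  </backingStore>
  <target dev='vda' bus='virtio'/>
</disk>
```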

Speaker 1:

So, back at the VMs page, let's go ahead and restore the snapshot. Let's click onto the VM here, and at the bottom we can see Snapshots. If we click on these two little lines here, I can click Revert Snapshot and go back to this snapshot. So I'm going to click onto that now, and what we can see is that it's going to remove this vdisk here, which is the one we're pointing to at the moment. So I'm going to click on Proceed. And now, do you see that we've gone back to vdisk1.qcow2? That was the base image the snapshot was actually using.

Speaker 1:

So it's slightly different to what you might be used to with ZFS snapshots, where you roll back to the snapshot itself. When you revert a QEMU snapshot, you kind of roll back to the moment just before you took the snapshot. So we can see the VM is running here now. Let's open up the VNC window and, hey, the cat video is playing. Absolutely awesome. Because we dumped the memory as well, the VM just continues from that moment in time. And if we close this now and go and look at our files and folders, well, they're all back as well. So it's very easy to take snapshots and restore VMs; just remember the caveats when it comes to snapshots, and the criteria you need to meet if you want to make a snapshot and dump the memory at the same time.

Speaker 1:

Okay, so I think that's pretty much everything in the new Unraid. I can't think of anything else to tell you, but I'm pretty sure that after publishing this video I'll think of something. But hey, I'm going to show you guys something before I go. I've just got a new hard drive this morning. This is actually the largest one I've ever had, 22 terabytes, and I'm going to be using it in an upcoming Uncast video that should be out very soon, where I'm going to build an extremely low-power server. It's going to be sipping electricity the same way a politician sips the truth: sparingly, if at all. Anyway, everyone, this brings us to the end of this Uncast Show. Remember, if you want to learn about Tailscale, then check out the Tailscale episode of the Uncast Show, and, as always, I'll catch you all in the next one.
