The Uncast Show
Owning an Unraid server gives you the ultimate control over your data and reduces your dependence on The Cloud. Join host Ed Rawlings to learn how to get the most out of your Unraid server, stay up to date on relevant news and topics, and get to know members of the Unraid community!
Pub Chat and ZFS build planning with Stefano from SPXLabs
Stefano from SPXLabs joins Ed at a pub in the UK for a cold pint and some tech talk about Red Hat, PCIe lanes, CPU pricing vs. performance, and more!
They also plan out an upcoming Unraid ZFS build.
Part 2 will be video only and can be watched here on the Uncast Show YouTube channel.
[Music]
Ed: Stefano! What are you doing here, man?
Stefano: I'm just here to have a pint.
Ed: Oh wow, a true pint. Let me get you a pint. Let's sit down and have a chat.
Stefano: Yeah, let's do it. Cool, man.
Ed: Hi there, guys. So welcome to another episode of the Uncast Show. I happened to bump into Stefano in the pub, so I thought, let's have a chat. So what are you doing here in the UK, man?
Stefano: Mainly here to drink beer.
Ed: Sounds good.
Stefano: And to get to know the locals and, I guess, talk about Unraid.
Ed: Yeah, yeah, cool. So what are you drinking there, Stefano?
Stefano:I already forgot the name of it, but it is a cider.
Ed:Yeah, a British cider from Somerset. That's just gold. That's just gold. That's the one that's awesome. Well, I'm drinking a lager. So how long have you been over here in the UK? Three days now.
Stefano: Yeah, first time. Yep, first time for everything: first pub, first time in the UK. So, to firsts!
Ed: And how are you enjoying it?
Stefano: So far, so good. The weather has been absolutely lovely. It was a little hot on the first day here, but that was mostly just an adjustment phase we had to go through, because in the United States we're so used to AC everywhere and everything.
Ed: I saw when you posted on Twitter that you had a fan in your hotel room, so I was thinking to myself, oh no, that's going to kill him, with no AC.
Stefano: The fan lay in bed with us while we slept.
Ed: Great. So I was going to ask you something, as you're the man who probably knows. Red Hat has recently changed its licensing, I believe that's correct, and I wondered: how is that going to affect the clones such as Rocky and other clones of that kind?
Stefano: Right, so we're not entirely sure how that will affect them in the long term, because the licensing just changed. For those of you that don't know, the source code for Red Hat Enterprise Linux has now been put behind a paywall: to access it, you have to pay Red Hat. And how does that affect AlmaLinux and Rocky Linux? Well, potentially they may not be able to clone Red Hat anymore, and that's important because AlmaLinux and Rocky Linux are one-to-one binary compatible with Red Hat. So it's a pretty big deal if you want a Red Hat clone so you have the same stability that the enterprises would. But you won't have that now if Red Hat decides to just completely shut out the clones.
Ed:Wow, so when did this happen then? When was this announced?
Stefano: Oh, you got me there. Maybe four days ago.
Ed: Oh wow, very recent. Yeah, I thought it was pretty recent.
Stefano: Yeah, and what's actually worst about it is that everyone was afraid when IBM bought Red Hat, but IBM had done a good job of building goodwill with the community. Things looked great. Kind of like Reddit: it's existed for so long, they'd built goodwill with most people, and now it's almost like a rug pull and we're unsure of what the future might be like. Reddit could disappear, AlmaLinux and Rocky Linux could disappear. We just don't know; it's too early, and we don't have a good indicator of what IBM will do.
Ed: And I believe they have done one good thing with the developer license. With how many free licenses now?
Stefano: Yeah, yeah, that's right. So previously, if you had a free developer license, you could have up to 16 licenses for Red Hat, and I believe they've upgraded that to 254 or 250, so that's pretty much 20x. Yeah, it is. That's pretty generous. It's great for a cluster.
Ed: Yeah, sure. Anyway, as everyone knows, in the Unraid world we're on, I believe, 6.12 at the moment, and that's introduced ZFS. Yeah, well, I'm saying it the wrong way around, you see. To be correct here in England, I should be saying it the English way.
Stefano:I should be speaking English.
Ed: I should be speaking English English, shouldn't I? But yeah, so we've got ZFS in Unraid, so when are you going to be putting that on your server? Any plans for that?
Stefano: Soon. Well, probably not that soon, but hopefully we can maybe work on a video in the future together. Maybe you could remote in and do some cool things with virtual machines and back up to my servers.
Ed: Sounds good.
Ed: You know, yeah, because ZFS replication is better. Yeah, backing up my VMs to you, that's probably pretty cool.
Stefano:That'll be the kick.
Ed: I need to actually deploy ZFS, otherwise I would do it anyway. Guys, what Stefano and I are going to be doing in the latter part of this Uncast episode is actually build an Unraid server from scratch. We're going to build it with the new 6.12 and it's going to be a purely ZFS server, all with SSDs and NVMes, and it's going to be just for VMs. And we had a chat about things online before doing this, about what hardware to use. We were choosing between this and an AMD platform. We wanted to use just consumer parts, really, to make a build that's more in reach of every single user, rather than using very expensive server hardware, very expensive Threadripper Pros with 128 PCIe lanes. So we actually settled on an i9-13900K and a Z690 motherboard.
Ed: Now, can you explain some of the problems with PCIe lanes? Because we wanted to use a lot of NVMe drives. Maybe you can explain to the audience something called bifurcation.
Stefano: Right, right. So CPUs typically have an allotment of PCIe lanes.
Stefano: So I believe the 13900K has 40 PCIe lanes. No, that's too many: 20.
Ed: And then it has four for the chipset, right, so the motherboard itself has four lanes.
Stefano: There's a separate set of lanes that are available for things like your networking, USB, SATA or other onboard hardware that you may want to plug in, and then the CPU itself also has its own lanes that are separate from the motherboard's. And so what you can think of is: you have the PCIe slots. Some will be a full x16 slot, some will be x8 or x4, or you'll have a different kind of mix in there, and the motherboard has to figure out how to share those lanes between the CPU and the other USB ports or other ports on the motherboard itself.
Stefano: So it can be kind of complex, and the reason why it's important to know how many PCIe lanes you have is because, let's say, you're trying to have a graphics card, several NVMe drives, multiple USB devices, maybe even some onboard SATA devices. All that bandwidth has to be shared amongst all those devices, and when you run out of PCIe lanes, some devices may not work at all. This is pretty common when people try to bifurcate their x16 slot into four x4 links, if that makes sense. So x4, x4, x4, x4, when you divide that x16 slot. And so what could happen is one or two of those NVMe drives may not work at all, because you've run out of PCIe lanes.
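Stefano's point about running out of lanes can be sketched in a few lines of Python. This is an illustrative model only: the lane budget, device names and x-widths below are made-up examples, not real firmware or chipset logic.

```python
# Illustrative sketch of a CPU's PCIe lane budget being spent greedily.
# Hypothetical numbers: a 20-lane CPU, a GPU at x8, and an x16 slot
# bifurcated x4/x4/x4/x4 into four NVMe drives.

def allocate_lanes(cpu_lanes_free, devices):
    """Assign lanes to devices in order; return the devices left unconnected."""
    unconnected = []
    for name, lanes_needed in devices:
        if cpu_lanes_free >= lanes_needed:
            cpu_lanes_free -= lanes_needed
        else:
            unconnected.append(name)  # this slot is effectively disabled
    return unconnected

devices = [("gpu", 8), ("nvme0", 4), ("nvme1", 4), ("nvme2", 4), ("nvme3", 4)]
print(allocate_lanes(20, devices))  # → ['nvme3']: the last drive gets no lanes
```

With 20 CPU lanes, the GPU and three of the four bifurcated NVMe drives fit; the fourth drive simply never shows up, which is exactly the symptom described above.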
Ed: Yeah, and with PCIe lanes as well, we've got Gen 3, Gen 4 and Gen 5, and the thing with those PCIe lanes is that the different gens have different amounts of bandwidth. So you can actually run, say, a PCIe 4 GPU in a PCIe 4 x8 slot and it will run at the same speed as what a PCIe 3 motherboard would run it at, at x16. So although sometimes you have fewer PCIe lanes, as far as I understand it, the higher the PCIe version, the more bandwidth you get per lane, and so you can run devices on fewer lanes and still get the full speed, right?
Stefano: Exactly.
Stefano: And a lot of people are concerned: oh, you know, I'm not going to be able to fully utilize my graphics card's potential because I'm not giving it enough bandwidth, because it's now in an x8 slot versus an x16.
Stefano: And usually graphics cards don't even get close to using the amount of theoretical bandwidth that's available to an x8 slot or an x16 slot, especially an x16 slot, even with PCIe 5.0 nowadays. So you're not really going to lose performance. Now, you may not visibly notice that performance loss; maybe a very specific benchmarking tool would show it, but generally the common person will never see it in gaming or transcoding or anything of that nature.
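The gen-versus-lanes trade-off being discussed reduces to simple arithmetic. A rough sketch, with per-lane figures rounded to 1, 2 and 4 GB/s (real PCIe 3/4/5 values are roughly 0.985, 1.969 and 3.938 GB/s per lane after encoding overhead):

```python
# Approximate one-direction PCIe bandwidth per lane, in GB/s, by generation.
# Rounded figures: each generation roughly doubles the previous one.
PER_LANE_GBPS = {3: 1.0, 4: 2.0, 5: 4.0}

def link_bandwidth(gen, lanes):
    """Approximate total one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

def negotiated_bandwidth(device_gen, slot_gen, lanes):
    """A link trains at the lower generation of the device and the slot."""
    return link_bandwidth(min(device_gen, slot_gen), lanes)

# A Gen 4 x8 link offers the same bandwidth as a Gen 3 x16 link:
print(link_bandwidth(4, 8) == link_bandwidth(3, 16))  # True

# But an older Gen 3 card in a Gen 4 slot still trains at Gen 3,
# so halving its lanes does halve its bandwidth:
print(negotiated_bandwidth(3, 4, 8))  # 8.0
```

So a current-generation GPU loses nothing on paper when dropped from x16 to x8 on a newer-gen slot, while an older card always negotiates down to its own generation first, then takes whatever lane width the slot gives it.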
Ed: Yeah, yeah. How I look at it, it's like if I'm driving a car and we've got a speedometer, and I'm breaking the speed limit in the UK going up the motorway, taking you to see Stonehenge, and we're doing 180 miles an hour, and you look at the speedo and you go, hey, Ed, you're doing 180. But if we slowed down to 170, you as a passenger wouldn't notice unless you looked at the speedo, right? So in real life you don't notice, but if you've got a gauge measuring things, you do.
Stefano: Exactly.
Ed: You know, that's a very kind of basic, probably stupid way of describing it.
Stefano: It wasn't terrible. I'll give you that.
Ed: But you caught me when I've already had a few beers, so you know you're not going to get much.
Stefano: This is my first. This is your second or third?
Ed: It is, yeah.
Ed: So as well, you know, a lot of people sometimes worry about having multiple GPUs in a system with lower amounts of PCIe lanes, but the chances of maxing them all out at once are pretty slim. You're not going to be using all your NVMe drives, all your SATA ports, all your GPUs and your networking, everything at once.
Stefano: Yeah, modern motherboards are pretty good at managing the bandwidth, and honestly, the bandwidth is going to be the least of your concerns. If you only have limited PCIe lanes, the bigger price is that entire slots could be completely disabled, depending on how you have things configured. Yeah, for sure.
Ed: And sometimes it depends on the BIOS of the actual motherboard, when you enable certain NVMes. For instance, on some motherboards you put the third NVMe in and you'll find the motherboard will disable the fourth PCIe slot. So that's pretty common. It's give and take.
Stefano: And it's funny too, because you look at CPUs from 2012, and I'll use the 5960X as a terrible example because that was a $1,200 CPU, but back then it had 40 PCIe lanes. I know, it's crazy, and you can't buy that today on consumer hardware. You basically have to upgrade to Xeons, and it's just like, why have you taken away this amazing thing? And I believe there is a market of people who are willing to pay premium dollars to have access to those PCIe lanes again.
Stefano: I, for one, would love to have that. I currently have a 5960X and I'd love to run it, but unfortunately it's just completely power-inefficient by today's standards, and the single-threaded performance is also very slow; it's not something that you would want to run. And that's why I've converted to AMD, because the single-threaded performance is actually great and you get better performance at lower wattages. But unfortunately you still have to give up those PCIe lanes on AMD as well.
Ed: I think what it is is: you get the server-like Epyc CPUs, which have plenty of PCIe lanes and official support for ECC memory, and then you've got the consumer CPUs. In AMD, for instance, the X570 chipset has again 20 PCIe lanes to the CPU and four that are used for the motherboard chipset. They think that for most people that's going to be enough for the GPU and maybe one NVMe, whereas in the past we'd get more.
Stefano: Well, you had your Extreme series for prosumers, like the people who wanted the extreme of everything.
Ed: I think what they've done now is they've thought, okay, we've got the top tier for servers, we've got this consumer tier, so let's make another, more expensive tier. And that's where we get the Threadripper, for example; I can't think of the equivalent for Intel at the moment. The Threadripper, where you've got 64 lanes and you haven't got official support for ECC, but ECC works. And then they go, ah, let's have the Threadripper Pro, so we're going to give you 128 PCIe lanes and official support for ECC. And the Threadripper Pros were, like, in the UK, £2,000 for, I think, a 16-core CPU.
Stefano: So where in 2012, for 1,200 US dollars, you could get a really somewhat efficient CPU, the PCIe lanes and the speed, now you're overpaying for an inefficient CPU with 64 lanes and, potentially, ECC.
Ed: And you're never going to get the full clock speed of what you get in a consumer CPU. So if you want the fastest gaming CPU, well, forget the PCIe lanes. It's like, ah, why can't we have both? I just want a CPU where I can run lots of VMs and have all of the devices I want, with the highest clock speed.
Stefano: It's a tough world to live in today, I think. And you know, it's even worse because Intel makes these workstation CPUs that you would assume are targeted at the business-oriented people that need higher clock speeds, kind of like i7s, but who want the stability and the ECC memory support; yet even those are excluded from having additional PCIe lanes.
Stefano: So it's like, what's going on here? It's like they're specifically giving the finger to any prosumers out there: you either have to choose our consumer lineup or you have to move to Xeon and pay the premium there.
Ed: Yeah. So you know what it is: they don't want us to have any fun with our hardware. So if you want to do the best gaming, you have to have a separate machine for that, and you know you're going to have to buy both.
Stefano: Honestly, that's how it operates in my lab now. I have a dedicated gaming computer so I can get top-tier gaming, and then my Unraid server. It is a 5800X, but I mainly got that specifically for the single-threaded performance at the time, for gaming servers.
Ed: But still, that's only so many threads.
Ed: So the 5800X, does that have faster single-thread performance than the 5950X? Or, I can't remember.
Stefano: So maybe on paper the 5950X is faster, but I think in the real world, for the amount of money that you pay, you wouldn't see an actual difference. It's been a while since I've actually looked at that.
Ed:Have you thought of upgrading to the new AMDs? What is it? The 7900 series?
Stefano:Yeah, I've considered it, but I don't know if it would be worthwhile at this point. There's not too much to be gained from a server perspective. No.
Ed: However, we've still got the same 24 PCIe lanes.
Stefano:Right, we still have the limitations.
Ed:Nothing better with that.
Stefano: Exactly. So honestly, I think, unfortunately, I'm going to be pushed back into the Xeon world, and I've been looking at dual-CPU Dell R720XDs. You can find those fairly cheap in the US market, and you get 20 cores with those, and you get the 48 PCIe lanes.
Ed:Yeah, that's good, so that's pretty healthy.
Stefano: Yeah, especially for my needs; I don't need anything too crazy like 128 PCIe lanes, or even 64, for that matter. But then, you know, I'm back to the same problems we had before: inefficient, slow. And is that really worth having the additional PCIe lanes? It's just a really rough area to be in right now.
Ed: Yeah, so I'm interested to see how our build goes tomorrow, and see how much we can actually push the ZFS array, the NVMes and the GPU that we pop into it. Yeah, it'll be interesting.
Ed: But I think, you know, as long as we're not running multiple GPUs with GPU passthrough, we're not going to be hitting all of the NVMes at once, we're not going to be hitting the GPU at once. So I think we're going to get really good performance out of the system running on NVMe. Right. And it'll be interesting using the i9 with the performance and economy cores. Economy cores? Efficiency cores. I keep calling them that, don't I? Yeah.
Stefano: But you know.
Ed: Intel, you should call them economy cores, because then I won't be wrong. And also it actually kind of makes sense, you know: you want your economy. It fits in line with the rest of the world's thinking, I think.
Stefano: I agree. Just like, have you seen the new naming scheme for their CPUs?
Ed: No.
Stefano: It looks like it's going to be terrible. I don't know the specifics, but it looks bad. Yeah, not looking forward to having to relearn their naming scheme again.
Ed: Can you expand on that a bit?
Stefano: Yeah, so I think they're going to get rid of names like the i9-13900K, and it's going to be replaced with something even worse. I wish that I would have looked it up now, but I didn't think we'd actually get into this part of the discussion.
Ed: So now, look, for example: would you like me to Coogle it for you? My Coogle might be broken, but feel free. Yeah, this is better than ChatGPT, this Coogle, is it?
Stefano:Yeah.
Ed: You know, ask it a question. How do I ask a question?
Stefano: You just talk to it. Coogle, can you look up... I forgot what I was supposed to ask. The alcohol started to kick in. Can you look up what the new naming convention is for Intel processors, when they come out next year or the year after?
Ed: No, you're too lazy, do it yourself. Nice. Sorry about that. You know, bad jokes and beer, they go together.
Stefano:Yeah, it's more fun in the moment, anyway, for sure.
Ed:Anyway, before we get too drunk, Stefano.
Stefano: I'm already drunk, I think. How many are we talking about?
Ed: Why don't we wrap this part up and go and build the server?
Stefano: Can I take my beer to go, huh? Can I have this as takeaway, as you say?
Ed: Well, I think off camera we can just drink as many as we like until we fall over, and then, when we're thrown out of the pub, we'll go build the server. How about that?
Stefano: That sounds spectacular.
Ed: Okay, okay. Thanks for watching this part, guys. We'll catch you in a moment.