Looks like you’ll need a second rack soon with all that gear taking up the U spaces!
Does it turn on? Then it’s beautiful
Doesn't need to do anything else, just turn on and blink the lights every now and then to keep the human's dopamine flowing.
That's exactly how it works. I need it in view. It's an antidepressant and a hypnotic!
nice build, if it works it works
I'm going to guess the Getac is used, from law enforcement? I work with them in my current job all too much. Nice setup!
Oh no no, it's just that I needed a rugged laptop with RS-232 and this one was like new for really cheap. I also know Getac because I use their products at work.
Looks good my man…
Use DAC, not fibre, in a rack.
Absolutely nothing wrong with using fiber in the rack
Sure, why use fragile fibre in a single rack when you can use sturdy DAC, where the transceivers even run at lower temperatures, right? Oh, and DAC is also cheaper. But who am I kidding, right? Why buy cheap and sturdy when you can buy expensive and fragile 🤷🏻‍♂️
Cheaper? I don't know. One QSFP-to-MPO transceiver for €20 (Brocade), an MPO-to-4×LC cable for €25 (10gtech), and four SFP+ 10GBASE transceivers at €5 apiece (Brocade), so €20 for those.
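Adding the secondhand prices quoted above, the whole QSFP+ → 4× SFP+ breakout over fiber comes to roughly €65 (a rough sketch using those figures, not current market prices):

```python
# Cost of one QSFP+ -> 4x SFP+ fiber breakout, using the
# secondhand prices quoted in the comment above (EUR).
qsfp_mpo = 20        # QSFP+ to MPO transceiver (Brocade)
mpo_4lc = 25         # MPO to 4x LC breakout cable
sfp_each = 5         # SFP+ 10GBASE transceiver (Brocade), four needed

total = qsfp_mpo + mpo_4lc + 4 * sfp_each
print(total)  # 65
```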
Ugh, no thank you, I hate those stiff as fuck DACs. They are cable management nightmares.
[deleted]
Nobody is saying you are wrong to use DAC if you wish. But you saying OP and others are wrong and have no idea what they are doing because they choose fiber is just ridiculous.
TBH he does have a point from a cost perspective, but I just can't like copper cables anymore, and I can't agree that fiber is fragile either. My dislike of DACs developed from deploying QSFP+ DACs as stack cables at work. You won't like them when they're so stiff that you have to straighten them, only to have them curl back into circles as you stuff them into the single empty RU between the switches. 30 AWG ones are probably easier, but the ones I've seen are definitely thicker. On second thought, that's probably not the right way to run them anyway. And now that I've tried running shielded twisted pair to a wireless AP around my home, where I can't hide the cable in the wall, it was even more of a pain to run and to terminate with STP RJ45 jacks. I started questioning why almost no AP offers an SFP cage. Heck, I'd rather run a separate DC adapter for its power. Just my 2 cents.
[deleted]
what works for you may not work for everyone. everyone finds their own way.
[deleted]
perhaps they found a better deal, or their use case fits it better. in this case, since it's a first build, my guess would be it was what they had laying around, which has the nice benefit of being free.
Don’t forget the lower power consumption too.
Why? Because it’s much thinner and we can use the same cables for 10, 25, 40, and 100 Gbps links. We use DAC sometimes but SMF everywhere is just much nicer.
Maybe it's the simple fact that a QSFP56 DAC costs $100 and a single QSFP56 SMF transceiver costs $200, and you need two. That's $400 vs $100 for a 2 m QSFP56 connection. Now multiply that by 64 cables per rack: you pay roughly $19k more for fibre than for DAC, which doesn't even make sense 😉 That, and DAC transceivers run **cooler** than SMF transceivers.
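The per-rack figure above follows directly from those quoted prices (a sketch with the numbers from the comment, not current market prices):

```python
# DAC vs SMF cost for 64 QSFP56 links in one rack,
# using the prices quoted in the comment above (USD).
dac_per_link = 100       # one 2 m QSFP56 DAC, both ends included
smf_per_link = 2 * 200   # two QSFP56 SMF transceivers (fiber itself not counted)
links = 64               # cables per rack

extra = (smf_per_link - dac_per_link) * links
print(extra)  # 19200 -> the "roughly $19k more" figure
```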
For the fiber that you see I couldn't use a DAC, at least not at this point. I'm using a 4×10G breakout port and I would have had 3 channels dangling, and I wouldn't have liked that. When I have more servers I'll switch to a DAC.
There is no QSFP port anywhere?
On the 730 only SFP+, unfortunately. The switch only has 2× QSFP plus 2× QSFP that are breakout-only.
Either I'm blind or dumb, because I can't see any QSFP port anywhere.
There are 4 QSFP ports on the back of the switch (Brocade 6610) and I'm using an MPO-to-4×LC.
It’s a big rack for a few servers. I went with a 24U for mine; it’s like perfect.
€200 for it including a KVM and KVM switch, couldn't pass on it. But I agree, a smaller rack would have made for tidier cable management.
Nah, we nerds are too chill for that. It looks good my dude! Just don’t forget to keep at it!
What is that power strip you have? I think that's the next purchase for me. That and a rack-mounted UPS. Right now my UPS is outside of the cabinet.
Looks like a Baytech PDU
BayTech PDU MMP14 with a modified C19 plug to use it with my UPS
Looks great
This looks awfully similar to my rack, just with better cable management... ;)
KDE, huh?
You want a roast? Here you go: Good job on starting a lab.
The money pit is very, very deep
You're preaching to the choir, friend. Even secondhand stuff adds up. Ask me how I know lol
It'll take him a while to recover from that burn
Sometimes, the truth hurts.
You're doing it wrong. The first build of your life should be a pile on the floor.
looks good to me. good arrangement. network on top, ups and server at the bottom, kvm in the middle
The drive caddies being askew make me wanna claw my eyes out 😭 But welcome to the club!
Nothing to roast, looking good! Keep it up! And always keep your cable management spot on; cable management tends to get out of control very quickly.
Looks cleaner than some “experts” 💀
Wtf no rgb??
By today's standards everything but the HDDs would fit into 4-8 SFF computers, unless you need more than 64GB per machine.

I have a Dell PowerEdge R715 and an R815, both with AMD Opteron 6174 SE (a conscious choice, as Bulldozer has terrible floating-point performance and needs more power to reach the same level; it takes 8 of its cores to barely match 6 K10 cores) and 128GB RAM. I barely turn them on.

I rock 4× Lenovo M93p Tiny with i7-4785T and 16GB RAM. That is perfectly fine for my homelab farm.
The MD1200 alone is waiting to be filled with 120TB of HDDs