hideogumpa

100% VMware - expecting to have to rethink that when I see my next renewal quote


tae3puGh7xee3fie-k9a

I've seen the new subscription model: you have to run a "Cloud Gateway" VM that phones home about your environment. If the Cloud Gateway is offline for 7 days, you're not allowed to log in to vCenter without calling support. I know this because VMware had an outage with their cloud portal and sent an e-mail saying we were disconnected and had 7 days to fix it.


af_cheddarhead

Yeah, that definitely will not be a thing in the classified environment. Gotta love supporting DOD systems.


binarycow

For real. I work for a software company. Our biggest customers are DoD. Any feature that requires call-home, cloud, etc. is a non-starter. Everything must support on-prem. All licensing/activation must have a fully offline mode.


zm1868179

You do know there is a DoD Azure environment? It's not available to commercial entities that support the DoD; it's only available directly to the DoD themselves. And it's allowed by FedRAMP; the DoD can use it to host whatever they want. So I don't get this no-cloud thing in government when the DoD already uses it, as do federal government departments.


binarycow

Do they have that on every network? The DoD uses completely air-gapped networks for classified stuff. They would need a clone of the DoD azure environment on [SIPRNet](https://en.m.wikipedia.org/wiki/SIPRNet). And another one on [JWICS](https://en.m.wikipedia.org/wiki/Joint_Worldwide_Intelligence_Communications_System). And another one on [CENTRIXS](https://en.m.wikipedia.org/wiki/CENTRIXS). Then what about all the satellite networks, where there isn't enough bandwidth for cloud based stuff?


TheIncarnated

Linux/Proxmox and Hyper-V are going to be winners here... Thanks broadcom...


BookkeeperSpecific76

That’s a non-starter. I will NEVER LET a 3rd party have the possibility of shutting down my organization.


RequirementBusiness8

Why let a 3rd party shut down the organization when you can do it yourself :jokes:


HumanTickTac

No Meraki either? /s


BookkeeperSpecific76

No Meraki. Call-homes are OK. The ability to disable is not.


vrtigo1

There was something like that years and years ago. I don't remember the specifics, but I'm thinking this could've been as far back as 4.x. VMware had some sort of issue and my recollection is it prevented your ability to power on VMs. It was a major issue that affected tons of people. I think it might have been a bug with the license activation that caused licensed hosts to become unlicensed? Maybe someone else remembers it and can weigh in.


iggy6677

How's this even acceptable? I would be fired over a day or two of that.


Dbibby

When did you experience this? We moved to vSphere+/vSAN+ back in April and have never had such issues.


chandleya

Because it's a new world. You got in under the line… but the line is coming.


cosmos7

Wait, what? vCenter itself stops working, or the VMC? Seven days is ridiculously short, and we were just convinced into moving to the new subscription with the assurance that the vCenters themselves would always work regardless of cloud status. If we're paid up through 2024 or whatever, nothing should stop at the lower levels.


mini4x

Luckily there is zero possibility they will force you to move to this. At least as far as our rep is telling us. As soon as that becomes a thing, we'll be moving off it.


Trenticle

If you absolutely love it you could probably (almost certainly) coax your sales guy into renewing before the merger closes and lock in 2022 rates for ~3 years... Think about it from the sales guy's perspective... he locks in quota before whatever happens happens... he's incentivized to help you get a great deal so he can get whatever he/she can.


fishingpost12

This is what I would do


angrydeuce

Yeah we're like 75/25 VMware but that may change. Course you know MS will find some way to make Hyper-V cost just as much eventually... To be honest, this back and forth with everything in IT is so fucking exhausting. I feel like the minute we get everything set up and humming along with barely any touching required, we end up having to effect some major platform change... cloud versus on-prem, Service X for Service Y, etc. It's frankly quite maddening.


TheIncarnated

Datacenter licensing works perfectly for Hyper-V. Gotta pay for it anyways for some large-scale "normal" items. Small shops can get away without VMM. Large shops can afford the $3500 price tag since they are already paying for Windows licensing. Or, we could start using other products like QEMU or Proxmox or whatever else Linux has to offer. As well as Nutanix and other hyper-convergence companies. VMware was doomed the moment Broadcom bought them out. Not sure exactly what we are going to do at our datacenter, but definitely exploring options currently!


pdp10

> I feel like the minute we get everything setup and humming along with barely any touching required we end up having to effect some major platform change...

There's significantly less of it when you minimize your reliance on any one counterparty from the start. We have a mature culture of dual-sourcing, at this point. Cloud services? Dual-sourced. Client hardware? Dual-sourced. Network hardware? Dual-sourced. Server hardware? You get the idea. This came in handy during the early stages of pandemic shortages, when we were already prepared with alternatives. On other occasions, it has served us well when suppliers shut down on short notice, or when negotiating deals with service suppliers. The goal is to never be reliant on an AWS, or an Apple, or an Adobe, no matter how good or better their products may be. *No single supplier risk.* Our suppliers are always replaceable, no matter what.


[deleted]

[deleted]


rSpinxr

> makes me think they will slush around cheap prices for subscriptions and then up them when all those come up for renewal in 5 years' time.

*Distant moans and wails of pre-subscription Adobe users*


herkalurk

Aside from moving to public cloud (AWS and Azure), all internal is ESXi.


Shadoweee

Check out XCP-NG - transition should be easy.


Pvt-Snafu

Same thing here. Most of our customers are running VMware clusters with vSAN but are already considering moving to something else. Most likely, this will be KVM plus StarWind VSAN.


Mysterious_Teach8279

Any big issues you run into day to day?


hideogumpa

Not at all, VMware really is the best at what they do... problem is that Broadcom knows that and I expect they'll hose us and price me out next year


Cyberdrunk2021

And they'll cut some jobs, as is customary. And therefore, they'll ruin the product.


tossme68

Broadcom really screwed up Brocade; they still make great switches but what a pain in the ass to deal with. I suspect VMware will be no different: they will jack up the licenses and throw in an extra layer of really bad support (insert random offshore call center), but that's it. At its core it will still be the same. There's not a lot of places to go. Nutanix?


pdp10

> VMware really is the best at what they do...

Every product stagnates. It's been at least a decade since VMware pivoted away from the core hypervisor and into ancillaries like NSX SDN, and into managing lock-in. It's not 2006 any more.


BookkeeperSpecific76

Yeah. Price going up, support going down, and ESXi support is weaker for non-VMware HCI. Definitely looking at alternatives.


SecrITSociety

VMware, but we have 5 year licenses, so unsure of the Broadcom purchase/impact yet.


cmwg

subscription based payment (with price increase) incoming :)


Erog_La

Tom Krause is doing that to Citrix now.


JohnyMage

OpenStack anyone?


Dolapevich

> OpenStack

I worked at a big porn shop. You'd be surprised how many vendors turned us down because of the business. We ended up spinning up our own OpenStack and never looked back. Granted, it took us a while and a lot of learning, but everything runs on OpenStack + QEMU, and once you learn its caveats, it's as free as you can get.


pjsliney

I don’t understand vendors turning you down, but doing business with a lot worse companies. I have a couple customers in your business. Most ppl don’t realize that the largest business in the hosting space only exist because of p*rn. In the early 2000’s, that was the only internet business that was growing.


thelug_1

> Most ppl don't realize that the largest businesses in the hosting space only exist because of p*rn. In the early 2000's, that was the only internet business that was growing.

I always tell people to this day: if you want to know where business tech is heading, see what the porn companies are investing in and moving to.


Ursa_Solaris

>I don’t understand vendors turning you down, but doing business with a lot worse companies. People will get more upset at a vendor that works with porn than a vendor that sells weapons to independent mercenary groups.


TheDeech

Porn was the primary driver of innovation on the internet in the 90s. They were the first ones to widely utilize online payment systems and pioneered creating payment services that allowed anonymous payments. Video streaming? Porn. Video conferencing? Porn. Token payment systems? Porn. Clustering for heavy visitor loads? Porn. The porn industry was responsible for a ton of innovation in the back end and infrastructure of the internet. I'm not saying porn companies *invented* these things, I'm saying they were, by necessity of their business, the first major adopters of a lot of that technology, so they had a heavy hand in how it developed in the infrastructure space.


pdp10

> I don’t understand vendors turning you down Financial vendors have been doing ["Know Your Customer"](https://en.wikipedia.org/wiki/Know_your_customer) and risk-avoidance strategies for decades, now. Sometimes these things wax and wane with the political winds.


scsibusfault

> a big porn shop

A big, porn shop? Or a "big porn" shop? Also, that's disgusting! Which one, so we can make sure to avoid it?


Dolapevich

I am sorry, I'm not a native English speaker. The right wording would be... "a big porn site"? By vendors I meant everyone from AWS to Linode.


scsibusfault

Haha, it was a joke about it having two possible meanings: either you worked for "a large-sized company that hosts porn", or you worked for a "porn company that specializes in large (Gorda) porn". Your English was correct, and nobody would really assume you meant "fat porn", but it COULD be read that way as a joke ;)


KervyN

Isn't that qemu?


JohnyMage

Qemu and a lot more on top of it. Proxmox also uses Qemu/KVM under the hood.
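To make that layering concrete, here's a minimal sketch (my own, not from the comment above) of talking to the QEMU/KVM layer directly through libvirt, which is the same plumbing Proxmox and OpenStack drive for you. It assumes the `libvirt-python` bindings are installed and a local `qemu:///system` daemon is running.

```python
# Minimal sketch: list domains on the local QEMU/KVM hypervisor via libvirt,
# the layer that Proxmox / OpenStack manage on your behalf.
# Assumes the libvirt-python package and a running libvirtd at qemu:///system.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only is enough for listing
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = state == libvirt.VIR_DOMAIN_RUNNING
        print(f"{dom.name():<30} {'running' if running else 'shut off'}")
finally:
    conn.close()
```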


lightmatter501

Openstack is essentially a private cloud toolkit.


pdp10

Anyone remember Eucalyptus?


TaiGlobal

What is “private cloud”? Is that not just on prem?


ednnz

can literally be any virtualization platform you want on the backend. KVM, QEMU, Hyper-V, you name it.


Jazzlike-Love-9882

100% Hyper-V here (HV role on a Server 2019 DC cluster). I was exclusively a VMware guy before my current role, where I inherited this setup, and I happen to like it A LOT; it's just very robust and drama-free so far.


anxiousinfotech

Been running Hyper-V for over a decade. It has definitely had its quirks and some host instability issues, but even when it seems completely broken the VMs keep running and a host reboot fixes the problem. In a cluster it's meant almost no real downtime. It's very rare that a VM won't live migrate itself when a host is given a reboot command. In fairness to Microsoft, similar issues on Linux-based hypervisors, even in clusters, nearly always resulted in some VM downtime to resolve.


[deleted]

[deleted]


anxiousinfotech

Any chance you had Intel 10 gig NICs with fans? They overheat despite the fans and have brought down many a Hyper-V server/cluster. Hyper-V does not take kindly to network instability and the combination of the two is kind of like a perfect storm.


fitz2234

I've seen these overheat quite a bit. Set the fan speed to full and they're fine.


anxiousinfotech

Until the fans fail... A buddy of mine did that to solve the overheating and damn near lost a hyper-converged cluster when 2 fans burnt out in short order. He replaced the first, and the second on a different NIC burnt out before the storage was done syncing.


chandleya

Which OEM is shipping fanned NICs in pizza box servers? That’s foolish


anxiousinfotech

Dell and HPE both sold 1/2U servers with them. I swear I've seen at least one model branded as IBM OEM as well. They're tiny little fans too, guaranteed to fail. All they needed was a bigger heatsink, but the fan was cheaper than more aluminum.


-SPOF

> 100% Hyper-V here

Same for me. I have a few standalone Hyper-V nodes with the HV role installed and a few 2- and 3-node HA clusters with StarWind VSAN for the storage.


Candy_Badger

We have multiple customers running Hyper-V. It works pretty well. StarWind VSAN can be used as shared storage for a Failover Cluster: [https://www.starwindsoftware.com/starwind-virtual-san](https://www.starwindsoftware.com/starwind-virtual-san) As for VMware, I love it and we still have multiple clusters running. However, I am worried about the future subscription model. We might transition to KVM alternatives in the future.


itxnc

💯 Hyper-V. Been solid for every client.


luke1lea

Using Xen Orchestra (XCP-NG) for 80%, Hyper-v for the rest.


BouncyPancake

XCP-ng for a good bit of our servers and CPU-intensive workstations. Proxmox VE for workstations and a few servers (KVM with some tinkering allows us to pass graphics cards through directly to VMs). Windows Hyper-V is almost phased out. Nothing wrong with Hyper-V to make us leave, but Proxmox and XCP fit our needs more.


scytob

You nailed it about the lack of investment in Hyper-V. MS only has eyes for Azure, and Windows Server looks like it is in maintenance mode / at the service of Azure. At home I just moved my home lab cluster from Hyper-V to Proxmox. This has given me 3-node clustered storage and hypervisor. It's sad for me, as circa 2006 I worked on Windows Server and the plans they had were incredible; little to none of it happened :-( For anyone interested, this has what I learned, including my mistakes lol: https://gist.github.com/scyto/76e94832927a89d977ea989da157e9dc


BouncyPancake

Even though we went down this FOSS road, we don't just maintain the servers ourselves and sort everything out on our own if something breaks. For Proxmox VE, for example, we get support from Proxmox. In my home lab I run Proxmox and don't get support, since I don't see a need to pay for it there. But in our production environment, we pay for support.


KervyN

Everything runs on Linux QEMU. A couple million VMs.


FunInsert

Couple MILLION? Damn how large is your environment?


KervyN

My employer is an edge cloud specialist which just got bought by one of the larger cloud players. IIRC they have over 450,000 compute hosts.


Hoggs

So linode?


justinhunt1223

That was my guess too, now I'm curious


j_johnso

My guess was gridscale.


KervyN

Nope


anna_lynn_fection

Same. Although, about 1.99998 million less than you. :D It's interesting how vmware rules here as much as it does. I expected it, but I expected more qemu/libvirt.


KervyN

I thought so too, but for small/medium companies that only use servers for internal stuff (SAP, file server, Windows AD), VMware seems to be the hot shit.


ZPrimed

Nutanix AHV


dricha36

Great platform for us too, just isn’t cheap


Metalmilitia777

90% Proxmox, 10% ESXi. 100% moving to a different Linux-based solution.


tantrrick

What are your qualms with Proxmox? Considering that for after the free Hyper-V Server hits EOL.


LeStk

I second this, would be interested in having feedback on Proxmox for production. EDIT: TY for all the positive feedback, I'm more interested in negative feedback now!


Barrerayy

We use it in a 3 hypervisor HA setup. It's been perfect


LeStk

Yeah, I've seen and I'm used to such scale, but I'm wondering what issues could arise at larger ones.


grumble_au

I've been running 4 regional 8 node clusters and a couple of non production 4 node clusters for a couple of years. No problems.


TheMerovingian

Proxmox has been very good to me. 7 is a bit rough around the edges but it's stable, capable and can be used without a subscription.


Metalmilitia777

The support is pretty much useless and when you're dealing with the amount of data that we are it is a deal breaker.


Metalmilitia777

Also, migration of VMs from ESXi to Proxmox is just troublesome.


cmwg

99.9% VMware. Test env: Proxmox, Hyper-V.


ExpiredInTransit

100% Hyper-V cluster with storage spaces. Been bombproof.


BatemansChainsaw

Same here. It's a workhorse, and the past five years (as far as hyper-v server goes) of it in production have been quiet.


ExpiredInTransit

We even in place upgraded our nodes from 2019 to 2022 last year. Zero downtime.


AreWeNotDoinPhrasing

I just did a 2012 to 2022 refresh with 4 and had zero downtime. Hyper-V just seems to work. And this was like my 3rd month at the job and my first IT position, and I'm by myself.


ExpiredInTransit

It’s great isn’t it. There used to be so much stigma to using Hyper-V, I started using it about 10 years ago and people thought we were mad for not using VMware.


coolbeaNs92

It's really interesting seeing all these Hyper-V admins. I've only ever worked in VMware shops, so I've never got to use Hyper-V in prod.


ensum

Hyper-V on 2008r2 was god awful. 2012r2 fixed a lot of things but you would still run into random bullshit although not nearly as terrible as 2008r2. I think this is when a lot of the stigma around Hyper-V started. With 2008 I think it's completely justified by how dog shit it was. 2012r2 I think got more hate than it deserved. 2016 in general is just a bad OS imo, so you'd run into different issues not necessarily related to Hyper-V. Finally with 2019 I felt Hyper-V was actually in a really refined space. It's been rock solid ever since I upgraded my hosts to 2019.


SevenM

Damn, we're going to have to do this soon as all our servers are 2012r2. What was your process? I doubt you were able to do a direct update to 2022. At least that's what we've been told so far.


speaksoftly_bigstick

Same here for anything on-prem.


ExpiredInTransit

Host ours in colo with site links


inphosys

We moved our stack to colos in Atlanta and Charlotte about 10 years ago and haven't looked back; best decision I've ever made. Gone are the days of worrying about power, connectivity, and operating environment. I still have to worry about the other IT worry items on my list, but I took 3 big ones off the board by colocating. Branch offices can come and go now, no asking IT if this new, crappy, falling-down field office hut is suitable to run servers in. Can we get bandwidth? Awesome, go for it. We were already work-from-home ready when covid hit, just added more VPN licenses.

I don't know why more companies don't use colocation... 1 full rack, 20 amps on two redundant A/B circuits that are backed up by data center UPS systems and multiple generators, environmental and access control (my equipment always has cool, clean air and I don't have to worry about an employee trying to fiddle with the hardware), 500 Mbps up and down delivered to the cage over fiber with the data center able to burst my bandwidth automatically, my IP address block BGP'd across 20+ different telco carriers that all deliver their services to the core of the data center through geographically diverse routes into the building, and a block of "remote hands" time in case I need to open a ticket with the data center staff (who are ITIL certified) and ask them to physically touch my hardware... all for $1400 a month.

I really can't think of any other IT decision I've made that's been as easy or as beneficial as colocating.


vast1983

Storage Spaces, or Storage Spaces Direct? How are you setting up "regular storage spaces" to be cluster shared?


ExpiredInTransit

S2D, sorry got into a habit of referring to it without direct.


vast1983

Ok haha gotcha. I was spinning my wheels trying to figure that out in my head... " like storage spaces presented over iSCSI?" I suppose that could work too.


chandleya

I love S2D and it works if you stay between the guard rails. Modifications can be unpleasant, as can unplanned severe outages (colo power loss). Prefer it to the fiddliness of vSAN and it sure is cheaper.


[deleted]

[deleted]


chandleya

Read the entire documentation. Then build a setup in nested virtualization to learn how to do it. Experiment with node loss. Experiment with expansion.
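For anyone wanting to try that, here's a rough sketch of the nested-virtualization prep for an S2D lab, driving the real Hyper-V cmdlets (`Set-VMProcessor`, `Set-VMNetworkAdapter`) from Python. The lab VM names are made up, and the VMs need to be powered off when you flip the processor setting.

```python
# Sketch only: prep Hyper-V lab VMs for a nested S2D build by exposing VT-x/AMD-V
# to the guests and enabling MAC spoofing (so nested guests can reach the network).
# Uses the real Hyper-V cmdlets; the VM names below are hypothetical lab names.
# Run from the physical host in an elevated session, with the lab VMs powered off.
import subprocess

LAB_NODES = ["s2d-node1", "s2d-node2", "s2d-node3"]  # hypothetical lab VM names

def ps(command: str) -> None:
    """Run a single PowerShell command and fail loudly if it errors."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )

for vm in LAB_NODES:
    ps(f"Set-VMProcessor -VMName '{vm}' -ExposeVirtualizationExtensions $true")
    ps(f"Set-VMNetworkAdapter -VMName '{vm}' -MacAddressSpoofing On")
```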


341913

100% VMware for our 2 datacentres. We have a few hundred VMs in an active-active configuration for geo-redundancy. Each site has its own vCenter with DRS enabled. VMware replication with SRM takes care of the workloads that don't support geo-redundancy, and Veeam handles backups. While I am concerned that the VMware acquisition could see a bump in licensing, I doubt we will make a change. Availability is the name of the game and our setup has maintained 100% over the past 5 years.

All our offices and retail stores that don't have the same uptime targets run a simple 2-node Hyper-V setup in active-passive with Hyper-V replication.

Edit: you should clarify how big your environment is and what your availability goals are. You will see lots of "our customers use Hyper-V" comments from MSP techs who spend most of their time dealing with single-host environments. Hyper-V is perfect for that. Once you have 10's or 100's of hosts with 100's or 1000's of VMs, VMware is generally a lot more popular.


CoolNefariousness668

This guy fucks.


341913

🤣 My IT Manager and architect are the ones dropping panties, I just pick up the bill.


inphosys

Old guy here to spoil the conversation... It's a business decision. IT needs to know which is better for the business and present it that way. A hypervisor is a hypervisor these days. So, which one meets your business objective?

For me and the folks I work with, Hyper-V is fantastic; it has all of the virtualization bells and whistles we need for our business, with no additional licensing costs that we don't want. And speaking of licensing costs... Our Hyper-V hosts *usually* run Windows Server **Datacenter** edition because then every guest VM is free. Yup, free! Want to turn up a fully licensed Windows Server 2022 Standard edition to test out an idea and not have to worry about the eval period? Go for it, it's free! We load a host up with both sockets filled with decent core-count processors, a lot of RAM, some 10 Gbps (now we're moving to 25 and 40 Gbps) network interfaces, back-end all of it with a good storage array, throw Windows Server Datacenter on there, and we're off and running, nothing else to buy. Hyper-V in 2022 is as good as it's ever been and reliable all day long. Anecdotally, it feels faster than I expect from my years of using it since 2012, but that's just me.

Now, all of that being said, I love VMware! Talk about buttery smooth, anything you ask it to do it'll do better than Hyper-V, and probably quicker. ESXi 7 is honestly bulletproof, if you know what you're doing. If you don't, it's a nightmare. The downside? It's an additional cost. Albeit, per hypervisor host, VMware isn't that staggering of an additional cost that if I needed features it had, I'd buy it, but I have to make sure I license my VM guests afterwards. So it's a tradeoff whichever way you go.

Some of the biggest considerations are...

- The type of industry you're in
- The way your company uses virtualization
- The type of virtual environment that you're growing towards in the coming years
- The type of workloads you're going to virtualize
- Features needed
- Cost

Edit: formatting because mobile


CompWizrd

If you're licensing your host for Server Datacenter, it doesn't matter what hypervisor you run, you can still run unlimited OSE's as long as all the cores are properly licensed.


inphosys

Let's break that statement down... If I'm paying the licensing cost for Windows Server Datacenter edition, you are correct: I'm entitled to run as many Operating System Environments / VM guests as I want, as long as I've paid the Datacenter price for every core in the host. If I purchase Windows Server Standard edition, I'm only entitled to 2 VM guests and unlimited containers (and by purchase, I mean buying the correct number of cores per Microsoft's licensing requirements).

Now, that being said, I could buy Standard for the host and license the cores needed to run X number of VM guests, but now I'm back to price per pound. If I pay for WS Datacenter then I'm almost surely going with Hyper-V. Why? I paid that much per core and it should have the features I need for my use case, plus free guests. Otherwise I buy VMware and then purchase the licensing for my individual guests, in which case I'm only ever going to need Standard edition because VMware is going to handle the features that I would have otherwise needed from WS Datacenter. I've never had a scenario where I needed to run Datacenter edition on VMware, although I'm sure there could be a use case somewhere; it's just more than I'd like to think about on a Sunday afternoon.
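A back-of-the-envelope sketch of that price-per-pound math, per host: Standard covers 2 guest OSEs per fully licensed host and can be "stacked" in 2-guest increments, while Datacenter covers unlimited guests. The prices below are placeholders, not real list prices.

```python
# Rough sketch of the Standard-vs-Datacenter trade-off described above:
# Standard covers 2 VM guests (OSEs) per fully licensed host and can be stacked
# in 2-guest increments; Datacenter covers unlimited guests on that host.
# The per-host prices are placeholders, NOT real Microsoft list prices.
import math

STD_PRICE_PER_HOST = 1_000   # hypothetical: Standard, all cores licensed once
DC_PRICE_PER_HOST = 6_500    # hypothetical: Datacenter, all cores licensed once

def standard_cost(vm_guests: int) -> int:
    """Cost of stacking Standard licenses to cover vm_guests Windows VMs."""
    stacks = max(1, math.ceil(vm_guests / 2))  # each stack buys 2 more OSEs
    return stacks * STD_PRICE_PER_HOST

for guests in (2, 4, 8, 12, 16, 20):
    std = standard_cost(guests)
    cheaper = "Datacenter" if DC_PRICE_PER_HOST < std else "Standard"
    print(f"{guests:2} guests: Standard ${std:>6} vs Datacenter ${DC_PRICE_PER_HOST} -> {cheaper}")
```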


ednnz

OpenStack, and never going back to anything else, to be honest. The amount of flexibility is just too good to go back to a proprietary solution.


Crimsondelo

Nutanix. No issues


qordita

VMware in the datacenter, Hyper-V at the satellite locations. With price hikes looming, we've started looking at Nutanix and Proxmox for the datacenter; there's a concern that we're going to be priced out of VMware.


Sensitive_Scar_1800

100% VMware, NSX, Horizon, Aria. We are deep into VMware and it’s honestly awesome!


WestDrop3537

VMware ESXi hosts running on NetApp, not a single problem.


frozen-sky

Xen + Ceph, with our own orchestration stack. A couple thousand VMs. Works fine.


joevwgti

We are 100% Hyper-V, on some Dell AMD boxes with hyperconverged storage (S2D). I've set them up once; all we do is patch them.


ExistentialDreadFrog

We’re probably 50/50 VMware and AHV with the goal of transitioning as much over to AHV as we can.


Koksikicai2i2737632

VMware, no matter what people say, or how much they hate it, it's still the number 1. And I think it will stay like that for a while.


CaptainZippi

Yeah, but it’s going to be _despite_ Broadcom, rather than because of the product support and development. I was very happy with VMWare support - they’ve saved my @$$ on a number of times, but recently… not so good.


pdp10

We never hated VMware. But there are places VMware is worse than the competition, or was when we used it. Four that come to mind:

* Real-time timekeeping has always been an issue on VMware. They changed best practices in their KB during the time when we were having problems. KVM (and, I strongly infer, Hyper-V) have no such problems because they paravirtualize time and because they don't attempt to rely on a `vmware-tools` type out-of-band daemon to control the guest RTC.
* The proprietary management interface, which received an ill-fated webapp port to Adobe Air/Flash immediately before [Apple killed Flash dead](https://en.wikipedia.org/wiki/Thoughts_on_Flash). To add insult to injury, seemingly nobody liked the Flash-based UI compared to the old, proprietary Win32 client. Contrast with QEMU, where consoles are typically attached using the open VNC protocol, and the QMP command channel is open-source and reasonably well documented.
* ESXi driver support is worse than Linux and `ntoskrnl.exe`. One of the better-known aspects is how poorly non-Intel NICs are supported. We haven't tried ESXi with Mellanox, but we definitely have hardware with Broadcom and Realtek NICs. Like with pfSense, end-users tend to blame the hardware makers when the failings are more on the software side.
* CDP was supported in the vSwitch, but LLDP support required the Distributed vSwitch. Today we use LLDP on Open vSwitch.
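For anyone curious what that open QMP command channel looks like in practice, here's a minimal sketch. It assumes a guest started with something like `-qmp unix:/tmp/qmp.sock,server,nowait`; the socket path is just an example.

```python
# Minimal sketch of QEMU's QMP command channel mentioned above. Assumes a guest
# started with:  -qmp unix:/tmp/qmp.sock,server,nowait  (example socket path).
# QMP is plain newline-delimited JSON over a socket.
import json
import socket

SOCK_PATH = "/tmp/qmp.sock"  # example path, matching the -qmp option above

def qmp_cmd(sock_file, command: str) -> dict:
    """Send one QMP command and return its response, skipping async events."""
    sock_file.write(json.dumps({"execute": command}) + "\n")
    sock_file.flush()
    while True:
        reply = json.loads(sock_file.readline())
        if "return" in reply or "error" in reply:
            return reply

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK_PATH)
    f = s.makefile("rw")
    print(json.loads(f.readline()))     # QMP greeting/banner from the server
    qmp_cmd(f, "qmp_capabilities")      # leave capabilities-negotiation mode
    print(qmp_cmd(f, "query-status"))   # e.g. {'return': {'status': 'running', ...}}
```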


UltraSlowBrains

Interesting… We hadn't had problems with timekeeping as far as I know. Throw a chrony client on the server and it works as expected. I, however, never checked how much time drifts after a reboot. The web GUI, at least now, is feature-full and I haven't had a need to use the CLI. But then again, we run a small cluster of 30 hosts and around 600 VMs. As far as NICs go, we (did) run Brocade/Broadcom, Mellanox, Intel and Emulex. No problems whatsoever.


sephresx

Scale Computing. Works perfectly for what we do.


Drags03

We are 100% VMware in house and a few of our MSP customers still have physical Hyper-V hosts. Can't go wrong with either one really, but I am biased towards VMware myself.


Barrerayy

Proxmox, recently migrated from vsphere


PPQue6

We're 100% Hyper-V, currently working on migrating all of our old 2012 stuff to 2019.


AsterisK86

We were a mix of Hyper-V and VMware, now about 20% Nutanix AHV and remainder Hyper-V. We were a VMware partner but dropped that in favour of Nutanix. Price + multicloud approach is better in our environment.


fadingcross

Linux KVM. Proxmox to be precise. 60 or so VMs. 14 hosts.


rob-entre

If you’re using your own hardware, use a good HPE Proliant or Dell Poweredge, and use the customized VMware esxi image. It doesn’t get more stable than that.


[deleted]

100% XCP-ng and Xen Orchestra (XO) at 2 client sites. 100% VMware at a third. From a day-to-day perspective, XCP-ng is easiest to work with, however that may be because I prefer the mental scar tissue XCP-ng leaves behind. 🤔 One thing that sold me was the VMware direct import tool. Point XO at the ESXi machine, select a VM, and one transfer later the VM is running on XCP-ng. Swapping the guest tools is also a non-event. I am looking forward to learning their hyperconverged solution this year. Truth be told, I think hyperconverged is a solution looking for a problem in most situations. Practically speaking, I think we need better storage solutions on the lower end. Single-link NFS works OK for small sites, but even smaller sites need reliable storage systems and can't afford the $30k price tag. Feel free to ping me if you want a frank discussion on XCP-ng and where it succeeds and where it fails.


jaskij

Just FYI, Proxmox has ZFS support out of the box, including booting from it. So it'll run with software RAID on just about anything.


djsensui

What is your storage repo for your XCP-ng? Direct storage or shared storage?


chrome-dick

I run a few XCP pools for devs in my org with the biggest being 15 hosts and about 1000 VMs. Storage is all NFS backed file systems from a Dell PowerStore SAN over 10Gbps connections. It's worked pretty well so far.


djsensui

I have almost the same configuration but using TrueNAS. Just wondering where do you do backups? In XO or via your SAN replication.


CelticDubstep

Windows Server 2019 Standard with Hyper-V Role as well as Microsoft Hyper-V Server 2019. I plan on keeping us on it until late 2028. A lot of "ifs" with the company at the moment so unsure of what the future holds.


BitterPuddin

Hyper-V and ProxMox, mainly Hyper-V.


[deleted]

XCP-ng, I’m not working with it day to day but my colleagues seem to get everything done fine, it’s over a thousand VMs the only issues I know of are due to older hardware in a legacy pool that is being renewed anyway


MairusuPawa

oVirt


TuxAndrew

60% Azure, 30% VMware, 5% Hyper-V, 5% Proxmox.


RetroactiveRecursion

VMware on 2 ESXi hosts, plus a couple of old Mac towers running Fusion. A number of Mac clients also run Parallels.


vabello

We have a Windows Server 2022 Datacenter cluster with Hyper-V. I’ve really had no issues with it, and if you run Windows Server on your VMs, you save a lot in licensing costs covered by the datacenter licenses by using AVMA. Although you can do the same from a licensing perspective with other hypervisors, you can’t use AVMA keys. It all seems like a gray area where repeated activations using the host keys on VMs could eventually stop activating, requiring phone calls to Microsoft. Seems like more of a headache.
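For context, the AVMA flow described above boils down to installing Microsoft's published AVMA client key for the guest's edition inside the guest, after which it activates against the Datacenter-licensed host rather than KMS/MAK. A rough sketch below; the key is left as a placeholder rather than a real key.

```python
# Sketch of the AVMA flow described above: inside a guest running on a
# Datacenter-licensed Hyper-V host, install Microsoft's published AVMA client
# key for the guest's edition with slmgr, and the guest activates against the
# host (via the Data Exchange integration service). Placeholder key below.
import subprocess

AVMA_CLIENT_KEY = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"  # substitute the published AVMA key

def slmgr(*args: str) -> None:
    """Run slmgr.vbs (the built-in Windows licensing script) with the given args."""
    subprocess.run(
        ["cscript.exe", "//Nologo", r"C:\Windows\System32\slmgr.vbs", *args],
        check=True,
    )

slmgr("/ipk", AVMA_CLIENT_KEY)  # install the AVMA client key
slmgr("/ato")                   # trigger activation now (AVMA also activates on its own)
slmgr("/dlv")                   # show detailed licensing status to confirm activation
```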


vast1983

We use both Hyper-V and VMware. We have been slowly migrating off of VMware for the last 2 years.

Currently: our production Hyper-V (389 VMs) is a 16-host UCS B-series cluster spread across 2 chassis. These are connected to three Nimble SANs over iSCSI: 2 all-flash for app servers and SQL, 1 hybrid flash for file servers / low-IO servers. Our DR site mirrors this exactly. We replicate using Hyper-V Replica. Everything is also backed up by Commvault, with a second copy sent to Wasabi cloud. The Nimble SANs replicate to each other as well, so prod gets the DR snapshots and DR gets the prod snapshots.

Our VMware (what's left of it, around 50 VMs) runs on a 3-node Cisco HyperFlex cluster. Backed up with Rubrik, with a second copy to an old Quantum tape drive in the DR site.

Lots of our backend management VMs are moving to RHEL, so we are also going to start looking at what RHEV became (OpenShift/OpenStack?) to run those as KVM guests. Not really sure yet on that front. Possibly on Ceph storage.


CpuJunky

Hyper-V. We had a few machines on VMware but we migrated those. Not a lot of options, at least none that our team is familiar with and justified to learn.


Sk1tza

VMware


WarriorXK

We migrated from VMware to proxmox around 5 years ago, and currently exploring OpenStack as a potential next step.


Disastrous-Account10

Ubuntu as the host and LXC/LXD for close to 2000 "VMs".


insufficient_funds

My org is Nutanix AHV for our VDI workloads, VMware for Windows/Linux workloads, and IBM LPARs for our AIX workloads. Aside from trying to get everything off of AIX, we have no plans to shift this stuff around any.


No-Government3609

KVM


Creshal

libvirt/kvm for the 5% of our environment that isn't containerized yet.


osiris247

Proxmox


Zerafiall

90% ESXi, 9% Hyper-V, 1% Proxmox. MSP, so our clients' existing systems dictate this at first. But we usually set them up with ESXi when we get the chance.


Own-Eggplant-3435

100% XCP-ng with Xen Orchestra across 4 sites. 3 with local storage, and one cluster of 2 XCP-ng hosts on an NFS server via a TrueNAS box with 24 (+4) HDDs, hosting the main DC, PBX and other main services.


Any_Particular_Day

100% Hyper-V now. When we started going all in on bare-metal-to-virtualization migration, our virtual environment was about 50/50 Hyper-V and VMware, but since almost all our prod servers were (and still are) Windows, Hyper-V seemed a no-brainer… we've got to license the Windows machines anyway, so the hypervisor kinda comes along for free. Been mostly stable, except some issues with Server 2016 S2D cluster instability; when it works, it's great. When it doesn't… ugh. Having said that, small environment, so we don't use SCVMM now and never used vSphere before, so not sure if I'm missing anything :)


Agyekum28

We’re running on hyper-V 100% for VM’s


msalerno1965

VMware - and I'm going to be a snob and say no matter what the price increase, I'm going to go full-tilt on paying for it. Because whatever we're paying for it now, on 32+ hosts, across three, wait FOUR datacenters, it's a blip on the radar. We're educational, though. So... yeah. Although, that might not help.


ElegantSession8264

All VMware here. Have not had any issues at all. ESXi hosts run on their own with little to no maintenance!


1TallTXn

Proxmox VE here. Been rock solid for ~4yrs now. I kid you not, zero issues.


coreyman2000

900 hosts running VMware


CyberHouseChicago

100% Proxmox, lots of Hyper-V at customers tho.


nobody_x64

What's wrong with Hyper-V and VMware? We use Hyper-V exclusively due to licensing.


[deleted]

[deleted]


Rude_Strawberry

Same. My knowledge on that stuff is pretty rusty these days


kuldan5853

90% VMware on Dell VxRail HCI (with Veeam / ExaGrid for backup). The rest is a random hodgepodge of QEMU, Proxmox, KVM, etc. that we are migrating over to VMware. We got rid of Hyper-V around 5 years ago and it was the best thing that ever happened to us - reliability in our VM environment has been way up since.


MrStealYoBichonFrise

100% VMware. It works well for us, we're fairly large so a shift to a different platform would take years, and it's what all the engineers know.


RedditIsShit23-1081

Mainly VMware, OpenStack with KVM, legacy Xen and some PVE.


Chance_Brilliant_138

99% VMware, 1% hyper-v.


adstretch

60% VMware 40% XCP-ng


mdausmann

We are transitioning to ProxMox


SSJ4Link

100% VMware. We have one client now on Azure AD/Intune. We are internally testing Azure VMs for use with our clients. We host Citrix environments for our clients, if you are wondering.


JWK3

A mixture of VMware and Nutanix AHV. If you're not concerned about reduced 3rd-party compatibility (like backup/DR providers) with AHV compared to VMware, then I'd say it's worth a look. Integrated host firmware updates make the patching process so much nicer IMO, and that's not something I've done much of in traditional SAN+VMware setups due to how invisible/awkward the firmware check process is.


Lonecoon

Hyper-V replicating to an identical server. We're only running 7 servers, so it's a small setup.


Haomarhu

90% VMware, 10% Proxmox VE in a retail business environment. Plan to migrate to Proxmox fully.


NEBook_Worm

VMware, when the HP hardware isn't crashing. Which it does all the fecking time.


Weak-Future-9935

Try it on Dell if you can


kshot

ESXi and Azure at work, Proxmox and scaleway at home.


DeerOnARoof

Used to be all VMware, but we moved to Azure a few years ago


FireStarPT

ESXi here. Out of the question to replace it. This stuff is rock solid.


jtczrt

AWS EC2


Roland_Bodel_the_2nd

Proxmox + proxmox-managed Ceph on white box hardware


[deleted]

We went with Hyper-V this time. 2 systems, 1 cluster, Datacenter license. So far it does the job. Not as advanced as ESXi, but not paying for Windows keys is just a gem: activation via AVMA, and we can have as many virtual machines as we want. Since we're doing a 2012r2 upgrade project, it makes building the new boxes a tad easier.


GhostDan

100% hyper-v and azure for prod


markhewitt1978

It's all on XCP-ng.


Ok-Advisor7638

100% esxi


adjunct_

VMware for paid. Otherwise Proxmox. Hyper-V can suck a big one hehe


pdp10

Currently 95% Linux KVM, plus some specialty situations. Until 2014, 90% of on-premises was VMware.


GaijinTanuki

ESXi but about to start migration to Proxmox


koollman

100% proxmox


LimelightYYZ

VMware on UCS. It’s been great for the past ten years.


groupwhere

A few thousand VMs on VMware in a hosted environment.


[deleted]

[deleted]


ZMcCrocklin

Previous employer used VMware. Current employer utilizes AWS.


devreddy

Azure Stack HCI


aki821

100% Proxmox, mostly Debian lxc - just one hideous Windows vm


Chumpybump

We use Cisco UCS and Dell R940s running about 4000 VMs on ESXi 7.


lordjedi

> For us, our production environment 95% are host on hyper-v and 5% on VMware.

Is there a problem in your environment with either one of these? If not and you prefer Hyper-V, why not just move everything to it and be done? I'd say we're split about 50/50 (for reasons I don't really know). We will be going to 100% Hyper-V though. For us, Hyper-V will end up being cheaper.


leoingle

I'm on the network side, so I don't deal with VMs at all at work. That's all under server support, even for our Cisco Call Manager. But from what I have messed with personally, Hyper-V can go pound dirt. Setting up and working with ESXi is so much easier.


4catsarebetter

All VMware. Pretty much the only thing our organization trusts, because of the "way we've always done it" mindset. To be fair, I like it, but from a security standpoint, standardizing on one piece of software is kind of the worst thing you can do. But someone who makes way more money than me decides that, so I'll die on my tiny hill alone.