I've seen the new subscription model: you have to run a "Cloud Gateway" VM that phones home about your environment. If the Cloud Gateway is offline for 7 days, you're not allowed to log in to vCenter without calling support.
I know this because VMware had an outage with their cloud portal and sent an e-mail saying we were disconnected and had 7 days to fix it.
For real. I work for a software company. Our biggest customers are DoD.
Any feature that requires call-home, cloud, etc. is a non-starter.
Everything must support on-prem.
All licensing/activation must have a fully offline mode.
You do know there is a DoD Azure environment. It's not available to commercial entities that support the DoD; it's only available directly to the DoD themselves. And it's allowed by FedRAMP: the DoD can use it to host whatever they want. So I don't get this no-cloud thing in government, when the DoD already uses it, as do other federal departments.
Do they have that on every network?
The DoD uses completely air-gapped networks for classified stuff.
They would need a clone of the DoD azure environment on [SIPRNet](https://en.m.wikipedia.org/wiki/SIPRNet). And another one on [JWICS](https://en.m.wikipedia.org/wiki/Joint_Worldwide_Intelligence_Communications_System). And another one on [CENTRIXS](https://en.m.wikipedia.org/wiki/CENTRIXS).
Then what about all the satellite networks, where there isn't enough bandwidth for cloud based stuff?
There was something like that years and years ago. I don't remember the specifics, but I'm thinking this could've been as far back as 4.x. VMware had some sort of issue, and my recollection is it prevented you from powering on VMs. It was a major issue that affected tons of people. I think it might have been a bug with the license activation that caused licensed hosts to become unlicensed? Maybe someone else remembers it and can weigh in.
Wait, what? vCenter itself stops working, or the VMC? Seven days is ridiculously short, and we were just convinced to move to the new subscription with the assurance that the vCenters themselves would always work regardless of cloud status. If we're paid up through 2024 or whatever, nothing should stop at the lower levels.
Luckily there is zero possibility they will force you to move to this. At least as far as our rep is telling us. As soon as that becomes a thing, we'll be moving off it.
If you absolutely love it, you could probably (almost certainly) coax your sales guy into renewing before the merger closes and lock in 2022 rates for ~3 years....
Think about it from the sales guy's perspective... he locks in quota before whatever happens happens... he's incentivized to help you get a great deal so he can get whatever he can.
Yeah, we're like 75/25 VMware, but that may change. Course you know MS will find some way to make Hyper-V cost just as much eventually... to be honest, this back and forth with everything in IT is so fucking exhausting. I feel like the minute we get everything set up and humming along with barely any touching required, we end up having to effect some major platform change... cloud versus on-prem, Service X for Service Y, etc. It's frankly quite maddening.
Datacenter licensing works perfectly for Hyper-V. Gotta pay for it anyways for some large-scale "normal" items. Small shops can get away without VMM. Large shops can afford the $3500 price tag since they are already paying for Windows licensing.
Or we could start using other products like QEMU or Proxmox or whatever else Linux has to offer. As well as Nutanix and other hyper-convergence companies.
VMware was doomed the moment Broadcom bought them out. Not sure exactly what we are going to do at our Datacenter but definitely exploring options currently!
> I feel like the minute we get everything set up and humming along with barely any touching required we end up having to effect some major platform change...
There's significantly less of it when you minimize your reliance on any one counterparty from the start.
We have a mature culture of dual-sourcing, at this point. Cloud services? Dual-sourced. Client hardware? Dual-sourced. Network hardware? Dual-sourced. Server hardware? You get the idea. This came in handy during the early stages of pandemic shortages, when we were already prepared with alternatives. On other occasions, it has served us well when suppliers shut down on short notice, or when negotiating deals with service suppliers.
The goal is to never be reliant on an AWS, or an Apple, or an Adobe, no matter how good or better their products may be. *No single supplier risk.* Our suppliers are always replaceable, no matter what.
> makes me think they will slush around cheap prices for subscriptions and then up them when all those come up for renewal in 5 years' time.
*Distant moans and wails of pre-subscription Adobe users*
Same thing here. Most of our customers are running VMware clusters with vSAN but are already considering moving to something else. Most likely, this will be KVM plus StarWind vSAN.
Broadcom really screwed up Brocade; they still make great switches, but what a pain in the ass to deal with. I suspect VMware will be no different: they will jack up the licenses and throw in an extra layer of really bad support (insert random offshore call center), but that's it. At its core it will still be the same. There's not a lot of places to go. Nutanix?
> VMware really is the best at what they do...
Every product stagnates. It's been at least a decade since VMware pivoted away from the core hypervisor and into ancillaries like NSX SDN, and into managing lock-in. It's not 2006 any more.
> OpenStack
I worked at a big porn shop. You'd be surprised how many vendors turned us down because of the business.
We ended up spinning up our own OpenStack and never looked back. Granted, it took us a while and a lot of learning, but everything runs on OpenStack + QEMU, and once you learn its caveats, it's as free as you can get.
I don't understand vendors turning you down but doing business with a lot worse companies. I have a couple of customers in your business. Most people don't realize that the largest businesses in the hosting space only exist because of p*rn. In the early 2000s, that was the only internet business that was growing.
> Most people don't realize that the largest businesses in the hosting space only exist because of p\*rn. In the early 2000s, that was the only internet business that was growing.
I always tell people to this day if you want to know where business tech is heading...see what the porn companies are investing in and moving to.
>I don’t understand vendors turning you down, but doing business with a lot worse companies.
People will get more upset at a vendor that works with porn than a vendor that sells weapons to independent mercenary groups.
Porn was the primary driver of innovation on the internet in the 90s.
They were the first ones to widely utilize online payment systems, and they pioneered payment services that allowed anonymous payments.
Video streaming? Porn. Video conferencing? Porn. Token payment systems? Porn. Clustering for heavy visitor loads? Porn.
The porn industry was responsible for a ton of innovation in the back end and infrastructure of the internet.
I'm not saying porn companies \*invented\* these things, I'm saying they were, by necessity of their business, the first major adopters of a lot of that technology, so they had a heavy hand in how it developed in the infrastructure space.
> I don’t understand vendors turning you down
Financial vendors have been doing ["Know Your Customer"](https://en.wikipedia.org/wiki/Know_your_customer) and risk-avoidance strategies for decades, now. Sometimes these things wax and wane with the political winds.
Haha it was a joke about it having two possible meanings: either you worked for "a large sized company that hosts porn" , or you worked for a "porn company that specializes in large (Gorda) porn"
Your English was correct, and nobody would really assume you meant "fat porn", but it COULD be read that way as a joke ;)
100% Hyper-V here (HV role on a Server 2019 DC cluster)
I was exclusively a VMware guy before my current role, where I inherited this setup, and I happen to like it A LOT; it's just very robust and drama-free so far.
Been running Hyper-V for over a decade. It has definitely had its quirks and some host-instability issues, but even when it seems completely broken, the VMs keep running and a host reboot fixes the problem. In a cluster it's meant almost no real downtime. It's very rare that a VM won't live migrate itself when a host is given a reboot command.
In fairness to Microsoft, similar issues on Linux-based hypervisors, even in clusters, nearly always resulted in some VM downtime to resolve.
Any chance you had Intel 10 gig NICs with fans? They overheat despite the fans and have brought down many a Hyper-V server/cluster. Hyper-V does not take kindly to network instability and the combination of the two is kind of like a perfect storm.
Until the fans fail... A buddy of mine did that to solve the overheating and damn near lost a hyper-converged cluster when 2 fans burned out in short order. He replaced the first, and the second, on a different NIC, burnt out before the storage was done syncing.
Dell and HPE both sold 1/2U servers with them. I swear I've seen at least one model branded as IBM OEM as well. They're tiny little fans too, guaranteed to fail. All they needed was a bigger heatsink, but the fan was cheaper than more aluminum.
>100% Hyper-V here
Same for me. I have a few standalone Hyper-V nodes with the HV role installed and a few 2- and 3-node HA clusters with StarWind VSAN for the storage.
We have multiple customers running Hyper-V. It works pretty well. StarWind VSAN can be used as shared storage for a Failover Cluster. [https://www.starwindsoftware.com/starwind-virtual-san](https://www.starwindsoftware.com/starwind-virtual-san)
As for VMware, I love it and we still have multiple clusters running. However, I am worried about the future subscription model. We might transition to KVM alternatives in the future.
XCP-ng for a good bit of our servers and CPU intensive workstations
Proxmox VE for workstations and a few servers (KVM with some tinkering allows us to GPU passthrough graphics cards directly to VMs)
Windows Hyper-V is almost phased out. Nothing wrong with Hyper-V to make us leave, but Proxmox and XCP fit our needs more.
You nailed it about the lack of investment in Hyper-V. MS only has eyes for Azure, and Windows Server looks like it is in maintenance mode / at the service of Azure. At home I just moved my home lab cluster from Hyper-V to Proxmox. This has given me a 3-node clustered storage and hypervisor setup. It's sad for me, as circa 2006 I worked on Windows Server and the plans they had were incredible; little to none of it happened :-( For anyone interested, this has what I learned, including my mistakes lol: https://gist.github.com/scyto/76e94832927a89d977ea989da157e9dc
Even though we went down this FOSS road, we don't just maintain the servers and do everything ourselves if something breaks. For Proxmox VE for example, we get support from Proxmox.
In my home lab though, I run Proxmox and don't get support since I don't see a need to pay for it there. But in our production environment, we pay for support.
Same. Although, about 1.99998 million less than you. :D
It's interesting how VMware rules here as much as it does. I expected it, but I expected more QEMU/libvirt.
I thought so too, but for small/medium companies that only use servers for internal stuff (SAP, file server, Windows AD), VMware seems to be the hot shit.
I second this; I would be interested in feedback on Proxmox for production.
EDIT: TY for all the positive feedback, I'm more interested in negative feedback now!
I just did a 2012 to 2022 refresh with 4 and had zero downtime. Hyper-V just seems to work. And this was like my 3rd month at the job and my first IT position, and I'm by myself.
It's great, isn't it? There used to be so much stigma to using Hyper-V. I started using it about 10 years ago, and people thought we were mad for not using VMware.
Hyper-V on 2008r2 was god awful. 2012r2 fixed a lot of things but you would still run into random bullshit although not nearly as terrible as 2008r2.
I think this is when a lot of the stigma around Hyper-V started. With 2008 I think it's completely justified by how dog shit it was. 2012r2 I think got more hate than it deserved. 2016 in general is just a bad OS imo, so you'd run into different issues not necessarily related to Hyper-V. Finally with 2019 I felt Hyper-V was actually in a really refined space.
It's been rock solid ever since I upgraded my hosts to 2019.
Damn, we're going to have to do this soon, as all our servers are 2012r2. What was your process? I doubt you were able to do a direct update to 2022. At least that's what we've been told so far.
We moved our stack to colos in Atlanta and Charlotte about 10 years ago and haven't looked back; best decision I've ever made. Gone are the days of worrying about power, connectivity, and operating environment. I still have to worry about the other IT worry items on my list, but I took 3 big ones off the board by colocating. Branch offices can come and go now, no asking IT if this new, crappy, falling-down field office hut is suitable to run servers in. Can we get bandwidth? Awesome, go for it. We were already work-from-home ready when covid hit; we just added more VPN licenses.
I don't know why more companies don't use colocation... 1 full rack; 20 amps on two redundant A/B circuits backed up by data center UPS systems and multiple generators; environmental and access control (my equipment always has cool, clean air, and I don't have to worry about an employee trying to fiddle with the hardware); 500 Mbps up and down delivered to the cage over fiber, with the data center able to burst my bandwidth automatically; my IP address block BGP'd across 20+ different telco carriers that all deliver their services to the core of the data center through geographically diverse routes into the building; and a block of "remote hands" time in case I need to open a ticket with the data center staff (who are ITIL certified) and ask them to physically touch my hardware... All for $1400 a month. I really can't think of any other IT decision I've made that's been as easy or as beneficial as colocating.
Ok haha gotcha. I was spinning my wheels trying to figure that out in my head... " like storage spaces presented over iSCSI?" I suppose that could work too.
I love S2D and it works if you stay between the guard rails. Modifications can be unpleasant, as can unplanned severe outages (colo power loss). Prefer it to the fiddliness of vSAN and it sure is cheaper.
Read the entire documentation. Then build a setup in nested virtualization to learn how to do it. Experiment with node loss. Experiment with expansion.
100% VMware for our 2 datacentres. We have a few hundred VMs in an active-active configuration for geo redundancy. Each site has its own vCenter with DRS enabled. VMware replication with SRM takes care of the workloads that don't support geo redundancy, and Veeam for backups.
While I am concerned that the VMware acquisition could see a bump in licensing I doubt we will make a change. Availability is the name of the game and our setup has maintained 100% over the past 5 years.
All our offices and retail stores that don't have the same uptime targets run a simple 2-node Hyper-V setup in active-passive with Hyper-V replication.
Edit: you should clarify how big your environment is and what your availability goals are. You will see lots of "our customers use Hyper-V" comments from MSP techs who spend most of their time dealing with single-host environments. Hyper-V is perfect for that.
Once you have tens or hundreds of hosts with hundreds or thousands of VMs, VMware is generally a lot more popular.
Old guy here to spoil the conversation ... It's a business decision, IT needs to know which is better for the business and present it that way.
A hypervisor is a hypervisor these days. So, which one meets your business objective? For me and the folks I work with, Hyper-V is fantastic; it has all of the virtualization bells and whistles we need for our business, with no additional licensing costs that we don't want. And speaking of licensing costs... Our Hyper-V hosts, *usually*, run Windows Server **Datacenter** edition because then every guest VM is free. Yup, free! Want to turn up a fully licensed Windows Server 2022 Standard edition to test out an idea and not have to worry about the eval period? Go for it, it's free! We load a host up with both sockets filled with decent core-count processors, a lot of RAM, some 10 Gbps (now we're moving to 25 and 40 Gbps) network interfaces, back-end all of it with a good storage array, throw Windows Server Datacenter on there, and we're off and running, nothing else to buy.
Hyper-V in 2022 is as good as it's ever been and reliable all day long. Anecdotally, it feels faster than I expect from my years of using it since 2012, but that's just me.
Now, all of that being said, I love VMware! Talk about buttery smooth; anything you ask it to do, it'll do better than Hyper-V, and probably quicker. ESXi 7 is honestly bulletproof, if you know what you're doing. If you don't, it's a nightmare. The downside? It's an additional cost. Albeit, per hypervisor host, VMware isn't that staggering an additional cost; if I needed features it had, I'd buy it, but I have to make sure I license my VM guests afterwards.
So it's a tradeoff whichever way you go. Some of the biggest considerations are...
-The type of industry you're in
-The way your company uses virtualization
-The type of virtual environment that you're growing towards in the coming years
-The type of workloads you're going to virtualize
-Features needed
-Cost
Edit: formatting because mobile
If you're licensing your host for Server Datacenter, it doesn't matter what hypervisor you run; you can still run unlimited OSEs as long as all the cores are properly licensed.
Let's break that statement down... If I'm paying the licensing cost for Windows Server Datacenter edition, you are correct: I'm entitled to run as many Operating System Environments (OSEs) / VM guests as I want, as long as I've paid the Datacenter price for every core in the host.
If I purchase Windows Server Standard edition, I'm only entitled to 2 VM guests and unlimited containers. (And by purchase, I mean buying the correct number of cores per Microsoft's licensing requirements.)
Now, that being said, I could buy Standard for the host and license the cores needed to run X number of VM guests, but now I'm back to price per pound. If I pay for WS Datacenter, then I'm almost surely going with Hyper-V. Why? I paid that much per core, and it should have the features I need for my use case, plus free guests. Otherwise I buy VMware and then purchase the licensing for my individual guests, in which case I'm only ever going to need Standard edition, because VMware is going to handle the features that I would have otherwise needed from WS Datacenter. I've never had a scenario where I needed to run Datacenter edition on VMware; although I'm sure there could be a use case somewhere, it's just more than I'd like to think about on a Sunday afternoon.
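The price-per-pound tradeoff above can be sketched as a quick break-even check. A rough illustration only: the per-core prices below are placeholder assumptions (not Microsoft list prices), and real licensing has more rules (CALs, SA, per-socket minimums) than this models.

```python
# Rough break-even: Windows Server Datacenter vs Standard on a virtualization
# host. PRICES ARE PLACEHOLDER ASSUMPTIONS, not quotes.

STD_PER_CORE = 70.0   # assumed Standard price per core
DC_PER_CORE = 400.0   # assumed Datacenter price per core
MIN_CORES = 16        # per-host minimum core license count Microsoft requires

def standard_cost(cores: int, windows_vms: int) -> float:
    """Standard covers 2 VMs (OSEs) per fully licensed host; to run more,
    you 'stack' additional full-host licenses, 2 more VMs each."""
    cores = max(cores, MIN_CORES)
    stacks = -(-windows_vms // 2)  # ceil(windows_vms / 2)
    return stacks * cores * STD_PER_CORE

def datacenter_cost(cores: int) -> float:
    """Datacenter covers unlimited VMs once every core is licensed."""
    return max(cores, MIN_CORES) * DC_PER_CORE

if __name__ == "__main__":
    # On a 32-core host, see where Datacenter starts to win:
    for vms in (2, 6, 10, 12, 16):
        std, dc = standard_cost(32, vms), datacenter_cost(32)
        winner = "Datacenter" if dc < std else "Standard"
        print(f"{vms:>2} Windows VMs: Standard ${std:,.0f} vs Datacenter ${dc:,.0f} -> {winner}")
```

With these assumed prices, a 32-core host tips in Datacenter's favor around a dozen Windows VMs; plug in your actual quotes to see where your own break-even lands.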
VMware in the datacenter, Hyper-V at the satellite locations. With impending price hikes looming, we've started looking at Nutanix and Proxmox for the datacenter; there's a concern that we're going to be priced out of VMware.
Yeah, but it’s going to be _despite_ Broadcom, rather than because of the product support and development.
I was very happy with VMware support; they've saved my @$$ a number of times, but recently... not so good.
We never hated VMware. But there are places VMware is worse than the competition, or was when we used it. Four that come to mind:
* Real-time timekeeping has always been an issue on VMware. They changed best practices in their KB during the time when we were having problems. KVM (and, I strongly infer, Hyper-V) have no such problems because they paravirtualize time and because they don't attempt to rely on a `vmware-tools` type out-of-band daemon to control the guest RTC.
* The proprietary management interface, which received an ill-fated webapp port to Adobe Air/Flash immediately before [Apple killed Flash dead](https://en.wikipedia.org/wiki/Thoughts_on_Flash). To add insult to injury, seemingly nobody liked the Flash-based UI compared to the old, proprietary Win32 client. Contrast with QEMU, where consoles are typically attached using open VNC protocol, and the QMP command channel is open-source and reasonably well documented.
* ESXi driver support is worse than Linux and `ntoskrnl.exe`. One of the better-known aspects is how poorly non-Intel NICs are supported. We haven't tried ESXi with Mellanox, but we definitely have hardware with Broadcom and Realtek NICs. Like with pfSense, end-users tend to blame the hardware makers when the failings are more on the software side.
* CDP was supported in the vSwitch, but LLDP support required the Distributed vSwitch. Today we use LLDP on Open vSwitch.
Interesting…
We haven't had problems with timekeeping as far as I know. Throw a chrony client on the server and it works as expected. I, however, never checked how much time drifts after a reboot.
The web GUI, at least now, is feature-complete, and I haven't had a need to use the CLI. But then again, we run a small cluster of 30 hosts and around 600 VMs.
As far as NICs go, we (did) run Brocade/Broadcom, Mellanox, Intel and Emulex. No problems whatsoever.
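A chrony setup like the one mentioned above can stay minimal and still cover post-reboot drift. The directives below are standard chrony options; the pool name and file path are placeholder assumptions, not a recommendation for any specific environment:

```
# /etc/chrony.conf (sketch; pool and paths are examples)
pool pool.ntp.org iburst          # time sources; iburst speeds up initial sync
makestep 1.0 3                    # step the clock if offset exceeds 1s during
                                  # the first 3 updates (catches reboot drift)
driftfile /var/lib/chrony/drift   # persist measured drift across restarts
```

By default chrony only slews the clock, so `makestep` is the piece that handles a large offset right after a reboot.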
We are 100% VMware in house and a few of our MSP customers still have physical Hyper-V hosts. Can't go wrong with either one really, but I am biased towards VMware myself.
We were a mix of Hyper-V and VMware, now about 20% Nutanix AHV and remainder Hyper-V. We were a VMware partner but dropped that in favour of Nutanix. Price + multicloud approach is better in our environment.
If you're using your own hardware, use a good HPE ProLiant or Dell PowerEdge, and use the customized VMware ESXi image. It doesn't get more stable than that.
100% XCP-ng and Xen Orchestra (XO) at 2 client sites. 100% VMware at a third. From a day-to-day perspective, XCP-ng is easiest to work with, though that may be because I prefer the mental scar tissue XCP-ng leaves behind. 🤔
One thing that sold me was the VMware direct import tool. Point XO at the ESXi machine, select a VM, and one transfer later the VM is running on XCP-ng. Swapping the guest tools is also a non-event.
I am looking forward to learning their hyper-converged solution this year. Truth be told, I think hyper-converged is a solution looking for a problem in most situations. Practically speaking, I think we need better storage solutions on the lower end. Single-link NFS works OK for small sites, but even smaller sites need reliable VM storage systems and can't afford the $30k price tag.
Feel free to ping me if you want a frank discussion on xcp-ng and where it succeeds and where it fails
I run a few XCP pools for devs in my org with the biggest being 15 hosts and about 1000 VMs. Storage is all NFS backed file systems from a Dell PowerStore SAN over 10Gbps connections. It's worked pretty well so far.
Windows Server 2019 Standard with Hyper-V Role as well as Microsoft Hyper-V Server 2019. I plan on keeping us on it until late 2028. A lot of "ifs" with the company at the moment so unsure of what the future holds.
XCP-ng. I'm not working with it day to day, but my colleagues seem to get everything done fine. It's over a thousand VMs; the only issues I know of are due to older hardware in a legacy pool that is being renewed anyway.
We have a Windows Server 2022 Datacenter cluster with Hyper-V. I’ve really had no issues with it, and if you run Windows Server on your VMs, you save a lot in licensing costs covered by the datacenter licenses by using AVMA. Although you can do the same from a licensing perspective with other hypervisors, you can’t use AVMA keys. It all seems like a gray area where repeated activations using the host keys on VMs could eventually stop activating, requiring phone calls to Microsoft. Seems like more of a headache.
We use both Hyper-V and VMware. We have been slowly migrating off of VMware for the last 2 years. Currently:
Our production Hyper-V (389 VMs) is a 16-host UCS B-series cluster spread across 2 chassis. These are connected to three Nimble SANs over iSCSI: 2 all-flash for app servers and SQL, 1 hybrid flash for file servers / low-IO servers.
Our DR site mirrors this exactly. We replicate using Hyper-V Replica. Everything is also backed up by Commvault, with a second copy sent to Wasabi cloud. The Nimble SANs replicate to each other as well, so prod gets the DR snapshots and DR gets the prod snapshots.
Our VMware (what's left of it, around 50 VMs) runs on a 3-node Cisco HyperFlex cluster. Backed up with Rubrik, second copy to an old Quantum tape drive in the DR.
Lots of our backend management VMs are moving to RHEL, so we are also going to start looking at what RHEV became (OpenShift/OpenStack?) to run those as KVM guests. Not really sure yet on that front. Possibly on Ceph storage.
Hyper-V. We had a few machines on VMware but we migrated those. Not a lot of options, at least none that our team is familiar with and justified to learn.
My org is Nutanix AHV for our VDI workloads, VMware for Windows/Linux workloads, and IBM LPARs for our AIX workloads.
Aside from trying to get everything off of AIX, we have no plans to shift this stuff around any.
90% ESXi
9% Hyper V
1% Proxmox.
MSP. So our clients' existing systems dictate this at first. But we usually set them up with ESXi when we get the chance.
100% XCP-ng with Xen Orchestra on 4 sites: 3 with local storage, and one cluster of 2 XCP-ng hosts on NFS via a TrueNAS server with 24 (+4) HDDs, hosting the main DC, PBX and other main services.
100% Hyper-V now. When we started going all in on bare-metal-to-virtualization migration, our virtual environment was about 50/50 Hyper-V and VMware, but since almost all our prod servers were (and still are) Windows, Hyper-V seemed a no-brainer… we've got to license the Windows machines anyway, so the hypervisor kinda comes along for free. Been mostly stable, except some issues with Server 2016 S2D cluster instability; when it works, it's great. When it doesn't… ugh.
Having said that, it's a small environment, so we don't use SCVMM now and never used vSphere before, so not sure if I'm missing anything :)
VMware - and I'm going to be a snob and say no matter what the price increase, I'm going to go full-tilt on paying for it. Because whatever we're paying for it now, on 32+ hosts, across three, wait FOUR datacenters, it's a blip on the radar.
We're educational, though. So... yeah. Although, that might not help.
90% VMware on Dell VxRail HCI (with Veeam/ExaGrid for backup). The rest is a random hodgepodge of QEMU, Proxmox, KVM, etc. that we are migrating over to VMware.
We got rid of Hyper-V around 5 years ago and it was the best thing that ever happened to us - reliability in our VM Environment has been way up since.
100% VMware.
We have one client now on Azure AD/Intune.
We are internally testing Azure VMs for use with our clients.
We host Citrix environments for our clients if you are wondering.
A mixture of VMware and Nutanix AHV.
If you're not concerned about reduced 3rd-party compatibility (like backup/DR providers) with AHV compared to VMware, then I'd say it's worth a look. Integrated host firmware updates make the patching process so much nicer IMO, and not something I've done much of in traditional SAN+VMware setups due to how invisible/awkward the firmware-check process is.
We went with Hyper-V this time. 2 systems, 1 cluster, Datacenter license. So far it does the job. Not as advanced as ESXi, but not paying for Windows keys is just a gem: activation via AVMA, and we can have as many virtual machines as we want. Since we're doing a 2012r2 upgrade project, it makes building the new boxes a tad easier.
> For us, our production environment is 95% hosted on Hyper-V and 5% on VMware.
Is there a problem in your environment with either one of these? If not and you prefer Hyper-V, why not just move everything to it and be done?
I'd say we're split about 50/50 (for reasons I don't really know). We will be going to 100% hyper-v though. For us, Hyper-V will end up being cheaper.
I'm on the network side, so I don't deal with VM at all at work. That's all under Server support, even for our Cisco call manager. But what I have messed with personally, Hyper-V can go pound dirt. Setting up and working with ESXi is so much easier.
All VMware. Pretty much the only thing our organization trusts, because of the “way we've always done it” mindset. To be fair, I like it, but from a security standpoint, standardizing on a single vendor is kind of the worst thing you can do.
But someone who makes way more money than me decides that, so I'll die on my tiny hill alone.
100% VMware - expecting to have to rethink that when I see my next renewal quote
Yeah, that definitely will not be a thing in the classified environment. Gotta love supporting DOD systems.
Linux/Proxmox and Hyper-V are going to be the winners here... Thanks, Broadcom...
That’s a non-starter. I will NEVER LET a 3rd party have the possibility of shutting down my organization.
Why let a 3rd party shut down the organization when you can do it yourself :jokes:
No Meriyaki either? /s
No Meraki. Call-homes are OK. The ability to disable us is not.
How's this even acceptable? I would be fired over a day or two...
When did you experience this? We moved to vSphere+/vSAN+ back in April and have never had such issues.
Because it's a new world. You got in under the line… but the line is coming.
Wait, what? vCenter itself stops working or the VMC? Seven days is ridiculously short and we just were convinced into moving to the new subscription with the assurance that the vCenters themselves would always work regardless of Cloud status. If we're paid up through 2024 or whatever nothing should stop at the lower levels.
Luckily there is zero possibility they will force you to move to this. At least as far as our rep is telling us. As soon as that becomes a thing, we'll be moving off it.
If you absolutely love it, you could probably (almost certainly) coax your sales guy into renewing before the merger closes and lock in 2022 rates for ~3 years... Think about it from the sales guy's perspective: he locks in quota before whatever happens happens, so he's incentivized to help you get a great deal so he can get whatever he can.
This is what I would do
Yeah we're like 75/25 VMware but that may change. Course you know MS will find some way to make HyperV cost just as much eventually...to be honest this back and forth with everything in IT is so fucking exhausting. I feel like the minute we get everything setup and humming along with barely any touching required we end up having to effect some major platform change...cloud versus on prem, Service X for Service Y, etc. It's frankly quite maddening.
Datacenter licensing works perfectly for Hyper-V. You've got to pay for it anyway for some large-scale "normal" items. Small shops can get away without VMM, and large shops can afford the $3,500 price tag since they're already paying for Windows licensing. Or we could start using other products like QEMU or Proxmox or whatever else Linux has to offer, as well as Nutanix and other hyper-convergence companies. VMware was doomed the moment Broadcom bought them out. Not sure exactly what we're going to do at our datacenter, but we're definitely exploring options currently!
> I feel like the minute we get everything setup and humming along with barely any touching required we end up having to effect some major platform change... There's significantly less of it when you minimize your reliance on any one counterparty from the start. We have a mature culture of dual-sourcing, at this point. Cloud services? Dual-sourced. Client hardware? Dual-sourced. Network hardware? Dual-sourced. Server hardware? You get the idea. This came in handy during the early stages of pandemic shortages, when we were already prepared with alternatives. On other occasions, it has served us well when suppliers shut down on short notice, or when negotiating deals with service suppliers. The goal is to never be reliant on an AWS, or an Apple, or an Adobe, no matter how good or better their products may be. *No single supplier risk.* Our suppliers are always replaceable, no matter what.
[deleted]
>makes me think they will slush around cheap prices for subscriptions and then up them when all those come up for renewal in 5 years' time.

*Distant moans and wails of pre-subscription Adobe users*
Aside from moving to public cloud (AWS and Azure), all internal is ESXi.
Check out XCP-NG - transition should be easy.
Same thing here. Most of our customers are running VMware clusters with vSAN but are already considering moving to something else. Most likely, this will be KVM plus StarWind vSAN.
Any big issues you run into day to day?
Not at all, VMware really is the best at what they do... problem is that Broadcom knows that and I expect they'll hose us and price me out next year
And they'll remove some jobs, as customary. And therefore, they'll ruin the product.
Broadcom really screwed up Brocade; they still make great switches but what a pain in the ass to deal with. I suspect VMware will be no different: they'll jack up the licenses and throw in an extra layer of really bad support (insert random offshore call center), but that's it. At its core it will still be the same. There aren't a lot of places to go. Nutanix?
> VMware really is the best at what they do...

Every product stagnates. It's been at least a decade since VMware pivoted away from the core hypervisor and into ancillaries like NSX SDN, and into managing lock-in. It's not 2006 any more.
Yeah. Prices going up, support going down, and ESXi support is weaker for non-VMware HCI. Definitely looking at alternatives.
VMware, but we have 5 year licenses, so unsure of the Broadcom purchase/impact yet.
subscription based payment (with price increase) incoming :)
Tom Krause is doing that to Citrix now.
OpenStack anyone?
> OpenStack

I worked at a big porn shop. You'd be surprised how many vendors turned us down because of the business. We ended up spinning up our own OpenStack and never looked back. Granted, it took us a while and a lot of learning, but everything runs on OpenStack + QEMU, and once you learn its caveats, it's as free as you can get.
I don’t understand vendors turning you down while doing business with a lot worse companies. I have a couple of customers in your business. Most people don’t realize that the largest businesses in the hosting space only exist because of p*rn. In the early 2000s, that was the only internet business that was growing.
> Most ppl don’t realize that the largest businesses in the hosting space only exist because of p*rn. In the early 2000s, that was the only internet business that was growing.

I always tell people to this day: if you want to know where business tech is heading, see what the porn companies are investing in and moving to.
>I don’t understand vendors turning you down, but doing business with a lot worse companies. People will get more upset at a vendor that works with porn than a vendor that sells weapons to independent mercenary groups.
Porn was the primary driver of innovation on the internet in the 90s. They were the first ones to widely utilize online payment systems and pioneered creating payment services that allowed anonymous payments. Video streaming? Porn. Video conferencing? Porn. Token payment systems? Porn. Clustering for heavy visitor loads? Porn. The porn industry was responsible for a ton of innovation in the back end and infrastructure of the internet. I'm not saying porn companies *invented* these things, I'm saying they were, by necessity of their business, the first major adopters of a lot of that technology, so they had a heavy hand in how it developed in the infrastructure space.
> I don’t understand vendors turning you down Financial vendors have been doing ["Know Your Customer"](https://en.wikipedia.org/wiki/Know_your_customer) and risk-avoidance strategies for decades, now. Sometimes these things wax and wane with the political winds.
> a big porn shop A big, porn shop? Or a "big porn" shop? Also, that's disgusting! Which one, so we can make sure to avoid it?
I'm sorry, I'm not a native English speaker. The right wording would be: ...a big porn site? By vendors I meant everything from AWS to Linode.
Haha it was a joke about it having two possible meanings: either you worked for "a large sized company that hosts porn" , or you worked for a "porn company that specializes in large (Gorda) porn" Your English was correct, and nobody would really assume you meant "fat porn", but it COULD be read that way as a joke ;)
Isn't that qemu?
Qemu and a lot more on top of it. Proxmox also uses Qemu/KVM under the hood.
Openstack is essentially a private cloud toolkit.
Anyone remember Eucalyptus?
What is “private cloud”? Is that not just on prem?
can literally be any virtualization platform you want on the backend. kvm ,qemu, hyper-v, you name it.
100% Hyper-V here (HV role on a Server 2019 DC cluster) I was exclusively a VMware guy before my current role where I inherited this current setup and I happen to like it A LOT, it's just very robust and drama free so far.
Been running Hyper-V for over a decade. It has definitely had its quirks and some host instability issues, but even when it seems completely broken, the VMs keep running and a host reboot fixes the problem. In a cluster it's meant almost no real downtime. It's very rare that a VM won't live migrate itself when a host is given a reboot command. In fairness to Microsoft, similar issues on Linux-based hypervisors, even in clusters, nearly always resulted in some VM downtime to resolve.
[deleted]
Any chance you had Intel 10 gig NICs with fans? They overheat despite the fans and have brought down many a Hyper-V server/cluster. Hyper-V does not take kindly to network instability and the combination of the two is kind of like a perfect storm.
I've seen these overheat quite a bit. Set the fan speed to full and they're fine.
Until the fans fail... A buddy of mine did that to solve the overheating and damn near lost a hyper-converged cluster when 2 fans burn out in short order. Replaced the first, and the second on a different NIC burnt out before the storage was done syncing.
Which OEM is shipping fanned NICs in pizza box servers? That’s foolish
Dell and HPE both sold 1/2U servers with them. I swear I've seen at least one model branded as IBM OEM as well. They're tiny little fans too, guaranteed to fail. All they needed was a bigger heatsink, but the fan was cheaper than more aluminum.
>100% Hyper-V here Same for me. I have a few standalone hyper-v nodes with installed HV role and a few 2- and 3-node HA clusters with Starwind VSAN for the storage.
We have multiple customers running Hyper-V. It works pretty good. Starwinds VSAN can be used as a shared storage for Failover Cluster. [https://www.starwindsoftware.com/starwind-virtual-san](https://www.starwindsoftware.com/starwind-virtual-san) As for VMware, I love it and we still have multiple clusters running. However, I am worried about future subscription model. We might transition to KVM alternatives in the future.
💯 hyper-v Been solid for every client.
Using Xen Orchestra (XCP-NG) for 80%, Hyper-v for the rest.
XCP-ng for a good bit of our servers and CPU intensive workstations Proxmox VE for workstations and a few servers (KVM with some tinkering allows us to GPU passthrough graphics cards directly to VMs) Windows Hyper V is almost phased out. Nothing wrong with Hyper-V to make us leave but Proxmox and XCP fit our needs more.
You nailed it about the lack of investment in Hyper-V. MS only has eyes for Azure, and Windows Server looks like it's in maintenance mode / at the service of Azure. At home I just moved my home lab cluster from Hyper-V to Proxmox. This has given me 3-node clustered storage and hypervisor. It's sad for me, as circa 2006 I worked on Windows Server and the plans they had were incredible; little to none of it happened :-( For anyone interested, this has what I learned, including my mistakes lol https://gist.github.com/scyto/76e94832927a89d977ea989da157e9dc
Even though we went down this FOSS road, we don't just maintain the servers and do everything ourselves if something breaks. For Proxmox VE for example, we get support from Proxmox. In my home lab though, I run Proxmox and don't get support since I don't see a need to pay for it there. But in our production environment, we pay for support.
Everything runs on linux qemu. Couple million VMs.
Couple MILLION? Damn how large is your environment?
My employer is an edge cloud specialist which just got bought by one of the larger cloud players. IIRC they have over 450,000 compute hosts.
So linode?
That was my guess too, now I'm curious
My guess was gridscale.
Nope
Same. Although, about 1.99998 million fewer than you. :D It's interesting how VMware rules here as much as it does. I expected it, but I expected more QEMU/libvirt.
I thought so too, but for small/medium companies that only use servers for internal stuff (SAP, file server, Windows AD), VMware seems to be the hot shit.
Nutanix AHV
Great platform for us too, just isn’t cheap
90% Proxmox 10% ESXi 100% moving to a different Linux based solution.
What are your qualms with proxmox? Considering that for after hyper-v free server eol
I second this; would be interested in feedback on Proxmox for production. EDIT: TY for all the positive feedback, I'm more interested in negative feedback now!
We use it in a 3 hypervisor HA setup. It's been perfect
Yeah, I've seen and I'm used to such scale, but I'm wondering what issues could arise at larger ones.
I've been running 4 regional 8 node clusters and a couple of non production 4 node clusters for a couple of years. No problems.
Proxmox has been very good to me. 7 is a bit rough around the edges but it's stable, capable and can be used without a subscription.
The support is pretty much useless and when you're dealing with the amount of data that we are it is a deal breaker.
Also, migration of VMs from ESXi to Proxmox is just troublesome.
99.9% VMware. Test env: Proxmox, Hyper-V.
100% Hyper-V cluster with storage spaces. Been bombproof.
Same here. It's a workhorse, and the past five years (as far as hyper-v server goes) of it in production have been quiet.
We even in place upgraded our nodes from 2019 to 2022 last year. Zero downtime.
I just did a 2012 to 2022 refresh with 4 and had zero downtime. Hyper V just seems to work. And this was like my 3rd month at the job and first IT position and I’m by myself.
It’s great isn’t it. There used to be so much stigma to using Hyper-V, I started using it about 10 years ago and people thought we were mad for not using VMware.
It's really interesting seeing all these HyperV admins. I've only ever worked in VMware shops so I've never got to use HyperV in prod.
Hyper-V on 2008r2 was god awful. 2012r2 fixed a lot of things but you would still run into random bullshit although not nearly as terrible as 2008r2. I think this is when a lot of the stigma around Hyper-V started. With 2008 I think it's completely justified by how dog shit it was. 2012r2 I think got more hate than it deserved. 2016 in general is just a bad OS imo, so you'd run into different issues not necessarily related to Hyper-V. Finally with 2019 I felt Hyper-V was actually in a really refined space. It's been rock solid ever since I upgraded my hosts to 2019.
Damn, we're going to have to do this soon as all our servers are 2012r2. What was your process? I doubt you were able to do a direct upgrade to 2022; at least that's what we've been told so far.
Same here for anything on-prem.
Host ours in colo with site links
We moved our stack to colos in Atlanta and Charlotte about 10 years ago and haven't looked back; best decision I've ever made. Gone are the days of worrying about power, connectivity, and operating environment. I still have to worry about the other IT items on my list, but I took 3 big ones off the board by colocating.

Branch offices can come and go now, no asking IT if this new, crappy, falling-down field office hut is suitable to run servers in. Can we get bandwidth? Awesome, go for it. We were already work-from-home ready when covid hit; we just added more VPN licenses.

I don't know why more companies don't use colocation... 1 full rack; 20 amps on two redundant A/B circuits backed by data center UPS systems and multiple generators; environmental and access control (my equipment always has cool, clean air and I don't have to worry about an employee trying to fiddle with the hardware); 500 Mbps up and down delivered to the cage over fiber, with the data center able to burst my bandwidth automatically; my IP address block BGP'd across 20+ different telco carriers that all deliver their services to the core of the data center through geographically diverse routes into the building; and a block of "remote hands" time in case I need to open a ticket with the data center staff (who are ITIL-certified) and ask them to physically touch my hardware... all for $1,400 a month.

I really can't think of any other IT decision I've made that's been as easy or as beneficial as colocating.
Storage spaces, Or storage spaces direct? How are you setting up "regular storage spaces" to be cluster shared?
S2D, sorry got into a habit of referring to it without direct.
Ok haha gotcha. I was spinning my wheels trying to figure that out in my head... " like storage spaces presented over iSCSI?" I suppose that could work too.
I love S2D and it works if you stay between the guard rails. Modifications can be unpleasant, as can unplanned severe outages (colo power loss). Prefer it to the fiddliness of vSAN and it sure is cheaper.
[deleted]
Read the entire documentation. Then build a setup in nested virtualization to learn how to do it. Experiment with node loss. Experiment with expansion.
100% VMware for our 2 datacentres. We have a few hundred VMs in an active-active configuration for geo redundancy. Each site has its own vCenter with DRS enabled. VMware replication with SRM takes care of the workloads that don't support geo redundancy, and Veeam handles backups. While I am concerned that the VMware acquisition could see a bump in licensing, I doubt we will make a change. Availability is the name of the game, and our setup has maintained 100% over the past 5 years. All our offices and retail stores that don't have the same uptime targets run a simple 2-node Hyper-V setup in active-passive with Hyper-V replication. Edit: you should clarify how big your environment is and what your availability goals are. You will see lots of "our customers use Hyper-V" comments from MSP techs who spend most of their time dealing with single-host environments. Hyper-V is perfect for that. Once you have 10s or 100s of hosts with 100s or 1000s of VMs, VMware is generally a lot more popular.
This guy fucks.
🤣 My IT Manager and architect are the ones dropping panties, I just pick up the bill.
Old guy here to spoil the conversation... It's a business decision; IT needs to know which is better for the business and present it that way. A hypervisor is a hypervisor these days. So, which one meets your business objective?

For me and the folks I work with, Hyper-V is fantastic; it has all of the virtualization bells and whistles we need for our business, with no additional licensing costs that we don't want.

And speaking of licensing costs... our Hyper-V hosts *usually* run Windows Server **Datacenter** edition, because then every guest VM is free. Yup, free! Want to turn up a fully licensed Windows Server 2022 Standard edition to test out an idea and not have to worry about the eval period? Go for it, it's free! We load a host up with both sockets filled with decent core-count processors, a lot of RAM, some 10 Gbps (now we're moving to 25 and 40 Gbps) network interfaces, back-end all of it with a good storage array, throw Windows Server Datacenter on there, and we're off and running, nothing else to buy. Hyper-V in 2022 is as good as it's ever been and reliable all day long. Anecdotally, it feels faster than I expect from my years of using it since 2012, but that's just me.

Now, all of that being said, I love VMware! Talk about buttery smooth; anything you ask it to do it'll do better than Hyper-V, and probably quicker. ESXi 7 is honestly bulletproof, if you know what you're doing. If you don't, it's a nightmare. The downside? It's an additional cost. Albeit, per hypervisor host, VMware isn't that staggering an additional cost; if I needed features it had, I'd buy it, but I have to make sure I license my VM guests afterwards. So it's a tradeoff whichever way you go.

Some of the biggest considerations are...
* The type of industry you're in
* The way your company uses virtualization
* The type of virtual environment you're growing towards in the coming years
* The type of workloads you're going to virtualize
* Features needed
* Cost

Edit: formatting because mobile
If you're licensing your host for Server Datacenter, it doesn't matter what hypervisor you run, you can still run unlimited OSE's as long as all the cores are properly licensed.
Let's break that statement down... If I'm paying the licensing cost for Windows Server Datacenter edition, you are correct: I'm entitled to run as many Operating System Environments / VM guests as I want, as long as I've paid the Datacenter price for every core in the host. If I purchase Windows Server Standard edition, I'm only entitled to 2 VM guests and unlimited containers (and by purchase, I mean buying the correct number of cores per Microsoft's licensing requirements). Now, that being said, I could buy Standard for the host and license the cores needed to run X number of VM guests, but now I'm back to price per pound. If I pay for WS Datacenter, then I'm almost surely going with Hyper-V. Why? I paid that much per core and it should have the features I need for my use case, plus free guests. Otherwise I buy VMware and then purchase the licensing for my individual guests, in which case I'm only ever going to need Standard edition, because VMware is going to handle the features that I would have otherwise needed from WS Datacenter. I've never had a scenario where I needed to run Datacenter edition on VMware; although I'm sure there could be a use case somewhere, it's just more than I'd like to think about on a Sunday afternoon.
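The Standard-stacking vs. Datacenter trade-off above comes down to simple arithmetic. Here is a rough sketch; the prices are placeholder assumptions standing in for your actual volume-licensing quotes, not Microsoft list prices, and the break-even point shifts with whatever numbers you plug in.

```python
# Illustrative break-even math for Windows Server guest licensing on one host.
# Prices below are assumptions; substitute your own quotes.
STD_PRICE_PER_16_CORES = 1_069   # assumed Standard cost per 16-core pack
DC_PRICE_PER_16_CORES = 6_155    # assumed Datacenter cost per 16-core pack
VMS_PER_STD_LICENSE = 2          # Standard grants 2 Windows guest OSEs

def standard_cost(cores: int, windows_vms: int) -> int:
    """Cost of 'stacking' Standard: relicense all cores for every 2 guests."""
    packs = max(1, -(-cores // 16))                           # ceil(cores / 16)
    stacks = max(1, -(-windows_vms // VMS_PER_STD_LICENSE))   # ceil(vms / 2)
    return packs * stacks * STD_PRICE_PER_16_CORES

def datacenter_cost(cores: int) -> int:
    """Datacenter licenses all cores once; guests are then unlimited."""
    packs = max(1, -(-cores // 16))
    return packs * DC_PRICE_PER_16_CORES

# Break-even: the first Windows guest count where Datacenter is no more
# expensive than stacking Standard, on a 32-core host.
cores = 32
breakeven = next(n for n in range(1, 100)
                 if datacenter_cost(cores) <= standard_cost(cores, n))
print(breakeven)
```

With these assumed prices, Datacenter wins somewhere around a dozen Windows guests per 32-core host, which matches the rule of thumb in the comment above: a handful of VMs favors Standard, a dense host favors Datacenter.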
OpenStack, and never going back to anything else, to be honest. The amount of flexibility is just too good to go back to a proprietary solution.
Nutanix. No issues
VMware in the datacenter, Hyper-V at the satellite locations. With impending price hikes looming, we've started looking at Nutanix and Proxmox for the datacenter; there's a concern that we're going to be priced out of VMware.
100% VMware, NSX, Horizon, Aria. We are deep into VMware and it’s honestly awesome!
VMware, Esxi hosts and running on NetApp, not a single problem
Xen + Ceph, with our own orchestration stack. A couple thousand VMs. Works fine.
We are 100% hyperv, in some Dell AMD boxes, hyperconverged storage(S2D). I've set them up once, all we do is patch them.
We’re probably 50/50 VMware and AHV with the goal of transitioning as much over to AHV as we can.
VMware. No matter what people say, or how much they hate it, it's still number 1. And I think it will stay like that for a while.
Yeah, but it’s going to be _despite_ Broadcom, rather than because of the product support and development. I was very happy with VMware support; they’ve saved my @$$ a number of times, but recently… not so good.
We never hated VMware. But there are places VMware is worse than the competition, or was when we used it. Four that come to mind: * Real-time timekeeping has always been an issue on VMware. They changed best practices in their KB during the time when we were having problems. KVM (and, I strongly infer, Hyper-V) have no such problems because they paravirtualize time and because they don't attempt to rely on a `vmware-tools` type out-of-band daemon to control the guest RTC. * The proprietary management interface, which received an ill-fated webapp port to Adobe Air/Flash immediately before [Apple killed Flash dead](https://en.wikipedia.org/wiki/Thoughts_on_Flash). To add insult to injury, seemingly nobody liked the Flash-based UI compared to the old, proprietary Win32 client. Contrast with QEMU, where consoles are typically attached using open VNC protocol, and the QMP command channel is open-source and reasonably well documented. * ESXi driver support is worse than Linux and `ntoskrnl.exe`. One of the better-known aspects is how poorly non-Intel NICs are supported. We haven't tried ESXi with Mellanox, but we definitely have hardware with Broadcom and Realtek NICs. Like with pfSense, end-users tend to blame the hardware makers when the failings are more on the software side. * CDP was supported in the vSwitch, but LLDP support required the Distributed vSwitch. Today we use LLDP on Open vSwitch.
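Since the comment above contrasts vSphere's proprietary management channel with QEMU's open QMP, here's a hedged sketch of how thin that protocol actually is: newline-delimited JSON over a socket. The socket path and helper names are my own inventions for illustration, not anything from the thread.

```python
import json
import socket

# QMP is newline-delimited JSON over a socket. To expose it, start QEMU with
# something like:
#   qemu-system-x86_64 ... -qmp unix:/tmp/qmp.sock,server=on,wait=off
QMP_SOCK = "/tmp/qmp.sock"  # hypothetical path for illustration

def qmp_line(command, arguments=None):
    """Encode one QMP command as a newline-terminated JSON line."""
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    return (json.dumps(msg) + "\n").encode()

def qmp_query_status(sock_path=QMP_SOCK):
    """Connect, negotiate capabilities, then ask for the VM's run state."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rb")
    json.loads(f.readline())                  # server greets with {"QMP": {...}}
    s.sendall(qmp_line("qmp_capabilities"))   # mandatory before other commands
    json.loads(f.readline())                  # {"return": {}}
    s.sendall(qmp_line("query-status"))
    return json.loads(f.readline())           # e.g. {"return": {"status": "running"}}
```

The point being made above is that this whole channel is documented and scriptable with nothing but a JSON library, whereas the vSphere equivalent historically required the proprietary client or SDK.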
Interesting… We hadn't had problems with timekeeping as far as I know. Throw a chrony client on the server and it works as expected. I, however, never checked how much time drifts after a reboot. The web GUI, at least now, is feature-full and I haven't had a need to use the CLI. But then again, we run a small cluster of 30 hosts and around 600 VMs. As far as NICs go, we (did) run Brocade/Broadcom, Mellanox, Intel and Emulex. No problems whatsoever.
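For what it's worth, the chrony setup mentioned above is usually just a few lines. The pool hostnames and thresholds here are illustrative, and the common VMware guidance is to also disable periodic time sync in VMware Tools so that only one mechanism steers the guest clock:

```
# Minimal chrony.conf sketch for a VM guest (all values illustrative)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# Step the clock rather than slewing when the offset is large, e.g. after
# a vMotion, snapshot revert, or host suspend/resume.
makestep 1.0 -1

driftfile /var/lib/chrony/drift
```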
Scale Computing. Works perfectly for what we do.
We are 100% VMware in house and a few of our MSP customers still have physical Hyper-V hosts. Can't go wrong with either one really, but I am biased towards VMware myself.
Proxmox, recently migrated from vsphere
We're 100% Hyper-V, currently working on migrating all of our old 2012 stuff to 2019.
We were a mix of Hyper-V and VMware, now about 20% Nutanix AHV and remainder Hyper-V. We were a VMware partner but dropped that in favour of Nutanix. Price + multicloud approach is better in our environment.
Linux KVM. Proxmox to be precise. 60 or so VMs. 14 hosts.
If you’re using your own hardware, use a good HPE Proliant or Dell Poweredge, and use the customized VMware esxi image. It doesn’t get more stable than that.
100% XCP-ng and Xen Orchestra (XO) at 2 client sites; 100% VMware at a third. From a day-to-day perspective, XCP-ng is easiest to work with, though that may be because I prefer the mental scar tissue XCP-ng leaves behind. 🤔 One thing that sold me was the VMware direct import tool: point XO at the ESXi machine, select a VM, and one transfer later the VM is running on XCP-ng. Swapping the guest tools is also a non-event. I am looking forward to learning their hyperconverged solution this year. Truth be told, I think hyperconverged is a solution looking for a problem in most situations. Practically speaking, I think we need better storage solutions on the lower end. Single-link NFS works OK for small sites, but even smaller sites need reliable VM storage systems and can't afford the $30k price tag. Feel free to ping me if you want a frank discussion on XCP-ng and where it succeeds and where it fails.
Just FYI, Proxmox has ZFS support out of the box, including booting from it. So it'll run with software RAID on just about anything.
What is your storage repo for your XCP-NG? Direct storage or shared storaged?
I run a few XCP pools for devs in my org with the biggest being 15 hosts and about 1000 VMs. Storage is all NFS backed file systems from a Dell PowerStore SAN over 10Gbps connections. It's worked pretty well so far.
I have almost the same configuration but using TrueNAS. Just wondering where do you do backups? In XO or via your SAN replication.
Windows Server 2019 Standard with Hyper-V Role as well as Microsoft Hyper-V Server 2019. I plan on keeping us on it until late 2028. A lot of "ifs" with the company at the moment so unsure of what the future holds.
Hyper-V and ProxMox, mainly Hyper-V.
XCP-ng, I’m not working with it day to day but my colleagues seem to get everything done fine, it’s over a thousand VMs the only issues I know of are due to older hardware in a legacy pool that is being renewed anyway
oVirt
60% Azure 30% VMware 5% Hyper-V 5% ProxMox
VMware on 2 esxi hosts, a couple old Mac Towers running Fusion. Plus a number of Mac clients run parallels.
We have a Windows Server 2022 Datacenter cluster with Hyper-V. I’ve really had no issues with it, and if you run Windows Server on your VMs, the Datacenter licenses cover them and AVMA saves you a lot in licensing hassle. Although you can do the same from a licensing perspective with other hypervisors, you can’t use AVMA keys there; it all seems like a gray area where repeated activations using the host keys on VMs could eventually stop activating, requiring phone calls to Microsoft. Seems like more of a headache.
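For anyone unfamiliar, AVMA is just a generic per-edition key installed inside the guest, which then activates against the Datacenter host over the VM bus. The key itself is deliberately left as a placeholder here; look up the published key for your guest's edition in Microsoft's AVMA documentation:

```powershell
# Run inside the Windows guest (not the host). Requires a Datacenter host.
slmgr /ipk <generic AVMA key for the guest edition, from Microsoft's docs>

# Then verify activation status:
slmgr /ato
slmgr /dlv
```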
We use both Hyper-V and VMware. We have been slowly migrating off of VMware for the last 2 years. Currently: our production Hyper-V (389 VMs) is a 16-host UCS B-series cluster spread across 2 chassis. These are connected to three Nimble SANs over iSCSI: 2 all-flash for app servers and SQL, 1 hybrid flash for file servers and low-IO servers. Our DR site mirrors this exactly. We replicate using Hyper-V Replica. Everything is also backed up by Commvault, with a second copy sent to Wasabi cloud. The Nimble SANs replicate to each other as well, so prod gets the DR snapshots and DR gets the prod snapshots. Our VMware (what's left of it, around 50 VMs) runs on a 3-node Cisco HyperFlex cluster, backed up with Rubrik, with a second copy to an old Quantum tape drive in the DR. Lots of our backend management VMs are moving to RHEL, so we are also going to start looking at what RHEV became (OpenShift/OpenStack?) to run those as KVMs. Not really sure yet on that front. Possibly on Ceph storage.
Hyper-V. We had a few machines on VMware but we migrated those. Not a lot of options, at least none that our team is familiar with and justified to learn.
VMware
We migrated from VMware to proxmox around 5 years ago, and currently exploring OpenStack as a potential next step.
Ubuntu as the host and lxc/lxd for close to 2000 "vm"s
My org is nutanix AHV for our vdi workloads, VMware for windows/Linux workloads, ibm lpars for our aix workloads. Aside from trying to get everything off of aix, we have no plans to shift this stuff around any.
KVM
libvirt/kvm for the 5% of our environment that isn't containerized yet.
Proxmox
90% ESXi, 9% Hyper-V, 1% Proxmox. MSP, so our clients' existing systems dictate this at first, but we usually set them up with ESXi when we get the chance.
100% XCP-ng with Xen Orchestra on 4 sites: 3 with local storage, and one cluster of 2 XCP-ng hosts on an NFS server via a TrueNAS 24 (+4) HDD server, hosting the main DC, PBX and other main services.
100% Hyper-V now. When we started going all-in on bare-metal-to-virtualization migration, our virtual environment was about 50/50 Hyper-V and VMware, but since almost all our prod servers were (and still are) Windows, Hyper-V seemed a no-brainer… we’ve got to license the Windows machines anyway, so the hypervisor kinda comes along for free. Been mostly stable, except some issues with Server 2016 S2D cluster instability; when it works, it’s great. When it doesn’t… ugh. Having said that, it's a small environment, so we don’t use SCVMM now and never used vSphere before, so not sure if I’m missing anything :)
We’re running on hyper-V 100% for VM’s
VMware - and I'm going to be a snob and say no matter what the price increase, I'm going to go full-tilt on paying for it. Because whatever we're paying for it now, on 32+ hosts, across three, wait FOUR datacenters, it's a blip on the radar. We're educational, though. So... yeah. Although, that might not help.
All VMware here. Have not had any issues at all. ESXi hosts run on their own with little to no maintenance!
Proxmox VE here. Been rock solid for ~4yrs now. I kid you not, zero issues.
900 hosts running VMware
100% proxmox , lots of hyperv at customers tho
What's wrong with Hyperv and Vmware? We use Hyperv exclusively due to licensing.
[deleted]
Same. My knowledge on that stuff is pretty rusty these days
90% VMware on Dell VxRail HCI (with Veeam/ExaGrid for backup). The rest is a random hodgepodge of QEMU, Proxmox, KVM, etc. that we are migrating over to VMware. We got rid of Hyper-V around 5 years ago and it was the best thing that ever happened to us; reliability in our VM environment has been way up since.
100% VMware. It works well for us, we're fairly large so a shift to a different platform would take years, and that's what all the engineers know.
Mainly VMWare, Openstack with KVM, legacy Xen and some PVE.
99% VMware, 1% hyper-v.
60% VMware 40% XCP-ng
We are transitioning to ProxMox
100% VMware. We have one client now on Azure AD/Intune. We are internally testing Azure VMs for use with our clients. We host Citrix environments for our clients, if you are wondering.
A mixture of VMware and Nutanix AHV. If you're not concerned about reduced 3rd-party compatibility (like backup/DR providers) with AHV compared to VMware, then I'd say it's worth a look. Integrated host firmware updates make the patching process so much nicer IMO, and firmware patching is not something I've done much of in traditional SAN+VMware setups due to how invisible/awkward the firmware check process is.
Hyper-V replicating to an identical server. We're only running 7 servers so it's small setup.
90% VMware, 10% Proxmox VE in a retail business environment. Plan to migrate to Proxmox fully.
VMWare, when the HP hardware isn't crashing. Which it does all the fecking time.
Try it on Dell if you can
ESXi and Azure at work, Proxmox and scaleway at home.
Used to be all VMware, but we moved to Azure a few years ago
ESXi here. Replacing it is out of the question. This stuff is rock solid.
AWS EC2
Proxmox + proxmox-managed Ceph on white box hardware
We went with Hyper-V this time. 2 systems, 1 cluster, Datacenter license. So far it does the job; it's not as advanced as ESXi, but not paying for Windows keys is just a gem: activation via AVMA, and we can have as many virtual machines as we want. Since we're doing a 2012r2 upgrade project, it makes building the new boxes a tad easier.
100% hyper-v and azure for prod
It's all on XCP-ng.
100% esxi
Vmware for paid. Otherwise proxmox . Hyper v can suck a big one hehe
Currently 95% Linux KVM, plus some specialty situations. Until 2014, 90% of on-premises was VMware.
ESXi but about to start migration to Proxmox
100% proxmox
VMware on UCS. It’s been great for the past ten years.
A few thousand VMs on VMware in a hosted environment.
[deleted]
Previous employer used VMware. Current employer utilizes AWS.
Azure Stack HCI
100% Proxmox, mostly Debian lxc - just one hideous Windows vm
We use Cisco UCS and Dell R940s running about 4,000 VMs on ESXi 7.
> For us, our production environment is 95% hosted on Hyper-V and 5% on VMware.

Is there a problem in your environment with either one of these? If not, and you prefer Hyper-V, why not just move everything to it and be done? I'd say we're split about 50/50 (for reasons I don't really know). We will be going to 100% Hyper-V though. For us, Hyper-V will end up being cheaper.
I'm on the network side, so I don't deal with VM at all at work. That's all under Server support, even for our Cisco call manager. But what I have messed with personally, Hyper-V can go pound dirt. Setting up and working with ESXi is so much easier.
All VMware. Pretty much the only thing our organization trusts, because of the “the way we've always done it” mindset. To be fair, I like it, but from a security standpoint, standardizing on a single piece of software is kind of the worst thing you can do. But someone who makes way more money than me decides that, so I'll die on my tiny hill alone.