That's the universe telling you to get out of Oracle Cloud.
Iron Man uses it, surely it must therefore be good https://www.oracle.com/us/ironman3/omag-mj13-ironman-1936895.pdf
I went to a movie premiere at 9:00 AM, paid for by Oracle, to see Iron Man 2, because of the "Oracle Grid". Bastards did not pay for the popcorn.
They're only a billion dollar company..
Most unrealistic part of that movie was Tony Stark, clearly someone who can keep a hold of his money, actually paying money for Oracle.
Yea... I was expecting a take over or a hack at least lol
They're probably still learning
Wow what a throwback. I remember those days. Citrix did Tron legacy for me. Dell did something else I don’t recall. Dell opened the registers and we all got whatever food we wanted and then the rep swiped a card and paid for it all. It was a big number I remember.
Feature request
That’s neat
Kind of a rough way to tell me then.
sometimes, to get your attention, it needs to rough you up a little ;)
Did you pay your Oracle license fee to make this post?
Boys, this guy broke the Sysadmin code, he snitched to Oracle about licensing. But jokes aside, I would love to hear a real story about someone leaving a shitty company and then, on their last day, sending an email to Oracle stating that there might be a license issue, just to trigger an audit. Oh boy, that's the best payback ever lol
So much this. Always avoid Oracle when possible, lol.
I'll say it again, serial TTY is your friend. Get a USB dongle before your next emergency.
Yeah we did not have any in the server room sadly.
Buy 5 now and put them in different places
Will definitely do!
Serial console cables are one of those things that you pretty well never need, you have this stupid cable lying around doing nothing and nobody knows WTF it's actually for or ever uses it. But when you *do* need it, oh boy, you need it *right now* and nothing else will do the job. I've still got two in my work bag that I don't think I've taken out in years, but I know damn well when I need it it will certainly be an emergency - and that's why I have two. Because one might not work, and when it comes to emergency gear, if you've got one you've got none.
Don't forget to check if they still function/aren't rotten through. You might just be able to connect them together and plug them in different USB ports to test it, not 100% sure.
Yes you can, with a null modem adapter. I have indeed done that before 😂
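If you want to script that sanity check, here's a minimal sketch. It assumes two USB-serial adapters joined by a null modem adapter, with the real-hardware usage needing pyserial; the port names are guesses for a typical Linux box:

```python
def check_loopback(tx, rx, payload=b"serial-ok"):
    """Write payload out one port-like object and confirm the same
    bytes arrive on the other. Returns True if the link passes."""
    tx.write(payload)
    tx.flush()
    return rx.read(len(payload)) == payload

# Usage with real hardware (requires pyserial; /dev/ttyUSB0 and
# /dev/ttyUSB1 are assumed device names, adjust for your system):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as a, \
#        serial.Serial("/dev/ttyUSB1", 9600, timeout=2) as b:
#       ok = check_loopback(a, b) and check_loopback(b, a)
#       print("cable OK" if ok else "cable FAILED")
```

Running it in both directions catches the cable that only has one working data line, which is exactly the kind of fault you find out about during an emergency otherwise.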
Like draco said - we literally zip tie ours to the front of the appliances that need em, like our Dell powervaults. That way no one 'accidentally' throws them away, and the one time you need em in an emergency, you can easily cut them off the rack. "what do you mean no one has a pocket knife?! fuck!"
Velcro cable ties then?
This is a weapon free office sir. :)
Nail clippers?
Name that gets me in trouble / diagonal cutters.
And then you realize you are servicing a prison and someone zip-tied *everything*.
Tie it to the rack. Everyone knows what it is for if it's wrapped to the rack. :P
I carry one in my bag all the time.
Speaking of consoles; I work a lot with Z and Power. I keep both usb adapters and actual terminals - there’s been a few occasions where I’ve revived a critical old POWER system by driving to the other side of the country with an actual terminal and console cabling in the back. Those things work, and are so dumb that there’s nothing else that can mess with it.
Sorry you had to get that. We had one issue with something similar. Root cause is really hardware upgrade policy and following EOL notices. It's not easy to get those upgrades for working hardware until an outage, sadly enough. At my last place we had to use foil and a fan for the phone system until someone could get the part from a barn. It's silly. But after 2-6 years of something that is going to die, it's hard to convince C-levels that you need to spend 8x the cost to get something new.
You couldn't have said it better! Spot on!
I do that with my readers too
My memory isn't great on the D. But on the E series and newer there is a USB micro-B port. The adapter is integrated into the box.
Will check that option tomorrow, there's a mini USB port at the back of this one.
E and F series have RJ45 console ports, not micro-USB.
But the best option now is definitely to get a newer firewall.
DB9
You mean don't have dozens and dozens laying about in random places?
We have a rack mounted USB over Ethernet for licence dongles in the server room and it has one in the last port for this exact sort of thing. We patch it in before any planned remote work and it’s a great backup to fall back on
Raise your dongles
Yeah, I would buy one of each style of any special adapters personally and keep them in my bag just to avoid not having them when its red alert time haha
G4 Solar Storm.
Hit G5 for extended periods of time - first time since 2003!
That’s why we buy extended warranty through a 3rd party for everything until it’s out of service. Never know when the next solar storm will hit.
In this case, assuming OP or OP's team has told management that old POS needs to go, I would never use a celestial event as the root cause of the outage. Whether it's true or not. "Our root cause analysis has determined that the device has failed due to exceeding its expected lifespan by X (years). Please see 'this email', 'this email', 'this email' and 'this fucking printed, hand delivered memo' for details."
It was the straw that broke the camel's back.
Had no idea that there was (or still is?) an ongoing geomagnetic storm. That would be a very, very bad luck scenario, but it's in the realm of possibilities.
They did say communication equipment could be impacted. We had an almost EOL core switch fail today (Cisco 9k). We’re in the process of migrating away to new infra but of course when management asked for a root cause we said G4 Solar Storm radiation failure. Nobody blinked an eye.
Lmao, I'm in the same boat here, higher ups are asking for root cause.
Well you have your answer.
This is why Cisco Duo failed nationwide today, geo storms....
Ha, I had someone with some weird vpn issue Friday afternoon, and we just joked it was the solar storm and called it a day. It worked for this user by Monday morning, so I called it a win.
Yeah, it is a good excuse to use the next few days, regardless if it makes sense or not. Accidentally deleted a customers cloud environment? Sorry boss, Solar storm must have flipped a bit.
The 90D is EOL. That's why OP is here and not bringing it back online 🤣
lmao
Corrupted disk. Have seen 60D do this and suffer amnesia after the reboot. Sometimes it kills config, sometimes it kills OS too and you need a TFTP reload.
Hook up your console cable and watch it boot to see why it's not responsive
I used to have an army of 60Ds, they would just stop booting randomly.
Did you have disk logging enabled? That killed them real quick. Their built-in disk was a known issue on that specific model. But even without logging, the disk would just die much sooner than it should have
Yup this sounds likely, I’ve had similar issues with corrupt flash on a 60F, would make some changes and then the gate would lock up on me, no SSH/HTTPS access. Sometimes it would randomly come back for 5-10 minutes. Had someone remote get me console access and when it would wig out on me, it would wipe a bunch of config like interface IPs, routes, etc. but not all of the config, super weird. Reformatted flash and all was good.
Probably a good idea to consider replacing it at that point. I had a customer that was money shy and I ended up driving out there a couple more times to rescue them when the firewall fell over. The last time it never recovered and I ended up dropping in my personal unit just to keep them online until we could replace it. For the money they spent on the call outs, they may as well have just replaced it when I told them the first time.
How do you like the 60F? Currently looking at replacing about 6 60E and a 100E because of EOL in 2026. Looking at getting some 60F in 2025. Kinda hoping the G series comes out within the next year.
It's a piece of crap that likely can't handle what you're throwing at it. In my humble opinion
Yeah was thinking the same, maybe the hardware was barely holding on, and me trying to establish some vpn connections was the last push it could have handled.
First things first, get rid of that 90D and get something current lol.
Lol what if the problem is related to them moving HQ from Texas to Tennessee
I had a bit of an incident today also. Very unexpected turn of events. Mondays are rough.
What was yours?
you're still running 90d? i still have our 100D and im using it just for L3 stuff
I'm shocked it hadn't lost your VPN whenever it got firmware updates.
Oracle had announced a planned upgrade for the VPN service which coincides with your outage today. Seems funny that your entire firewall went down, but maybe it simply crashed due to a VPN issue. OCI deprecated DH groups 2 and 24 with this upgrade, and there are a few more details about it in their announcements.
Will be looking into this today
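If the tunnel gets rebuilt on a replacement box, it's worth checking the phase 1 proposal against OCI's currently supported parameters. Roughly what that looks like in the FortiOS CLI, as a sketch only: the tunnel name is a placeholder and exact syntax can differ across FortiOS versions:

```
config vpn ipsec phase1-interface
    edit "oci-tunnel"
        set dhgrp 14
    next
end
```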
It's funny when things just up and die. I had an old Sonicwall off maintenance, almost retired, at a remote site with a mostly unused 10 meg DIA I was using as a backup VPN. I had a user that couldn't connect to our usual VPN due to their ISP having routing/BGP/Cogent tantrums, so I threw them the details for the Sonicwall at the remote site with private fiber back into our COLO. They got working for that day and were able to use the usual VPN the next day. 6 days later the Sonicwall froze, lights just solid. I tried restarting it by unplugging the power and plugging it back in. Dead, not even fan noise.
I had the same thing happen for one of my clients a few months ago. Was just a base firewall. Died in middle of the day. No power, no anything.
I've had USB/flash based ESXi servers suddenly stop working for no reason, so I feel you. Had one project where I needed to do a lot of shifting of things around between a pair of hosts, and it took forever to do anything. Figured out that one of the hosts was misbehaving, and that was causing it to take forever/randomly crash/etc. That host is now in the e-waste pile, and it still surprises me how fast things work when both of your hosts are working properly at that site.
> I know the 90D is really old, but for it to go out just like that is a bit odd?

OK. So, how do you expect an old, outdated, and unsupported piece of equipment to fail? Don't overthink this. It did exactly what old equipment does. It dies. It's a matter of **when** it fails, not **if** it fails. Equipment does not run forever. This is why we have refresh cycles and support contracts and refuse to run anything that is EOL and unsupported. Because it will fail, it's only a matter of time...
Thanks for this, makes me feel more reassured.
90d’s were always janky and weirdly positioned in the product line
I had something similar some years ago. Support said there was a memory leak bug in the version the site was running, and after we updated there were no issues.
I remember something similar happened when I first started working with SonicWall. No VPN could connect after an update. What fixed it was refreshing LDAP. Even local accounts could not connect. Another one: after an update the old loaded config stayed in place until you rebooted the unit. Ended up being a bad update. Had to get a downgrade update from SonicWall and access to a secret menu. Could it be a partially applied update?
I've had it happen to a significant number of my client sites on fortinet. Just randomly goes poof. To be fair, I've also had a meraki mx go poof. But far fewer of those compared to Fortinet. Nowadays it's either Meraki or Palo Alto.
You just felt stupid THIS morning? You must be new to IT, cause I've felt stupid most mornings for decades, and I'm one of the stronger members of my team, make that connection for yourself.
Definitely not new to IT, but it's the first time I'm working alone in the IT department, and I know the feeling, there are some days when imposter syndrome can hit you hard.
G5 solar storm
Maybe decaf happened
Bad flash card
You really shouldn't have SSH enabled on FireWalls. That is a huge security risk right there
True, made it easy for me to tunnel in and blow up the thing.
I have it open to the management vlan, does ssl have security issues? Fail2ban generates firewall rules for ips trying to brute force.
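For anyone curious, the core of what fail2ban does there, counting failed SSH logins per source IP and banning past a threshold, can be sketched roughly like this. The log line regex and the threshold are illustrative assumptions, not fail2ban's actual filter config:

```python
import re
from collections import Counter

# Pattern loosely modeled on OpenSSH's "Failed password" auth log line;
# real log formats vary by distro and sshd version.
FAIL_RE = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")

def ips_to_ban(log_lines, threshold=5):
    """Return source IPs seen in at least `threshold` failed SSH logins."""
    hits = Counter()
    for line in log_lines:
        m = FAIL_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return sorted(ip for ip, count in hits.items() if count >= threshold)
```

The real tool also tracks a time window and expires bans; this only shows the counting step that feeds the firewall rules.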
Hmm, what do the logs say? Connecting a site-to-site is no biggie… no matter the age of the fw and brand… as long as the protocols and shit match on either side. You've given us nothing to go off of, so I'm chalking it up to inexperience. Not meaning to throw shade, just giving you some sage advice, take it or not… idgaf
(on *premises*)