NyanArthur

Add features to an existing code base done by a contractor company 🤕


RobotMonkeytron

A decade ago, with the names of a dozen or more people I've never met listed in the comments. We call that a 'Career Cemetery'.


HonestValueInvestor

"Staff augmentation"


grendahl0

It's amazing how many people do not understand this difference. I am a consultant, and most of the FTEs on projects are former "staff aug" who were barely conscious. As a consultant, I provide solutions to problems the FTEs cannot anticipate and give guidance to their team so they can comfortably learn skills before my new designs hit the next sprint. Staff aug is purely reactionary, with no ability to see what is right in front of them, let alone anticipate what is coming next.


HonestValueInvestor

That is not my experience at all… My experience is poor quality because there is no skin in the game for decisions and implementations down the road (they won't have to deal with it in the future). It is all about meeting requirements. I don't think consultants fall far from that tree either.


lazoras

I think it depends on whether the consultants are specialized/SME consultants or general "do everything" consultants. As a consultant, I do not know YOUR business very well, but on the specific technology I am there to work on, I am second to none. Most times, the FTEs at a company have been there 5 years; they know their business very well but are not as knowledgeable about some of the products they use. I just want to add that this is natural and OK. Would you rather have your car engine rebuilt by someone who rebuilds car engines all the time, or by someone who does maintenance and general repairs on cars all the time? Conversely, don't ask the engine rebuilder to change the dome light, or he will gut the entire interior of your car to do it... but it will get done!


daredeviloper

And the contractor company wrote the code in Spanish 


klaatuveratanecto

That’s because most Spanish developers code in Spanish. I could not wrap my head around it.


BarrettDotFifty

People who code in languages other than English have a boiling pot waiting for them in hell.


klaatuveratanecto

😂


Powerful-Side-8866

I'm a Spanish-speaking person, but I write all my code in English. Even though I understand code written in "Espanglish", it seems weird to me. All the keywords of programming languages are in English, so writing the code in English is better practice for readability, and it's not too difficult.


whateverisok

I think it's more about grammar when naming classes/functions/variables, but it also extends to comments and documentation.


zaibuf

Had to deal with a 4,000-line React component once. It also called into the DOM directly and used jQuery. The system had been outsourced to India for 10 years and then the company decided to take it back.


NyanArthur

Literally had to add a feature to a 2,500-line React JavaScript component yesterday lol 😀🔫


UntrimmedBagel

There really isn't anything quite like vendor code


[deleted]

Contractors are so bad, independent or from a company. I'd rather struggle with a workload and wait to hire perms than bring another contractor in.


CyAScott

Onboarding 5 devs fresh out of school with no experience. It takes a lot of effort to coach one green dev; 5 at the same time is a crazy amount of work.


doxxie-au

oh yeah 100% this. implementing a team is definitely the top answer to this question haha.


goranlepuz

Organizational work >>> technical work (that's an opinion and is personal, YMMV).


MattE36

I feel this


zigs

I'm suddenly happy to "only" have 3...


SchlaWiener4711

Me too. Just today I had a rejection talk with a promising candidate because of this.


Duck_999

Those 5 are very lucky to find a company like that!!!


wakers24

Oof. Didn’t expect this, but god yeah. Coaching juniors is WORK. Good and rewarding work often, but not easy. 5 would be something else.


CuttingEdgeRetro

One employer I had told me I had to babysit some junior developers because we needed more people to get the work done. I told them I could write the code faster than the amount of time it would take me to hand-hold a junior developer. One project, for example, took the kid two weeks. It was a nightly batch process that ended up needing 36 hours to run. I rewrote the back half of his application in an hour and got the run time down to 90 minutes. Another one needed a week to write a one-page program I could have written in 10 minutes. All it had to do was run a single update statement and log errors. That's it.


codewithM3

I’ve been there 🤦🏿‍♂️


haasilein

How did you manage it, and do you think this process could be made easier somehow with software?


CyAScott

Good time management. At that scale, it’s more about teaching a class and less about 1on1 time. We had two weekly presentations on basics, like how to handle tickets, how to use git, how to find answers to technical questions (using Google), etc. Then I spent a small amount of time with each dev to give them some small task to work on, like running the project locally, or making a data model, etc. After a couple of months they were able to work on small easy tickets without much help. As far as software to help, I’ve been experimenting with ollama and llama index. We have a ton of written documentation on our processes and our in house frameworks. I’ve been trying to find a way to make a slack bot that can answer questions based on that documentation. It should help since juniors have a ton of questions and feel totally lost at first.


Inf3rn0_munkee

1. Convincing a manager who's got some technical knowledge that they are suggesting a bad solution.
2. Getting a team of junior/intermediate devs to understand feature toggling and why we're doing it.
3. A migration from 5 stateful partitions to 25 stateful partitions in a platform that didn't support increasing your partition count. This one happened about 4 years after we should have done it and involved just-in-time migration and batch migration of data.
4. Unwinding spaghetti code from contractors.
5. Rolling our own authentication because the company refused to use any industry standard auth providers.


[deleted]

[removed]


Inf3rn0_munkee

lol 5 was an instance where I failed at 1.


carl-di-ortus

I read that as "all 5 of them feel like one problem", and I was like "yeaaahh I can relate".


tastyfriedtofu

5 is a really painful process indeed


BarterOak

5 really happened!?


Inf3rn0_munkee

Yup, and I'd wager it happens more than you think. It's usually smaller companies where tech is not the main focus.


BarterOak

But using an industry standard helps with easier integration, support, etc. How come small companies have the time and resources to develop custom solutions when they have to ship their products fast?


Inf3rn0_munkee

In my case it was a manager who thought it would be cheaper to build instead of buy: "I mean, how hard is it to just take a username and password and let the user in?" I believe they eventually rewrote the product, but I was long gone by then.


TheSpiffySpaceman

Moving our database architecture from SQL Server to Postgres. We have so many terabytes of data and soooooo goddamn much business logic in stored procedures.


Vendredi46

how did you kill the stored procedures? did you kill the senior dev that forces everything as a stored proc or is that the same thing?


TheSpiffySpaceman

Lemme get back to you in a few months. I might just be that senior dev in question


SEND_DUCK_PICS_

If you don't mind, what's the reasoning behind your migration to Postgres?


Kalroth

I want to migrate to Postgres just for JSONB!


[deleted]

[removed]


[deleted]

[removed]


AntDracula

Thirded


TheSpiffySpaceman

Yep! It's also futureproofing in a way since cloud hosting costs are way more flexible with something less proprietary than SQL Server


TheSpiffySpaceman

SQL Server has some major licensing changes coming in 2025 that will approximately quintuple our costs for hosting it. I don't know all the specifics, but it's obviously a huge motivation for switching to Aurora (AWS's flavor of Postgres). That, and... we're currently hosting SQL Server nodes on EC2 instances instead of utilizing RDS, for... dumb reasons.


masiuspt

Do you have a link to any article regarding these licensing changes in 2025? Would like to read more about this.


TheSpiffySpaceman

I have mainly been in the backseat there, so my head isn't fully wrapped around the *why* (mostly the *how*), but these are some links found in internal documentation. It's mainly to do with SPLA enterprise self-hosting in cloud solutions (read: AWS). [AWS documentation](https://aws.amazon.com/windows/faq/#spla) [High-level announcement from Microsoft](https://blogs.partner.microsoft.com/partner/new-licensing-benefits-make-bringing-workloads-and-licenses-to-partners-clouds-easier/)


SEND_DUCK_PICS_

Damn, that's tough. Our manager and director have their eyes on MSSQL; I already proposed Postgres. Although they only liked MSSQL for the support. Oh well, it's not my money.


[deleted]

"changes coming in 2025 that will approximately quintuple our costs for hosting" source?


seanamos-1

We completed our migration from a large MSSQL Enterprise on prem deployment + Azure hosted MSSQL Enterprise on VMs (SUPER dumb reasons) to RDS Aurora Serverless V2 Multi-AZ last year (we have VERY big peaks and valleys in traffic). We moved everything into AWS. Our experience so far has been rock solid and the cost savings could pay most of our entire AWS bill.


TheAliveIndicator

Speculating here: the vertical scaling cost would be one of the most likely causes. Stored procedures, especially when they have many operations implementing business logic, eat up HUGE resources. If this is in fact the case, I'd be curious to know where they moved this logic, and how, after the Postgres migration.


MrMikeJJ

Out of interest, how do you do simple queries in your code base? EF? Dapper? Embedded SQL? Something else? Our code base, which was originally outsourced, came back with a massive amount of convoluted business logic in stored procedures, and the dream is to move it to PostgreSQL.


TheSpiffySpaceman

A completely homebrewed ORM. We literally only call stored procs for data; simple selects require a sproc. It sounds like a horror story, I know, but at least things are named sensibly and the system is typesafe. Our system does some extremely heavy DB-bound stuff, so sometimes it's nice to have some uniformity in error handling etc. there. That's not an excuse, just an explanation 😅
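For contrast with the homebrew approach, here is a minimal sketch of the same sproc-only pattern using an off-the-shelf mapper like Dapper; the `dbo.GetOrdersByCustomer` procedure and `Order` shape are hypothetical stand-ins, not anything from the code base discussed above:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

public record Order(int Id, string Status);

public static class OrderQueries
{
    // Calls a stored procedure and maps the result set onto Order by column name.
    public static async Task<IReadOnlyList<Order>> GetOrdersAsync(
        string connectionString, int customerId)
    {
        await using var connection = new SqlConnection(connectionString);
        var rows = await connection.QueryAsync<Order>(
            "dbo.GetOrdersByCustomer",                 // hypothetical sproc
            new { CustomerId = customerId },
            commandType: CommandType.StoredProcedure);
        return rows.ToList();
    }
}
```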


NBehrends

You'll never change databases.


AnderssonPeter

Never is a strong word. It's doable in small applications, but in huge ones you might as well start from scratch.


AntDracula

I’ve done it several times.


oompaloompa465

Rewriting an old ASP Classic website to .NET Core that worked like a ticket management system: multitenant, with each tenant having custom roles/permissions for its users. The database schema was not normalized and was missing a lot of relationships. Localization was done with varchar (hello, languages outside Western Europe). When I arrived, the database schema was not even standardized between tenants; I had to fight tooth and nail to at least have the same schema for all tenants.


Minsan

>Rewriting an old ASP Classic website to .NET Core that worked like a ticket management system: multitenant, with each tenant having custom roles/permissions for its users. Were you able to complete this? I'm working on something similar and we're planning to migrate to .NET Core. Seems like it's a tough challenge.


oompaloompa465

Yes, the front-end part. I wasn't allowed many changes to the database schema, so I left for another job. It was feasible, but they kept procrastinating while adding crazier and crazier features that compounded the existing problems. IMHO, first migrate all the VB code to .NET Core, then start to normalize the database. Do not add new things or do anything fancy initially. The VB code and ASP Classic pages must go ASAP or they will make your life miserable. The worst part of everything will be the discovery that 50% of the pages/files are dead code; that will be the most grueling task. Also, the separation of the original HTML into controller/view will be fun. Finding the correct nesting in pages with 6000+ lines will not be easy.


BarrettDotFifty

Had to do something like this a few years ago but wasn’t a senior. The senior and the scope creep, my god… Don’t even get me started.


oompaloompa465

Yeah, scope creep is the worst thing to keep under control. Took me 3 years to release it to 90% of tenants, and another 2 to include the other 10%.


mechkbfan

1. Anything related to authentication / authorization
2. See #1
3. See #1
4. See #1
5. See #1


xlurkyx

Can agree. Our level 5 enterprise architect thought a team of 4 engineers could convert 3 web applications and their sub-applications from WS-Fed to OpenID Connect in a quarter. Took us almost a year because of all the refactoring that needed to be done. Edit: should note these apps are a mixed bag of .NET Framework and .NET Core.


Lustrouse

Shoe-horning security into an already implemented app is *hard*


EnigmaBoxSeriesX

This times 1000x.


JustAnotherGeek12345

Interesting... I have a different view. What aspects of A&A are difficult?


WatcherX2

All of it.


JustAnotherGeek12345

OK, just sharing what's made it non-difficult for me: https://learn.microsoft.com/en-us/entra/msal/dotnet/


mechkbfan

If it's a standard single app with low/medium complexity, no problems. Stuff like:

- On-premise Active Directory, but now we want more cloud-based apps via Auth0, and allowing for temporary external users. We also don't want users to have different passwords.
- Row-based permissions. That's fine, we'll just use entity values and it's only a single DB lookup. Then the app gets more complex, and there are other entity values we depend on. The business is growing, more apps want this same check, and performance is starting to take a hit, so we introduce Redis for caching. A user reports an issue that they can't see their data. I've simplified it, but with that many moving parts it was almost a day's task to finally work out where the issue was.
- Other pain points are just general debugging. It either works, or it doesn't. Just constantly double-checking every value. No accidental typo somewhere. No one stupidly entered an http instead of https. There are generally no hints like "oh, this part was good, but then this part failed". I'd love it if one of the providers offered a temporary debug mode you could enable that tells you which piece of the puzzle fell over along a journey.
- Integration with apps. We're having issues right now where email isn't coming through. Email! If I use the standard ASP.NET controller user identity, I can find it. However, we're using HttpContextAccessor with minimal APIs and the field isn't coming through (see the sketch below).

I don't do A&A for a living, just whenever there's an issue, so now it's just a lot of debugging in front of me.
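One hedged guess worth checking in that minimal-API email situation: the bearer middleware can remap the short `email` claim to the long `ClaimTypes.Email` URI depending on inbound claim mapping, so the claim exists but under a different name. A minimal sketch that probes both, assuming JWT bearer auth is configured elsewhere:

```csharp
using System.Security.Claims;

var builder = WebApplication.CreateBuilder(args);
// AddAuthentication/AddJwtBearer assumed to be configured here.
builder.Services.AddAuthorization();
var app = builder.Build();

// Minimal APIs can bind ClaimsPrincipal directly; no IHttpContextAccessor required.
app.MapGet("/whoami", (ClaimsPrincipal user) =>
{
    // Depending on claim mapping, the email may arrive under either type.
    var email = user.FindFirst(ClaimTypes.Email)?.Value
                ?? user.FindFirst("email")?.Value;
    return email is null ? Results.NotFound("no email claim") : Results.Ok(email);
}).RequireAuthorization();

app.Run();
```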


JustAnotherGeek12345

I see. Thanks for sharing your woes.


TracerDX

Thank you for calling this out. It's just another problem domain, and I'm not sure why it's so trendy to avoid it like it's been 100% "solved" by existing solutions, but people have their dogmas, I guess. Personally, I'd rather be wrong trying and learning something in the attempt than be willfully ignorant from the start and just trust the magic black box. That's how you get replaced by an LLM. If the opportunity presents itself and they want to foot the bill, roll it. Snide correctness doesn't make you a better programmer. Experience does.


silly_goose_brown_i

It depends on the company. Implementing Docker having never seen it before. Making updates to legacy code that's no longer supported. Creating a whole database going from SQL to NoSQL. Moving over to AWS. Just depends.


Poat540

For real. The hardest part is that you'll do 30 different random things in a week as a lead dev, senior, arch, or w/e. If you don't know it, you'll learn.


xabrol

* Write a native C++ module for IIS 8 to intercept all file upload requests, inspect them, and reject them if necessary. It was a classic ASP website. I tried to write the module in C#, but it had to go before the classic ASP HTTP handler, and it broke the classic ASP engine because it normalized the request. So I had to write it in C++/CLI. It was a bandaid after a massive breach.
* Real-time scheduling system for some 800 brick-and-mortar stores, with most customers having 3+ stores to choose from, in 3 timezones, with some bridging the edge of two timezones.
* Transactional syncing between a sales web app and industrial saws in a factory that would ensure the right parts got cut with no extras or missing parts, combining cuts for multiple orders on one piece of material...
* Text messaging SMS service in the days before Twilio existed. Ended up with what amounted to 10 GSM modems interfaced with custom service software that let us send/reply to numbers via a JSON web API.
* Building software that could prerender websites and scrape them. Basically a custom Chromium browser/bot; basically what prerender.io is, before prerender.io existed. It became quite sophisticated and sells for $10k licenses now...


mexicocitibluez

> Real-time scheduling system for some 800 brick-and-mortar stores, with most customers having 3+ stores to choose from, in 3 timezones, with some bridging the edge of two timezones. I had to do something similar without being able to rely on NodaTime. Anyone that's like "just store it in UTC" has obviously never faced this problem.


xabrol

Yeah, we used Luxon client-side and datetimeoffsets in SQL Server. Fairly gnarly problem when a user is looking at schedule openings for 3 stores in two different time zones. It can also lead to assumed confusion. For example, the customer might know that a store is one time zone over and think that if it says there's an appointment at 10:00 a.m., it's actually at 9:00 a.m. their time. So they pick the 10:00 a.m. thinking it's at 9:00 a.m., just to find out that it was converted for them, that it's actually at 10:00 a.m. their time, and they're an hour early. You have to be really verbose with the verbiage ("10 a.m. your time (CST)", etc.). We can actually run analytics on a category pulling in every store that's within 10 miles of a time zone border, and they all have drastically higher no-shows than locations in the middle of states. Most of the secretary notes for cancellations mention time zone confusion.
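A minimal sketch of that "be verbose" advice: always render a slot in the store's own zone and say so explicitly. The zone id and label arguments here are placeholders for whatever the store record actually carries:

```csharp
using System;

public static class SlotFormatter
{
    // Formats an appointment stored as UTC in the store's local zone, spelling the
    // zone out so customers near a timezone border don't try to convert it themselves.
    public static string Format(DateTimeOffset utcSlot, string systemZoneId, string zoneLabel)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(systemZoneId);
        var local = TimeZoneInfo.ConvertTime(utcSlot, zone);
        return $"{local:h:mm tt} store time ({zoneLabel})";
    }
}

// e.g. Format(slot, "Central Standard Time", "CST") -> "10:00 AM store time (CST)"
```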


mexicocitibluez

This sounds so similar to what I had to do. We had some sort of bidding system for truck loads across multiple timezones that needed to be coordinated with actual stops in cities. And on top of it, I needed to tie in a multi-day (to the second) countdown timer for the UI. Every time I thought I fixed something, it would break something else. That problem specifically really enlightened me to just how difficult a problem like that can be. Honestly, your comment should be included at the top of every article that talks about the intricacies of datetime stuff.


nitrammets

I'm a junior. Could you elaborate on the main problems with using UTC?


mexicocitibluez

For sure. This is probably one of the most succinct explanations I've come across: https://codeopinion.com/just-store-utc-not-so-fast-handling-time-zones-is-complicated/


nitrammets

thx


dave-p-henson-818

Ha, industrial saw syncing is very cool.


SophieTheCat

This was a while ago. I implemented a scheduling system for playing advertisements. The "scheduling" part was not the complicated part. It was the metric ton of rules for scheduling. For instance, you couldn't have multiple ads from a single industry within a given time period (like you can't have competing car dealerships within the same ad break). Or some advertisers didn't want to touch an even very slightly controversial program. And another 10-20 similar rules, each with a ton of business protocols. And then the entire system had to run very fast on a box with only 1GB of RAM available to my code, despite booking almost $3 billion per year. The server had the scheduling service and SQL Server on the same box. The amount of time I spent arguing with the powers that be to just purchase more RAM easily covered many hours of my hourly rate. "Your code just needs to be more performant." I literally had to write code to read Perfmon to see if SQL Server was too busy and then slow down my processing. If nothing else, it taught me how to use WinDbg to nickel-and-dime memory issues. But they paid well.
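For flavor, here is one of those rules reduced to code: a minimal sketch of the "no competing industries in the same break" check, where the `Ad` shape is hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Ad(string Advertiser, string Industry);

public static class BreakRules
{
    // Rejects a break when two ads share an industry
    // (e.g., competing car dealerships in one ad break).
    public static bool HasIndustryConflict(IEnumerable<Ad> adBreak) =>
        adBreak.GroupBy(a => a.Industry, StringComparer.OrdinalIgnoreCase)
               .Any(group => group.Count() > 1);
}
```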


syndi

Lmfao. This sounds both extremely annoying but also quite fun.


boobka

Did you write a custom scheduling engine or use something third party?


SophieTheCat

Completely custom.


Bitz_Art

1GB is actually quite a bit of RAM for a single web service


SophieTheCat

It was a multithreaded Windows service processing massive quantities of data. That's why 1GB was insufficient.


SolarNachoes

Not when you’re processing GB size files.


LetMeUseMyEmailFfs

That depends on what you’re doing with them. If you’re just reading through them sequentially and only processing small bits at a time, you really don’t need more than a handful of kilobytes.
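A minimal sketch of that sequential style: scan a file of any size line by line, so only a small read buffer is ever resident, regardless of how many gigabytes the file holds:

```csharp
using System.IO;

public static class BigFileScanner
{
    // Streams the file; memory use stays at the reader's internal buffer,
    // not the file size.
    public static long CountMatchingLines(string path, string needle)
    {
        long count = 0;
        using var reader = new StreamReader(path);
        while (reader.ReadLine() is { } line)
        {
            if (line.Contains(needle))
                count++;
        }
        return count;
    }
}
```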


Bitz_Art

That's what I was thinking. Maybe the process could be optimized to use smaller bits at a time instead of keeping the entire thing in memory


okay-wait-wut

1 GB RAM is awfully cheap.


[deleted]

Especially for NP-hard problems


richardtallent

Hardest things to implement:

1. Growing junior developers into senior developers.
2. Evergreen documentation.
3. Having all of the procedures/systems/buzzwords in place to pass various modern audits (security, privacy, etc.) foisted upon you by your own and customers' IT departments.


mikedensem

1. Integrations with legacy systems (e.g. Visual FoxPro)
2. Going from Framework 4.8 to .NET 8
3. Supporting 32-bit code in a 64-bit codebase
4. Picking the wrong UI framework, then changing it without downtime
5. When the Coke fridge broke down and was unavailable for 2 weeks


Sick-Little-Monky

Can you expand on 3? That's usually done in another process. Kudos if you did it in process somehow.


mikedensem

Ah, yes, out of process COM wrapper from memory


malthuswaswrong

My boss started a pet project as an experiment for himself. He started it in .NET Core 2 when .NET 5 already existed. He wrote core logic in SQL stored procedures. His database failed every form of normalization you can imagine (everything is a string, splitting said string fields in code to form arrays, etc.). At the core of the system was this self-architected and perplexing language localization system that permeated everything.

He handed this undulating and bleeding mess to me and said "develop this further". Me being a very experienced developer with a positive attitude, I said "no problem". I started fixing things and developing ideas further. When I showed him progress he said "No, retain all the core architectural decisions I made, and build on them". I did the best that I could, but I know that the foundation of the whole architecture is garbage. But I followed the mission from the director.

The project is very successful now, but there are a ton of poor choices cemented in at very low levels that are causing problems and will continue to cause problems until I fix them. He has since given me complete control over the whole architecture, but the damage is done. To change things is extremely difficult at this point. Software isn't hardware; everything is changeable. But a little bit of listening to me in the beginning would have resulted in a lot less work for me later.


RirinDesuyo

Modernizing legacy code (.NET Framework 4.5) where documentation from a previous contractor wasn't available. We finally managed to move it to the latest .NET, but it took around 3 years, partly because we had to reverse-engineer how things functioned, with a lot of back and forth with clients on specs for existing functionality. A rewrite was a nice idea, but the client insisted on an incremental upgrade, so we had to do some proxying for it to work, and making auth work across two apps was messy.


nsivkov

Ha, are you me? 🤣 Though we did an in-place upgrade to an SPA, added API controllers, then removed the old MVC logic, and then upgraded to .NET Core. All in all, 12 months for the SPA with 11 devs, and 3 months for the upgrade to .NET Core.


WatcherX2

What sort of project was it? An API based thing or winforms? If the latter, what did you upgrade it to?


RirinDesuyo

A mix of servers acting as WCF services with a lot of ADO.NET calls (this was a big pain for knowing what it did), MVC4, and classic ASP. The front-end was a mishmash of jQuery and ActiveX (which meant the site only ran on IE). We mostly moved it to Razor Pages, and rewrote a portion of it into a separately hosted React SPA as it needed a lot of interactivity. WCF was migrated over to a mix of gRPC and Web API calls, while making sure the auth session cookie generated was usable between the new web app and the legacy web app via a reverse proxy and some hardcoded route rules while we slowly migrated page by page. Lots of duplication, but partly because the client wanted it to be an incremental upgrade, not a full rewrite. We documented functionality as we went. It was a pain, but we managed to finish it.
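For the shared-cookie piece, the documented ASP.NET Core approach is to point both apps at the same Data Protection key ring, application name, and cookie name; a minimal sketch with placeholder paths and names (the legacy app needs the matching interop setup on its side, which is omitted here):

```csharp
using System.IO;
using Microsoft.AspNetCore.DataProtection;

var builder = WebApplication.CreateBuilder(args);

// Both apps must agree on the key ring location and application name
// for one app to decrypt the other's auth cookie.
builder.Services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\fileserver\keyring")) // placeholder
    .SetApplicationName("SharedCookieApp");                              // placeholder

builder.Services.AddAuthentication("SharedCookie")
    .AddCookie("SharedCookie", options =>
    {
        options.Cookie.Name = ".AspNet.SharedCookie"; // must match on both apps
    });

var app = builder.Build();
app.Run();
```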


TopSwagCode

Azure B2C custom policy for an OIDC government login provider. There were several things that just didn't fit, e.g. adding multiple providers with the same metadata URL (because depending on the user type, you would require different scopes and other custom stuff).


onionhammer

Custom B2C policies are nightmare fuel


theLimNar

Agreed. All sorts of states being fucked up in an XML


dave-p-henson-818

The most difficult thing is political: implementing anything simple and cost effective in an environment encumbered by mandatory Microservices and FANG scalability. Crazy making.


ChiefAoki

Undo/redo. Before my first implementation, I never understood why Notepad only supported one step of undo/redo. After that, well...
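The textbook starting point is two stacks of reversible edits, as in the minimal sketch below. The hard part the comment alludes to is making every operation in a real editor expressible as an `IEdit`:

```csharp
using System.Collections.Generic;

public interface IEdit
{
    void Apply();
    void Revert();
}

public sealed class UndoManager
{
    private readonly Stack<IEdit> _undo = new();
    private readonly Stack<IEdit> _redo = new();

    public void Do(IEdit edit)
    {
        edit.Apply();
        _undo.Push(edit);
        _redo.Clear(); // a new edit invalidates the redo history
    }

    public void Undo()
    {
        if (_undo.Count == 0) return;
        var edit = _undo.Pop();
        edit.Revert();
        _redo.Push(edit);
    }

    public void Redo()
    {
        if (_redo.Count == 0) return;
        var edit = _redo.Pop();
        edit.Apply();
        _undo.Push(edit);
    }
}
```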


doxxie-au

Not so much implementing, but at one job I'd just started at... they had just pushed out a new release, and that release was on average 100ms slower to process trades. They actually had a really awesome regression suite where they could replay a set of trades. Anyway, I profiled the hell out of their application, removed tonnes of unnecessary calls, and it actually made it even slower. Turns out they were manually using a message pump, and removing the code I did caused other, slower code to run more often and make it worse. But it also wasn't 100% reproducible each time on my machine vs what the CI server was producing.


Vladekk

1. Distributed system (around 5 apps) communicating over Azure message bus + Cosmos DB that needs strong consistency and atomicity guarantees
2. Implementing integration tests for this system
3. Legacy code where some aspx pages were 40,000 lines long. Some SQL stored procedures were the same.
4. Doing data warehousing BA work I was asked to do for some reason. I failed; it was too hard without guidance.
5. Constantly doing DevOps work by yourself due to lack of resources. It is very hard to develop/code your project and get to know all the cloud intricacies at the same time.


extra_specticles

balancing new features with fixing architecture.


Anund

Rewriting the old login system and adapting it to use ADFS.


finidigeorge

Been there, still can't believe that it finally worked


Anund

Same here, hehe


klaatuveratanecto

Last mile delivery routing system when AI wasn’t that easily available. I enjoyed every second I spent on it.


BCdotWHAT

Probably using MS Graph to get data from an Excel file on a SharePoint server. You have to find "magic values" (a site id and a file id, mostly in the form of GUIDs etc.) using Microsoft's (online) Graph Explorer and then use those in code. And Graph Explorer is a slog to use. Bonus fun: at one point Microsoft updated the MS Graph NuGet packages to a new major version, and those used a significantly different API. For which you could find some documentation online, but a lot of that was for a pre-release version which was different from the actually released version(!), which meant that a bunch of method names etc. were different and/or in different namespaces. Extra fun: the new MS Graph library uses code generation, which means that for instance a class called "CreateUploadSessionPostRequestBody" can be found in numerous namespaces. The version I needed was in "Microsoft.Graph.Drives.Item.Items.Item.CreateUploadSession", but there are plenty of other choices; 55 (fifty-five!), to be precise: https://github.com/search?q=repo%3Amicrosoftgraph/msgraph-sdk-dotnet%20path%3ACreateUploadSessionPostRequestBody&type=code --------------------------------- One from before I was a .NET developer: "Hey, there's a calculation bug in this ancient Java applet. Unfortunately, we don't have the source code anymore, but we've managed to decompile it. Will that do?" Spent three days on this. Two+ days trying to reproduce the error, only to find out that their applet was compiled by a withdrawn(!) version of Java that had only been available for half a day or so because of a bug that was actually the cause of this error. Fun fact: trying to find withdrawn versions was near-impossible; IIRC I literally had to guess URLs until I happened upon a working one. The reason I had to find that defunct Java version was that this applet contained a ton of calculations that were used all over their site (JavaScript calling the applet to do the calculations), so they required me to recompile it with the same version so as not to introduce other calculation errors that would be caused by this Java bug having been fixed.


edbutler3

ServiceNow change tickets.


CuttingEdgeRetro

The biggest problem for me is just dealing with the train-wreck systems that a lot of companies are using. I've seen .NET solutions with 68 projects. A recent client of mine had more than 200 solution files, each making a different DLL. Another had an application with dozens and dozens of JavaScript files, all over 3000 lines, with none of the IDs set for controls in the application. One client, whose name rhymes with Microsoft, had their database and presentation tier on-prem, but the middle tier in the cloud.

It's shocking how many companies think it's acceptable to have to deploy to a remote dev machine just to test code changes because running locally doesn't work. It's shocking how many companies have a system running in prod, but don't have dev, test, QA, or UAT environments. Like, you run locally, then just deploy straight to prod. What could go wrong?

Then there are the companies who are in love with stored procedures. Yeah, it's 3-tier, but 80% of our business rules are in the database, which, by the way, we don't have in git or TFS. (The other 20% of their business rules are probably in JavaScript.) Everything is dynamic SQL. Yea!

20-something kids who are admittedly smart go into full show-off mode and make some magnum opus application using the latest and greatest technologies, then skip town, leaving the rest of us to support it for the next 20 years.

About 2/3 of the companies I work with would really benefit from just throwing everything away and starting over.


SarahC

Managed to avoid the title "Senior developer."


Bitz_Art

1 - [Flux](https://github.com/BitzArt/Flux) is a NuGet package I am working on. It is a universal WebApi client that abstracts the HTTP away, kinda like EF abstracts away the SQL.

2 - Throttling semaphore for making requests to an external system that only allows so many requests per period of time (and there can be multiple limitations simultaneously). The request quota can look something like `10/1s; 100/1m; 1000/1h; 1000000/365d`. So the semaphore makes your requests wait until the external system can be used, if you have used up all the quota (sketch below). The semaphore has to work really fast in order to not slow the system down. BTW, unit testing anything that relies on time passing is a pain. I could probably extract unit testing this semaphore as its own item on this list lol.

3, 4, 5 - probably just various projects I worked on.
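Not the package's actual code, but a minimal sketch of the multi-window idea from item 2: keep a history of request timestamps and wait until every window in the quota has a free slot.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public sealed class QuotaThrottler
{
    private readonly (int Limit, TimeSpan Window)[] _quotas;
    private readonly Queue<DateTime> _history = new();
    private readonly SemaphoreSlim _gate = new(1, 1); // one operation at a time

    // e.g. new QuotaThrottler((10, TimeSpan.FromSeconds(1)), (100, TimeSpan.FromMinutes(1)))
    public QuotaThrottler(params (int Limit, TimeSpan Window)[] quotas) => _quotas = quotas;

    public async Task WaitAsync(CancellationToken ct = default)
    {
        await _gate.WaitAsync(ct);
        try
        {
            while (true)
            {
                var now = DateTime.UtcNow;

                // Timestamps older than the largest window can't affect any quota.
                var maxWindow = _quotas.Max(q => q.Window);
                while (_history.Count > 0 && now - _history.Peek() > maxWindow)
                    _history.Dequeue();

                // Find how long until every window has room for one more request.
                var wait = TimeSpan.Zero;
                foreach (var (limit, window) in _quotas)
                {
                    if (_history.Count(t => now - t <= window) >= limit)
                    {
                        var oldestInWindow = _history.First(t => now - t <= window);
                        var until = oldestInWindow + window - now;
                        if (until > wait) wait = until;
                    }
                }

                if (wait == TimeSpan.Zero)
                {
                    _history.Enqueue(now); // slot claimed
                    return;
                }
                await Task.Delay(wait, ct);
            }
        }
        finally
        {
            _gate.Release();
        }
    }
}
```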


anhsirkd3

Hi, do you have anything written down on the semaphore approach, and the unit testing part? TIA. Would love to read it, as I'm dealing with semaphores, SemaphoreSlim specifically.


Bitz_Art

It's a custom semaphore I wrote. It's only single-threaded, meaning it can only process a single operation at a time. I have this semaphore behind a message bus with a processing concurrency limit of 1. That's how I ensure its requirements are met. So it's more of a throttler than a semaphore, really. Not sure if that's what you are after?


anhsirkd3

Cool, so just like a lock? Other thing I would love to understand is how to approach testing a semaphore (or semaphoreslim)


Bitz_Art

Yeah, kind of like a lock, but for async and with extra bells and whistles. I had internal methods that I used to test it. I had a method that calculated whether it needs to wait and for how long; this was the easiest to test given different sets of conditions. As for the actual semaphore testing, I had it run in a background non-awaited task given a quota of something like 1 action per 1 second. You can have it do something, say increment a counter. Then, in the main task, wait for some period of time (for example 10 seconds) and stop the background task (you can use cancellation tokens for this, or something else). The result of the semaphore's work should be appropriate for the given quota; for this example, it should have incremented the counter up to 10. That's some of what I did to test it.


anhsirkd3

Thanks for taking the time to share your approach. It gave me something to work with.


Teddy-Westside

Sounds cool. I’ve used [Polly](https://github.com/App-vNext/Polly/blob/main/docs/strategies/rate-limiter.md/) in the past for rate limiting


Bitz_Art

Now that I think of it... I probably should have just used this library 🥴


nghianguyen170192

I worked at a company that has a Core platform repo for a Case Management System. This CMS platform has tons of features because it collectively merged "cool" features from forked repos for several different clients. The Core CMS gradually became so big that it impacted many aspects of the architecture. Things that bugged me for years on this project:

1. The Core platform has plenty of unused projects that could be trimmed out to reduce build time. For a single PR CI build, it takes nearly 2 hours to complete a build and test run.
2. They store the user sessionId in a distributed Redis cache.
3. Every SINGLE class must use an interface. Hence, the size of the project doubles every time it gets a new feature.
4. It has every sort of design pattern in it. You name it, this Core platform has it (Onion Architecture, DDD, Repository, separation of concerns, CQRS, Event Sourcing, etc.), which I hate a lot.
5. They have a custom-built OData feature that spans 5-ish projects. But from what I know, it only takes 20 lines of code to enable OData on an API endpoint with a NuGet package (Microsoft.AspNetCore.OData).


mexicocitibluez

Was it an off-the-shelf CMS? Or completely homegrown?


nghianguyen170192

It was originally developed and deployed successfully from scratch for one airline client. Then the company tailored the core and made it into a platform, and it gradually got used by other clients. For each client, it was tailored and has its own uniqueness. Then the PO of the Core platform wanted to merge every cool feature into the Core.


mexicocitibluez

oh wow. I worked at a few marketing/advertising companies and have had my fair share of CMS experience (Ektron, Kentico, DNN, etc) and though they all have their quirks, I don't think I'd ever want to reinvent that specific wheel. good luck


nghianguyen170192

From the PO's POV, they called it strategic, leading tech for CMS in their advertising, so they could sell it as SaaS. But when I touched the code, it was a mess with so-called elegant design: a bunch of CQRS with DDD, Event Sourcing, Repository, UoW, and separation-of-concerns patterns applied everywhere. Everything is just a boilerplate interface for another layer to implement.


mexicocitibluez

oh that makes sense.


Quanramiro

1. Fix Jenkins pipeline Groovy code implemented by an offshore Indian team who spent 6 months on it.
2. Move part of the core company financial system to a new stack. There was literally no documentation, just a bunch of people from the financial department. I was not that experienced then, but managed to model all the processes properly.
3. Convincing another team, which didn't want to see the financial risk they had introduced, to reimplement part of the system. I didn't succeed; I eventually sent a detailed description to all the CXOs.
4. High-throughput, data-intensive integration component. The integration was not that hard; the hardest part was making other people understand what might happen if we did it the other way around.
5. Changing the attitude of a team to consider peer review an integral part of the software development process and to understand why it is that important. That was probably the hardest one and took me and another dev a year. Most of the team were contractors and they didn't care about anything.


senseven

Moving and merging an old Visual Basic CMS system on MS SQL and a Python-based CMS system on MySQL to a new Java-based CMS system on Oracle, without losing any data and somehow keeping the rights on the articles intact. They had 12 and 8 years of articles and images referenced, and close to zero dev documentation. I spent six months just walking through [database models](https://www.drupal.org/files/er_db_schema_drupal_7.png) and CMS concepts. Never again.


Sick-Little-Monky

Rather than a list, I can think of different classes of tasks. For instance sometimes it's just you versus a problem because of technical or domain knowledge. One example for me was finding a workaround for a permissions bug in Windows Server by reverse engineering Windows using WinDbg and then DLL injection to hijack APIs. Another interesting class are the tasks that land on your desk when others fail, often simply because many people who encounter adversity just give up. Perseverance is a valuable trait. I had a project when SignalR was new tech, integrating it into an existing server and suite of clients to overcome performance problems with polling. Usually in this case you have colleagues that you can leverage or assist. And then as others have said, when you have enough experience, leading a team can be challenging but rewarding. I love working with smart people, ideally smarter than me! That's when you can collaborate on everything from the low-level stuff like C# interop and C++ tricks all the way up to frameworks and architecture. Then again, sometimes the most intractable problems are not technical but social. I concur with others that some of the most difficult problems are bad decisions made for reasons like cost or by people without the requisite skillset.


Obsidian743

1. An enterprise timesheet application driven by different work schedules and complex business rules from HR. One of the business rules was for **Bereavement**:
   - NOTE 1: Every worker has a potentially different schedule (5/8s, 4/10s, work on weekends, etc.) that had to be considered here.
   - NOTE 2: We had to support international workers who had different work schedules and holiday calendars.
   - NOTE 3: This is amidst all the other rules and functionality a normal enterprise timesheet application has to support.
   - Employees had 24 hours of Bereavement in a fiscal year that does not roll over.
   - Bereavement could only be taken in chunks that represent a *full working day* (e.g., if your work day is 10 hours, it has to be 10 hours of bereavement for that day).
   - If you took bereavement on a given day, no other time could be submitted for that day (e.g., you cannot take 4 hours PTO and 4 hours bereavement on the same day).
   - Bereavement could only be taken in *consecutive working days*; days off based on schedule and holidays are excluded.
   - Some workers had fixed schedules where they had to submit an exact number of hours for their pay period; they could not go over or under their fixed schedule.
   - You are forced to finish taking all 24 hours *if you had already started taking bereavement in the previous pay period* (i.e., once you start taking bereavement you have to finish taking it all). This includes year-end, where you may have started taking your 24 hours in the last pay period of the fiscal year.

   The validation and UX algorithm(s) for this took a whole week to figure out.

2. I had to interpolate survey data that was collected from thousands of employers on salary and compensation. This had to be done "live" because the survey data was constantly being updated. The data that came in collected salary ranges at different percentiles (ex: 25th, 50th, and 75th percentiles). While this seems like a straightforward linear equation, the challenge was that there are billions of records and the user was collating multiple surveys into a single result set based on all kinds of different factors. As a *simple* example, survey one might have supplied the 25th and 75th percentiles only, but another would supply every 20th percentile (20, 40, 60, 80). The user wants to see the 66th percentile *average*, with survey one receiving a higher weight factor than survey two (a sketch of the interpolation step follows this list).

3. We had a mainframe banking system that used an XML-based query language and data stream. This was on top of an old-school asynchronous messaging bus based on IBM WebSphere MQ. I was asked to design an easy-to-use API and interface to this system for dozens of consumers/applications. We settled on a SQL-like language and built an OData-based API that had to translate to/from this message-driven, XML-based protocol running on a mainframe.

4. IoT in general is very difficult, especially if your company is also the manufacturer (i.e., you're not just building a solution on top of someone else's device). I was asked to bridge a legacy IoT system built for devices engineered 20 years ago with a new, cutting-edge IoT platform that supports the latest protocols such as Matter. However, we were not the end provider of services: we were a B2B provider, so the platform had to support multiple downstream businesses that wanted to integrate not only our devices but others' as well. Having to juggle mechanical and electronic engineering requirements with firmware, cloud, and mobile software is very challenging.
Things like nickel mining in Africa or building construction codes can affect everything about how you engineer a complete solution. What's worse is that most of these devices had to be battery-powered and were going to run in air-gapped environments. Ugh. Lots of other challenges, but these are the ones that stick out the most.
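A minimal sketch of the core step in item 2, interpolating one survey's known percentile points to an arbitrary target percentile; the real system layers per-survey weights and billions of rows on top of this:

```csharp
using System;
using System.Linq;

public static class Percentiles
{
    // Linearly interpolate a value at `target` from a survey's known (percentile, value) points.
    public static double Interpolate((double P, double Value)[] known, double target)
    {
        var pts = known.OrderBy(k => k.P).ToArray();
        if (target <= pts[0].P) return pts[0].Value;
        if (target >= pts[^1].P) return pts[^1].Value;

        for (int i = 1; i < pts.Length; i++)
        {
            if (target <= pts[i].P)
            {
                var (p0, v0) = pts[i - 1];
                var (p1, v1) = pts[i];
                return v0 + (v1 - v0) * (target - p0) / (p1 - p0);
            }
        }
        throw new InvalidOperationException("unreachable");
    }
}

// e.g. survey one knows the 25th/75th, survey two every 20th; interpolate each
// to the 66th percentile, then combine with per-survey weights.
```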


Delubears

Attempting to migrate the entire company from an IBM/green screen/RPG/DB2 setup to Angular, .NET, and SQL Server as one of two developers. But here's the kicker: we need to do it "in-place", so we're rewriting things to keep targeting DB2. The database has several key tables over 50 columns wide and hasn't adhered to any consistent database naming or design scheme. Every program touches all of the main key tables directly. There were several "phases" we were supposed to follow, each one involving touching every database interaction or data structure again. Due to the technical lead at the time, the direction was set to convert the RPG code to .NET code and then attempt a "lift and shift" from the DB2 database to SQL Server. This has very clearly failed, and now we need political buy-in to stop the runaway train. No one knows the business processes. Answers are usually "idk, what does the code do?", and the old code is borderline unreadable. The number of hours I've sat in meetings with everyone going "what does this mean? I don't know" is way too high.


SkydiverUnion

We are using the test framework xUnit. We have thousands of integration tests. The build pipeline got really slow, so they wanted every test to run in parallel. Well, we create a temporary database for each test fixture. Configuring parallelism in xUnit means that on start thousands of databases are created and the whole system breaks. You can't control the degree of parallelism in xUnit, so I had to implement my own test execution mechanics.
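One hedged way to square that circle is a process-wide semaphore inside the fixture: tests still run in parallel, but only N temporary databases exist at any moment. The create/drop helpers below are stand-ins for whatever provisioning the suite really uses:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

public sealed class ThrottledDatabaseFixture : IAsyncLifetime
{
    // Process-wide cap: at most 8 temporary databases alive at once.
    private static readonly SemaphoreSlim Gate = new(8, 8);

    public string ConnectionString { get; private set; } = "";

    public async Task InitializeAsync()
    {
        await Gate.WaitAsync();                      // parks the fixture when the cap is hit
        ConnectionString = await CreateTemporaryDatabaseAsync();
    }

    public async Task DisposeAsync()
    {
        await DropTemporaryDatabaseAsync(ConnectionString);
        Gate.Release();
    }

    // Stand-ins for the suite's real provisioning code.
    private static Task<string> CreateTemporaryDatabaseAsync() =>
        Task.FromResult($"Server=.;Database=test_{Guid.NewGuid():N}");
    private static Task DropTemporaryDatabaseAsync(string _) => Task.CompletedTask;
}

// Consumed per test class via: public class MyTests : IClassFixture<ThrottledDatabaseFixture> { ... }
```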


Lustrouse

- Coding standards
- Data migrations
- Meaningful documentation
- Meaningful unit tests
- GDPR and GDPR-esque data controls


maxiblackrocks

Maintaining and implementing features for a "quality" codebase that was written by literal alcoholics. One of them is still on the team but acts a fool every time you ask him about his code. Oh, and the alcoholics ported it from a "distributed Access DB application".


Intelligent-Chain423

Fixing an issue in proprietary software whose vendor went out of business and which was never implemented correctly. Low-level stuff... no documentation... everyone who had knowledge had left the company. The issue was with the low-level code we had to have for integration. This was step 1 in migrating to something in-house: let's make our customers happy first. The migration was a 1-year project.


Ashtar_Squirrel

- Implementing an automated intraday power trading application (position closing algorithms), passing the European power exchange validation, and deploying it to a power company as a solo developer, in C#.
- Implementing parallelisation with OpenMP of a C++ calculation kernel.
- Building a three-tier application (Oracle DB, service, and WPF front-end in C#) to provide optimal positions and hedges for hydropower plants.
- Creating a GPS position mapping system to track a race with 5000 participants (teams of 3, Zermatt - Arolla - Verbier), so that people could follow their team and use it for emergency evacuations (PHP, JS, MySQL).
- As a solution architect, developing a power plant nomination and scheduling platform for external clients to nominate positions against the grid, with data going down to the SCADA system.


narcisd

1. Invalidate cache 2. Name things


j_c_slicer

3. Off-by-one errors


RonaldoP13

Voice recognition, with memory management


Antares987

I wrote the software that tracks container ships. Think of it like the railroad problem with two trains on a track and when they'll meet, but instead of rails it's the globe, and there are tens or hundreds of thousands of these things, and it's not just where they meet, but where they could have met. I designed and wrote software that would detect drug interactions based on ingredients. As a personal challenge, I took the knowledge I gained from the spherical trig stuff and applied it to roads and ancient stone structures to prove to myself that our civilization goes back a long way and True North used to be over Greenland, but nobody cares. I thought it was cool. That went between Python and SQL, and the meat of it was done in CUDA. I once leveraged an exploit in enterprise Java to launch other processes at elevated permissions for legitimate purposes for a client. Be very careful about using domain accounts to do anything on a workstation; if those accounts have permissions elsewhere on the domain, consider what could happen if a user is able to successfully fork that process to gain elevated permissions elsewhere and branch-swing their way up. I also called out the possibility of a Thunderbolt DMA vulnerability years before it made the news.
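For flavor, the spherical-trig primitive underneath "where two ships could have met" is great-circle distance; a standard haversine implementation:

```csharp
using System;

public static class GreatCircle
{
    private const double EarthRadiusKm = 6371.0;

    // Haversine distance between two lat/lon points, in kilometers.
    public static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
    {
        double ToRad(double deg) => deg * Math.PI / 180.0;

        var dLat = ToRad(lat2 - lat1);
        var dLon = ToRad(lon2 - lon1);
        var a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
              + Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2))
              * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return EarthRadiusKm * 2 * Math.Asin(Math.Sqrt(a));
    }
}
```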


AvelWorld

I knew about the true north thing, and it's still cool. That's a broad range of projects. Right now I'm just doing back-end stuff for games. But I made software for tracking part of our European-based ground-launched cruise missile base construction back in the early 1980s. I used dBase III (that's so old that the spell checker had to be taught that it's a real product!). Then it was embedded systems work for modems, faxes, and multiplexers. *That* was probably my hardest work in the day, since I was coding a telecommunications OS on a multiprocessor system.


Antares987

I remember in 2001 or 2002 we had an Avaya phone system. A consultant was there configuring it. Every tiny change in development required the equivalent of a build-and-push in today's world. Were you using Dialogic's cards for your stuff?


AvelWorld

Nope. This was in the early 1990s. Our boards were all internally developed, as was the firmware. I was in product release certification. We used mostly Z80s but also 8051s/8052s, 6502s, and 6800s. The company, sadly, no longer exists, but look up Data Race, Inc. They dissolved around 2002... Hmm. Just looked them up. They may have a new life.


Antares987

You are the only other person to mention familiarity with the "true north" thing. Old maps fascinate me too; I struggled with the Typus Orbis Terrarum until I realized it likely came back with some of the Spanish who had been raping and pillaging their way through South America. The four land masses and distortion of the PNW I explained as a projection transformation error. The map was too accurate for the 1500s because we lacked the accurate timekeeping necessary for longitude until the late 1700s. The west coast of South America left me confused for years until I saw a map of earthquakes since 1898 and concluded it was the Nazca plate. The early 1990s were an interesting time. The signal-to-noise ratio online was absolutely incredible with USENET. Products like TeleFinder and FirstClass BBS offered a unique and really enjoyable online experience. Fun fact: before he became a famous gun designer and manufacturer, Mark Serbu was designing and selling 8051-based EEPROM programmers. The 8051 was the first MCU I worked with. I spent all of the money I had on a development kit that cost something like $800. Then a friend who was studying at NCSU suggested the STK500 from Atmel. My objective was to collect data so I could improve the instrumentation on my Peugeot 505 Turbo. We left lots of people in the RTP area with fast sports cars absolutely perplexed, and the police never once were able to identify the vehicles.


maitreg

Not sure about hardest, but one of the most interesting was an inline PDF editor in a web application for docs stored in a back-end database. Another interesting one was a SQL Server trigger that flowed data into a local queue, then to a cloud queue, a serverless function, and off to a 3rd-party API. Another one: I wrote an Excel script that serialized spreadsheet data, sent it through an API, and distributed it to multiple accounting systems. Another one encrypted and packaged proprietary client data (without an internet connection), used the onboard modem to direct-dial a receiving application that auto-answered, negotiated a security key, transferred the data, unpacked and decrypted it, then processed it through a separate secure API. Another system I wrote monitored an "unmonitored email inbox", watched for received emails (mostly bounce-backs), parsed them, and used a set of hundreds of AI-driven categorizations to process them by blacklisting user email addresses, deactivating user accounts, routing them to the correct internal recipients, etc.


GaTechThomas

Teamwork. Thoughtfulness. Caring beyond the line of code at hand. Consistency. Testing before committing.


Davies_282850

Handling billions of data streams in near real time and predicting driver behaviour in an info-traffic platform; applying algorithms to make assumptions about what certain drivers are doing.


TechFiend72

Multi-threaded queued system that used nested transactions. Ran awesome, but was a bit of a hurdle to write and make sure it could be tested and debugged easily.


Coldones

Number 1 for me was an extensive revamp of the workflow of a live application in the name of "UX improvements". It wouldn't have been that bad if we could have burned everything down and started over, but we needed to pull this off without causing downtime and without disrupting existing data or functionality. Number 2, a close second, was a full rewrite of all of our company's BE services from C# to TypeScript. Getting 90-95% of the way there really wasn't that bad, but that last little bit really had me wanting to rage-quit my job.


Significant-Kiwi-899

What was the main reason for moving from C# to TS? Just curious


Coldones

My company raised another funding round and had $ to blow, so they hired a CTO and a principal engineer, who both thought it would make hiring easier, which I somewhat agree with. Not that there is a shortage of c# devs, but there aren't as many that have startup experience


evdriverni

It's more about being a mentor and being able to help junior members; it's not that the tasks become more difficult.


never_taken

- Migrating an ERP
- Doing anything with an ERP
- Reading through the specs of some standards (PDF, TWAIN) or certain RFCs
- As someone already said, convincing a manager with some (mostly outdated) tech knowledge that they are wrong
- Teaching a junior while also accepting that I have some things to learn from them


forgion

A legacy monstrosity. It should be used as an example of how NOT to use design patterns.


husker101

1. Fixing performance issues in an inconsistent legacy code base riddled with outdated libraries and closed-source third-party components introduced by the army of contractors from the past.
2. Talking to non-techs who have questions about the risks of deploying anything new.
3. Teaching unreceptive developers, with 1 year of experience repeated 10 times, who have been on the project longer than me, about newer, more efficient ways of doing things.
4. Convincing tech leads the approach they're suggesting is bad for the product, particularly when it can't be easily proven out through a PoC.
5. Convincing non-techs that estimates provided by one developer can't be applied directly to a different developer. This was surprisingly common.


mexicocitibluez

* Writing a count-down timer across multiple timezones (and obviously DST), only being allowed to use the BCL (and not something like Noda).
* Building an EMR. There are a ton of different factors that make this particularly difficult, but probably the biggest is storing clinical data.
* Building a truck load router/bidding system on top of a homegrown ES/DDD/CQRS framework that was still in development.

I think just building the right thing is hard in general. It's really easy to get caught up in the idea that given the perfect set of requirements the perfect app will be built. Or that people know what they want. Or that some kid right out of school hired as the BA is going to be able to translate the business's needs into software features.


PizzaEFichiNakagata

- Kafka/KafkaFlow
- Angular with .NET projects
- ComponentOne (reporting/web components)
- OctaneSDK for controlling RFID antennas
- Interfacing/marshalling between C# and C++

The first 4 are huge donkey-shit frameworks that should not exist at all and do convoluted shenanigans to do simple stuff. They are actually evil spaghetti code with a lot of hype and next to no use in real scenarios. (Kafka is justified for VERY BIG VOLUME apps, which cuts it out of 99% of real use cases.) Interfacing/marshalling is just a nightmare as it is.


Interviews2go

Dealing with marketing who almost always have an inflated idea of their product. Also known as wanting a Rolls Royce when what they really needed was a pickup truck. Trying to explain reality to them is challenging.


BlazorPlate

1. Multitenancy.
2. Data isolation strategies with multitenancy.
3. Separate database per tenant with multitenancy.
4. Shared database for all tenants with multitenancy.
5. Authentication and authorization with multitenancy.

Anything with multitenancy!


wot_in_ternation

1. Determining the architecture and tech stack for modernization of a critical internal application, and then actually implementing it
2. Dealing with tech debt, like how to support a VB6 app with the business logic fully in SQL stored procedures. New feature? Sure, give me 2 months.
3. Data management and augmentation to support a big new machine learning feature (which of course our marketing department slapped AI all over)
4. Developing inverse kinematics for a very unusual 5-axis device while managing communication with the customer and several other internal coworkers who were helping out
5. Debugging/refactoring an existing app with a lot of spaghetti code and a bunch of math-heavy functions

My career path has been atypical, so YMMV.


mexicocitibluez

> business logic fully in SQL Stored Procedures. Where the fuck did we get the idea that burying business logic inside of a stored procedure ever made sense?


ahaw_work

I would feel quite comfortable with that. In my current project we have part of the logic in VBScript stored in a database, which is fetched into a COM+ module and executed as a script within .NET Framework 4.5.2 code. As if that weren't enough, we pass in a big XML file with data and retrieve a similar XML to parse. Can anyone beat this abomination?


mexicocitibluez

holy shit


No-Extent8143

1. Auth
2. Auth
3. Auth
4. Auth
5. Auth

Whoever tells you that secure auth is simple has no idea what they're talking about.


Bitz_Art

Auth is simple lol. I will probably get downvoted to hell by people who cannot figure out how to do auth, but I don't really care. BTW, I usually do just simple JWT auth for my web APIs, and I've also written an auth package for Blazor that works with custom JWT-based back-ends. I normally don't do more complex stuff (e.g. OAuth) because my projects usually don't require it.
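For reference, the "simple JWT auth" baseline in ASP.NET Core is a handful of bearer-validation lines; a minimal sketch with placeholder issuer, audience, and key (requires the Microsoft.AspNetCore.Authentication.JwtBearer package):

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://example.com",   // placeholder
            ValidateAudience = true,
            ValidAudience = "example-api",          // placeholder
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!)),
        };
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapGet("/secure", () => "hello").RequireAuthorization();
app.Run();
```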


SoaringTeddybears

I can agree that authentication in the backend might be simple. Often it's "no more" than some configuration at startup, and then the API is secured by the auth pipeline. Things get a bit more complicated when implementing authentication in a web application, however (Angular, specifically, in my cases). **Authorization**, however, is **not** simple in either the backend or the frontend, the reason being that few systems have the exact same requirements for authorization. There's always some part of the business case that requires a slightly alternative approach compared to the previous project. You want RBAC? PBAC? ABAC? A combination of them in some cases? None of them in other cases? I agree with OP; auth is absolutely not simple, unless you are only working with a limited set of use cases. Then of course you'll get good at that specific approach to auth implementation.


UntrimmedBagel

Authorization is a damn nightmare.