Why were external contractors using open source technologies that are not well supported by Wikimedia's stack and know-how, given that there are many open source alternatives we can support much better? Redis and Postgres are good options for a project starting from zero, and the only options for some usages (e.g. PostGIS), but there are technologies the Foundation knows better, or, in the case of Redis, has worked for years to eliminate from our infrastructure, mostly to reduce the proliferation of technologies that serve roughly the same use cases.
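On the PostGIS point: for some geospatial needs there really is no equivalent in the stack we support. A minimal sketch of the kind of query involved, assuming a hypothetical `places` table with a `geography` column (all names and credentials below are placeholders):

```python
# A sketch only: the kind of PostGIS query that has no direct equivalent
# in our supported MariaDB stack. Table and column names (places, geom,
# name) and the connection string are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=gis user=app")  # placeholder credentials
with conn, conn.cursor() as cur:
    # ST_DWithin on geography types runs an indexed "within N metres"
    # radius search around a point given as (longitude, latitude).
    cur.execute(
        """
        SELECT name
        FROM places
        WHERE ST_DWithin(geom,
                         ST_MakePoint(%s, %s)::geography,
                         %s)  -- radius in metres
        """,
        (-0.1276, 51.5074, 1000),
    )
    for (name,) in cur.fetchall():
        print(name)
```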
Why were the contractors not told to use alternative stacks that are well known and well supported by Wikimedia employees, who will eventually have to support the stack anyway?
I know they are open source, and they are great tools. Using something like S3 instead of OpenStack Swift I can understand, and it won't be as problematic to change if needed, as the sketch below illustrates. But Postgres and Redis won't be easy to swap out of an existing application (as our own years of migration work on T212129 showed), and adopting them runs completely counter to what the rest of the organization has been working towards for years, when obvious alternatives exist.
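On the S3 point specifically: Swift can expose an S3-compatible API (the s3api middleware), so S3 client code tends to port with little more than a configuration change. A minimal sketch under that assumption; the endpoint, credentials and bucket below are placeholders:

```python
# A sketch only: the same boto3 client code talks to AWS S3 or to an
# OpenStack Swift cluster exposing the s3api middleware, so migrating is
# mostly a matter of pointing at a different endpoint. All names below
# (endpoint URL, keys, bucket) are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://swift.example.org",  # hypothetical Swift s3api endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

# Standard S3 calls work unchanged against either backend.
s3.put_object(Bucket="example-bucket", Key="report.txt", Body=b"hello")
body = s3.get_object(Bucket="example-bucket", Key="report.txt")["Body"].read()
print(body)
```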
Are people aware that we have to duplicate our staffing, services (monitoring, backups, support) and automation every time a new technology is introduced, no matter how good it is? Were people in SRE, Security, Performance, etc. consulted about this?
If this is a small project, why was a new technology used, given that a small project has lots of flexibility about its underlying tech? And if this is a large project, why was a new technology used, given that it will take lots of effort to migrate to a supported one later?