TL;DR: It's a bad idea, but maybe not?
Why is this question of any significance? To most it's a simple answer to the question "Can I rely on this Linux distribution for a high-reliability service?" If yes, go for it; otherwise, move on.
I'd like to argue that Arch as an experimental staging server could be very useful: something in between a dev and a production server. The Arch User Repository is certainly the main attraction, but there's more.
I use an Arch distro to quickly prototype and test my code or a service in a production-like environment without the hassle of acquiring software or setting up the environment. I don't always find a service packaged for Docker, and sometimes my code isn't containerised yet when I'm just building a proof of concept.
So what do I mean by a staging server? In essence, if I intend to use a distro that is not the same as my production server, can it be considered staging? I hear the resounding "NO" in your mind, and for the most part I agree: I can never fully account for the package base, versions and toolchain. But the rest can be easily devised, specifically on Arch Linux. I can pick a specific kernel, and any package available on the target distro will almost certainly be available on Arch. I can test against any subset of the system by installing only what my service needs.
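As a rough sketch of what "a subset of Arch" means in practice, provisioning can be this small. The package names (`nginx`, `postgresql`) are just examples of what a service might need, and `linux-lts` swaps the rolling kernel for the longer-supported series:

```shell
# Hedged sketch: provision an Arch box with a pinned kernel series and
# only the packages a hypothetical service needs. Package names are
# illustrative; install whatever your target distro actually ships.
provision_minimal() {
    # The LTS kernel tracks a stable upstream series, closer to what a
    # Debian- or RHEL-style production box would run.
    sudo pacman -S --noconfirm linux-lts linux-lts-headers
    # Install only the service's direct dependencies, nothing more.
    sudo pacman -S --noconfirm --needed nginx postgresql
}
```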
The only downside is working with bespoke software that expects a specific toolchain or package set. If I faced that issue, I'd end up turning to Docker anyway, and begrudgingly accept it even in production.
Now comes the question: how would one cleanly migrate from staging to production? I can't employ the same scripts or processes, and therein lies the core issue with my take on staging environments. So I employ not one but two servers: one running the target distro, treated as production, and another running Arch Linux, treated _like_ production, with the intention of making all my mistakes there. This compels me to think about the different complexities of exposing my code to the public.
Depending on the type of service, that could mean websites, databases, proxies, file servers or even VPNs. Consciously thinking about how my service should handle replication, load or even security brings a lot of value to my process. But then again, for professionals this is very much redundant: if you know exactly what to do, and you have been doing it a ton, then this is not for you. I'd say you are wasting your precious time.
But if you are like me, only now cultivating the thought process necessary for a safe and secure experience on the internet, then consider yourself lucky to be here.
I personally like the struggle of wondering how to keep myself ready for possible scaling: setting up replication across multiple databases, working with reverse proxies for resilient load balancing and actually investing in CDNs (I prefer Cloudflare) to best serve the needs of the service at the lowest delivery cost.
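For the reverse-proxy piece, the shape of a resilient setup is simple enough to sketch. This minimal nginx config (the addresses and ports are made up) balances requests across two application instances:

```nginx
# Illustrative only: two hypothetical app instances behind nginx.
upstream app_servers {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;  # nginx round-robins between these by default
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```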
What I mean is, it's fun and _educational_ to waste your time on bad distro choices.
So what can go wrong? Here's to running Arch on a public-facing server for the last three years. Nothing bad happened; these were some of the precautions I took:
- I made sure to pay for the backup service on my VPS.
- I never ran `pacman -Syu` unless I needed to, and took backups when I did.
- Moved services to production only once they were battle-tested and ready for public purview.
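The second precaution above can be sketched as a small routine. Here `vps_backup` is a placeholder for whatever snapshot command your provider actually offers, not a real tool:

```shell
# Hedged sketch of "backup, then upgrade". `vps_backup` stands in for
# your VPS provider's snapshot tooling.
backup_then_upgrade() {
    # Snapshot first, so a broken upgrade can be rolled back.
    vps_backup create --label "pre-upgrade-$(date +%F)" || return 1
    # Only then pull the full rolling update.
    sudo pacman -Syu --noconfirm
}
```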
I also made sure to run services directly with Docker when I could. All in all, it made adding new services, whether ones I'd personally use or ones I needed to deploy, quicker and easier, without polluting my production server.
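Running a service straight from Docker keeps the Arch host itself clean. A minimal example of the pattern (the image tag, container name and password are illustrative):

```shell
# Hedged sketch: try out Postgres in a throwaway container instead of
# installing it on the host. All flags are standard `docker run` options.
run_throwaway_postgres() {
    docker run --detach --rm \
        --name pg-scratch \
        --env POSTGRES_PASSWORD=changeme \
        --publish 5432:5432 \
        postgres:16
}
```

With `--rm`, stopping the container removes it entirely, so nothing lingers on the host once the experiment is over.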