I operate as Almar Klein Scientific Computing, an independent software engineer located in the Netherlands. I combine freelance work with open source software projects, and since 2018 I have also been working on my own SaaS product, TimeTurtle.
I love open source! One open-source project, Asgineer, has already spun out of the work for TimeTurtle, and more are likely to follow, e.g. related to database access and monitoring. I also intend to feed a significant portion of the profits back into the open-source projects that TimeTurtle relies on.
A while back, I was looking for an affordable and reliable way to run a handful of small websites and my new SaaS product. For ethical reasons, I was reluctant to use AWS and strongly preferred a European provider.
I tried a few providers, but these either lacked a good story for reliability and backups, had clumsy user interfaces, or were too expensive for my taste. When I found out about UpCloud, I was quickly convinced by their transparency and reliable service. With other providers, it was often much less clear what would happen in the case of a hardware failure, for instance.
Hosting on UpCloud
My cloud infrastructure is nothing fancy and mostly relies on practical setups. I use the container-optimised CoreOS along with Docker for most things. For example, the TimeTurtle app runs on a cloud server with CoreOS and Docker containers. The two most important containers running on it are the Python-based server for the app, and a container running Traefik, a reverse proxy and SSL endpoint.
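As a rough sketch, a Docker Compose file for such a Traefik-plus-app setup could look like the following. The image name, domain, and email are placeholders, not my actual configuration:

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      # Discover back-ends from Docker labels
      - --providers.docker=true
      # Listen for HTTPS traffic and obtain certificates via Let's Encrypt
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      # Read-only access to the Docker socket so Traefik can see containers
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: registry.example.com/app:latest  # placeholder image
    labels:
      # Route this hostname to the app container, with TLS
      - traefik.http.routers.app.rule=Host(`app.example.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=le
```

The nice thing about this approach is that routing is declared on the container itself via labels, so adding a new back-end does not require touching the proxy's own configuration.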
I develop the app using a GitLab repository, which is connected to another UpCloud cloud server that runs the CI/CD: an Ubuntu host with GitLab Runner. GitLab uses this server to perform the unit tests, as well as functional tests using Docker containers. Upon a successful test, the same container is automatically deployed to beta.timeturtle.app. It can then be verified manually and later promoted to the main website with a single button.
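A minimal `.gitlab-ci.yml` expressing this flow might look roughly like this (the `deploy.sh` script and image name are illustrative stand-ins for the actual deploy mechanism):

```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    # Build the container and run the test suite inside it
    - docker build -t app:$CI_COMMIT_SHA .
    - docker run --rm app:$CI_COMMIT_SHA python -m pytest

deploy_beta:
  stage: deploy
  script:
    # Automatically deploy every passing build to the beta site
    - ./deploy.sh beta app:$CI_COMMIT_SHA

deploy_production:
  stage: deploy
  # The "single button": this job only runs when triggered manually
  when: manual
  script:
    - ./deploy.sh production app:$CI_COMMIT_SHA
```

Because the very same container image that passed the tests is deployed first to beta and then to production, there is little room for surprises between environments.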
This makes it easy to do near-zero-downtime deploys and keeps the system always up-to-date. Traefik works great as a reverse proxy and load balancer, while the actual back-ends run as Docker containers. All of my cloud servers use MaxIOPS for the best performance, and the critical machines also have automated backups turned on. In addition, not everything has to be strictly business: I also have a small Minecraft server for my 9-year-old.
One key advantage for me has been that UpCloud makes strong guarantees about uptime and redundancy. Together with automatic backups, I have little to worry about. This gives me the confidence that my services are always fast and always up. It is also reassuring to know that services can be quickly scaled to match the requirements.
Deploying on UpCloud cloud infrastructure has proven an excellent choice for hosting my services. The combination of solid cloud infrastructure, a speedy cloud server, Docker containers, and Traefik for routing results in a very flexible setup for me. I can add containers that make daily local backups of the databases, or containers that serve content at another domain or subdomain.
I also use this flexibility on another server, which hosts several small websites and a few experimental projects. These all benefit from the stable infrastructure, but I can also impose limits on the containers, e.g. by setting the maximum amount of RAM. This safeguards the system even in the case of a memory leak in a container running an experimental service: a container that exceeds its memory limit is killed and restarted automatically.
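To give an idea, running an experimental service with such a safeguard comes down to two flags on `docker run` (the image name here is just an example):

```shell
# Cap the container at 256 MB of RAM; if it leaks past that,
# the kernel OOM-kills it and Docker brings it straight back up.
docker run -d --name experiment \
  --memory 256m \
  --restart always \
  experiments/some-service:latest
```

This way a misbehaving experiment can crash and recover on its own without ever starving the well-behaved services on the same machine.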
To be honest, I’m currently focused on running the present services for the foreseeable future. I may well repeat the same setup for other services or new projects down the road.
I’ve heard nice things about serverless, which might bring a lot of flexibility with less hassle if UpCloud should offer such a service.