I have quite a lot of Uplink code that my app requires to function properly. I want it to be up 24/7, so I can’t just run it on my local machine. I also don’t want to move it into the app’s server code, because then I run into performance issues due to its size and imports. I can’t yet afford the Business plan that enables persistent server modules, and moving imports into function calls only gets me so far in terms of performance optimization (I still have to run an expensive import every time I call the function that needs it), plus it feels wrong.
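For context, this is the deferred-import pattern I mean. A minimal sketch, where heavy_lib is a made-up stand-in for whichever expensive dependency the function needs:

import anvil.server

@anvil.server.callable
def crunch_numbers(data):
    # Deferred import: heavy_lib (a placeholder name) is only imported when
    # this function is actually called, not when the module loads. Without
    # persistent server modules a fresh process may serve each call, so the
    # import cost is still paid on every call.
    import heavy_lib
    return heavy_lib.process(data)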
I’m interested in recommendations for easy and free or affordable hosting options for my Python Uplink code. Unfortunately, PythonAnywhere doesn’t do the trick.
Surely I’m not the only one with this requirement. What are other people using?
Google Compute Engine has a free tier. You still have to give them your credit card, but when you run one of the lowest Linux VM/CPU/RAM combinations 24/7, the discounts actually add up to $0.00 a month.
A Raspberry Pi costs about $40 and can run 24/7 at home. You can remote into it using VNC from your local network, or SSH into it. If you forward an SSH port through your router, you can securely access it from anywhere using a terminal program like PuTTY, and with VNC over an SSH tunnel you can even remotely and securely control the GUI.
Edit: Oh, I forgot to mention your IP address. Use something like duckdns.org on your Pi so you can still reach it from anywhere in the world even when your home IP address changes.
For my Uplink code, I’ve dedicated a 12-core machine here. A single core easily handles our current compute loads, so it’ll be a while before I need anything more.
Currently, bringing the PC down for updates is very quick. Should even that delay become intolerable, I have several alternate machines in the area that can load-share (or take over completely whenever I update the main one).
To deal with power issues, every machine has a sizeable battery backup.
To deal with Microsoft-induced Patch Tuesday reboots, I will be using AlwaysUp, which runs my Uplink code (and any related programs) as Windows services.
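For reference, the script that AlwaysUp keeps alive is just the standard long-running Uplink pattern. A minimal sketch, with the key string and the function body as placeholders:

import anvil.server

# Connect to the app with its Uplink key (placeholder value here).
anvil.server.connect("YOUR-UPLINK-KEY")

@anvil.server.callable
def do_heavy_work(payload):
    # ...whatever the app needs this machine to do...
    return payload

# Block forever so the service wrapper sees one long-running process.
anvil.server.wait_forever()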
My uplinks execute long-running Excel, CAD, or other software scripts that require the user to be signed in with a GUI, don’t work on Windows Server, and can’t run two concurrent sessions on the same machine.
So I have three 2-core Windows VMs running my uplinks. Three machines are enough for now, but I can add more in a few minutes by cloning one of them and changing just a few settings.
They are configured to log the user in at reboot and run the uplinks as the signed-in user. Unfortunately you can only do that by creating a user without a password, so our IT guy worked to isolate them and make sure they are not a vulnerability for our network.
Having three of them allows me to go to one, close the uplink windows, do whatever updates I need, and restart the uplinks. Then I can send some test requests, and if something goes wrong, I can shut the uplinks down again. This lets me use one machine to test the updates while the other two handle the production load.
I have a monitor app that shows which machine does what. If I see only two working machines and some jobs in the queue, I know that one of the three machines has a problem and I can investigate.
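The uplink side of that monitoring is simple: each job handler includes the machine’s hostname in what it returns. A rough sketch (process_job and its payload are made-up names, and the real work is stubbed out):

import platform
import anvil.server

@anvil.server.callable
def process_job(payload):
    result = do_the_actual_work(payload)
    # Tag the result with this VM's hostname so the monitor app can show
    # which machine handled which job and spot one that has gone quiet.
    return {"machine": platform.node(), "result": result}

def do_the_actual_work(payload):
    # Stand-in for the long-running Excel/CAD automation.
    return payload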
I use droplets, too. I have most of my software installed by scripts I’ve built and it’s a pretty smooth process so I’ve not been compelled to stray into other territories.
I don’t use docker because I just haven’t taken the time to understand it properly. My one brief encounter ended with me turning purple trying to get the thing to connect to the outside world, and I’ve never tried since.
Apps look interesting, but again there’s no compelling reason for me to change how I do things at this time, so I’ve never really looked too deeply at them.
Depends. Sometimes I use monit (I run exclusively on Linux), and sometimes I run the scripts every minute from cron, and the scripts use a lockfile to ensure only one is running.
That second one might sound odd, but it suited a purpose once, and out of laziness I just kept doing it …
Almost all my backend functions are on Uplink for the performance improvement, and I’m using Render (background worker service) to deploy them. It works like a charm and gets updated every time there is a push to the main branch.
"""
with FileLock() as lock:
if lock.acquired:
# Perform operations that require exclusive access
else:
print("Could not acquire lock")
# To specify a custom lock file:
with FileLock('/path/to/my/lockfile.lock') as lock:
# ...
"""
import fcntl
import os
import sys
import tempfile
class FileLock:
def __init__(self, lock_file=None):
self.lock_directories = ['/var/lock', '/var/run', '/tmp']
self.file_handle = None
self.acquired = False
self.lock_file = self._get_lock_file(lock_file)
def _get_lock_file(self, lock_file):
if lock_file:
return lock_file
# Try to use the name of the script that included this library
script_name = os.path.basename(sys.argv[0])
lock_name = f"{script_name}.lock"
# Try default directories
for directory in self.lock_directories:
if os.access(directory, os.W_OK):
return os.path.join(directory, lock_name)
# If can't use any of the default directories, create and use ~/lock
home_lock_dir = os.path.expanduser("~/lock")
if not os.path.exists(home_lock_dir):
os.makedirs(home_lock_dir)
return os.path.join(home_lock_dir, lock_name)
def __enter__(self):
try:
self.file_handle = open(self.lock_file, 'w')
fcntl.lockf(self.file_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
self.acquired = True
except IOError:
self.acquired = False
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if self.file_handle:
if self.acquired:
fcntl.lockf(self.file_handle, fcntl.LOCK_UN)
self.file_handle.close()
if os.path.exists(self.lock_file):
os.unlink(self.lock_file)
return False # Propagate exceptions
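A script launched from cron then just wraps its work in the lock and exits immediately if another copy already holds it. For example, assuming the class above is saved next to the script as uplink_lock.py:

import sys
from uplink_lock import FileLock  # the class above, saved as uplink_lock.py


def run_uplink_tasks():
    # Stand-in for the real work the cron job performs.
    pass


if __name__ == "__main__":
    with FileLock() as lock:
        if not lock.acquired:
            sys.exit(0)  # another copy is already running; let it finish
        run_uplink_tasks()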
To find out if something is already running, I have always just used a simple custom check: ask the operating system for its list of running processes and see whether the script name, e.g. os.path.basename(__file__), appears more than once in that text.
So on Linux you would check: ps aux
In the Windows CLI it would be: wmic path win32_process get name,commandline
In Windows PowerShell… ask ChatGPT, I guess.
If your scripts all have the same file name, just put whatever unique gibberish you want as an argument when you launch the file, and look for that argument in the running processes from inside the script instead.
Then you tell your cron (or whatever) to run the script every 5 minutes or however often you like, and it will just exit if it is already running.
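In Python, that check boils down to something like this. A quick sketch of the simple approach using the Linux ps aux variant (the Windows version would parse the wmic output instead):

import os
import subprocess
import sys


def already_running():
    # Ask the OS for the process list and count how many lines mention this
    # script's file name. Our own process accounts for one occurrence,
    # assuming the script was launched with its file name on the command line.
    me = os.path.basename(__file__)
    output = subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout
    return sum(me in line for line in output.splitlines()) > 1


if already_running():
    sys.exit(0)  # another copy is already running, so just bail out
# ...the actual work goes here...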
Edit:
There are much better ways to do this, this is just a very simple one.
You could even run a separate launcher script instead that uses subprocess.Popen to check for, and then launch, the process that should not be duplicated.
The point is it can be as complicated or as simple as you like it to be.
I run some of my uplinks as Windows services or Linux services on industrial edge computers.
For my own use, I run uplink code either on a Raspberry Pi, in a Hyper-V VM, or in a Proxmox CT container.
In other words, I prefer compiling the uplink code to an executable when I have to distribute it.
Perhaps I am old school, but I don’t like cloud services. But IF I were to go with a VPS, I trust one.com (super good support), Cygrids ($4 per month, a Swedish company) or Bahnhof VPS (premium VPS, also a Swedish company).