ValueError: signal only works in main thread of the main interpreter

I’m wildly guessing here, very wildly, because I don’t know how Anvil works internally, how Scrapy works, or exactly what you are doing.

The Anvil server runs on the main thread of a Python process. When it receives a new request, it processes it on another thread (either an idle thread already waiting in a pool or one created on demand). At any point in time there are many threads running: the main one, managing the whole shebang, one thread per request being processed, one per background task, and so on.
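
If you want to check this for yourself, a quick diagnostic along these lines (a rough sketch; the function name is made up) should show that your server code is not running on the main thread:

```python
import threading

import anvil.server


@anvil.server.callable
def which_thread():
    # Report which thread this server call is running on.
    current = threading.current_thread()
    return {
        "thread_name": current.name,
        "is_main_thread": current is threading.main_thread(),
    }
```

Calling `anvil.server.call('which_thread')` from a form should come back with `is_main_thread` set to `False`.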

Scrapy does something similar: its main thread runs the engine that coordinates the whole crawl and hands out the individual scraping jobs.
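
For reference, this is roughly how Scrapy is normally started as a standalone script (the spider and the URL here are only a made-up minimal example). Somewhere during this startup Scrapy registers shutdown signal handlers, which is presumably what is blowing up on Anvil:

```python
import scrapy
from scrapy.crawler import CrawlerProcess


class QuotesSpider(scrapy.Spider):
    # Minimal illustrative spider: scrape the quote text from a demo site.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        for text in response.css("div.quote span.text::text").getall():
            yield {"text": text}


process = CrawlerProcess()
process.crawl(QuotesSpider)
process.start()  # blocks until the crawl finishes
```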

Now imagine the main thread of Scrapy running inside one of the Anvil request threads, and the main thread of Anvil deciding to kill that thread, for example because it exceeds the 30-second timeout. Scrapy would not appreciate that!

I’m guessing that Scrapy wants to make sure that its main thread, the one that manages the whole crawl, is the main thread of the Python process, because it wants to make sure everything is under control. The error message points at one concrete reason: Scrapy tries to register OS signal handlers (so it can shut down cleanly on Ctrl-C or a kill signal), and Python only allows signal handlers to be registered from the main thread of the main interpreter.
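
You can reproduce the exact same error outside of Anvil and Scrapy with a few lines of plain Python, just by trying to register a signal handler from a thread that is not the main one:

```python
import signal
import threading


def install_handler():
    # signal.signal() may only be called from the main thread of the main
    # interpreter; anywhere else it raises the ValueError from your traceback.
    try:
        signal.signal(signal.SIGTERM, lambda signum, frame: None)
    except ValueError as exc:
        print(f"Failed as expected: {exc}")


worker = threading.Thread(target=install_handler)
worker.start()
worker.join()
```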

If that’s the case, I’m afraid you are out of luck. I don’t think you can run a second Python process on the Anvil server alongside the Anvil server process itself (unless you are running the open source server on your own machine).

I’m sure some of the things I mentioned are wrong, but chances are I got the big picture right. You can use it as a starting point for some further research.
Good luck!