The example in the documentation is very slow because it uploads one row at a time.
I’m in the US and with my persistent server on my dedicated plan, a round trip takes 0.2 seconds.
30,000 rows at 0.2 seconds per row would be 6,000 seconds, or 100 minutes. Without a persistent server or dedicated plan, or if I moved to the West coast, or if those calls actually did something like writing to the database, it would go up to hours.
Instead I have Excel macros that run nightly and upload thousands of rows in seconds.
They use an HTTP endpoint instead of an Uplink connection, because HTTP endpoints can be used from any language, including VBA, but that has nothing to do with the speed.
They are fast because they read a few thousand lines, convert them to JSON (that's the slow part in VBA) and send them in one shot as the payload of an HTTP call. The server adds them all to the database, rinse and repeat.
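For context, here is a minimal sketch of what the server side of that pattern can look like in Anvil, assuming a data table named `my_table` and an endpoint path `/bulk_upload` (both names are just placeholders, not my actual app):

```python
import anvil.server
from anvil.tables import app_tables

# Hypothetical endpoint: receives a JSON array of row dicts in one POST
# and writes them all to a data table within a single server call.
@anvil.server.http_endpoint("/bulk_upload", methods=["POST"])
def bulk_upload(**params):
    rows = anvil.server.request.body_json  # list of dicts, one per row
    for row in rows:
        app_tables.my_table.add_row(**row)  # 'my_table' is a placeholder name
    return {"inserted": len(rows)}
```

The whole chunk travels in one request, so the 0.2-second round trip is paid once per few thousand rows instead of once per row.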
My macros were calibrated to take about 20 seconds per call: tables with a few columns and little data would load chunks of 5,000 rows, while tables with dozens of columns or larger amounts of data per row would load chunks of a few hundred. Then accelerated tables came out, and the 20 seconds per chunk went down to 2 seconds.
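My client side is VBA, but the same chunking idea looks like this in Python, purely as an illustration; the endpoint URL, chunk size, and timeout are placeholders you would tune the same way I tuned my macros:

```python
import requests

ENDPOINT = "https://my-app.anvil.app/_/api/bulk_upload"  # placeholder URL

def upload_in_chunks(rows, chunk_size=5000):
    # Wider tables (many columns or big values per row) would use a smaller
    # chunk_size so each call stays within a comfortable time budget.
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        resp = requests.post(ENDPOINT, json=chunk, timeout=120)
        resp.raise_for_status()
```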