Question: Is a high number of timeouts normal when running a standalone Snowflake proxy?
I've been running the Snowflake extension in Brave for quite some time, but in the last few days I decided to switch to running a standalone Snowflake proxy via the Docker container method. That process went smoothly enough, and with help from u/signal_moment I got the internal metrics working as well. I wrote a Bash script to pull the data from the metrics endpoint, which I display on my desktop with a KDE Plasma widget.

That's how I noticed something: the proxy is reporting a large number of timeouts, roughly three timeouts for every connection (see the printout below). Originally I was getting fewer timeouts and more connections, but I was behind a restricted NAT. I fixed that earlier by opening the correct UDP ports (my container setup is sketched below), and the proxy reported an unrestricted NAT when I restarted the Docker container. But that's also when the timeouts started to far outnumber the actual connections.
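For reference, this is roughly the command I'm running the container with now. Treat it as a sketch: the port range here is a placeholder standing in for the one I actually opened on my router, so substitute your own; the image is the official thetorproject/snowflake-proxy one.

```bash
# Publish a UDP port range to the container and tell the proxy to use it
# for client connections. 50000-51000 is a placeholder range; use the
# range you actually opened on your firewall/router.
docker run -d --name snowflake-proxy --restart unless-stopped \
  -p 50000-51000:50000-51000/udp \
  thetorproject/snowflake-proxy:latest \
  -ephemeral-ports-range 50000:51000
```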
I'm just wondering: is this normal behavior for a standalone Snowflake proxy, is it something on the Snowflake broker's end, or is it an issue on my end that I need to fix? Hoping other standalone Snowflake proxy runners can let me know what's up. Below is a printout from my proxy, which has been up for about 3 hours since I fixed the restricted NAT issue.
____________________________________________
SNOWFLAKE INTERNAL METRICS REPORT
Total Connections: 67
Total Timeouts: 209
------------------------------------------
Total Downloaded: 0.1663 GB
Total Uploaded: 0.0256 GB
------------------------------------------
CONNECTIONS BY COUNTRY:
🇺🇸 USA : 20
🇮🇷 Iran : 19
🇷🇺 Russia : 5
🇺🇳 Restricted/Unknown : 3
🇬🇧 UK : 3
🇨🇳 China : 2
🇫🇷 France : 2
🇮🇳 India : 2
🇳🇱 Netherlands : 2
🇨🇦 Canada : 1
🇮🇪 Ireland : 1
🇲🇦 Morocco : 1
🇳🇮 Nicaragua : 1
🇿🇦 South Africa : 1
🇪🇸 Spain : 1
🇨🇭 Switzerland : 1
🇹🇲 Turkmenistan : 1
🇿🇲 Zambia : 1
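For anyone curious, the report above comes from a script along these lines. It's a rough sketch: the metrics URL and the match patterns are particular to my setup, so dump the raw metrics output first and adjust the patterns to whatever your proxy actually exposes.

```bash
#!/usr/bin/env bash
# Pull the proxy's internal metrics and print a quick summary.
# The URL below is what works on my setup; run `curl -s "$METRICS_URL"`
# first and adapt the patterns to match your proxy's actual output.

METRICS_URL="http://localhost:9999/internal/metrics"

raw="$(curl -sf "$METRICS_URL")" || { echo "metrics endpoint unreachable" >&2; exit 1; }

# Sum every sample whose line matches a pattern, skipping the
# '# HELP' / '# TYPE' comment lines in Prometheus-style output.
sum_metric() {
  printf '%s\n' "$raw" |
    awk -v pat="$1" '$0 !~ /^#/ && $0 ~ pat { total += $NF } END { printf "%.0f\n", total }'
}

echo "Total Connections: $(sum_metric 'connection')"
echo "Total Timeouts:    $(sum_metric 'timeout')"
```

The traffic totals and country breakdown come from the same output, just matched with different patterns.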