I use mitmproxy for interactive browsing a lot. That is, I run an mitmdump instance continuously and use it from multiple browsers on multiple computers. Recently this has become a rather frustrating experience. I suspect it's related to the new proxy core, or it could be related to Debian upgrading to Python 3.9? Sadly, it turned out to be non-trivial to roll that back.

Initially after mitmproxy startup everything works smoothly, but then it suddenly stalls: many requests hang for a very long time until mitmproxy returns a response. Sometimes this starts almost immediately, sometimes after hours. It possibly recovers by itself, but usually I give up waiting and restart mitmproxy. (Most of the time the requests do seem to finish, but only after many minutes.) This is probably most easily reproduced with resource-heavy sites. In netstat, I don't see mitmproxy holding many connections open to the destination servers, so I suspect it's an internal limitation. (I typically check with `$ netstat -ntp | grep EST.*python | grep -v :8080 | awk '`)

`asyncio_utils.create_task` is a thin wrapper around `asyncio.create_task`, and the latter schedules the coroutine to be run on the event loop immediately. However, the error message indicates that the coroutine is never run. We're not shutting down cleanly, so occasional "not awaited" warnings are to be expected; you just happened to hit a particular race condition. Our event loop is blocking somewhere, which is not good. It would be interesting to see if this warning was a fluke or if you get such warnings repeatedly.

- Your SIGUSR1 output is from mitmdump, not mitmproxy - correct?
- Do you have any custom addons or non-default settings?
- There are no tracebacks in your event log?
- You mentioned that it "hangs for some remote servers". Assuming it hangs for , does `curl -x localhost:8080 -k` from a separate terminal still work?
- What's your output for SIGUSR2? If we have a blocking event loop, this might indicate the root cause.

Counting repeated lines in the event log with `| sort | uniq -c | sort -rn`:

    689 AttributeError: 'NoneType' object has no attribute 'state_machine'
    481 :
    188 Error in WebSocket connection to :443: WebSocket Error: ABNORMAL_CLOSURE
    170 During handling of the above exception, another exception occurred:
     44 AttributeError: 'NoneType' object has no attribute 'bio_write'
     37 AssertionError: Unexpected event type at HttpStream.state_done: Expected no events, got RequestEndOfMessage(stream_id=1).
     16 Error in WebSocket connection to :443: WebSocket Error: ABNORMAL_CLOSURE
      6 Error in TCP connection to :5228: Multiple exceptions: Connect call failed ('108.177.126.188', 5228), Network is unreachable

BUT I am still seeing the curl issue as described above. That is, when mitmdump is handling some number of requests in parallel, clients are often waiting for responses much longer than the request should take, with mitmdump often not even being connected to the target server yet. By now I can say that it does not depend on the amount of requests to the same server, but rather simply on the total amount of requests - while panning around openstreetmap, all other clients see slower responses as well.
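The "event loop is blocking somewhere" diagnosis can be illustrated with a minimal asyncio sketch (plain Python, not mitmproxy code): a single handler that does synchronous work on the loop delays every other concurrently running task, which matches the observation that heavy traffic from one client slows responses for all clients.

```python
import asyncio
import time


async def blocking_handler():
    # Hypothetical handler that blocks the event loop with
    # synchronous work (e.g. blocking I/O) for 0.3 s.
    time.sleep(0.3)


async def cooperative_handler():
    # Cooperative equivalent: yields control back to the loop.
    await asyncio.sleep(0.3)


async def client_latency(handler):
    # Run an unrelated 10 ms "client request" alongside the handler
    # and measure how long it really takes to complete.
    async def client():
        start = time.monotonic()
        await asyncio.sleep(0.01)
        return time.monotonic() - start

    latency, _ = await asyncio.gather(client(), handler())
    return latency


async def main():
    blocked = await client_latency(blocking_handler)
    smooth = await client_latency(cooperative_handler)
    return blocked, smooth


if __name__ == "__main__":
    blocked, smooth = asyncio.run(main())
    print(f"next to blocking handler:    {blocked:.2f}s")
    print(f"next to cooperative handler: {smooth:.2f}s")
```

Next to the blocking handler, the 10 ms client operation takes roughly the full 0.3 s, because the loop cannot resume it while `time.sleep` holds the thread.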
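As a side note on the `create_task` point: this sketch (plain asyncio, not mitmproxy's `asyncio_utils`) contrasts a coroutine object that is dropped without being scheduled, which produces the "never awaited" RuntimeWarning, with one handed to `asyncio.create_task`, which the running loop does execute.

```python
import asyncio
import warnings


async def job():
    return 42


async def main():
    # A coroutine object that is created but never awaited or
    # scheduled emits "coroutine 'job' was never awaited" when
    # it is garbage-collected.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        job()  # created, then immediately dropped
    unawaited = [w for w in caught if "never awaited" in str(w.message)]

    # create_task schedules the coroutine on the running loop,
    # so it actually runs (unless the loop itself is blocked).
    result = await asyncio.create_task(job())
    return len(unawaited), result


print(asyncio.run(main()))  # → (1, 42)
```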
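The error tally shown in the thread can be reproduced with the same `sort | uniq -c | sort -rn` pipeline. This sketch builds a tiny sample log inline; in practice the input would be your mitmproxy event log rather than the `events.log` placeholder used here.

```shell
# Build a small sample event log, then count identical lines,
# most frequent first (the pipeline from the report above).
printf '%s\n' \
  "AttributeError: 'NoneType' object has no attribute 'state_machine'" \
  "AttributeError: 'NoneType' object has no attribute 'state_machine'" \
  "AttributeError: 'NoneType' object has no attribute 'bio_write'" \
  > events.log
sort events.log | uniq -c | sort -rn
```

`uniq -c` prefixes each distinct line with its count, and the final `sort -rn` orders the result numerically in reverse, so the noisiest error ends up on top.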