
Resolved

Debugging is slow, laggy and finally collapses

description

Hi,

I'm experiencing very problematic debugging issues. I can't put my finger on a possible reason, but it seems that after a while, stepping between commands (with F10) becomes very slow and laggy (up to 2 seconds per command, even the most basic ones) until finally VS collapses.

I'm running on an 8GB memory machine, VS is using only 425MB, and I have plenty of available memory.
I did notice, though, that the CPU usage is very high.
Any idea what is causing this behavior and, more importantly, the collapse?

comments

pminaev wrote Jan 2 at 5:59 PM

This is not a mode of failure that we're familiar with, so this is likely some corner case that is triggered by the specific code that you're debugging (in particular, this is not #2063 - the problem described there is that the debugger is slower than it could be in general, but it doesn't have this effect of incrementally slowing down). We'll need to investigate this one to determine and fix the root cause.

When you say that "finally, VS collapses", can you clarify what exactly happens? Does it simply crash, with the usual Windows error reporting dialog? If so, can you please look in the Windows Event Log under Applications, and see if there are any error entries corresponding to the crash that contain exception information?

Also, when it comes to the error reporting dialog, do you have it configured to send the crash information to Microsoft (this is the default setting, but it will only send it if you don't cancel it)?

One other thing to look at would be the debugged process itself - how much memory does it use before the crash? If this is really an out-of-memory problem caused by something in the debugger growing excessively, then it could actually be happening inside the part of the debugger that runs inside the debuggee (the visualstudio_*.py scripts). That said, we should still handle termination of the debuggee gracefully rather than crash VS.

slishak wrote Jan 7 at 1:00 PM

I've been experiencing a similar issue to this in the PyTools debugger. Stepping through my code becomes very slow, although it hasn't yet become so bad that Python crashes. The problem seems to be reproducible by generating a large OrderedDict filled with NumPy arrays. Example code is shown below:
from collections import OrderedDict
from string import ascii_lowercase
from random import random
import numpy as np

OD = OrderedDict()

# Build 676 keys ('aa'..'zz'), each starting as a 0-d NumPy array.
for i in ascii_lowercase:
    for j in ascii_lowercase:
        OD[i+j] = np.array(random())

# Grow each array to ~200 elements by repeated np.append calls.
for key in OD.keys():
    for i in range(200):
        OD[key] = np.append(OD[key], random())

# Debugger now steps through the following lines very slowly:

print('String')
a = 1+1
print('1')
b = 1+1
print('1')
c = 1+2
print('1')

slishak wrote Jan 7 at 1:23 PM

P.S. I realise that the method I used to build up the arrays is highly inefficient, but it serves to reproduce the problem!

pminaev wrote Jan 7 at 4:05 PM

Thank you, I will investigate. We suspected that this might have something to do with repr() of local objects being computed too slowly for very large collections - it would also explain the gradual slowdown if the collection in question grows over time.
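
As a rough illustration of that suspicion (a timing sketch only, not PTVS code): repr() of a dict-like collection has to visit every element, so its cost grows with the collection's size, and recomputing it on every debugger step compounds the slowdown as the collection grows.

from collections import OrderedDict
import timeit

# Timing sketch only - shows that repr() cost scales with collection size.
for n in (1000, 10000, 100000):
    od = OrderedDict((i, i) for i in range(n))
    # repr() walks every element, so each 10x growth in len(od)
    # makes computing it roughly 10x more expensive.
    t = timeit.timeit(lambda: repr(od), number=10)
    print('len=%d, repr() x10: %.4fs' % (n, t))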

idoda wrote Jan 12 at 6:20 PM

I'm also using huge dictionaries (10k+ keys). BTW, I've tried the code in VS 2013; VS still crashes.

pminaev wrote Jan 12 at 7:31 PM

idoda, are those also OrderedDicts, or the standard dict type, or some other custom dictionary-like type?

pminaev wrote Jan 17 at 6:44 PM

Here is our plan for dealing with this.

We are going to assume that any collection type (defined as having __iter__ and __len__) may have a repr() that is O(N) in its number of elements. So, if its length exceeds some reasonable value, we will not call repr(), but will instead synthesize our own, which will look more like the default one and will include the length, e.g. <OrderedDict, len() = 10000>. If it is reasonably sized, then we will iterate it and recursively inspect all elements to see if they're reasonably-sized collections, etc. For iterables which do not have len() and which aren't their own iterators, we will assume that their length is always "too big".
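
In rough sketch form, the heuristic would look something like this (a simplified illustration only, not the actual PTVS implementation; the threshold name and value are placeholders):

SAFE_LEN_THRESHOLD = 1000  # placeholder value, not a real PTVS setting

def safe_repr(obj):
    # Anything with both __iter__ and __len__ is treated as a collection
    # whose repr() is potentially O(N) in its number of elements.
    if hasattr(obj, '__iter__') and hasattr(obj, '__len__'):
        if len(obj) > SAFE_LEN_THRESHOLD:
            # Too big: synthesize a cheap repr instead of calling the real one,
            # e.g. "<OrderedDict, len() = 10000>".
            return '<%s, len() = %d>' % (type(obj).__name__, len(obj))
        # Reasonably sized: a full implementation would also recurse into
        # the elements here to check for nested oversized collections.
    elif hasattr(obj, '__iter__') and not hasattr(obj, 'next') \
            and not hasattr(obj, '__next__'):
        # Iterable without __len__ that is not its own iterator:
        # assume its length is always "too big".
        return '<%s>' % type(obj).__name__
    return repr(obj)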

You will be able to inspect all the elements by expanding it (which is probably not a good idea on a 10k-element OrderedDict), or you can use the Watch window to index/slice it and inspect a reasonably sized subset.
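
For instance, with the OrderedDict from the repro above, Watch expressions along these lines (purely illustrative) keep the inspected subset small:

list(OD.items())[:10]   # first ten key/value pairs
OD['aa']                # a single entry by key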

Furthermore, we will add a new visualizer that will run and display the raw repr() on request, and will try to handle out-of-memory errors that get raised from repr() when we invoke it.

Any comments or other feedback on this proposal will be greatly appreciated.

Zooba wrote Jan 17 at 7:10 PM

IMHO, the "reasonable value" is around 10, on the basis that the repr probably won't fit in the window anyway and is not going to be easily readable.

Since a list of ten single-digit integers would be readable, and it's likely to be someone's first "test" of PTVS, an alternative heuristic may be: if N < 20 and len(repr) < 60, display the repr directly.

Another 'nice to have' would be to display the types in the list (e.g., <list{int}, len()=...>), though there's no good way to avoid iterating over the entire collection at that point.
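
A sketch combining both of these suggestions might look like the following (names and thresholds are illustrative only, not an actual PTVS API):

def display_repr(obj, max_items=20, max_repr_len=60):
    # Show the real repr() only for small collections whose repr is short.
    if len(obj) < max_items:
        r = repr(obj)
        if len(r) < max_repr_len:
            return r
    # Otherwise summarize the element types, e.g. "<list{int}, len() = 10000>".
    # Note that collecting the types still iterates the whole collection.
    types = sorted(set(type(x).__name__ for x in obj))
    return '<%s{%s}, len() = %d>' % (type(obj).__name__, ', '.join(types), len(obj))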

slishak wrote Jan 17 at 7:17 PM

Thank you, this sounds like a sensible plan to me!

pminaev wrote Jan 22 at 7:07 PM

I've also added special-casing for deque and OrderedDict (like we already do for list and dict), in addition to the generic algorithm above.