
VS 2013 Shell and PTVS 2.1 RC

Aug 29, 2014 at 3:27 PM
Looking back through previous revisions of the installation instructions page, I noticed that the recommended way to install a free solution has migrated:
(1) Install the VS shell on your own, then install the appropriate PTVS extension.
(2) Use the integrated VS 2013/PTVS installer.
(3) Install VS 2013 Express (web or desktop) plus the PTVS 2.1 RC.

I went with option (3) this week and had a LOT of trouble. Now, it may be my fault because I had a number of other VS installations that at least in one case were causing my problems. I'm in the middle of cleaning them up. Since cleaning up VS installations is a big pain I want to install the right thing when I am done.

I'm wondering if the VS2013 shell option will be a good solution once the PTVS 2.1 RTM comes out. As far as I can tell this option will be missing the unittest features. Is that right? Will it have a smaller memory footprint than the express installation approach? Are you moving away from recommending this approach?

I am a little gun-shy about going with the Express install because the unittest feature was a major problem. The auto-search for tests in my project appeared to have a memory leak, and the search process would quickly climb to 1.5 GB of memory. Also, tests would not run: they would say started and just sit there. I could run the same tests directly with no problem. I like the unittest integration, but the way it worked for me was a showstopper. I'm hoping it will work OK if I clean up all other VS installations and start from scratch.
Aug 29, 2014 at 7:59 PM
PTVS will continue to work with the shells, and as you say, they are lighter than Express and still a good option.

We're not actively testing the product against the shell anymore, so it's possible that you'll hit some issues that don't exist in Professional or higher (typically due to missing dependencies - they are mostly identical except Pro+ has more stuff).

Unit tests and profiling will be missing, and I suspect there may be limitations on web publishing and editing, but the core functionality should be totally there. I'd suggest trying the RC with Shell first, just in case there are significant issues that we may be able to fix. Once 2.1 RTM is out, we're very unlikely to make a quick fix in order to support shell.

As for the tests issue, we're aware of the limitations but unfortunately haven't had enough cause to really improve it. We are further limited by the design of the UI (by another team), which makes fixing it a much bigger task than it may seem. However, you should still file a bug with as much info about your project as you're able to give us, because we'd like to fix things that are not working well.
Aug 29, 2014 at 9:18 PM
Thanks for the helpful info.

You said:
As for the tests issue, we're aware of the limitations...
Can you explain? Do you mean that you have seen the same things I am talking about (tests not running, and the memory leak in the test discovery process)?

BTW, I removed all other VS installations and the problem still exists, so I will file a bug as you suggest. I did notice that when I first load my solution the test feature seems to be disabled (if you go to Test -> Test Settings -> Default processor architecture, nothing is checked, and I don't see the test discovery process show up every time I save the file).
Aug 29, 2014 at 9:35 PM
The limitations are largely due to Python not providing a simple list of tests anywhere and we are forced (by VS) to discover tests outside of the main VS process, which keeps us from sharing the tests we already know about. We have to reanalyze all of your code, which can be slow and use a lot of memory, but since we don't get any notification that discovery is starting we can't easily work around this. Execution has a similar issue - we need to rediscover all the tests in yet another process - and we also don't have a directly connected test runner (our fault, but it's not exactly trivial to write) and so we run each test individually and check the exit code.

There shouldn't be any memory leaks in test discovery, and if there are then you should be seeing them in VS itself, but it's possible that the code you are using is triggering something that we don't test for. How our code behaves depends very much on arbitrary user input, so it's impossible to test every scenario thoroughly. There are also system configurations that can trigger more issues than the code alone, which means we often can't reproduce and diagnose issues like this. The more information you can provide on the bug, the better.

There is work we can do to avoid these issues, but we haven't done it yet because we've been focusing on other features and issues in the product. As we come around to planning our next release, we will consider and schedule these changes, but without feedback like you're giving us we're forced to assume that either everything is fine, or that nobody is using the feature at all. More issues and votes really help us understand better how people are using the product and what areas need more work.
Aug 29, 2014 at 10:04 PM
I did my best. See bug 2642.

My main concern for VS2013 and 2.1RTM would be to make sure there is a way to clearly disable the test features in case they continue to cause problems for me. They seem to be a "sleeping giant" when I load my solution and as long as I don't touch anything in the test menu it stays sleeping. I would feel more comfortable if there was a way to clearly tell the SW I don't want it to run ever. Maybe a setting in the python section of the options dialog.

Based on what you said, the 2013 shell could be a viable option to completely disable tests. However, the lack of testing scares me a little. I don't use any of the advanced features. I am really a guy who has been very happy with PyScripter for years but wanted something with:
(1) Project support to keep all the right files at my fingertips (PyScripter has it but it is very poor)
(2) Support for working with multiple python environments easily
(3) Better auto-completion for large modules (PyScripter was unusable with auto-completion and the pandas package)
Aug 29, 2014 at 10:54 PM
Edited Aug 29, 2014 at 10:59 PM
The lack of testing doesn't give us much confidence either, which is why we don't recommend the shells any more.

If you really want to disable the tests feature, you can remove/rename Microsoft.PythonTools.TestAdapter.dll in the install directory and also delete the ComponentModelCache directory at C:\Users\<you>\AppData\Local\Microsoft\VisualStudio\12.0. This should work without causing any problems, as the test adapter is by necessity well isolated from the rest of the product, but I'm not guaranteeing that this is 100% reliable. Not opening the test window is the best way to avoid wasting CPU cycles - even with our support missing, the vstest.discoveryengine process will still be started.
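For what it's worth, the two manual steps above could be scripted like this. This is my own convenience sketch (the function and its parameters are hypothetical); run it with Visual Studio closed, and note that it renames rather than deletes the DLL so the change is reversible.

```python
import os
import shutil

def disable_test_adapter(install_dir, cache_dir):
    """Apply the two manual steps described above.

    install_dir: the PTVS install directory containing
                 Microsoft.PythonTools.TestAdapter.dll
    cache_dir:   the ComponentModelCache directory under
                 %LOCALAPPDATA%\\Microsoft\\VisualStudio\\12.0
    """
    dll = os.path.join(install_dir, "Microsoft.PythonTools.TestAdapter.dll")
    if os.path.exists(dll):
        # Rename instead of deleting so the adapter can be restored later.
        os.rename(dll, dll + ".disabled")
    if os.path.isdir(cache_dir):
        # VS rebuilds the MEF component cache on next startup.
        shutil.rmtree(cache_dir)
```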

Because of the way the test window works, I'm not even sure whether we could prevent it from loading us through an option in VS. It would probably have to be an option in our installer...
Aug 29, 2014 at 10:55 PM
That said, I've also discovered the root of the memory leak you've reported, so I can clear most/all of that up. Whether it makes our 2.1 RTM is a matter for our next team discussion - we're very careful about last minute changes.
Aug 30, 2014 at 12:23 AM
That's great. I understand the hesitation about putting it in the 2.1 RTM. Let me know!

Does the problem you found explain why I was unable to run any test?
Sep 2, 2014 at 6:02 PM
Edited Sep 2, 2014 at 6:13 PM
Unfortunately not. I suspect that part is more likely related to Canopy (or rather, assumptions that we've made about Python that aren't true for Canopy). I'll do some thorough testing against it later today.

Do your tests run if you run the file directly? What about if you run python -m unittest discover in the directory with the test files?
Sep 2, 2014 at 7:24 PM
Running the file directly works fine. I also added the code below to use the discovery built into unittest directly, and it worked fine. I am not positive, but I am guessing the code below does something pretty similar to the command-line invocation you asked about. If not, let me know and I can try it.
import unittest

suite = unittest.TestLoader().discover(start_dir=".", pattern="test*.py")
t = unittest.TextTestRunner(verbosity=2).run(suite)
Sep 2, 2014 at 9:03 PM
Our launcher should never be running the code in an if __name__ == '__main__' block, as we import your code rather than running it. You can see/change what we do in our install directory.
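As an illustration of the point above, a test module laid out like this keeps working under import-based discovery, because the __main__ block is simply never entered when the file is imported (the example itself is mine, not PTVS code):

```python
import unittest

class TestExample(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("ptvs".upper(), "PTVS")

if __name__ == "__main__":
    # Runs only when the file is executed directly; an importing test
    # runner (as described above) never enters this block.
    # exit=False keeps the interpreter alive after the run.
    unittest.main(exit=False)
```

Anything that must happen for the tests to work (fixtures, configuration) therefore belongs in setUp/setUpClass rather than in the __main__ block.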

How does it look when the tests don't run? Do they all fail? (That's what I'd expect if we're crashing at the wrong time.) Do they time out? Is the green bar in the test window spinning?
Sep 2, 2014 at 10:00 PM
The green bar is spinning. It doesn't look like they fail; they just don't finish. I get the "Run Tests Started" message and then nothing after that.