code-coverage

Can python coverage module conditionally ignore lines in a unit test?

天大地大妈咪最大 submitted on 2019-12-23 19:06:12
Question: Using nosetests and the coverage module, I would like coverage reports to reflect the version of the code being tested. Consider this code: import sys if sys.version_info < (3,3): print('older version of python') When I test on Python 3.5, the print() shows up as untested. I'd like coverage to ignore that line, but only when I'm testing on Python 3.3+. Is there a way to do something like # pragma: no cover on the print() statement only for when sys.version_info is not less…
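One common approach, per coverage.py's configuration options, is to add a custom exclusion regex to the report settings so that version-gated branches are excluded from reports no matter which interpreter runs the tests; when a matched line introduces a block (such as an if), coverage.py excludes the whole block. A minimal sketch, assuming a `.coveragerc` file at the project root (the exact regex is an assumption about how the code under test is written):

```ini
[report]
exclude_lines =
    ; re-add the default pragma, since overriding exclude_lines replaces it
    pragma: no cover
    ; exclude version-gated branches and everything inside them
    if sys\.version_info < \(3, 3\)
```

Note that the entries are regular expressions matched against source lines, so the parentheses around the version tuple must be escaped.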

Partial Code Coverage C# - Nunit

情到浓时终转凉″ submitted on 2019-12-23 18:08:02
Question: I have partial code coverage and I don't know why. For people who like the question before they start reading: I want to start by saying "first post", and that I am still very junior in my development career, but I have been a relatively quick learner (imo), so here it goes. Using NUnit to test, and MVP based. Code to be tested: void _view_Delete(object sender, EventArgs<Guid> e) { _agRepo.Delete(_agRepo.GetByID(e.Value)); var g = _agRepo.GetAll(); if (g.Count() > 0) { _view…

Combined test coverage report with jMockit

家住魔仙堡 submitted on 2019-12-23 15:41:02
Question: I am using JMockit with Ant. For each test file run, an index.html file gets created/overwritten in the coverage report folder, so for multiple test files this index.html gets overwritten. I am looking for a combined report for all files. What should be done? I have read about using .ser files, but I do not know how to create and then use them. Answer 1: Have a look here. The trick is to set -Djmockit-coverage-output=merge (or serial; read up on the differences in the link above). Cheers,

Generate coverage for other modules

痴心易碎 submitted on 2019-12-23 13:06:40
Question: My project structure is as follows: :app, :core. :app is the Android application project, and it depends on :core, which has all the business logic. I have Espresso tests for :app, and I am able to run them and get a coverage report thanks to all the questions and guides out there. But the coverage is only for code in :app. How do I get coverage for all projects (:app and :core) resulting from my Espresso instrumentation tests? Is this even possible? Any help is greatly appreciated. Answer 1: Even…

Python Coverage for C++ PyImport

不羁岁月 submitted on 2019-12-23 12:08:21
Question: Situation: I'm attempting to get coverage reports on all Python code in my current project. I've used Coverage.py with great success for the most part. Currently I'm using it like this, taking advantage of the sitecustomize.py process, for everything that's started from the command line, and it works amazingly. Issue: I can't get Python modules run from C++ via PyImport_Import()-type statements to actually trace and output coverage data. Example: [test.cpp] #include <stdio.h> #include…

How to capture code coverage from a Go binary?

旧街凉风 submitted on 2019-12-23 10:09:00
Question: I know it's possible to capture code coverage metrics when running unit tests. However, we would like to know what the coverage is when we run integration tests (plural) against the binary itself, like: go build ./mybin somefile1 ./mybin somefile2 # ... test a bunch more files and input flags Is it possible to do this? The binary can be built just for the purpose of testing, so any compile options can be used as needed. Answer 1: The Go coverage tool only works in conjunction with the testing package. But not…

Can't get correct assembly filters to work with TeamCity 8 and dotCover code coverage

笑着哭i submitted on 2019-12-23 10:08:27
Question: I have configured an NUnit test runner build step which successfully runs my test suite, pointing at a test sub-project of my .NET solution, e.g. Solution/Solution.Test/bin/debug/Solution.Test.dll. My solution structure is as follows: Solution Solution.Lib Solution.Model Solution.Test The Lib and Model DLLs are referenced in the test project. I then turned on dotCover without any assembly filters, and it performed code coverage analysis on the above test DLL correctly. I then added a filter, -…

What techniques have you actually used successfully to improve code coverage?

谁说我不能喝 submitted on 2019-12-23 09:57:47
Question: I regularly achieve 100% coverage of libraries using TDD, but not always, and there always seem to be parts of applications left over that are untested and uncovered. Then there are the cases where you start with legacy code that has very few tests and very little coverage. Please say what your situation is and what has worked for you that at least improved coverage. I'm assuming that you are measuring coverage during unit testing, but say if you are using other techniques. Answer 1: Delete code. This isn…

Incremental code coverage for Python unit tests?

大憨熊 submitted on 2019-12-23 09:57:14
Question: How can I get an incremental report on code coverage in Python? By "incremental", I mean: what has changed in the covered lines since some "last" report, or since a particular Git commit? I'm using unittest and coverage (and coveralls.io) to get the code coverage statistics, which work great. But I'm involved with only a part of the project, and at first I'm concerned with what my last commit has changed. I expected coverage to be able to show the difference between two reports, but so…
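Absent a built-in diff mode, one rough way to compare two runs is to export each as Cobertura-style XML (`coverage xml -o old.xml`, then `coverage xml -o new.xml` after the change) and diff the sets of covered lines yourself. A minimal sketch; `covered_lines` and `newly_covered` are hypothetical helpers, not part of coverage.py, and the XML shape assumed here follows what `coverage xml` produces:

```python
import xml.etree.ElementTree as ET

def covered_lines(xml_path):
    # Collect (filename, line number) pairs with at least one hit
    # from a Cobertura-style report produced by `coverage xml`.
    hits = set()
    for cls in ET.parse(xml_path).iter("class"):
        fname = cls.get("filename")
        for line in cls.iter("line"):
            if int(line.get("hits", "0")) > 0:
                hits.add((fname, int(line.get("number"))))
    return hits

def newly_covered(old_xml, new_xml):
    # Lines covered in the new report but not in the old one.
    return covered_lines(new_xml) - covered_lines(old_xml)
```

For per-commit diffs specifically, third-party tools such as diff-cover take a coverage XML report and a Git comparison branch and report coverage only for the changed lines.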
