testthat

r devtools test() errors but testthat test_file() works

隐身守侯 submitted on 2019-12-01 18:54:50
I have a function in a package I'm building that assigns a hex code to the global environment for use by analysts...

    optiplum <- function() {
      assign(
        x = "optiplum",
        value = rgb(red = 129, green = 61, blue = 114, maxColorValue = 255),
        envir = .GlobalEnv
      )
    }

My unit test code is:

    test_that("optiplum - produces the correct hex code", {
      optiplum()
      expect_true(identical(optiplum, "#813D72"))
    })

When I run the code manually, there isn't an error:

    > str(optiplum)
    chr "#813D72"
    > str("#813D72")
    chr "#813D72"
    > identical("#813D72", optiplum)
    [1] TRUE
    > expect_true(identical(optiplum, "#813D72"))

When I run a test_file() is
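A minimal sketch of one way to make that expectation unambiguous, assuming (not confirmed by the question) that under devtools::test() the bare name optiplum resolves to the package function rather than to the character value written into .GlobalEnv:

    library(testthat)

    # Function under test, copied from the question.
    optiplum <- function() {
      assign(
        x = "optiplum",
        value = rgb(red = 129, green = 61, blue = 114, maxColorValue = 255),
        envir = .GlobalEnv
      )
    }

    test_that("optiplum - produces the correct hex code", {
      optiplum()
      # Fetch the assigned value explicitly so the comparison cannot pick up
      # the function called optiplum instead of the character string.
      hex <- get("optiplum", envir = .GlobalEnv)
      expect_identical(hex, "#813D72")
    })

Having optiplum() return the value instead of assigning into the caller's global environment would sidestep the ambiguity entirely, though it changes the function's interface.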

testthat fails within devtools::check but works in devtools::test

孤者浪人 submitted on 2019-12-01 04:07:39
Is there any way to reproduce the environment which is used by devtools::check? I have the problem that my tests work with devtools::test() but fail within devtools::check(). My problem now is how to track the failure down. The report from check only prints the last few lines of the error log, and I can't find the complete report for the testing.

    checking tests ... ERROR
    Running the tests in ‘tests/testthat.R’ failed.
    Last 13 lines of output:
    ...

I know that check uses a different environment compared to test, but I don't know how to debug these problems since they are not reproducible at all.
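One way to get at the complete test log rather than just the last 13 lines (a sketch; it assumes your devtools version still accepts the check_dir argument, and "mypkg" is a placeholder package name):

    # Point the check at a directory you can inspect afterwards.
    devtools::check(check_dir = "check-output")

    # R CMD check writes the full testthat output into the .Rcheck directory,
    # e.g. check-output/mypkg.Rcheck/tests/testthat.Rout.fail when tests fail
    # (testthat.Rout when they pass).

Reading that file usually narrows the failure down much further than the truncated summary printed by check.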

r - data.table and testthat package

生来就可爱ヽ(ⅴ<●) submitted on 2019-12-01 03:53:27
I am building a package which works with data.table and which should be tested using the testthat package. While the code works fine when called from the command line, I run into issues when calling it from a test case. It seems that the [ function from the base package, i.e. the method for data.frames, is used when running the tests. I have created a minimal example which can be found here: https://github.com/utalo/test_datatable_testthat The package contains a single function:

    test <- function() {
      dt <- data.table(MESSAGE = "Test 1234567890", TYPE = "ERROR")
      dt[, .(MESSAGE = strwrap(MESSAGE, width = 10))
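A sketch of the setup data.table's documentation recommends for packages that use it, assuming the symptom here is the usual one of the package not being recognised as data.table-aware (the DESCRIPTION/NAMESPACE lines are shown as comments):

    # DESCRIPTION:
    #   Imports: data.table
    #
    # NAMESPACE (or the equivalent roxygen tag #' @import data.table):
    #   import(data.table)
    #
    # Alternatively, declare awareness explicitly in any file under R/ so that
    # data.table's [ method is used instead of data.frame's when the package
    # code runs inside testthat:
    .datatable.aware <- TRUE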

testthat pattern for long-running tests

时光毁灭记忆、已成空白 submitted on 2019-11-30 17:55:21
I have a bunch of tests that I don't want running during CRAN checks or Travis CI builds. They are either long-running, or they could cause transaction/concurrency conflicts when writing to a networked database. What approach to separating them from the R CMD check tests works best with testthat? Should I put those tests in a separate folder? Should I tag their filenames and use a regex? (e.g. Using filter argument in test_package to skip tests by @Jeroen)

From http://cran.r-project.org/web/packages/policies.html: Long-running tests and vignette code can be made optional for checking, but do
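A sketch of the skip-based pattern testthat supports, which keeps the tests in the normal folder but prevents them from running on CRAN or Travis (the RUN_DB_TESTS environment variable is a made-up opt-in flag):

    library(testthat)

    test_that("integration test against the networked database", {
      skip_on_cran()        # not run during CRAN checks (relies on NOT_CRAN)
      skip_on_travis()      # not run on Travis CI builds
      skip_if(Sys.getenv("RUN_DB_TESTS") == "",
              "set RUN_DB_TESTS to run database tests")

      # ... long-running, database-touching assertions go here ...
      expect_true(TRUE)
    })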

How to write a test for a ggplot plot

十年热恋 submitted on 2019-11-30 04:15:56
I have a lot of functions that generate plots, typically with ggplot2. Right now, I'm generating the plot and testing the underlying data. But I'd like to know if there's a reasonable way to test that the plot contains the layers/options I expect it to, or that graphical elements match expectations. For example:

    library(ggplot2)
    library(scales) # for percent()
    library(testthat)

    df <- data.frame(
      Response = LETTERS[1:5],
      Proportion = c(0.1, 0.2, 0.1, 0.2, 0.4)
    )

    #' @export plot_fun
    plot_fun <- function(df) {
      p1 <- ggplot(df, aes(Response, Proportion)) +
        geom_bar(stat = 'identity') +
        scale_y_continuous
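A sketch of one way to test the plot object itself, assuming plot_fun() returns the ggplot object; the checks poke at ggplot2's internal structure (layers, labels), which can change between ggplot2 versions:

    p1 <- plot_fun(df)

    test_that("plot_fun builds the expected bar chart", {
      expect_s3_class(p1, "ggplot")
      # the first (and only) layer should be a bar geom
      expect_s3_class(p1$layers[[1]]$geom, "GeomBar")
      # default axis labels come from the mapped aesthetics
      expect_identical(p1$labels$x, "Response")
      expect_identical(p1$labels$y, "Proportion")
    })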

Where should I put data for automated tests with testthat?

醉酒当歌 submitted on 2019-11-30 02:47:51
I am using Hadley's testthat-based approach for automated testing of my package. With this approach, what is the most suitable place to put test data files (that is, files only used by the test scripts in tests/testthat but not by any other functions in R/)? My current approach is to put them in tests/testdata and then read.table from there with a relative path rather than with system.file (in order to avoid the need to install the package to run tests). Is there a standard way to do this?

Lifting from Ben Bolker's comments: I use inst/testdata and then system.file("testdata",...,package="my
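Two layouts that come up repeatedly, sketched for comparison (file and package names are placeholders):

    # Option 1: keep the files under tests/testthat/testdata/ and resolve them
    # with testthat::test_path(), which works both interactively and under
    # R CMD check without installing the package.
    dat <- read.table(testthat::test_path("testdata", "example.txt"))

    # Option 2: ship the files in inst/testdata/ so they are installed with the
    # package, then locate them with system.file().
    dat <- read.table(system.file("testdata", "example.txt",
                                  package = "mypkg", mustWork = TRUE))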

Is it possible to determine test order in testthat?

梦想与她 submitted on 2019-11-29 17:41:13
Question: I am using testthat to check the code in my package. Some of my tests are for basic functionality, such as constructors and getters. Others are for complex functionality that builds on top of the basic functionality. If the basic tests fail, then it is expected that the complex tests will fail too, so there is no point in testing further. Is it possible to:

- Ensure that the basic tests are always done first
- Make a test failure halt the testing process

Answer 1: To answer your question, I don't think it
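A sketch of the convention that usually stands in for explicit ordering: testthat runs test files in alphabetical order of filename, so numeric prefixes put the basic tests first, and a cheap guard can skip the complex tests when the basics are clearly broken (the file names and my_constructor are illustrative):

    # tests/testthat/
    #   test-01-constructors.R   # basic functionality, runs first
    #   test-02-getters.R
    #   test-10-complex.R        # builds on the basics, runs later

    # in test-10-complex.R:
    library(testthat)

    test_that("complex behaviour built on the basics", {
      skip_if_not(exists("my_constructor") && is.function(my_constructor),
                  "basic constructor unavailable; skipping complex tests")
      # ... complex assertions ...
    })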