
Jenkins / Hudson CI Minimum Requirements for a Linux RH installation

Submitted by 走远了吗 on 2019-12-04 16:26:35
Question: We are planning on using Jenkins (formerly Hudson) for the automated builds of our project. I need to find out what it needs from a system-requirements standpoint (RAM, disk, CPU) for a Linux RH installation. We will be testing a mobile application project. I did check this post but couldn't find a response.

Answer 1: I've been maintaining a Jenkins / Sonar / Nexus stack, and I worked out a minimal configuration (Debian 5):

CPU: n/a (bye bye, plain old-school CPU, at least ;) )
RAM: 1 GB (I prefer 2)

Hudson: Copy artifact from master to slave fails

Submitted by 霸气de小男生 on 2019-12-04 15:46:10
Is it possible to use the Copy Artifact plugin to copy an artifact from a job that ran on the master to a downstream job that runs on a slave node? I'm getting an error on the slave that says:

hudson.util.IOException2: hudson.util.IOException2: Failed to extract /srv/hudson/jobs/myproject/builds/2011-04-29_10-28-54/archive/myartifact.foo

Obviously that path is not valid, as it points to the artifact folder on the master. Am I missing something, or is this just not possible?

Yes, it is possible. You can use the Copy Artifact Plugin to copy any artifact to the slave. For a first test, I recommend setting
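As a sketch of what the answer describes, a downstream job's config.xml could contain a Copy Artifact build step along these lines (the project name, filter, and exact element names are assumptions and vary by plugin version):

```xml
<builders>
  <hudson.plugins.copyartifact.CopyArtifact>
    <!-- upstream job whose archived artifacts we want -->
    <projectName>myproject</projectName>
    <!-- which artifacts to copy, relative to the upstream archive -->
    <filter>myartifact.foo</filter>
    <!-- where to place them in this job's workspace on the slave -->
    <target>upstream-artifacts/</target>
    <!-- copy from the last stable build of the upstream job -->
    <selector class="hudson.plugins.copyartifact.StatusBuildSelector"/>
  </hudson.plugins.copyartifact.CopyArtifact>
</builders>
```

The important point is that the plugin transfers the file through the master's remoting channel, so the slave never needs direct filesystem access to /srv/hudson/jobs/... on the master.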

Can I mass edit jenkins jobs by modifying the config.xml files?

Submitted by て烟熏妆下的殇ゞ on 2019-12-04 15:39:00
Question: I have a lot of jobs in Jenkins, and we have decided to make some wide-ranging changes to all of them, which would be very tedious to make through the UI. It would be much easier to edit them using scripts on the Jenkins master machine, but I'm not sure how to get Jenkins to recognize changes to the config.xml files that haven't come through the UI or another API. Is there a way to get Jenkins to refresh job configurations from disk? Or a better way of mass-editing jobs?

Answer 1: Under the "Manage
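A minimal sketch of that edit-then-reload approach, assuming a standard JENKINS_HOME layout and an API token for the reload call (the sed expression is a placeholder for the actual change you want to make):

```shell
#!/bin/sh
# Bulk-edit every job's config.xml on the master, keeping backups,
# then ask Jenkins to re-read its configuration from disk.
JENKINS_HOME=${JENKINS_HOME:-/var/lib/jenkins}
for cfg in "$JENKINS_HOME"/jobs/*/config.xml; do
    [ -f "$cfg" ] || continue
    cp "$cfg" "$cfg.bak"                                 # keep a backup
    sed -i 's/<daysToKeep>7</<daysToKeep>30</' "$cfg"    # example change
done
# Scripted equivalent of "Manage Jenkins -> Reload Configuration from Disk";
# URL and credentials are placeholders for your installation:
curl -fsS -X POST -u user:apitoken "http://localhost:8080/reload" || true
```

Without the reload step, Jenkins keeps serving the in-memory job definitions and silently ignores (or later overwrites) your on-disk edits.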

Problem Publishing NUnit Testing Result Reports with Hudson

Submitted by 梦想的初衷 on 2019-12-04 13:56:53
Question: I am facing a problem with Hudson and NUnit testing. When I try to publish the test result report for NUnit, the Hudson option "Publish NUnit Test Result Reports" is causing a problem. I am unable to provide the path of the already-created XML file under the job's workspace folder. When I set the path of my file, i.e. "nunit-result.xml", and run the job, it throws an error: "No test report files were found. Configuration error?" Can anyone please help me out? Thanks in advance.
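The pattern that publisher expects is an Ant-style glob resolved relative to the workspace root, not an absolute filesystem path. As an illustrative (and version-dependent) config.xml fragment:

```xml
<publishers>
  <hudson.plugins.nunit.NUnitPublisher>
    <!-- Ant-style pattern, matched against files under the job workspace -->
    <testResultsPattern>**/nunit-result.xml</testResultsPattern>
  </hudson.plugins.nunit.NUnitPublisher>
</publishers>
```

If nunit-result.xml sits directly in the workspace root, the plain file name works too; the "No test report files were found" error usually means the pattern matches nothing under the workspace at the moment the publisher runs.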

How do I deploy to private Maven repo from CloudBees?

Submitted by ≡放荡痞女 on 2019-12-04 12:33:24
I'd like to use CloudBees for my CI environment, but I'd also like to deploy my Maven artifacts to my existing private Nexus repository. In my current local Hudson setup, I use the username/password settings within the .m2/settings.xml file as follows:

...
<servers>
  <server>
    <id>my-repository</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
...

How and where can I configure these credentials on CloudBees?

You can put these in your private webdav filestore: http://wiki.cloudbees.com/bin/view/DEV/Sharing+Files+with+Build+Executors Then, just point Maven at

Is it possible to see the source code of the violating files in Hudson with Violations and Pylint?

Submitted by 狂风中的少年 on 2019-12-04 11:55:43
I'm using Hudson CI with a Python project. I've installed the Violations plugin and configured it to run the code against pylint. This works, but I only see a list of violations, without links to the source code. Is it possible to set up Violations and pylint to load and highlight the violating source files (something similar to the Cobertura coverage reports)? Better yet, can Violations integrate with pep8.py?

Well, after some more debugging, I realized that the pylint output file referenced the source code files relative to where pylint was being run, which wasn't the same path that Hudson
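A sketch of the fix the debugging points at: run pylint from the workspace root so the paths it records are relative to what Hudson serves. The package name and report file name below are placeholders.

```shell
#!/bin/sh
# Run from the workspace root so the report's file paths are relative
# to the workspace, which is where the Violations plugin resolves them.
cd "$WORKSPACE"
# 'parseable' output includes file:line prefixes the plugin can read.
# pylint exits non-zero whenever it finds violations, so keep that
# from failing the build step itself:
pylint --output-format=parseable mypackage > pylint.out || true
```

The Violations plugin is then pointed at pylint.out, and the relative paths in the report line up with the source files in the workspace.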

Hudson/Jenkins — how to access a private git repository on BitBucket.com

Submitted by 给你一囗甜甜゛ on 2019-12-04 10:44:45
This question is long and multifaceted, so I'll start with a brief overview, and then show in detail everything I've tried, my questions as to why it doesn't work, and what I'm doing wrong.

Overview

I'm trying to set up a build job on Hudson for source code in a private repository on BitBucket. There are a lot of similar questions on Stack Overflow, but for various reasons none of them address my needs. I would like to access it using https instead of ssh, but there seems to be no way forward for accessing it in Hudson over https, and everyone on the web seems sold on ssh. So I have tried to make it
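One way to make https work non-interactively (assuming the repository allows password authentication and that storing a password on the master is acceptable to you) is a .netrc file for the Hudson user: git's https transport goes through curl, which reads it. The hostname and credentials below are placeholders.

```shell
#!/bin/sh
# Store BitBucket credentials where curl (and hence git-over-https)
# will find them; the file must be readable only by the Hudson user.
cat > "$HOME/.netrc" <<'EOF'
machine bitbucket.org
login myuser
password mypassword
EOF
chmod 600 "$HOME/.netrc"
# After this, commands like
#   git ls-remote https://bitbucket.org/myuser/myrepo.git
# run as the Hudson user should no longer prompt for a password.
```

This sidesteps ssh key management entirely, at the cost of keeping a plaintext password on the master.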

Integrating SourceMonitor into a Jenkins CI-System

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-12-04 10:38:09
Question: I would like to integrate SourceMonitor into my Jenkins CI system. Since there is no SourceMonitor plugin, how can I make the results of SourceMonitor visible on my Jenkins server?

Answer 1: There is an open issue associated with the Violations plugin. You can vote up the implementation of this issue: https://issues.jenkins-ci.org/browse/JENKINS-5741

Answer 2: I guess you're out of luck. You could take a look at similar plugins (probably the FindBugs, PMD, and Checkstyle plugins should be comparable) and

How to build a pipeline of jobs in Jenkins?

Submitted by 一世执手 on 2019-12-04 09:48:07
In my project, I have 3 web applications, all of which depend on one all-commons project. In my Jenkins server, I built 4 jobs: all-commons_RELEASE, web-A_RELEASE, web-B_RELEASE, and web-C_RELEASE. The role of these jobs is to build the artifacts, which are deployed to our Nexus. Then someone retrieves these artifacts from Nexus and deploys them on our dev / homologation servers. What I want is to have one (additional?) job that will launch all 4 builds sequentially. That way, once this job is finished, all the RELEASE jobs have been executed. Of course, if one build fails, the process is
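One simple way to chain the jobs with core Jenkins, no extra plugin, is a post-build trigger on each job pointing at the next; for finer control (parameter passing, visualization), the Parameterized Trigger and Build Pipeline plugins are commonly used. As an illustrative config.xml fragment for all-commons_RELEASE (element details vary by version):

```xml
<publishers>
  <!-- start web-A_RELEASE only when this build succeeds -->
  <hudson.tasks.BuildTrigger>
    <childProjects>web-A_RELEASE</childProjects>
    <threshold>
      <name>SUCCESS</name>
    </threshold>
  </hudson.tasks.BuildTrigger>
</publishers>
```

Chaining web-A_RELEASE to web-B_RELEASE and web-B_RELEASE to web-C_RELEASE the same way gives a strictly sequential pipeline that stops at the first failure.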

Hudson: What is a good way to store a variable between the two job runs?

Submitted by 旧巷老猫 on 2019-12-04 09:33:39
I have a job which needs to know a certain value calculated by the previous job run. Is there a way to store it in the Hudson/Jenkins environment? For example, I can write something like the following in a shell-script action:

XXX=`cat /hardcoded/path/xxx`
# job itself
echo NEW_XXX > /hardcoded/path/xxx

But is there a more reliable approach? A few options:

Store the data in the workspace. If the data isn't critical (i.e. it's OK to nuke it when the workspace is nuked), that should be fine. I only use this to cache expensive-to-compute data such as prebuilt library dependencies.

Store the data in some fixed
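A slightly more robust variant of the shell-script idea, assuming a state directory outside the workspace that the Hudson user can write to (the path and file name are placeholders):

```shell
#!/bin/sh
# Persist a value between runs in a directory that survives workspace
# wipes; anything outside $WORKSPACE owned by the Hudson user works.
STATE_DIR=${STATE_DIR:-$HOME/.hudson-state/myjob}
mkdir -p "$STATE_DIR"
# Read the previous run's value, defaulting to 0 on the first run.
XXX=$(cat "$STATE_DIR/xxx" 2>/dev/null || echo 0)
# ... the job itself runs here and computes the new value ...
NEW_XXX=$((XXX + 1))
echo "$NEW_XXX" > "$STATE_DIR/xxx"
```

For anything critical, the answer's caveat stands: disks get wiped too, so truly important values belong in an archived build artifact or an external store rather than a loose file.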