// Check if a slave has < 10 GB of free space, wipe out workspaces if it does
import hudson.model.*;
import hudson.util.*;
import jenkins.model.*;
import hudson.FilePath.FileCallable;
import hudson.slaves.OfflineCause;
import hudson.node_monitors.*;

// Recursively walk all items (descending into folders) and wipe each job's workspace on the given node.
def performCleanup(def node, def items) {
    for (item in items) {
        jobName = item.getFullDisplayName()
        println("Cleaning " + jobName)
        if (item instanceof com.cloudbees.hudson.plugins.folder.AbstractFolder) {
            performCleanup(node, item.items)
            continue
        }
        if (item.isBuilding()) {
            println(".. job " + jobName + " is currently running, skipped")
            continue
        }
        println(".. wiping out workspaces of job " + jobName)
        workspacePath = node.getWorkspaceFor(item)
        if (workspacePath == null) {
            println(".... could not get workspace path")
            continue
        }
        println(".... workspace = " + workspacePath)
        pathAsString = workspacePath.getRemote()
        if (workspacePath.exists()) {
            workspacePath.deleteRecursive()
            println(".... deleted from location " + pathAsString)
        } else {
            println(".... nothing to delete at " + pathAsString)
        }
    }
}

for (node in Jenkins.instance.nodes) {
    computer = node.toComputer()
    if (computer.getChannel() == null) continue   // skip agents that are offline
    rootPath = node.getRootPath()
    size = DiskSpaceMonitor.DESCRIPTOR.get(computer).size
    roundedSize = size / (1024 * 1024 * 1024) as int
    println("node: " + node.getDisplayName() + ", free space: " + roundedSize + "GB")
    // Take the agent offline while cleaning so no new builds land on it, then bring it back online.
    computer.setTemporarilyOffline(true, new hudson.slaves.OfflineCause.ByCLI("disk cleanup"))
    performCleanup(node, Jenkins.instance.items)
    computer.setTemporarilyOffline(false, null)
}
@menvol3 As far as I can see, the current version wipes everything, regardless of free space. It seems the comment in the first line is lying.
Hello,
I'm new to Jenkins.
Could you please tell me where I can change the threshold from 10 GB to, for example, 50 GB before files are deleted?
And is there any option to run this script only on specific nodes, rather than on all nodes in Jenkins?
Thank you
As I understand it, the first line (the comment) is left over from the original script (see "Forked from rb2k/gist:8372402" at the top of the page). The original script has an if statement that checks whether free space is less than 10 GB: https://gist.github.com/rb2k/8372402#file-gistfile1-groovy-L19
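For example, you could add the threshold back (and limit the run to specific nodes) by changing the loop at the bottom of the script roughly like this; the 50 GB value and the node names are only placeholders, adjust them to your setup:

// Sketch only: restore a free-space threshold and restrict the cleanup to selected nodes.
def thresholdGb = 50                                         // minimum free space (GB) below which cleanup runs
def targetNodes = ['linux-agent-1', 'linux-agent-2'] as Set  // hypothetical node names

for (node in Jenkins.instance.nodes) {
    if (!targetNodes.contains(node.getNodeName())) continue   // only handle the listed nodes
    computer = node.toComputer()
    if (computer.getChannel() == null) continue                // agent is offline
    size = DiskSpaceMonitor.DESCRIPTOR.get(computer)?.size
    if (size == null) continue                                 // no disk-space data collected yet
    roundedSize = size / (1024 * 1024 * 1024) as int
    println("node: " + node.getDisplayName() + ", free space: " + roundedSize + "GB")
    if (roundedSize >= thresholdGb) continue                   // enough free space, skip this node
    computer.setTemporarilyOffline(true, new hudson.slaves.OfflineCause.ByCLI("disk cleanup"))
    performCleanup(node, Jenkins.instance.items)
    computer.setTemporarilyOffline(false, null)
}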
@EvilBeaver Thanks for sharing. In case anyone is interested, I added a check to skip jobs that were built recently:
Could you say why? What does it give you?
@dracorp
The benefits depend heavily on your application, but of course there is a reason why the workspace is not automatically cleaned up immediately after a build.
Furthermore, the workspace can serve as a cache for further builds, at least for the git checkout.
There are also many good reasons to clean the workspace right away, but in that case I would rely on short-lived agents anyway.
@EvilBeaver Thanks for sharing. In case anyone is interested, I added a check to skip jobs that were built recently:
buildAgeDays = (System.currentTimeMillis() - item.getLastBuild().getTimeInMillis()) / (1000 * 60 * 60 * 24)
if (buildAgeDays < 5) {
    println(".. job " + jobName + " was recently running, last build: " + item.getLastBuild().getTimestampString())
    continue
}
This will fail on jobs without builds and probably on disabled jobs.
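A null-safe variant, placed inside performCleanup after the folder and isBuilding checks, could simply skip the age test for jobs that have never been built. A sketch:

// Sketch: skip jobs whose last build is newer than 5 days; jobs with no builds at all fall through to cleanup.
lastBuild = item.getLastBuild()
if (lastBuild != null) {
    buildAgeDays = (System.currentTimeMillis() - lastBuild.getTimeInMillis()) / (1000 * 60 * 60 * 24)
    if (buildAgeDays < 5) {
        println(".. job " + jobName + " was recently running, last build: " + lastBuild.getTimestampString())
        continue
    }
}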
I might be missing something. Old workspaces are automatically cleaned by Jenkins. Why such a script?
https://stackoverflow.com/questions/58637793/jenkins-is-deleting-workspaces-on-agents
https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/hudson/model/WorkspaceCleanupThread.java
I might be missing something. Old workspaces are automatically cleaned by Jenkins. Why such a script?
The built-in cleanup is time-based, but this cleanup should happen when disk space runs out. Jenkins has no way to schedule jobs based on which agents have enough disk space, so multiple large workspaces can end up on the same agent, leading to build failures. There are also situations where large jobs run in quick succession, making time-based retention useless.
Also, keeping old workspaces is useful for debugging when agent disk space is underused.
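For reference, the retention period of that built-in WorkspaceCleanupThread is driven by a system property (30 days by default, if I recall correctly); you can inspect the current value from the script console:

// Print the retention period (in days) used by Jenkins' built-in workspace cleanup; defaults to 30 if unset.
println System.getProperty('hudson.model.WorkspaceCleanupThread.retainForDays', '30')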