One of the customers I do work for recently decided that, in addition to supplying native deployment-automation code targeting their desired cloud-...
I've always just run them until they work. I'm not sure there's another way to validate the whole thing unless they've made adding exceptions to the script security defaults easier recently.
Adding your code to a global shared library assumes the code is "trusted," meaning you should not have to validate each and every call outside of the sandbox.
jenkins.io/doc/book/pipeline/share...
"These libraries are considered "trusted:" they can run any methods in Java, Groovy, Jenkins internal APIs, Jenkins plugins, or third-party libraries. This allows you to define libraries which encapsulate individually unsafe APIs in a higher-level wrapper safe for use from any Pipeline"
That looks new! The Jenkins instance I was working with was a couple years old and I don't remember seeing anything like that functionality.
Yeah. I'd been doing similar. However, as we've added new people to this particular project and have more "never used Jenkins" people on the team writing Jenkins jobs, more in the way of simple mistakes have been happening.
Always nice to get a green from Travis saying that at least the syntax of something is correct.
I'd recommend the following:
We generally just define tests within our .yml files. With the various versions of git clients in use, it's more reliable than hoping that a given user's git client is - or can even be - configured to use standardized pre-commit hooks.
Our overall model is "work in a branch of your own fork, then PR back to the appropriate branch of the root project".
This is what I'm looking for. Happen to have any examples? As mentioned above, when using things like GitLab's built-in CI tool or tools like Travis, we've mostly been just handing off tests by way of file-extensions (`.json` files going through `jq`; `.sh` files going through ShellCheck; etc.). I'd been hoping there was a similar linter tool out there to hand off to. If we had to go the "roll your own" route (as your above seems to hint at), it would be super helpful to have something to ~~steal from~~ use as a reference.

I wouldn't change what you're doing. Jenkins very much wants to be utilized in the same manner. If you can shell out to CLI tools to accomplish tasks, that's perfect. I encounter many pipelines that use tools like make, pip, and npm.
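The extension-based handoff described above can be sketched as a small dispatch script. This is only an illustration of the idea, not a tool from the thread: the choice of `git diff --cached` as the file source is an assumption, and the linters mirror the `jq`/ShellCheck examples mentioned earlier.

```shell
#!/bin/sh
# Sketch: dispatch staged files to a linter based on their extension.
# Exits quietly if run outside a git repo or with nothing staged.
files=$(git diff --cached --name-only 2>/dev/null) || exit 0

status=0
for f in $files; do
  case "$f" in
    *.json) jq empty "$f" || status=1 ;;   # jq parses the file; failure means invalid JSON
    *.sh)   shellcheck "$f" || status=1 ;; # ShellCheck lints shell scripts
  esac
done
exit $status
```

Wiring the same loop into a CI job (rather than a hook) sidesteps the "can't standardize everyone's git client" problem mentioned above.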
The nice thing about a pre-commit hook is the user doesn't have to have much more installed than curl. I'll put together a little article here shortly for getting it set up, since I think others may benefit too.
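A curl-only hook along these lines might look like the sketch below. The `/pipeline-model-converter/validate` endpoint is Jenkins' documented declarative-pipeline linter; the `JENKINS_URL` value and the fail-open behavior when the server is unreachable are assumptions for illustration.

```shell
#!/bin/sh
# .git/hooks/pre-commit -- sketch of a hook that lints a Jenkinsfile
# against a Jenkins server's built-in declarative linter.
# JENKINS_URL is a placeholder; point it at your own instance.
JENKINS_URL="${JENKINS_URL:-http://jenkins.example.com:8080}"

# Nothing to lint in this repo? Let the commit through.
[ -f Jenkinsfile ] || exit 0

result=$(curl -sf -X POST -F "jenkinsfile=<Jenkinsfile" \
    "$JENKINS_URL/pipeline-model-converter/validate") || {
  echo "warning: Jenkins linter unreachable; skipping validation" >&2
  exit 0  # fail open when the server can't be reached
}
echo "$result"

# The linter replies "Jenkinsfile successfully validated." on success.
case "$result" in
  *successfully\ validated*) exit 0 ;;
  *) echo "Jenkinsfile failed validation; aborting commit." >&2; exit 1 ;;
esac
```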
Here are a couple declarative examples and a single scripted example below. Typically we recommend you use declarative syntax where possible, assuming you don't already have a large scripted code base.

Declarative:
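For instance, a minimal sketch (the stage names and shell commands are illustrative placeholders, not from any real project):

```groovy
// Minimal declarative pipeline -- stage names and
// shell commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
    post {
        always {
            echo 'Pipeline finished.'
        }
    }
}
```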
And with scripted:
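The same placeholder stages, written in scripted syntax:

```groovy
// Equivalent scripted pipeline -- same illustrative commands.
node {
    stage('Build') {
        sh 'make build'
    }
    stage('Test') {
        sh 'make test'
    }
}
```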
I'd probably do more stuff in the DSL if the plugins I wanted to use were more "on board" with the pipeline thing. It seems like the bulk of the plugins (at least the ones our user-community requests) were designed for "Freestyle" usage rather than pipeline usage ...frequently meaning I have to go read the plugin's source-code to suss out which of its bits are usable in Pipeline mode. Usually, it's just quicker/easier to use the OS-native tools via a `sh()` statement than to try to do things The Right Way™.

This is doubly so when the reason I'm dicking with Jenkins-managed jobs is because the primary customer for the devops tooling keeps harping on "we need to eat our own dogfood". I find myself struggling with the "why would I waste already over-committed time wrapping shit in Jenkins when I can just use the AWS CLI directly". Adding the pipeline layer doesn't really seem to be more "infrastructure as code" than directly maintaining CFns and CFn parameter-files in relevant git projects. Maybe there's something that's "too obvious" that I'm missing. :p
I'll admit, though, that I'm a curmudgeon. To me, when you aggregate the knowledge required to use all of these "simplified" tools, you actually require more in the way of specific knowledge than if you just did things "the hard way" (see: all the various simplified markup languages like Markdown, Textile ...and then all of their variants). But, I'm looking at these things from the standpoint of the person charged with fielding and operating all the tools rather than our users who typically only use one or two of the tools (frequently after someone like me has created the further-simplifying automation for them).
Oh. Wow. Wasn't really meaning for that to turn into a rant! =)
If it comes to it you can invoke plugins manually in the DSL, like these for coverage/lint reports:
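As a sketch of what that can look like: the step names below come from the warnings-ng (`recordIssues`) and HTML Publisher (`publishHTML`) plugins, but the report paths and the lint command are placeholder assumptions.

```groovy
// Illustrative only -- step names are from warnings-ng and
// HTML Publisher; paths and commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Lint') {
            steps {
                // "|| true" keeps the build going so the report step still runs
                sh 'npm run lint -- --format checkstyle -o checkstyle-result.xml || true'
            }
        }
    }
    post {
        always {
            // warnings-ng plugin: collect and chart the linter output
            recordIssues tools: [checkStyle(pattern: 'checkstyle-result.xml')]
            // HTML Publisher plugin: surface an HTML coverage report
            publishHTML target: [reportDir: 'coverage',
                                 reportFiles: 'index.html',
                                 reportName: 'Coverage']
        }
    }
}
```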
True... But that's part of what I was alluding to when saying you frequently need to dig through the GUI-oriented plugins' source to find the names of the knobs to turn via the DSL.
github.com/june07/sublime-Jenkinsf...
Sublime Text is my go-to, so I wrote that plugin to leverage Jenkins' own declarative linter.
Uploading the Jenkinsfile and running the pipeline on each edit was definitely a broken workflow and took loads of time.