Sunday, August 7, 2016

Ubuntu Xenial64 on VirtualBox and Vagrant

There were a lot of strange problems with the ubuntu/xenial64 box, and in a related issue there is a comment by Seth Vargo (a HashiCorp employee):
The ubuntu/xenial64 box is built wrong and horribly broken. Please note that "ubuntu" is the name of a user, not a representation of a canonical source for ubuntu images. Please try bento/ubuntu-16.04 instead. Thanks.

The errors included the following:

rejecting i/o to offline device
This happened almost every time after heavier I/O operations, for example after loading Docker images.

stderr: Inappropriate ioctl for device
I think this happened when Vagrant tried to set up network interfaces, mainly "enp0s8".

So, just use bento/ubuntu-16.04.
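For reference, switching boxes is a one-line change; a minimal Vagrantfile using the bento box looks like this:

```ruby
# Minimal Vagrantfile using the bento box instead of the broken ubuntu/xenial64.
Vagrant.configure("2") do |config| = "bento/ubuntu-16.04"
end
```

After changing the box name, `vagrant destroy` followed by `vagrant up` recreates the machine from the new base box.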

Tuesday, July 5, 2016

Jenkins Workflow: Executing build step for every change in commit

At work, we wanted to send an email for every change made in a project. By default, Jenkins likes to collate changes into as few builds as possible and normally sends one email per build.

The solution seemed to be Jenkins Pipeline, which enables creating and executing jobs "on the fly" as needed.

The first problem was getting access to the ChangeLogSet. There are some preset variables in a Jenkinsfile, but I could not find documentation for them. After some googling, Stack Overflow came to the rescue.

def changes = currentBuild.rawBuild.changeSets

But when this was executed, Jenkins complained:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method getRawBuild

There is an "In-process Script Approval" tool in Jenkins where you can allow the usage of these methods.

After that was solved, the next problem was serialization. As the actual job execution is transferred to a different node, every non-serializable object caused an exception. To prevent this, I had to null the objects in the proper places. This in turn prevented running the jobs inside the loop, as the loop variables needed to be nulled before job execution. So I had to collect the jobs into a map and, after every job was defined, null everything and use the "parallel" step to execute the jobs.

So the whole thing is here:

// changes is a list of ChangeLogSet objects
def changes = currentBuild.rawBuild.changeSets
// We need to collect the jobs into branches for later execution,
// as otherwise there would be serialization exceptions
branches = [:]
for (int j = 0; j < changes.size(); j++) {
    def change = changes.get(j)
    for (int i = 0; i < change.getItems().size(); i++) {
        def entry = change.getItems()[i]
        def commitTitleWithCaseNumber = entry.getMsg()
        def commitMessage = entry.getComment()
        // Capture the leading digits (the case number) of the title
        def caseNumber = (commitTitleWithCaseNumber =~ /^[0-9]*/)
        // Check that a case number was actually found
        if (!caseNumber[0].isEmpty() && commitTitleWithCaseNumber.startsWith(caseNumber[0])) {
            // Remove the number from the title, just for a nicer subject line
            def commitTitle = commitTitleWithCaseNumber.substring(caseNumber[0].length()).trim()
            def number = caseNumber[0]
            branches["mail-${j}-${i}"] = {
                node {
                    emailext body: commitMessage, subject: "[Sysart ${number}] ${commitTitle}", to: ''
                }
            }
        }
        // Need to forcibly null all non-serializable objects
        caseNumber = null
        entry = null
    }
    change = null
}
changes = null
stage 'Mail'
parallel branches
This was a little more difficult than I had expected, mainly because of the serialization complications. But in the end it works, so it cannot be completely stupid.

Monday, June 27, 2016

docker: Error response from daemon: invalid bit range [4, 4]

I was fooling around with Docker, trying to create an overlay network. I copied some settings from the net, and when starting a container, Docker reported an error.

root@infra-front:~# docker network create -d overlay --subnet= --gateway= --ip-range= test


root@infra-front:~# docker run --rm -ti --net test alpine sh

docker: Error response from daemon: invalid bit range [4, 4].

It seems that my network settings were wrong. For now, I just removed the gateway and subnet options and things started to work.

Tuesday, June 14, 2016

Jaspersoft Studio 6.2.2 on Fedora 23: no swt-pi-gtk in java.library.path

When starting Jaspersoft Studio 6.2.2, the only thing I got was:

Jaspersoft Studio:
GTK+ Version Check
Jaspersoft Studio:
An error has occurred. See the log file

The log file had:
cannot open shared object file: No such file or directory
no swt-pi-gtk in java.library.path
/home/jyrki/.swt/lib/linux/x86/ cannot open shared object file: No such file or directory
Can't load library: /home/jyrki/.swt/lib/linux/x86/
The problem was fixed by installing gtk2.i686 (the 32-bit version):

sudo dnf install gtk2.i686

Using ldd (print shared library dependencies) helped to find out what was actually missing, as the error message is somewhat misleading (Can't load library: /home/jyrki/.swt/lib/linux/x86/).

ldd /home/jyrki/projects/jasper/
ldd: warning: you do not have execution permission for `/home/jyrki/projects/jasper/'
    (0xf7741000)
    => not found
    => /lib/ (0xf76af000)
    => /lib/ (0xf76a8000)
    => /lib/ (0xf74da000)
    => /lib/ (0xf74bd000)
    => /lib/ (0xf737b000)
    => /lib/ (0xf723a000)
    => /lib/ (0xf7226000)
    => /lib/ (0xf7214000)
    /lib/ (0x5660d000)
    => /lib/ (0xf71ed000)
    => /lib/ (0xf71e8000)
    => /lib/ (0xf71e4000)

Tuesday, April 19, 2016

Using Keycloak APIs: "RESTEASY004655: Unable to invoke request"

The following exception was thrown while executing multiple calls to the Keycloak API.

Caused by: RESTEASY004655: Unable to invoke request
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(
at com.sun.proxy.$Proxy276.findAll(Unknown Source)
at org.keycloak.admin.client.resource.ClientsResource$ Source)
Caused by: java.lang.IllegalStateException: Invalid use of BasicClientConnManager: connection still allocated.
Make sure to release the connection before allocating another one.
at org.apache.http.util.Asserts.check(
at org.apache.http.impl.conn.BasicClientConnectionManager.getConnection(
at org.apache.http.impl.conn.BasicClientConnectionManager$1.getConnection(
at org.apache.http.impl.client.DefaultRequestDirector.execute(
at org.apache.http.impl.client.AbstractHttpClient.doExecute(
at org.apache.http.impl.client.CloseableHttpClient.execute(
at org.apache.http.impl.client.CloseableHttpClient.execute(
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(

I was calling keycloak.realm(realm).clients().create(representation) and did not read anything from the response. The create() call returns a, and the underlying HTTP connection is not released until that response is consumed or closed. The simple fix was:

            def response = keycloak.realm(realm).clients().create(representation)
            response.close()

Saturday, March 26, 2016

Problem with Kubernetes SkyDNS healthz

I had some problems when trying to get DNS working on Kubernetes. I was following the setup instructions, and everything seemed to be working well, but the pod got restarted after 30 seconds. The log for the healthz container had the following entries:

2016/03/19 04:25:25 Client ip requesting /healthz probe servicing cmd sleep 10 && nslookup kubernetes.default.svc.kube.local localhost >/dev/null
2016/03/19 04:25:25 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.kube.local': Name does not resolve, at 2016-03-19 04:25:23.967737423 +0000 UTC, error exit status 1
After trying a lot of things, I found a bug report for Alpine Linux. Basically, nslookup does not respect the server parameter if /etc/resolv.conf has entries. A comment on that issue recommends using dig or drill for querying instead.

So I made a simple image and pushed it to Docker Hub. Nothing fancy, I just added drill. I used the existing image as a base, as I wanted to keep exechealthz available.
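The Dockerfile for the image amounts to a single RUN line on top of the base. The FROM line below is a placeholder for the original exechealthz image; the only assumption is that the base is Alpine-based, so apk is available:

```dockerfile
# The base image name is a placeholder for the original exechealthz image.
FROM exechealthz-base
RUN apk add --no-cache drill
```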

Then I had to change the healthz command to:
drill -q kubernetes.default.svc.kube.local @localhost

Friday, March 18, 2016

Kubernetes 1.2.0 beta-1 not starting on Raspbian 8.0

While trying to start Kubernetes v1.2.0 on Raspbian 8.0, I ran into problems. Only the k8s-master and k8s-master-proxy containers were started, so the system was not coming up properly. The logs for k8s-master said the following:

7215 kubelet.go:2365] skipping pod synchronization - [Failed to start ContainerManager system validation failed - Following Cgroup subsystem not mounted: [memory]]
The cgroup memory subsystem is not enabled by default on Raspbian. You can enable it by adding the memory cgroup kernel parameter to /boot/cmdline.txt. A reboot is needed after this.

You can check whether the memory subsystem is enabled by listing /sys/fs/cgroup/, which should then have a directory called "memory" among others:
blkio  cpu  cpuacct  cpu,cpuacct  cpuset  devices  freezer  memory  net_cls  systemd