Jenkins does not speak HTTPS out of the box. It’s a mystery why it doesn’t. So, in order to run it over HTTPS, you need a reverse proxy in front of it to add the “S” to HTTP.
I spent some time looking for ways to set up HTTPS for Jenkins, and the answer was negative. 🙁
Since you don’t want to expose plain HTTP over the network, make sure Jenkins only answers on localhost. That also means nginx must run on the same host, or there is no point to this exercise.
First, Jenkins is listening on jenkins_host:9000 and I want HTTPS to run on port 8000. (I just realized the port number choices are kind of weird.)
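Binding Jenkins to localhost is a Jenkins-side setting. Here is a minimal sketch for the Debian/Ubuntu package, assuming the options live in /etc/default/jenkins (keep whatever your existing JENKINS_ARGS already has and just add the two flags):
# /etc/default/jenkins -- keep the rest of your existing JENKINS_ARGS
JENKINS_ARGS="--httpListenAddress=127.0.0.1 --httpPort=9000"
Then “sudo systemctl restart jenkins” and check that http://localhost:9000 still answers while the LAN address does not.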
Install nginx
This is the easy part: “sudo apt install -y nginx”
Configure nginx
This part is a little harder, but here is my current config file.
upstream jenkins_host {
server localhost:9000 fail_timeout=0; # jenkins_host ip and port
}
server {
listen 8000 ssl; # Listen on port 8000 for IPv4 requests with ssl
server_name jenkins_host.cleanwinner.com;
ssl_certificate /etc/ssl/cleanwinner/jenkins_host-nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/cleanwinner/jenkins_host-nginx-selfsigned.key;
access_log /var/log/nginx/jenkins/access.log;
error_log /var/log/nginx/jenkins/error.log;
location ^~ /jenkins {
proxy_pass http://localhost:9000;
proxy_read_timeout 30;
# Fix the "It appears that your reverse proxy set up is broken" error.
proxy_redirect http://localhost:9000 $scheme://jenkins_host:8000;
}
location / {
# Don't send any file out
sendfile off;
#
proxy_pass http://jenkins_host;
proxy_redirect http:// https://;
# Required for new HTTP-based CLI
proxy_http_version 1.1;
# Don't want any buffering
proxy_request_buffering off;
proxy_buffering off; # Required for HTTP-based CLI to work over SSL
#this is the maximum upload size
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
add_header 'X-SSH-Endpoint' 'jenkins_host.cleanwinner.com:50022' always;
}
}
So save this file as /etc/nginx/sites-available/jenkins. You need a symlink from /etc/nginx/sites-enabled to this file for the setting to take effect; “sudo ln -s ../sites-available/jenkins” run inside /etc/nginx/sites-enabled does the job.
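Spelled out, with a config check and a reload so the change actually takes effect:
$ cd /etc/nginx/sites-enabled
$ sudo ln -s ../sites-available/jenkins .
$ sudo nginx -t
$ sudo systemctl reload nginx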
Cert files
As you can see, for SSL you need an SSL certificate. You can create a self-signed one, or get a real one. For this exercise it’s not really relevant, so I’ll leave it to you. I’ll talk about making one with pfSense later. Stay tuned.
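That said, if you just want a quick self-signed certificate to get going, this openssl one-liner produces the two files referenced in the config above; the one-year lifetime and the subject are arbitrary choices on my part:
$ sudo mkdir -p /etc/ssl/cleanwinner
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/cleanwinner/jenkins_host-nginx-selfsigned.key \
    -out /etc/ssl/cleanwinner/jenkins_host-nginx-selfsigned.crt \
    -subj "/CN=jenkins_host.cleanwinner.com"
Browsers will complain about it, of course, but it’s enough to verify the proxy works.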
On Ubuntu, if you are using the Jenkins package, you can change the session timeout in /etc/default/jenkins.
JENKINS_ARGS=" BLA BLA -- --sessionEviction=604800"
I tried --sessionTimeout and it did not work.
Where BLA BLA is the existing args and --sessionEviction=604800 is the new session timeout. The default is 30 minutes and I was timing out a lot while testing a Jenkinsfile. Unlike sessionTimeout, sessionEviction’s unit is seconds, not minutes. 604800 is 60*60*24*7, so the timeout is a week.
I decided to redo my React project with TypeScript. It throws an error event and doesn’t run at all.
Starting the development server...
events.js:183
throw er; // Unhandled 'error' event
^
Error: watch /home/ntai/sand/triageui/public ENOSPC
at _errnoException (util.js:1022:11)
at FSWatcher.start (fs.js:1382:19)
at Object.fs.watch (fs.js:1408:11)
at createFsWatchInstance (/home/ntai/sand/triageui/node_modules/chokidar/lib/nodefs-handler.js:38:15)
at setFsWatchListener (/home/ntai/sand/triageui/node_modules/chokidar/lib/nodefs-handler.js:81:15)
at FSWatcher.NodeFsHandler._watchWithNodeFs (/home/ntai/sand/triageui/node_modules/chokidar/lib/nodefs-handler.js:233:14)
at FSWatcher.NodeFsHandler._handleDir (/home/ntai/sand/triageui/node_modules/chokidar/lib/nodefs-handler.js:429:19)
at FSWatcher.<anonymous> (/home/ntai/sand/triageui/node_modules/chokidar/lib/nodefs-handler.js:477:19)
at FSWatcher.<anonymous> (/home/ntai/sand/triageui/node_modules/chokidar/lib/nodefs-handler.js:482:16)
at FSReqWrap.oncomplete (fs.js:153:5)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Process finished with exit code 1
After a few minutes of Googling, it turns out the file system watch resources (inotify watches) are running out. You need to increase fs.inotify.max_user_watches.
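The usual fix is to bump the limit and make it stick across reboots; 524288 is just a commonly used large value, nothing magical:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p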
Sadly, one of the hard disks in my ESXi server died. I put in a replacement disk, but one of the VMs won’t start because its disk is missing. So I need to delete the old disk node, create a new one, and set it up in the VM.
Before installing the disk, take note of its serial number (S/N). This comes in handy for identifying the disk later in the process. Moreover, you should label the S/N near the SATA port to make life easier.
Log in to ESXi via SSH as root. Then, check the disks.
# ls -l /vmfs/devices/disks
You’ll see a bunch of them, but you should be able to identify the disk you just put in if you know the S/N. Since I’m stubborn and want to keep using the same “hitachi_2tb_2.vmdk”, I first deleted the old vmdk. You actually need to delete two files.
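Recreating the vmdk with the same name looks roughly like this if the disk is handed to the VM as a raw device mapping (an assumption on my part; adjust if the old vmdk was a normal virtual disk). The device name and datastore path are placeholders, use the entry you identified in the ls output above:
# vmkfstools -z /vmfs/devices/disks/t10.ATA_____NEW_DISK_SERIAL /vmfs/volumes/datastore1/xigmanas/hitachi_2tb_2.vmdk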
Now it’s time to go to the ESXi web interface and attach the new disk. First, delete the dead disk from the VM’s settings. Then “Add New Disk”, pick “Existing disk”, and choose the VMDK file you just created.
You are done with the VM settings. Now, go into XigmaNAS.
First, “Disks” > “Management” > “HDD Management”. If the new device is not showing up, use “Import Disks” [Import].
Second, format the disk you just put in: “Disks” > “Management” > “HDD Format”. Pick “ZFS Storage Pool”, choose the disk (the serial number shows up here as well), click “Next”, and format. (Pretty quick.)
Third, “Disks” > “ZFS” > “Pools” > “Tools”. The ZFS pool knows that the disk has changed. Choose “Replace a device” to replace the dead disk with the new one. Once you are done, it should start resilvering the mirror.
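If you’d rather watch the resilver progress from a shell than from the GUI, zpool status shows it; “tank” below is just a stand-in for your actual pool name:
# zpool status tank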
Quick note: for SSH’s X forwarding to work on XigmaNAS, you need to enable it explicitly. The FreeBSD docs say the default is yes, but that’s not true for XigmaNAS. It didn’t work until I added “X11Forwarding yes” to the SSH service’s additional parameters. Cheers!
Here are the steps to run a private git server on XigmaNAS.
Install git package
Create “git” account
Set up git directory
Set up the ssh public key auth for easy login
I know I can do this from the XigmaNAS web GUI’s command page, but it’s too tedious, so please use the terminal of your choice. You’ll most likely also need a text editor. MYVOLUME should be your data store of choice.
# pkg install -y git
# GITHOME=/mnt/MYVOLUME/git
# mkdir -p $GITHOME/projects
# mkdir -p $GITHOME/.ssh
# cd $GITHOME/.ssh
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): mygit_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in mygit_rsa.
Your public key has been saved in mygit_rsa.pub.
The key fingerprint is:
# cat mygit_rsa.pub >> authorized_keys
# chmod 600 authorized_keys
Now, you need to create the “git” account. From the XigmaNAS UI, Access > Users & Groups: first go to Groups and add a “git” group. The GID can be anything, so I picked a random number, 3178. Then, create the “git” user; set its home directory to the $GITHOME directory from above, since that’s where authorized_keys lives.
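One thing worth double-checking at this point, since sshd is picky about permissions (StrictModes): the git home and the authorized_keys we created earlier as root should end up owned by the git user, with .ssh locked down:
# chown -R git:git $GITHOME
# chmod 700 $GITHOME/.ssh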
So, the “git-shell”. Unfortunately, the shell selection is not picked up from /etc/shells. I sniffed around and you need to hack a PHP file for it to show up. You need to be root to edit the PHP file. The file is /usr/local/www/access_users_edit.php, so become root and open it with a text editor. Look for $l_shell and add a line for git-shell. Snippet and diff follow. I use ksh a lot, so I added ksh as an option as well.
UPDATE: On XigmaNAS 12, /etc/inc/system/access/user/grid_properties.php contains the list of shells.
That’s it for the git server. The remaining things are to add the “git” group to the users on the server so they can create new repos under projects, and to hand out the private key mygit_rsa to users, or add their public keys to the “git” user’s authorized_keys.
Example: let’s say I want to have a “config.git” on the server. This repo stores all of my Linux machines’ configuration files, so when I have to set up a new machine, I can see how I set up my account in the past. Since I don’t know how to create a fresh repo from the client side, I will create it on the XigmaNAS. Here are the steps:
SSH-login to the NAS. Since the “git” account is not a shell account, you have to do this as root, unfortunately.
Create a repo directory “mkdir $GITHOME/config.git“
Still as root, cd $GITHOME/config.git && git init --bare
chown -R git:git $GITHOME/config.git
From the client side, now that the repo is ready and if you set up the SSH keys right, you can do: git clone ssh://git@nas/~/config.git
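If you already have a local directory you want to push into the new bare repo, the first push is the usual routine; the local path, the host name nas, and the branch name are whatever yours happen to be:
$ cd ~/config
$ git init
$ git add .
$ git commit -m "initial import"
$ git remote add origin ssh://git@nas/~/config.git
$ git push -u origin master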