
Reverse proxy using squid + Redirection

Squid – Reverse Proxy

In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as though it originated from the reverse proxy itself. While a forward proxy is usually situated between the client application (such as a web browser) and the server(s) hosting the desired resources, a reverse proxy is usually situated closer to the server(s) and will only return a configured set of resources.

See: http://en.wikipedia.org/wiki/Reverse_proxy

Configuration

Squid should already be installed, if not then install it:

yum install squid

Then we edit squid config:


vim /etc/squid/squid.conf

And we add the following to the top of the file:

http_port 80 vhost
https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

cache_effective_user squid
cache_effective_group squid

cache_peer 1.2.3.4 parent 80 0 no-query originserver login=PASS name=site1-http
cache_peer 1.2.3.5 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=site2-ssl
cache_peer_domain site1-http site1.example.lan
cache_peer_domain site2-ssl site2.anotherexample.lan

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select
http_access deny bad_requests

Now I’ll walk us through the above configuration.

http_port 80 vhost
https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

This sets the http and https ports squid is listening on. Note the cert options for https: we can have squid terminate https at the proxy and use an unencrypted link for the last hop if we want, which is handy if for some reason the origin server doesn’t support https.
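If you don’t already have a cert for the https_port line, a self-signed pair can be generated with openssl. This is just a sketch: the subject and the 365-day lifetime are example choices, and the filenames match the config above (copy them into /etc/squid/ afterwards):

```shell
# Generate a self-signed cert/key pair for squid's https_port.
# CN and lifetime are example values; adjust to taste.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=localhost' \
    -keyout localhost.key -out localhost.crt
# then: cp localhost.crt localhost.key /etc/squid/
```

Being self-signed, this is exactly the case where ‘sslflags=DONT_VERIFY_PEER’ (covered below) matters on the peer side.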


cache_effective_user squid
cache_effective_group squid

Set the effective user and group for squid. This may not be required, but it doesn’t hurt.


cache_peer 1.2.3.4 parent 80 0 no-query originserver name=site1-http
cache_peer 1.2.3.5 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=site2-ssl
cache_peer_domain site1-http site1.example.lan
cache_peer_domain site2-ssl site2.anotherexample.lan

This is the magic. The first two lines tell squid which peers to reverse proxy for and what ports to use. Note that if you use ssl, the ‘sslflags=DONT_VERIFY_PEER’ option is useful; otherwise, if you’re using a self-signed cert, you’ll get certificate errors.

IMPORTANT: If you want to allow http authentication (auth handled by the web server, such as htaccess) then you need to add ‘login=PASS’, otherwise clients will end up authenticating to squid rather than to the http server.

The last two lines reference the first two and tell squid which domains to listen for, so when someone connects to squid asking for one of those domains it knows where to go/cache.


acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select
http_access deny bad_requests

NOTE: The acl directive must all be on one line; if it appears wrapped above, that is just display. There should be just the acl line and the http_access line.

These lines set up some bad request patterns that we deny access to; this helps prevent SQL injection, other hack attempts, etc.

That’s it. After a (re)start of squid it will be reverse proxying the domains.

Redirect to SSL

We had a requirement to automatically redirect to https if someone came in on http. Squid allows redirecting in a variety of ways; you can write a redirect script and get squid to use it, but there is a simpler way, using only squid internals and acls.

Add the following to the entries added in the last section:


acl port80 myport 80
acl site1 dstdomain site1.example.lan
http_access deny port80 site1
deny_info https://site1.example.lan/ site1

acl site2 dstdomain site2.anotherexample.lan
http_access deny port80 site2
deny_info https://site2.anotherexample.lan/ site2

We create an acl for squid’s port 80 and then one for the domain we want to redirect. We then use “http_access deny” to make squid deny access to that domain when it comes in on port 80 (http). The denial is caught by the deny_info, which redirects the client to https.

The order of the acls in the http_access and deny_info lines is important. Squid only remembers the last acl used by an http_access command and will look for a corresponding deny_info matched to that acl. So make sure the last acl matches the acl used in the deny_info statement!

NOTE: See http://www.squid-cache.org/Doc/config/deny_info/

Appendix

The following is the configuration all put together now.

Reverse proxy + redirection:

http_port 80 vhost
https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

cache_effective_user squid
cache_effective_group squid

cache_peer 1.2.3.4 parent 80 0 no-query originserver login=PASS name=site1-http
cache_peer 1.2.3.5 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=site2-ssl
cache_peer_domain site1-http site1.example.lan
cache_peer_domain site2-ssl site2.anotherexample.lan

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select
http_access deny bad_requests

acl port80 myport 80
acl site1 dstdomain site1.example.lan
http_access deny port80 site1
deny_info https://site1.example.lan/ site1

acl site2 dstdomain site2.anotherexample.lan
http_access deny port80 site2
deny_info https://site2.anotherexample.lan/ site2

Postfix – Making sense of delays in mail

The maillog

The maillog is easy enough to follow, but once you understand what all the delay and delays numbers mean, it can really help you understand what is going on!
A standard email entry in postfix looks like:

Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent

Pretty straightforward: date, email identifier in the mailq (34A1B160852B), recipient, and which server the email is being sent to (relay). It is the delay and delays values I’d like to talk about.

Delay and Delays
If we take a look at the example email from above:

Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent

The delay parameter (delay=0.49) is fairly self explanatory, it is the total amount of time this email (34A1B160852B) has been on this server. But what is the delays parameter all about?

delays=0.2/0/0.04/0.25

NOTE: Numbers smaller than 0.01 seconds are truncated to 0, to reduce the noise level in the logfile.

You might have guessed it is a breakdown of the total delay, but what does each number represent?

Well from the release notes we get:

delays=a/b/c/d:
a=time before queue manager, including message transmission;
b=time in queue manager;
c=connection setup time including DNS, HELO and TLS;
d=message transmission time.

Therefore, looking at our example:

  • a (0.2): The time before getting to the queue manager, so the time it took to be transmitted onto the mail server and into postfix.
  • b (0): The time in the queue manager; this email didn’t hit the queues, so it was sent straight away.
  • c (0.04): The time it took to set up a connection with the destination mail relay.
  • d (0.25): The time it took to transmit the email to the destination mail relay.
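If you want to pull these numbers out of a maillog yourself, the delays field can be split with standard shell tools. A small sketch, run here against the example log entry from above:

```shell
# Extract the delays field from a postfix log line and label each
# component (a/b/c/d as described above).
line='Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent'
delays=$(echo "$line" | grep -o 'delays=[^,]*' | cut -d= -f2)
IFS=/ read -r a b c d <<< "$delays"
echo "before qmgr: $a; in qmgr: $b; conn setup: $c; transmission: $d"
```

On a real system you would feed lines from /var/log/maillog through the same pipeline rather than a hard-coded string.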

However if the email is deferred, then when the email is attempted to be sent again:

Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=82, delays=0.25/0/0.5/81, dsn=4.4.2, status=deferred (lost connection with mx1.example.lan[1.2.3.4] while sending end of data -- message may be sent more than once)

Jan 10 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=1092, delays=1091/0.2/0.8/0.25, dsn=2.0.0, status=sent

This time the first entry shows how long it took for the destination mail relay to time out and close the connection:

delays=0.25/0/0.5/81
Therefore: 81 seconds.

The email was deferred, then about 15 minutes later (1009 seconds: the new ‘a’ delay of 1091 minus the 82 seconds of total delay from the last attempt) another attempt is made.
This time the delay is a lot larger, as the total time this email has spent on the server is a lot longer.

delay=1092, delays=1091/0.2/0.8/0.25

What is interesting though is the value of ‘a’ is now 1091, which means when an email is resent the ‘a’ value in the breakdown also includes the amount of time the email has already spent on the system (before this attempt).

So there you go: those delays values are rather interesting and can really help you work out where bottlenecks lie on your system. In the above case we obviously had some problem communicating with the destination mail relay, but it worked the second time, so it isn’t a problem with our system… or so I’d like to think.

Use xmllint and vim to format xml documents

If you want vim to nicely format an XML file (and a xena file in this example, see the 2nd line) then add this to your ~/.vimrc file:
" Format *.xml and *.xena files by sending them to xmllint
au FileType xml exe ":silent 1,$!xmllint --format --recover - 2>/dev/null"
au FileType xena exe ":silent 1,$!xmllint --format --recover - 2>/dev/null"

This uses the xmllint command to format the XML; useful on XML docs that aren’t formatted nicely in the file.
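You can of course run xmllint directly from the command line too. Here’s a quick sketch, where sample.xml is an unformatted file created just for illustration:

```shell
# Create a one-line XML file and pretty-print it with xmllint.
printf '<root><a>1</a><b>2</b></root>' > sample.xml
xmllint --format --recover sample.xml 2>/dev/null
```

This prints the document re-indented, one element per line, which is exactly what the vim autocommands above do in-place on the buffer.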

Debian 6 GNU/KFreeBSD Grub problems on VirtualBox

Debian 6 was released the other day, with this release they not only released a Linux kernel version but they now support a FreeBSD version as well!
So I decided to install it under VirtualBox and check it out…

The install process went smoothly until I got to the end when it was installing and setting up grub2. It installed ok on the MBR but got an error in the installer while trying to set it up. I jumped into the console to take a look around.

I started off by running the update-grub command, which fails silently (checking $? shows a return code of 1). On closer inspection I noticed the command created an incomplete grub config named /boot/grub/grub.cfg.new

So all we need to do is finish off this config file. Jump back into the installer and select continue without boot loader; this will pop up a message about what you must set the root partition to when you do set up a boot loader, so take note of it. Mine was /dev/ad0s5.

OK, with that info we can finish off our config file. First, let’s copy the incomplete one into place:
cp /boot/grub/grub.cfg.new /boot/grub/grub.cfg

Now my /boot/grub/grub.cfg ended like:
### BEGIN /etc/grub.d/10_kfreebsd ###
menuentry 'Debian GNU/kFreeBSD, with kFreeBSD 8.1-1-amd64' --class debian --class gnu-kfreebsd --class gnu --class os {
insmod part_msdos
insmod ext2


set root='(hd0,1)'
search --no-floppy --fs-uuid --set dac05f8a-2746-4feb-a29d-31baea1ce751
echo 'Loading kernel of FreeBSD 8.1-1-amd64 ...'
kfreebsd /kfreebsd-8.1-1-amd64.gz

So I needed to add the following to finish it off (note that I’ll repeat the last part from above):
### BEGIN /etc/grub.d/10_kfreebsd ###
menuentry 'Debian GNU/kFreeBSD, with kFreeBSD 8.1-1-amd64' --class debian --class gnu-kfreebsd --class gnu --class os {
insmod part_msdos
insmod ext2
insmod ufs2


set root='(hd0,1)'
search --no-floppy --fs-uuid --set dac05f8a-2746-4feb-a29d-31baea1ce751
echo 'Loading kernel of FreeBSD 8.1-1-amd64 ...'
kfreebsd /kfreebsd-8.1-1-amd64.gz
set kFreeBSD.vfs.root.mountfrom=ufs:/dev/ad0s5
set kFreeBSD.vfs.root.mountfrom.options=rw
}

Note: My root filesytem was UFS, thus the ‘ufs:/dev/ad0s5’ in the mountfrom option.

That’s it, your Debian GNU/kFreeBSD should now boot successfully 🙂

Fedora preupgrade from local mirror

If you have a local mirror and want to use it as the mirror for preupgrade, then follow the normal steps EXCEPT do the following BEFORE you run the preupgrade(-cli) command:

  1. Download the releases.txt file used:
    wget http://mirrors.fedoraproject.org/releases.txt
  2. Modify the releases.txt file, I changed the Fedora 14 (what I’m upgrading to) options to:
    [Fedora 14 (Laughlin)]
    stable=True
    preupgrade-ok=True
    version=14
    baseurl=http://localmirror/fedora/linux/releases/14/Fedora/$basearch/os/
    installurl=http://localmirror/fedora/linux/releases/14/Fedora/$basearch/os/
    #mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-14&arch=$basearch
    #installmirrorlist=

    Note: I commented out the ‘mirrorlist’ and ‘installmirrorlist’ options and added the ‘baseurl’ and ‘installurl’ options.
  3. Finally run the preupgrade command from this directory, as one of the locations it looks for the releases.txt file is ./ (current directory).
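Step 2 can be scripted with sed. The sketch below works on a minimal stand-in for releases.txt created on the spot (the real file comes from the wget in step 1, and ‘localmirror’ is a placeholder hostname):

```shell
# Build a minimal sample releases.txt, comment out the mirrorlist options,
# and append baseurl/installurl entries pointing at the local mirror.
cat > releases.txt <<'EOF'
[Fedora 14 (Laughlin)]
stable=True
preupgrade-ok=True
version=14
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-14&arch=$basearch
installmirrorlist=
EOF
sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e 's|^installmirrorlist=|#installmirrorlist=|' releases.txt
{
  printf 'baseurl=http://localmirror/fedora/linux/releases/14/Fedora/$basearch/os/\n'
  printf 'installurl=http://localmirror/fedora/linux/releases/14/Fedora/$basearch/os/\n'
} >> releases.txt
```

The $basearch variables are written literally; preupgrade expands them itself.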

For more places you can put the releases.txt see here or see the same info at the end of this post.

Happy upgrading.

preupgrade - tool to help you update a fedora system from one distro to the
next. Pre-resolves dependencies and sets up the system to be
upgraded via anaconda
License: GPLv2 or above
URL: https://fedorahosted.org/preupgrade/

= NOTES =

== Cleanup ==
preupgrade modifies data in ~3 places:
/var/cache/yum/preupgrade*
/boot/upgrade
/etc/grub.conf

If you want to clean up manually, you can do:
preupgrade --clean
Or, if you really want to be sure, do it by hand:
grubby --remove-kernel=/boot/upgrade/vmlinuz
rm -rf /var/cache/yum/preupgrade* /boot/upgrade

== Remote Headless Upgrades ==
Use preupgrade-cli --vnc=VNCPASSWORD.
See preupgrade-cli --help for more info.
The upgrade will start a VNC server on port 5901, requiring the given password.
The upgrade will proceed whether you connect the VNC client or not.

--> IMPORTANT NOTE ABOUT VNC INSTALLS <--
If something goes wrong or the installer needs more info, it will hang forever, waiting for you to tell it what to do. So you should probably connect a VNC client and monitor its progress.

== Adding Custom Distributions ==
preupgrade searches the following locations for release data, in order:
./releases.txt
./data/releases.txt
http://mirrors.fedoraproject.org/releases.txt

If you want to add your own distribution to preupgrade: download releases.txt, edit it to your liking, then run preupgrade from that dir - or save it to ~/releases.txt to make it work when preupgrade is run normally.

Please note that /usr/share/preupgrade/releases.list is ignored and is only being shipped for compatibility reasons. Use ~/releases.txt for customization.

Bind the bash history methods

Bash is an awesome shell and also very configurable. Some of bash’s built-in functions are not bound to keys, some of which REALLY should be!

Bash has two very useful functions used for searching your bash history:

  • history-search-backward
  • history-search-forward

So the question is how do we bind them. I bound them to Control-<up> and Control-<down>. To do this add the following lines to your ~/.inputrc file:
"\e[1;5A": history-search-backward
"\e[1;5B": history-search-forward

Note: To test it out you can use the bind command in your current shell (note the binding must be quoted as one argument):
bind '"\e[1;5A": history-search-backward'

If you want to use a different key combination then you can use the ‘read’ command to print it out; for example, run the command read, then hit control-<up>:

matt@wks1005847 ~ $ read
^[[1;5A

When we bind the key we replace the ‘^[‘ with ‘\e’, as ‘^[‘ is how the terminal displays the <Escape> character.

Git remote ssh syntax

Git makes it easy to add remote repositories to push to, best part is you can use ssh.

The ‘git remote add’ command takes a URL-like parameter, even for SSH:
ssh://<user>@<host>/<path to git repo>/
NOT the standard ssh/scp syntax:
<user>@<host>:<path to repo>/
which allows you to base the location from the user’s home directory or specify a full path:
matt@notrealhost.com:<repo in home directory>/
matt@notrealhost.com:/home/matt/<repo in home directory>/

NOTE: With scp syntax you just put a full path after the ‘:’, or, if you want something from the home directory, you leave off the starting ‘/’ and the path is assumed to be relative to the homedir.

OK, so git doesn’t seem to like that syntax, which is a shame because those of us who use ssh are SO used to it.
The good news is you can still specify a path relative to the home directory with the git URL-style syntax, and it looks like:
ssh://matt@notrealhost/~/<repo in home directory>/

Those Linux/Unix folk will recognise the ‘~’ as a shortcut to home in bash; it means the same thing here!

So putting it all together I can add a remote to a git repository which exists in my home directory on that server by:
git remote add notrealhost ssh://matt@notrealhost.com/~/code/myRepository/
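A quick way to sanity-check the syntax is a scratch repository; the host and path below are the same made-up examples from above:

```shell
# Add the ssh:// remote in a throwaway repo and read the URL back.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git remote add notrealhost ssh://matt@notrealhost.com/~/code/myRepository/
git remote get-url notrealhost
```

No connection is attempted until you push or fetch, so this is safe to run against a non-existent host.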

This post was written because I keep trying to use the scp syntax when dealing with ssh + git, so it’s being filed away here for my own reference.

Git clean

Sometimes I find myself saying WTF, why isn’t something behaving the way I expect it to, and then getting frustrated. “This is not how I’d have designed it, if I was writing it”; I guess you can call this the mantra of the OSS developer 😛

But as usual, when you blame something like git or Linux, it just means you’re doing something wrong or you don’t have a complete understanding of the situation. A lesson I have learnt time and time again; you’d think I’d learn, but I don’t.

Here’s what happened: we use git at work. Git has some very useful commands.
To return ALL repo tracked files to the state they were in at last checkout:
git reset --hard
To remove all untracked files:
git clean -df

Usually running these two allows you to go back to the point you were at at the last checkout, removing all compiled files, logs, etc. This is extremely useful for testing.
Also, as with most revision control systems, git allows you to create ignore files (.gitignore), so you can tell git not to try to add certain files or folders to the repository.

OK, so most people who use git would be saying “yeah, of course”. Well, some of my work colleagues noticed that ‘git clean’ wasn’t actually cleaning all untracked files. It was ignoring the compiled .class files and a heap of other stuff. This seemed weird: we could go to the root directory of the repository, create a file, and ‘git clean’ would remove it; put it a few subdirectories down and, nope, it wouldn’t be removed.
In fact running a dry run would return nothing… so why wasn’t git removing these untracked files?

Well it turns out, and if you haven’t guessed from the fact I mentioned ignore files in the lead-up, git is smarter than we gave it credit for. We have ignore files, so what does git do? …it ignores them!
It turns out the ignore files don’t just stop git from adding (or telling us about hundreds of untracked files) certain files to the repo; as the name suggests, they are also honoured by other git commands. This behaviour actually makes sense: if you wanted to keep some notes with your code but not delete them during a ‘git clean’, then just add your notes directory to a .gitignore file.

Turns out ‘git clean’ has another switch, created to solve the “problem” we were having:

  • -x: Don’t use the ignore rules. This allows removing all untracked files, including build products. This can be used (possibly in conjunction with git reset) to create a pristine working directory to test a clean build.
  • -X: Remove only files ignored by git. This may be useful to rebuild everything from scratch, but keep manually created files.

So all we needed to do is run:
git clean -dfx

NOTE: git clean cleans from the current directory, so if you want to clean the entire repo then make sure you’re in its root folder.
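The behaviour is easy to demonstrate in a scratch repository; a quick sketch, where the .class pattern stands in for our build products:

```shell
# An ignored file survives 'git clean -df' but not 'git clean -dfx'.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo '*.class' > .gitignore
git add .gitignore
git -c user.email=you@example.com -c user.name=you commit -qm 'add ignore file'
touch Foo.class untracked.txt
git clean -df                                  # removes untracked.txt only
[ -f Foo.class ] && echo 'Foo.class survived -df'
git clean -dfx                                 # now the ignored file goes too
[ -e Foo.class ] || echo 'Foo.class removed by -dfx'
```

Running the dry-run variants (git clean -dn and git clean -dnx) first is a good habit before deleting anything for real.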

This is actually an awesome feature, so yup, lesson learned again. It wasn’t a problem with git, it was a problem with my understanding! Maybe this time I’ll remember 😛

Fedora + akmod = WIN!

Fedora is an awesome Linux distro.  Chris just informed me of an awesome feature so I thought I’d blog for everyone else’s benefit and also my own 😉

When installing a kernel module through yum, instead of installing the kmod-<module> package install the akmod-<module> version e.g:
sudo yum install kmod-nvidia

Becomes
sudo yum install akmod-nvidia

WHY? I’m glad you asked.
Akmod packages check whether the kernel has changed and, if so, recompile the module to match it, so you always have a module that works.

Teh awesome? I think yes!

Fedora 13 upgrade woes, another problem with nvidia.

I just upgraded to Fedora 13; all went smoothly… or so I thought. When it finally became time to log in to my new system I got an error saying ksmserver wouldn’t start, and my session would then close and throw me back to the login screen.

Turning to a console, I decided to run ‘ksmserver’ and see what errors I got; sure enough it failed. There was a mismatch between the versions of libGL.so and libGLCore.so.
The libGL.so being used was provided by the “mesa-libGL” package, the other by the “kmod-nvidia” package. I thought this was odd, should it be using the mesa libGL? I dunno; what I do know is the nvidia package does install nvidia’s own libGL.so library… maybe the package was supposed to set up the links? Maybe the mesa one is supposed to be compatible?

Anyway this is how the linking looked:

# ls -l /usr/lib64/libGL.so*
lrwxrwxrwx. 1 root root 10 May 31 16:06 /usr/lib64/libGL.so -> libGL.so.1
lrwxrwxrwx. 1 root root 15 Jun 1 10:43 /usr/lib64/libGL.so.1 -> libGL.so.190.42
-rwxr-xr-x. 1 root root 439952 May 1 10:38 /usr/lib64/libGL.so.1.2
-rwxr-xr-x. 1 root root 928808 Dec 11 09:43 /usr/lib64/libGL.so.190.42

So I linked ‘libGL.so.1’ to the nvidia one:

cd /usr/lib64
unlink libGL.so.1
ln -sf /usr/lib64/nvidia/libGL.so.1 libGL.so.1

So it looks like:

# ls -l /usr/lib64/libGL.so*
lrwxrwxrwx. 1 root root 10 May 31 16:06 /usr/lib64/libGL.so -> libGL.so.1
lrwxrwxrwx. 1 root root 28 Jun 1 11:00 /usr/lib64/libGL.so.1 -> /usr/lib64/nvidia/libGL.so.1
-rwxr-xr-x. 1 root root 439952 May 1 10:38 /usr/lib64/libGL.so.1.2
-rwxr-xr-x. 1 root root 928808 Dec 11 09:43 /usr/lib64/libGL.so.190.42

I restarted X (just to be on the safe side) and logged in… problem solved!

NOTE: To restart X under Fedora you just kill the kdm or gdm process, as X is spawned as part of inittab (pkill kdm), unlike a Debian-based system in which it’s an init script (/etc/init.d/kdm restart).

It’s a bit of a hack, and hopefully it will be fixed properly, but here at least is a solution that works!