
chexum's Journal
 

Below are the 6 most recent journal entries recorded in chexum's LiveJournal:

    Tuesday, March 27th, 2007
    12:11 am
    nginx -- the perfect frontend proxy

    My curiosity grew stronger and made me check out nginx. The fact that I already have an Apache running with its slowly-grown configuration made this a real fun task (in the nerd sense), as introducing nginx as an additional component into the system did not look like an obvious improvement.

    After a few days, it seems it is. Apache is still used (mainly because of the Zend Optimizer, see later), but the box looks much healthier now. There are still real spikes in the load, but the CPUs are at most half as loaded as before (the machine runs at between half and a tenth of its previous load average). One of the reasons is obviously that Apache no longer needs to care about the static files with all its infrastructure built around running PHP stuff. A slightly less apparent reason is that even when serving dynamic pages, Apache does not need to linger around waiting for the clients to fetch the last bytes of the data. All the data produced by the dynamic pages is quickly handed over to nginx, which takes care of all those pesky clients with their TCP bugs, using its few processes in an event loop optimized for Linux 2.6 features.

    But the feature that made me stop in awe was the impactless software upgrade (see also here). When you send the nginx master process the USR2 signal, it forks and starts another copy of the executable (which, mind you, may be a new version of nginx). The appropriate socket handles are passed to the new process, so it can start serving immediately. In fact, you can keep two versions of nginx serving your site(s), though I can't see how that is really an advantage :) The usual process is that you then send a WINCH signal to the old nginx (change the window over to the new process, eh?); after that only the new nginx is doing the job, but the old one is still hanging around just in case. If you realize running the new nginx was a mistake, just HUP the old one and it will be brought back to life (you still need to get rid of the new one, of course).
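
    A minimal sketch of that dance with plain kill, assuming the pid files live under /var/run:

    # start a second master running the (possibly new) binary; the old master
    # renames its pid file to nginx.pid.oldbin, the new one writes nginx.pid
    kill -USR2 `cat /var/run/nginx.pid`
    # let the new master take over: the old master's workers exit gracefully
    kill -WINCH `cat /var/run/nginx.pid.oldbin`
    # changed your mind? HUP the old master to respawn its workers
    # (the new master still has to be told to quit separately)
    kill -HUP `cat /var/run/nginx.pid.oldbin`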

    You can do a few more things to the processes. There is usually a "master" process, which has its pid written to the usual locations, and one or more "worker" processes (as many as you configured), running as the specified unprivileged user. Usually you only need to signal the master process, unless some of the workers smell weird (hasn't happened yet). Masters respond to WINCH by (gracefully) killing their worker child processes. You can also HUP them to reread the configuration and spawn new workers. This is also why it's not a good idea to change the config and try to upgrade at the same time -- it never is if you think about it... QUIT is the signal to send if you want a master to stop its workers serving and quit once all of them have finished -- this is also the final step when you are happy with the upgrade.
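
    Outside of the upgrade dance, day-to-day signalling is just as simple (assuming the usual pid file location):

    # reread the configuration and replace the workers with freshly spawned ones
    kill -HUP `cat /var/run/nginx.pid`
    # graceful shutdown: workers finish their current requests, then everything exits
    kill -QUIT `cat /var/run/nginx.pid`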

    During the whole process, you can see the pid of the new master process in the usual pid file, say /var/run/nginx.pid, and the pid of the old master in PIDFILE.oldbin (/var/run/nginx.pid.oldbin), so it's very easy to script for it. Now that I think of it, I'd refrain from trying to run three masters because of this :)
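
    Scripting it could look roughly like this; a rough sketch under the same path assumptions, untested:

    #!/bin/sh
    # rough nginx binary upgrade: put the new binary in place first, then run this
    PID=/var/run/nginx.pid
    kill -USR2 `cat $PID`             # old master renames its pid file to .oldbin, starts the new binary
    sleep 2                           # give the new master a moment to come up
    if [ -f $PID.oldbin ]; then
        kill -WINCH `cat $PID.oldbin` # old workers finish their requests and exit
        # in real life you would check that the site still works before this:
        kill -QUIT `cat $PID.oldbin`  # retire the old master for good
    else
        echo "no $PID.oldbin -- the new master probably failed to start" >&2
        exit 1
    fi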

    This impactless upgrade also needs to exec the old pathname, so if you just started "nginx" without the full path, it won't work. But you aren't out of luck either. Just find where it wants to exec the new binary -- chances are it isn't in the PATH but in the docroot of the first site -- and put a simple script there which will exec /usr/sbin/nginx; you're just as well off as if you had started it that way. [hackety-hack]
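
    Such a stand-in can be a one-liner; /usr/sbin/nginx is just the path from above, use whatever your install has:

    #!/bin/sh
    # fake "nginx" dropped where the master tries to exec its old pathname;
    # it only hands over to the real binary
    exec /usr/sbin/nginx "$@"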

    I also happen to like the configuration syntax: just about comfortably C-like. Not too many $s peppered around, and nothing resembling XML (<IfModule> anyone?). Still, there are a few roadblocks on the way to a multi-tiered system, leaving the single-Apache way behind. You need to think about the IP addresses used for each server -- I moved Apache to 127.0.0.1:80, and it still thinks it's serving the main site. Because now every client of Apache is localhost, you might want to fake the real IP address for some scripts, for example with mod_rpaf. When you want to use the IP addresses for cookie access control, this is the easiest way, and the only one if you don't have access to the PHP source...

    So why did I not get rid of Apache already? Well, PHP is the reason. The situation isn't really grave: nginx apparently has a promising FastCGI interface (though less featureful than proxying plain HTTP), and PHP can run as a FastCGI server, so it can usually work -- except if you must run scripts that need the Zend Optimizer. Apparently, PHP either runs as a FastCGI server (the -b PORT option) or loads the Zend Optimizer module (-z MODULE), which looks kinda silly, but may even be a feature as far as I know... So Apache will hum for a while on my server, but at a lower pitch.
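
    For the non-Zend case, running PHP as a standalone FastCGI server is a one-liner; the -b option is the one mentioned above, the address is just an example:

    # have php listen for FastCGI requests on a local port;
    # nginx would then be pointed at this address (e.g. with fastcgi_pass)
    php -b 127.0.0.1:9000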

    I have plenty of config examples now :)



    Current Mood: awake
    Friday, January 12th, 2007
    10:31 pm
    standards access again

    Only a month has passed since my rant about IEEE's tightening grip on their standards, and what happens? The ITU, the International Telecommunication Union, allows completely free access to most of their published standards.

    As of 1 January 2007, take advantage of free pdf versions of current, in-force ITU-T Recommendations during a trial period.

    See here.

    Interesting. Although the "trial period" bothers me a little, it's almost unbelievable at first. Of course, to me it seems the most sensible thing to do, considering the dangers the telcos (as in, POTS/ISDN providers) face from the VoIP world. Now you can inspect all levels of telecommunications standards from these bodies:



    Whoever is responsible for that, thank you!

    Current Mood: cheerful

    Monday, December 4th, 2006
    5:06 pm
    IEEE standards access

    For a while, IEEE had the great program called Get IEEE 802. It allowed the most "free" access to standards in the 802.* range, including 802.3 (most of what we call Ethernet) and 802.11 (most of what we call wlan). The only thing they required from downloaders was to state their "user type", like standards developer, student, etc., although it is very hard to map "curious person interested in networking" to their terminology -- this is common with most kinds of official institutions.

    The other somewhat developer-friendly standards organization is ETSI, responsible for many of the GSM and DECT related standards; they have a bit of an awkward registration and download mechanism, but still a good pack of informative standards. The others, however, mainly ISO/IEC/ITU, are not cheap, and many of their standards are just as uninformative as the ones at these "cheap" orgs.

    Another limit on access at IEEE was that you could not download too recent standards; the arbitrary limit was set at six months. This worked quite well for "curious persons interested in networking", if you allow me to represent this group. It certainly is useful to get to know the internals of switches, the minor details of the spanning tree protocol, etc. However, the last standard to pass this limit was 802.16e-2005. Interestingly, it's not trivial to discover when a standard is released; 802.16e-2005 was apparently issued in February 2006, which is why it went "free" in September this year.

    However, for the last few months there have been three more documents on the waiting list, two of them updates to the VLAN related standards. What irked me is that all of them are apparently dated 2005 in their names. As you can see, this has nothing to do with their release date, which is stated as "2006" if you click on any one of them. I had hoped that with 2007 being so close, maybe some of them could be freed any day now. I am not really interested in changes to the MAC sublayer of the High Rate WPANs, for example. Yes, wireless stuff, but I'm rather confident that 802.11, Ethernet (and related 802 standards), Bluetooth, USB, GSM and ISDN/PSTN networking cover such a large area that I won't be needing a device with 802.15. But VLANs do interest me.

    Today I discovered that they have cleared this issue of ever sliding deadlines by sliding them even more: "Effective 1 January 2007 new IEEE 802® standards will be included in the program after they have been published in PDF for twelve months."

    What this means to me is that I'll be even more limited in "curiosity" access to these standards. There were probably too many "freeloaders" willing to wait a few months to get the standards. And you can also say I did nothing useful with my access -- very few people did; there is still no 802.1w (Rapid Spanning Tree Protocol) implementation for free operating systems. It was approved back in 2001, so we had quite some time to do it.

    What this also means is that the IEEE standards are quite ahead of their time: you can probably still manufacture useful devices implementing only the protocols standardized more than a few months ago. So they try to artificially increase demand for the expensive versions of the new standards. An obvious move, but maybe "cheap" manufacturers still won't care about them even if it's just a few hundred or a few tens of bucks to buy, so what they have really introduced is another period when the newest useful features are just not available in many devices. Worth it?



    Current Mood: disappointed
    Thursday, October 12th, 2006
    5:57 pm
    Using env or how not to pollute the environment

    rc(1) is my preferred shell for many kinds of shell scripting tasks. Sometimes even interactively(2), but zsh is a clear winner there. It is small, efficient, clean. It might look dumb after using zsh/bash for a while, as the kind of advanced string manipulation one gets used to in those shells does not exist here. The upside? This forces me to think about the problem and decide which tool is best for solving it.

    As an example, for removing elements from PATH, I used sed in this script. Further, as rc does not have the concept of "export", every user variable ends up in the environment of the executed programs, so one needs to clean up the environment itself for extra correctness. This is a bit tricky when the variable in question is the one naming the program to be run. Hence the need to run "env" as shown below.

    #!/bin/rc
    # unwrap: used in shadowed copies of commands
    # as in ~/bin/ssh: unwrap ssh $myargs
    if (~ $#* 0) { exit 1; }
    cmd=$1
    shift
    # strip the per-user bin directories (/home/.../$USER/bin) from PATH,
    # so the real command is found instead of the wrapper that called us
    PATH=`{echo -n :$PATH:|sed 's,/home/[^:]\+/'^$USER^'/bin:,,g;s/^://;s/:$//'}
    # rc puts every variable into the environment; env -u removes cmd from the child's
    exec env -u cmd -- $cmd $*
    
    1. rc was here, but it's gone stale recently; the reportedly newer page isn't working either now.
    2. rlwrap is a very nice little program to add history and line editing to otherwise dumb interactive programs.


    Current Mood: geeky
    Monday, October 9th, 2006
    11:15 am
    undefined reference to `rpl_malloc'

    I'm not actually reluctant to use the auto* tools; when I started, it was just autoconf itself, not this myriad of helpers, so delving into them was a bit less difficult. Like many others, however, I sometimes face weird problems, like the one which results in rpl_malloc being apparently undefined.

    This time, it was an xbindkeys compile:

    keys.o: In function `set_keysym':
    keys.c:(.text+0x611): undefined reference to `rpl_malloc'
    

    After a short while, I discovered that this one had another, less common cause: AC_FUNC_MALLOC is used too late in the .ac script, masking the real problem this way:

    checking for stdlib.h... (cached) yes
    checking for GNU libc compatible malloc... no
    checking return type of signal handlers... void
    

    Not having a GNU libc compatible malloc on any recent Linux distribution is quite weird, but this is not obvious unless you look at every bit of the autoconf output. In a completely GNU(-ified) package, a failed check would result in incorporating a helper library with a replacement malloc(), rpl_malloc() -- which many packages using autoconf nowadays simply don't ship. In my case, however, it was simply a hidden compile failure: the guile libs weren't properly checked for another dependency, -lpthread, so after the check for libguile no compile checks succeeded.
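
    If you run into something similar, config.log records why the malloc check (and every later check) actually failed; something along these lines digs it out, the context length being arbitrary:

    # show the failed malloc probe and the compiler/linker errors logged after it
    grep -A20 'GNU libc compatible malloc' config.log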

    I can argue with myself that it's a problem with my build of guile, but the solution is quite easy:

    $ LDFLAGS="$LDFLAGS -lpthread" ./configure ...
    

    Thank you for visiting the arcane chamber of the GNU tools.

    Sunday, October 8th, 2006
    2:13 pm
    Conspiracy theory

    Although Clarke denied that naming HAL had anything to do with IBM, when IBM sells its notebook division, it gets named Leonov, scratch that, Lenovo...

    Maybe I'm a bit slow today, but haven't found it mentioned...



    Current Mood: silly