---------------------- HAProxy Configuration Manual ---------------------- version 1.4.27 willy tarreau 2016/03/13 This document covers the configuration language as implemented in the version specified above. It does not provide any hint, example or advice. For such documentation, please refer to the Reference Manual or the Architecture Manual. The summary below is meant to help you search sections by name and navigate through the document. Note to documentation contributors : This document is formated with 80 columns per line, with even number of spaces for indentation and without tabs. Please follow these rules strictly so that it remains easily printable everywhere. If a line needs to be printed verbatim and does not fit, please end each line with a backslash ('\') and continue on next line. If you add sections, please update the summary below for easier searching. Summary ------- 1. Quick reminder about HTTP 1.1. The HTTP transaction model 1.2. HTTP request 1.2.1. The Request line 1.2.2. The request headers 1.3. HTTP response 1.3.1. The Response line 1.3.2. The response headers 2. Configuring HAProxy 2.1. Configuration file format 2.2. Time format 2.3. Examples 3. Global parameters 3.1. Process management and security 3.2. Performance tuning 3.3. Debugging 3.4. Userlists 4. Proxies 4.1. Proxy keywords matrix 4.2. Alphabetically sorted keywords reference 5. Server and default-server options 6. HTTP header manipulation 7. Using ACLs and pattern extraction 7.1. Matching integers 7.2. Matching strings 7.3. Matching regular expressions (regexes) 7.4. Matching IPv4 addresses 7.5. Available matching criteria 7.5.1. Matching at Layer 4 and below 7.5.2. Matching contents at Layer 4 7.5.3. Matching at Layer 7 7.6. Pre-defined ACLs 7.7. Using ACLs to form conditions 7.8. Pattern extraction 8. Logging 8.1. Log levels 8.2. Log formats 8.2.1. Default log format 8.2.2. TCP log format 8.2.3. HTTP log format 8.3. Advanced logging options 8.3.1. Disabling logging of external tests 8.3.2. Logging before waiting for the session to terminate 8.3.3. Raising log level upon errors 8.3.4. Disabling logging of successful connections 8.4. Timing events 8.5. Session state at disconnection 8.6. Non-printable characters 8.7. Capturing HTTP cookies 8.8. Capturing HTTP headers 8.9. Examples of logs 9. Statistics and monitoring 9.1. CSV format 9.2. Unix Socket commands 1. Quick reminder about HTTP ---------------------------- When haproxy is running in HTTP mode, both the request and the response are fully analyzed and indexed, thus it becomes possible to build matching criteria on almost anything found in the contents. However, it is important to understand how HTTP requests and responses are formed, and how HAProxy decomposes them. It will then become easier to write correct rules and to debug existing configurations. 1.1. The HTTP transaction model ------------------------------- The HTTP protocol is transaction-driven. This means that each request will lead to one and only one response. Traditionally, a TCP connection is established from the client to the server, a request is sent by the client on the connection, the server responds and the connection is closed. A new request will involve a new connection : [CON1] [REQ1] ... [RESP1] [CLO1] [CON2] [REQ2] ... [RESP2] [CLO2] ... In this mode, called the "HTTP close" mode, there are as many connection establishments as there are HTTP transactions. Since the connection is closed by the server after the response, the client does not need to know the content length. 
Due to the transactional nature of the protocol, it was possible to improve it to avoid closing a connection between two subsequent transactions. In this mode however, it is mandatory that the server indicates the content length for each response so that the client does not wait indefinitely. For this, a special header is used: "Content-length". This mode is called the "keep-alive" mode : [CON] [REQ1] ... [RESP1] [REQ2] ... [RESP2] [CLO] ... Its advantages are a reduced latency between transactions, and less processing power required on the server side. It is generally better than the close mode, but not always because the clients often limit their concurrent connections to a smaller value. A last improvement in the communications is the pipelining mode. It still uses keep-alive, but the client does not wait for the first response to send the second request. This is useful for fetching large number of images composing a page : [CON] [REQ1] [REQ2] ... [RESP1] [RESP2] [CLO] ... This can obviously have a tremendous benefit on performance because the network latency is eliminated between subsequent requests. Many HTTP agents do not correctly support pipelining since there is no way to associate a response with the corresponding request in HTTP. For this reason, it is mandatory for the server to reply in the exact same order as the requests were received. By default HAProxy operates in a tunnel-like mode with regards to persistent connections: for each connection it processes the first request and forwards everything else (including additional requests) to selected server. Once established, the connection is persisted both on the client and server sides. Use "option http-server-close" to preserve client persistent connections while handling every incoming request individually, dispatching them one after another to servers, in HTTP close mode. Use "option httpclose" to switch both sides to HTTP close mode. "option forceclose" and "option http-pretend-keepalive" help working around servers misbehaving in HTTP close mode. 1.2. HTTP request ----------------- First, let's consider this HTTP request : Line Contents number 1 GET /serv/login.php?lang=en&profile=2 HTTP/1.1 2 Host: www.mydomain.com 3 User-agent: my small browser 4 Accept: image/jpeg, image/gif 5 Accept: image/png 1.2.1. The Request line ----------------------- Line 1 is the "request line". It is always composed of 3 fields : - a METHOD : GET - a URI : /serv/login.php?lang=en&profile=2 - a version tag : HTTP/1.1 All of them are delimited by what the standard calls LWS (linear white spaces), which are commonly spaces, but can also be tabs or line feeds/carriage returns followed by spaces/tabs. The method itself cannot contain any colon (':') and is limited to alphabetic letters. All those various combinations make it desirable that HAProxy performs the splitting itself rather than leaving it to the user to write a complex or inaccurate regular expression. The URI itself can have several forms : - A "relative URI" : /serv/login.php?lang=en&profile=2 It is a complete URL without the host part. This is generally what is received by servers, reverse proxies and transparent proxies. - An "absolute URI", also called a "URL" : http://192.168.0.12:8080/serv/login.php?lang=en&profile=2 It is composed of a "scheme" (the protocol name followed by '://'), a host name or address, optionally a colon (':') followed by a port number, then a relative URI beginning at the first slash ('/') after the address part. 
This is generally what proxies receive, but a server supporting HTTP/1.1 must accept this form too. - a star ('*') : this form is only accepted in association with the OPTIONS method and is not relayable. It is used to inquiry a next hop's capabilities. - an address:port combination : 192.168.0.12:80 This is used with the CONNECT method, which is used to establish TCP tunnels through HTTP proxies, generally for HTTPS, but sometimes for other protocols too. In a relative URI, two sub-parts are identified. The part before the question mark is called the "path". It is typically the relative path to static objects on the server. The part after the question mark is called the "query string". It is mostly used with GET requests sent to dynamic scripts and is very specific to the language, framework or application in use. 1.2.2. The request headers -------------------------- The headers start at the second line. They are composed of a name at the beginning of the line, immediately followed by a colon (':'). Traditionally, an LWS is added after the colon but that's not required. Then come the values. Multiple identical headers may be folded into one single line, delimiting the values with commas, provided that their order is respected. This is commonly encountered in the "Cookie:" field. A header may span over multiple lines if the subsequent lines begin with an LWS. In the example in 1.2, lines 4 and 5 define a total of 3 values for the "Accept:" header. Contrary to a common mis-conception, header names are not case-sensitive, and their values are not either if they refer to other header names (such as the "Connection:" header). The end of the headers is indicated by the first empty line. People often say that it's a double line feed, which is not exact, even if a double line feed is one valid form of empty line. Fortunately, HAProxy takes care of all these complex combinations when indexing headers, checking values and counting them, so there is no reason to worry about the way they could be written, but it is important not to accuse an application of being buggy if it does unusual, valid things. Important note: As suggested by RFC2616, HAProxy normalizes headers by replacing line breaks in the middle of headers by LWS in order to join multi-line headers. This is necessary for proper analysis and helps less capable HTTP parsers to work correctly and not to be fooled by such complex constructs. 1.3. HTTP response ------------------ An HTTP response looks very much like an HTTP request. Both are called HTTP messages. Let's consider this HTTP response : Line Contents number 1 HTTP/1.1 200 OK 2 Content-length: 350 3 Content-Type: text/html As a special case, HTTP supports so called "Informational responses" as status codes 1xx. These messages are special in that they don't convey any part of the response, they're just used as sort of a signaling message to ask a client to continue to post its request for instance. In the case of a status 100 response the requested information will be carried by the next non-100 response message following the informational one. This implies that multiple responses may be sent to a single request, and that this only works when keep-alive is enabled (1xx messages are HTTP/1.1 only). HAProxy handles these messages and is able to correctly forward and skip them, and only process the next non-100 response. As such, these messages are neither logged nor transformed, unless explicitly state otherwise. 
Status 101 messages indicate that the protocol is changing over the same connection and that haproxy must switch to tunnel mode, just as if a CONNECT had occurred. Then the Upgrade header would contain additional information about the type of protocol the connection is switching to. 1.3.1. The Response line ------------------------ Line 1 is the "response line". It is always composed of 3 fields : - a version tag : HTTP/1.1 - a status code : 200 - a reason : OK The status code is always 3-digit. The first digit indicates a general status : - 1xx = informational message to be skipped (eg: 100, 101) - 2xx = OK, content is following (eg: 200, 206) - 3xx = OK, no content following (eg: 302, 304) - 4xx = error caused by the client (eg: 401, 403, 404) - 5xx = error caused by the server (eg: 500, 502, 503) Please refer to RFC2616 for the detailed meaning of all such codes. The "reason" field is just a hint, but is not parsed by clients. Anything can be found there, but it's a common practice to respect the well-established messages. It can be composed of one or multiple words, such as "OK", "Found", or "Authentication Required". Haproxy may emit the following status codes by itself : Code When / reason 200 access to stats page, and when replying to monitoring requests 301 when performing a redirection, depending on the configured code 302 when performing a redirection, depending on the configured code 303 when performing a redirection, depending on the configured code 307 when performing a redirection, depending on the configured code 308 when performing a redirection, depending on the configured code 400 for an invalid or too large request 401 when an authentication is required to perform the action (when accessing the stats page) 403 when a request is forbidden by a "block" ACL or "reqdeny" filter 408 when the request timeout strikes before the request is complete 500 when haproxy encounters an unrecoverable internal error, such as a memory allocation failure, which should never happen 502 when the server returns an empty, invalid or incomplete response, or when an "rspdeny" filter blocks the response. 503 when no server was available to handle the request, or in response to monitoring requests which match the "monitor fail" condition 504 when the response timeout strikes before the server responds The error 4xx and 5xx codes above may be customized (see "errorloc" in section 4.2). 1.3.2. The response headers --------------------------- Response headers work exactly like request headers, and as such, HAProxy uses the same parsing function for both. Please refer to paragraph 1.2.2 for more details. 2. Configuring HAProxy ---------------------- 2.1. Configuration file format ------------------------------ HAProxy's configuration process involves 3 major sources of parameters : - the arguments from the command-line, which always take precedence - the "global" section, which sets process-wide parameters - the proxies sections which can take form of "defaults", "listen", "frontend" and "backend". The configuration file syntax consists in lines beginning with a keyword referenced in this manual, optionally followed by one or several parameters delimited by spaces. If spaces have to be entered in strings, then they must be preceded by a backslash ('\') to be escaped. Backslashes also have to be escaped by doubling them. 2.2. Time format ---------------- Some parameters involve values representing time, such as timeouts. 
These values are generally expressed in milliseconds (unless explicitly stated otherwise) but may be expressed in any other unit by suffixing the unit to the numeric value. It is important to consider this because it will not be repeated for every keyword. Supported units are : - us : microseconds. 1 microsecond = 1/1000000 second - ms : milliseconds. 1 millisecond = 1/1000 second. This is the default. - s : seconds. 1s = 1000ms - m : minutes. 1m = 60s = 60000ms - h : hours. 1h = 60m = 3600s = 3600000ms - d : days. 1d = 24h = 1440m = 86400s = 86400000ms 2.3. Examples ------------- # Simple configuration for an HTTP proxy listening on port 80 on all # interfaces and forwarding requests to a single backend "servers" with a # single server "server1" listening on 127.0.0.1:8000 global daemon maxconn 256 defaults mode http timeout connect 5000ms timeout client 50000ms timeout server 50000ms frontend http-in bind *:80 default_backend servers backend servers server server1 127.0.0.1:8000 maxconn 32 # The same configuration defined with a single listen block. Shorter but # less expressive, especially in HTTP mode. global daemon maxconn 256 defaults mode http timeout connect 5000ms timeout client 50000ms timeout server 50000ms listen http-in bind *:80 server server1 127.0.0.1:8000 maxconn 32 Assuming haproxy is in $PATH, test these configurations in a shell with: $ sudo haproxy -f configuration.conf -c 3. Global parameters -------------------- Parameters in the "global" section are process-wide and often OS-specific. They are generally set once for all and do not need being changed once correct. Some of them have command-line equivalents. The following keywords are supported in the "global" section : * Process management and security - chroot - daemon - gid - group - log - log-send-hostname - nbproc - pidfile - uid - ulimit-n - user - stats - node - description * Performance tuning - maxconn - maxpipes - noepoll - nokqueue - nopoll - nosepoll - nosplice - spread-checks - tune.bufsize - tune.chksize - tune.maxaccept - tune.maxpollevents - tune.maxrewrite - tune.rcvbuf.client - tune.rcvbuf.server - tune.sndbuf.client - tune.sndbuf.server * Debugging - debug - quiet 3.1. Process management and security ------------------------------------ chroot Changes current directory to and performs a chroot() there before dropping privileges. This increases the security level in case an unknown vulnerability would be exploited, since it would make it very hard for the attacker to exploit the system. This only works when the process is started with superuser privileges. It is important to ensure that is both empty and unwritable to anyone. daemon Makes the process fork into background. This is the recommended mode of operation. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. gid Changes the process' group ID to . It is recommended that the group ID is dedicated to HAProxy or to a small set of similar daemons. HAProxy must be started with a user belonging to this group, or with superuser privileges. Note that if haproxy is started from a user having supplementary groups, it will only be able to drop these groups if started with superuser privileges. See also "group" and "uid". group Similar to "gid" but uses the GID of group name from /etc/group. See also "gid" and "user". log
    <address> <facility> [max level [min level]]
  Adds a global syslog server. Up to two global servers can be defined. They
  will receive logs for startups and exits, as well as all logs from proxies
  configured with "log global".
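  As an illustration, a minimal logging setup could look like the sketch
  below. The syslog address, facility and level shown are assumptions to be
  adapted to the local syslog daemon, not recommended values :

        global
            log 127.0.0.1:514 local0 notice

        defaults
            log global

  Every proxy which declares "log global" then inherits the syslog server
  defined in the "global" section.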
  <address> can be one of:

        - An IPv4 address optionally followed by a colon and a UDP port. If
          no port is specified, 514 is used by default (the standard syslog
          port).

        - A filesystem path to a UNIX domain socket, keeping in mind
          considerations for chroot (be sure the path is accessible inside
          the chroot) and uid/gid (be sure the path is appropriately
          writeable).

  <facility> must be one of the 24 standard syslog facilities :

          kern   user   mail   daemon auth   syslog lpr    news
          uucp   cron   auth2  ftp    ntp    audit  alert  cron2
          local0 local1 local2 local3 local4 local5 local6 local7

  An optional level can be specified to filter outgoing messages. By default,
  all messages are sent. If a maximum level is specified, only messages with a
  severity at least as important as this level will be sent. An optional
  minimum level can be specified. If it is set, logs emitted with a more
  severe level than this one will be capped to this level. This is used to
  avoid sending "emerg" messages on all terminals on some default syslog
  configurations. Eight levels are known :

          emerg  alert  crit   err    warning  notice  info  debug

log-send-hostname [<string>]
  Sets the hostname field in the syslog header. If the optional "string"
  parameter is set, the header is set to its contents, otherwise the hostname
  of the system is used. Generally used if one is not relaying logs through an
  intermediate syslog server or for simply customizing the hostname printed in
  the logs.

log-tag <string>
  Sets the tag field in the syslog header to this string. It defaults to the
  program name as launched from the command line, which usually is "haproxy".
  Sometimes it can be useful to differentiate between multiple processes
  running on the same host.

nbproc <number>
  Creates <number> processes when going daemon. This requires the "daemon"
  mode. By default, only one process is created, which is the recommended mode
  of operation. For systems limited to small sets of file descriptors per
  process, it may be needed to fork multiple daemons. USING MULTIPLE PROCESSES
  IS HARDER TO DEBUG AND IS REALLY DISCOURAGED. See also "daemon".

pidfile <pidfile>
  Writes pids of all daemons into file <pidfile>. This option is equivalent to
  the "-p" command line argument. The file must be accessible to the user
  starting the process. See also "daemon".

stats socket <path> [{uid | user} <uid>] [{gid | group} <gid>] [mode <mode>]
             [level <level>]
  Creates a UNIX socket in stream mode at location <path>. Any previously
  existing socket will be backed up then replaced. Connections to this socket
  will return various statistics outputs and even allow some commands to be
  issued. Please consult section 9.2 "Unix Socket commands" for more details.

  An optional "level" parameter can be specified to restrict the nature of
  the commands that can be issued on the socket :
    - "user" is the least privileged level ; only non-sensitive stats can be
      read, and no change is allowed. It would make sense on systems where it
      is not easy to restrict access to the socket.

    - "operator" is the default level and fits most common uses. All data can
      be read, and only non-sensitive changes are permitted (eg: clear max
      counters).

    - "admin" should be used with care, as everything is permitted (eg: clear
      all counters).

  On platforms which support it, it is possible to restrict access to this
  socket by specifying numerical IDs after "uid" and "gid", or valid user and
  group names after the "user" and "group" keywords. It is also possible to
  restrict permissions on the socket by passing an octal value after the
  "mode" keyword (same syntax as chmod).
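  For example, an administrative socket reserved to the superuser could be
  declared as in the sketch below; the path and permissions are assumptions :

        global
            stats socket /var/run/haproxy.stat uid 0 gid 0 mode 600 level admin

  The socket can then be queried with a tool such as socat, for instance :
  echo "show stat" | socat stdio unix-connect:/var/run/haproxy.stat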
  Depending on the platform, the permissions on the socket will be inherited
  from the directory which hosts it, or from the user the process is started
  with.

stats timeout <timeout, in milliseconds>
  The default timeout on the stats socket is set to 10 seconds. It is possible
  to change this value with "stats timeout". The value must be passed in
  milliseconds, or be suffixed by a time unit among { us, ms, s, m, h, d }.

stats maxconn <connections>
  By default, the stats socket is limited to 10 concurrent connections. It is
  possible to change this value with "stats maxconn".

uid <number>
  Changes the process' user ID to <number>. It is recommended that the user ID
  is dedicated to HAProxy or to a small set of similar daemons. HAProxy must
  be started with superuser privileges in order to be able to switch to
  another one. See also "gid" and "user".

ulimit-n <number>
  Sets the maximum number of per-process file-descriptors to <number>. By
  default, it is automatically computed, so it is recommended not to use this
  option.

user <user name>
  Similar to "uid" but uses the UID of user name <user name> from /etc/passwd.
  See also "uid" and "group".

node <name>
  Only letters, digits, hyphen and underscore are allowed, like in DNS names.
  This statement is useful in HA configurations where two or more processes or
  servers share the same IP address. By setting a different node-name on all
  nodes, it becomes easy to immediately spot what server is handling the
  traffic.

description <text>
  Add a text that describes the instance.

  Please note that it is required to escape certain characters (# for example)
  and this text is inserted into an HTML page, so you should avoid using the
  "<" and ">" characters.


3.2. Performance tuning
-----------------------

maxconn <number>
  Sets the maximum per-process number of concurrent connections to <number>.
  It is equivalent to the command-line argument "-n". Proxies will stop
  accepting connections when this limit is reached. The "ulimit-n" parameter
  is automatically adjusted according to this value. See also "ulimit-n".

maxpipes <number>
  Sets the maximum per-process number of pipes to <number>. Currently, pipes
  are only used by kernel-based tcp splicing. Since a pipe contains two file
  descriptors, the "ulimit-n" value will be increased accordingly. The default
  value is maxconn/4, which seems to be more than enough for most heavy
  usages. The splice code dynamically allocates and releases pipes, and can
  fall back to standard copy, so setting this value too low may only impact
  performance.

noepoll
  Disables the use of the "epoll" event polling system on Linux. It is
  equivalent to the command-line argument "-de". The next polling system used
  will generally be "poll". See also "nosepoll", and "nopoll".

nokqueue
  Disables the use of the "kqueue" event polling system on BSD. It is
  equivalent to the command-line argument "-dk". The next polling system used
  will generally be "poll". See also "nopoll".

nopoll
  Disables the use of the "poll" event polling system. It is equivalent to the
  command-line argument "-dp". The next polling system used will be "select".
  It should never be needed to disable "poll" since it's available on all
  platforms supported by HAProxy. See also "nosepoll", "noepoll" and
  "nokqueue".

nosepoll
  Disables the use of the "speculative epoll" event polling system on Linux.
  It is equivalent to the command-line argument "-ds". The next polling system
  used will generally be "epoll". See also "noepoll", and "nopoll".

nosplice
  Disables the use of kernel tcp splicing between sockets on Linux. It is
  equivalent to the command line argument "-dS". Data will then be copied
  using conventional and more portable recv/send calls.
Kernel tcp splicing is limited to some very recent instances of kernel 2.6. Most versions between 2.6.25 and 2.6.28 are buggy and will forward corrupted data, so they must not be used. This option makes it easier to globally disable kernel splicing in case of doubt. See also "option splice-auto", "option splice-request" and "option splice-response". spread-checks <0..50, in percent> Sometimes it is desirable to avoid sending health checks to servers at exact intervals, for instance when many logical servers are located on the same physical server. With the help of this parameter, it becomes possible to add some randomness in the check interval between 0 and +/- 50%. A value between 2 and 5 seems to show good results. The default value remains at 0. tune.bufsize Sets the buffer size to this size (in bytes). Lower values allow more sessions to coexist in the same amount of RAM, and higher values allow some applications with very large cookies to work. The default value is 16384 and can be changed at build time. It is strongly recommended not to change this from the default value, as very low values will break some services such as statistics, and values larger than default size will increase memory usage, possibly causing the system to run out of memory. At least the global maxconn parameter should be decreased by the same factor as this one is increased. tune.chksize Sets the check buffer size to this size (in bytes). Higher values may help find string or regex patterns in very large pages, though doing so may imply more memory and CPU usage. The default value is 16384 and can be changed at build time. It is not recommended to change this value, but to use better checks whenever possible. tune.maxaccept Sets the maximum number of consecutive accepts that a process may perform on a single wake up. High values give higher priority to high connection rates, while lower values give higher priority to already established connections. This value is limited to 100 by default in single process mode. However, in multi-process mode (nbproc > 1), it defaults to 8 so that when one process wakes up, it does not take all incoming connections for itself and leaves a part of them to other processes. Setting this value to -1 completely disables the limitation. It should normally not be needed to tweak this value. tune.maxpollevents Sets the maximum amount of events that can be processed at once in a call to the polling system. The default value is adapted to the operating system. It has been noticed that reducing it below 200 tends to slightly decrease latency at the expense of network bandwidth, and increasing it above 200 tends to trade latency for slightly increased bandwidth. tune.maxrewrite Sets the reserved buffer space to this size in bytes. The reserved space is used for header rewriting or appending. The first reads on sockets will never fill more than bufsize-maxrewrite. Historically it has defaulted to half of bufsize, though that does not make much sense since there are rarely large numbers of headers to add. Setting it too high prevents processing of large requests or responses. Setting it too low prevents addition of new headers to already large requests or to POST requests. It is generally wise to set it to about 1024. It is automatically readjusted to half of bufsize if it is larger than that. This means you don't have to worry about it when changing bufsize. 
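  To make the relationship between these buffer parameters concrete, the
  sketch below combines them in a "global" section. The figures are purely
  illustrative assumptions, not recommendations :

        global
            maxconn         1024    # lowered when tune.bufsize is raised
            tune.bufsize    32768   # larger buffers for very large cookies
            tune.maxrewrite 1024    # space reserved for header rewriting

  Doubling tune.bufsize roughly doubles the memory needed per connection,
  which is why the global maxconn should be decreased by the same factor.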
tune.rcvbuf.client <number>
tune.rcvbuf.server <number>
  Forces the kernel socket receive buffer size on the client or the server
  side to the specified value in bytes. This value applies to all TCP/HTTP
  frontends and backends. It should normally never be set, and the default
  size (0) lets the kernel autotune this value depending on the amount of
  available memory. However it can sometimes help to set it to very low values
  (eg: 4096) in order to save kernel memory by preventing it from buffering
  too large amounts of received data. Lower values will significantly increase
  CPU usage though.

tune.sndbuf.client <number>
tune.sndbuf.server <number>
  Forces the kernel socket send buffer size on the client or the server side
  to the specified value in bytes. This value applies to all TCP/HTTP
  frontends and backends. It should normally never be set, and the default
  size (0) lets the kernel autotune this value depending on the amount of
  available memory. However it can sometimes help to set it to very low values
  (eg: 4096) in order to save kernel memory by preventing it from buffering
  too large amounts of received data. Lower values will significantly increase
  CPU usage though. Another use case is to prevent write timeouts with
  extremely slow clients due to the kernel waiting for a large part of the
  buffer to be read before notifying haproxy again.


3.3. Debugging
--------------

debug
  Enables debug mode which dumps to stdout all exchanges, and disables forking
  into background. It is the equivalent of the command-line argument "-d". It
  should never be used in a production configuration since it may prevent full
  system startup.

quiet
  Do not display any message during startup. It is equivalent to the command-
  line argument "-q".


3.4. Userlists
--------------

It is possible to control access to frontend/backend/listen sections or to
http stats by allowing only authenticated and authorized users. To do this,
it is required to create at least one userlist and to define users.

userlist <listname>
  Creates a new userlist with name <listname>. Many independent userlists can
  be used to store authentication & authorization data for independent
  customers.

group <groupname> [users <user>,<user>,(...)]
  Adds group <groupname> to the current userlist. It is also possible to
  attach users to this group by using a comma separated list of names
  preceded by the "users" keyword.

user <username> [password|insecure-password <password>]
                [groups <group>,<group>,(...)]
  Adds user <username> to the current userlist. Both secure (encrypted) and
  insecure (unencrypted) passwords can be used. Encrypted passwords are
  evaluated using the crypt(3) function, so depending on the system's
  capabilities, different algorithms are supported. For example, a modern
  Glibc-based Linux system supports MD5, SHA-256, SHA-512 and, of course, the
  classic DES-based method of crypting passwords.

  Example:
        userlist L1
          group G1 users tiger,scott
          group G2 users xdb,scott

          user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91
          user scott insecure-password elgato
          user xdb insecure-password hello

        userlist L2
          group G1
          group G2

          user tiger password $6$k6y3o.eP$JlKBx(...)xHSwRv6J.C0/D7cV91 groups G1
          user scott insecure-password elgato groups G1,G2
          user xdb insecure-password hello groups G2

  Please note that both lists are functionally identical.


4. Proxies
----------

Proxy configuration can be located in a set of sections :
 - defaults <name>
 - frontend <name>
 - backend  <name>
 - listen   <name>

A "defaults" section sets default parameters for all other sections following
its declaration. Those default parameters are reset by the next "defaults"
section. See below for the list of parameters which can be set in a "defaults"
section.
The name is optional but its use is encouraged for better readability.

A "frontend" section describes a set of listening sockets accepting client
connections.

A "backend" section describes a set of servers to which the proxy will connect
to forward incoming connections.

A "listen" section defines a complete proxy with its frontend and backend
parts combined in one section. It is generally useful for TCP-only traffic.

All proxy names must be formed from upper and lower case letters, digits,
'-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Proxy names are
case-sensitive, which means that "www" and "WWW" are two different proxies.

Historically, all proxy names could overlap, it just caused troubles in the
logs. Since the introduction of content switching, it is mandatory that two
proxies with overlapping capabilities (frontend/backend) have different names.
However, it is still permitted that a frontend and a backend share the same
name, as this configuration seems to be commonly encountered.

Right now, two major proxy modes are supported : "tcp", also known as layer 4,
and "http", also known as layer 7. In layer 4 mode, HAProxy simply forwards
bidirectional traffic between two sides. In layer 7 mode, HAProxy analyzes the
protocol, and can interact with it by allowing, blocking, switching, adding,
modifying, or removing arbitrary contents in requests or responses, based on
arbitrary criteria.


4.1. Proxy keywords matrix
--------------------------

The following list of keywords is supported. Most of them may only be used in
a limited set of section types. Some of them are marked as "deprecated"
because they are inherited from an old syntax which may be confusing or
functionally limited, and there are new recommended keywords to replace them.
Keywords marked with "(*)" can be optionally inverted using the "no" prefix,
eg. "no option contstats". This makes sense when the option has been enabled
by default and must be disabled for a specific instance. Such options may also
be prefixed with "default" in order to restore default settings regardless of
what has been specified in a previous "defaults" section.
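As a short illustration of the "no" prefix with an invertible option (the
proxy names below are assumptions) :

        defaults
            option dontlognull

        frontend ft_public
            bind :80
            no option dontlognull   # re-enable logging of null connections

Here an option enabled for all proxies in the "defaults" section is explicitly
disabled again for one specific frontend.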
keyword defaults frontend listen backend ------------------------------------+----------+----------+---------+--------- acl - X X X appsession - - X X backlog X X X - balance X - X X bind - X X - bind-process X X X X block - X X X capture cookie - X X - capture request header - X X - capture response header - X X - clitimeout (deprecated) X X X - contimeout (deprecated) X - X X cookie X - X X default-server X - X X default_backend X X X - description - X X X disabled X X X X dispatch - - X X enabled X X X X errorfile X X X X errorloc X X X X errorloc302 X X X X -- keyword -------------------------- defaults - frontend - listen -- backend - errorloc303 X X X X force-persist - X X X fullconn X - X X grace X X X X hash-type X - X X http-check disable-on-404 X - X X http-check expect - - X X http-check send-state X - X X http-request - X X X id - X X X ignore-persist - X X X log X X X X maxconn X X X - mode X X X X monitor fail - X X - monitor-net X X X - monitor-uri X X X - option abortonclose (*) X - X X option accept-invalid-http-request (*) X X X - option accept-invalid-http-response (*) X - X X option allbackups (*) X - X X option checkcache (*) X - X X option clitcpka (*) X X X - option contstats (*) X X X - option dontlog-normal (*) X X X - option dontlognull (*) X X X - option forceclose (*) X X X X -- keyword -------------------------- defaults - frontend - listen -- backend - option forwardfor X X X X option http-no-delay (*) X X X X option http-pretend-keepalive (*) X X X X option http-server-close (*) X X X X option http-use-proxy-header (*) X X X - option httpchk X - X X option httpclose (*) X X X X option httplog X X X X option http_proxy (*) X X X X option independant-streams (*) X X X X option ldap-check X - X X option log-health-checks (*) X - X X option log-separate-errors (*) X X X - option logasap (*) X X X - option mysql-check X - X X option nolinger (*) X X X X option originalto X X X X option persist (*) X - X X option redispatch (*) X - X X option smtpchk X - X X option socket-stats (*) X X X - option splice-auto (*) X X X X option splice-request (*) X X X X option splice-response (*) X X X X option srvtcpka (*) X - X X option ssl-hello-chk X - X X -- keyword -------------------------- defaults - frontend - listen -- backend - option tcp-smart-accept (*) X X X - option tcp-smart-connect (*) X - X X option tcpka X X X X option tcplog X X X X option transparent (*) X - X X persist rdp-cookie X - X X rate-limit sessions X X X - redirect - X X X redisp (deprecated) X - X X redispatch (deprecated) X - X X reqadd - X X X reqallow - X X X reqdel - X X X reqdeny - X X X reqiallow - X X X reqidel - X X X reqideny - X X X reqipass - X X X reqirep - X X X reqisetbe - X X X reqitarpit - X X X reqpass - X X X reqrep - X X X -- keyword -------------------------- defaults - frontend - listen -- backend - reqsetbe - X X X reqtarpit - X X X retries X - X X rspadd - X X X rspdel - X X X rspdeny - X X X rspidel - X X X rspideny - X X X rspirep - X X X rsprep - X X X server - - X X source X - X X srvtimeout (deprecated) X - X X stats admin - - X X stats auth X - X X stats enable X - X X stats hide-version X - X X stats http-request - - X X stats realm X - X X stats refresh X - X X stats scope X - X X stats show-desc X - X X stats show-legends X - X X stats show-node X - X X stats uri X - X X -- keyword -------------------------- defaults - frontend - listen -- backend - stick match - - X X stick on - - X X stick store-request - - X X stick-table - - X X tcp-request content accept - X X - 
tcp-request content reject              -         X         X         -
tcp-request inspect-delay               -         X         X         -
timeout check                           X         -         X         X
timeout client                          X         X         X         -
timeout clitimeout (deprecated)         X         X         X         -
timeout connect                         X         -         X         X
timeout contimeout (deprecated)         X         -         X         X
timeout http-keep-alive                 X         X         X         X
timeout http-request                    X         X         X         X
timeout queue                           X         -         X         X
timeout server                          X         -         X         X
timeout srvtimeout (deprecated)         X         -         X         X
timeout tarpit                          X         X         X         X
transparent (deprecated)                X         -         X         X
use_backend                             -         X         X         -
------------------------------------+----------+----------+---------+---------
 keyword                               defaults   frontend   listen   backend


4.2. Alphabetically sorted keywords reference
---------------------------------------------

This section provides a description of each keyword and its usage.


acl <aclname> <criterion> [flags] [operator] <value> ...
  Declare or complete an access list.
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    yes   |   yes  |   yes
  Example:
        acl invalid_src  src          0.0.0.0/7 224.0.0.0/3
        acl invalid_src  src_port     0:1023
        acl local_dst    hdr(host) -i localhost

  See section 7 about ACL usage.


appsession <cookie> len <length> timeout <holdtime>
           [request-learn] [prefix] [mode <path-parameters|query-string>]
  Define session stickiness on an existing application cookie.
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    no    |   yes  |   yes
  Arguments :
    <cookie>   this is the name of the cookie used by the application and
               which HAProxy will have to learn for each new session.

    <length>   this is the max number of characters that will be memorized
               and checked in each cookie value.

    <holdtime> this is the time after which the cookie will be removed from
               memory if unused. If no unit is specified, this time is in
               milliseconds.

    request-learn
               If this option is specified, then haproxy will be able to
               learn the cookie found in the request in case the server does
               not specify any in response. This is typically what happens
               with PHPSESSID cookies, or when haproxy's session expires
               before the application's session and the correct server is
               selected. It is recommended to specify this option to improve
               reliability.

    prefix     When this option is specified, haproxy will match on the
               cookie prefix (or URL parameter prefix). The appsession value
               is the data following this prefix.

               Example :
               appsession ASPSESSIONID len 64 timeout 3h prefix

               This will match the cookie ASPSESSIONIDXXXX=XXXXX, the
               appsession value will be XXXX=XXXXX.

    mode       This option allows changing the URL parser mode. Two modes are
               currently supported :
               - path-parameters :
                 The parser looks for the appsession in the path parameters
                 part (each parameter is separated by a semi-colon), which is
                 convenient for JSESSIONID for example. This is the default
                 mode if the option is not set.
               - query-string :
                 In this mode, the parser will look for the appsession in the
                 query string.

  When an application cookie is defined in a backend, HAProxy will check when
  the server sets such a cookie, and will store its value in a table, and
  associate it with the server's identifier. Up to <length> characters from
  the value will be retained. On each connection, haproxy will look for this
  cookie both in the "Cookie:" headers, and as a URL parameter (depending on
  the mode used). If a known value is found, the client will be directed to
  the server associated with this value. Otherwise, the load balancing
  algorithm is applied. Cookies are automatically removed from memory when
  they have been unused for a duration longer than <holdtime>.

  The definition of an application cookie is limited to one per backend.
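  To show the keyword in context, the sketch below sticks clients to the
  server which issued their PHPSESSID cookie; the backend name, server names
  and addresses are assumptions :

        backend bk_app
            mode http
            appsession PHPSESSID len 64 timeout 2h request-learn
            server app1 192.168.1.11:8080 check
            server app2 192.168.1.12:8080 check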
Note : Consider not using this feature in multi-process mode (nbproc > 1) unless you know what you do : memory is not shared between the processes, which can result in random behaviours. Example : appsession JSESSIONID len 52 timeout 3h See also : "cookie", "capture cookie", "balance", "stick", "stick-table", "ignore-persist", "nbproc" and "bind-process". backlog Give hints to the system about the approximate listen backlog desired size May be used in sections : defaults | frontend | listen | backend yes | yes | yes | no Arguments : is the number of pending connections. Depending on the operating system, it may represent the number of already acknowledged connections, of non-acknowledged ones, or both. In order to protect against SYN flood attacks, one solution is to increase the system's SYN backlog size. Depending on the system, sometimes it is just tunable via a system parameter, sometimes it is not adjustable at all, and sometimes the system relies on hints given by the application at the time of the listen() syscall. By default, HAProxy passes the frontend's maxconn value to the listen() syscall. On systems which can make use of this value, it can sometimes be useful to be able to specify a different value, hence this backlog parameter. On Linux 2.4, the parameter is ignored by the system. On Linux 2.6, it is used as a hint and the system accepts up to the smallest greater power of two, and never more than some limits (usually 32768). See also : "maxconn" and the target operating system's tuning guide. balance [ ] balance url_param [check_post []] Define the load balancing algorithm to be used in a backend. May be used in sections : defaults | frontend | listen | backend yes | no | yes | yes Arguments : is the algorithm used to select a server when doing load balancing. This only applies when no persistence information is available, or when a connection is redispatched to another server. may be one of the following : roundrobin Each server is used in turns, according to their weights. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It is limited by design to 4095 active servers per backend. Note that in some large farms, when a server becomes up after having been down for a very short time, it may sometimes take a few hundreds requests for it to be re-integrated into the farm and start receiving traffic. This is normal, though very rare. It is indicated here in case you would have the chance to observe it, so that you don't worry. static-rr Each server is used in turns, according to their weights. This algorithm is as similar to roundrobin except that it is static, which means that changing a server's weight on the fly will have no effect. On the other hand, it has no design limitation on the number of servers, and when a server goes up, it is always immediately reintroduced into the farm, once the full map is recomputed. It also uses slightly less CPU to run (around -1%). leastconn The server with the lowest number of connections receives the connection. Round-robin is performed within groups of servers of the same load to ensure that all servers will be used. Use of this algorithm is recommended where very long sessions are expected, such as LDAP, SQL, TSE, etc... but is not very well suited for protocols using short sessions such as HTTP. 
This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. source The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up. If the hash result changes due to the number of running servers changing, many clients will be directed to a different server. This algorithm is generally used in TCP mode where no cookie may be inserted. It may also be used on the Internet to provide a best-effort stickiness to clients which refuse session cookies. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "hash-type". uri This algorithm hashes either the left part of the URI (before the question mark) or the whole URI (if the "whole" parameter is present) and divides the hash value by the total weight of the running servers. The result designates which server will receive the request. This ensures that the same URI will always be directed to the same server as long as no server goes up or down. This is used with proxy caches and anti-virus proxies in order to maximize the cache hit rate. Note that this algorithm may only be used in an HTTP backend. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "hash-type". This algorithm supports two optional parameters "len" and "depth", both followed by a positive integer number. These options may be helpful when it is needed to balance servers based on the beginning of the URI only. The "len" parameter indicates that the algorithm should only consider that many characters at the beginning of the URI to compute the hash. Note that having "len" set to 1 rarely makes sense since most URIs start with a leading "/". The "depth" parameter indicates the maximum directory depth to be used to compute the hash. One level is counted for each slash in the request. If both parameters are specified, the evaluation stops when either is reached. url_param The URL parameter specified in argument will be looked up in the query string of each HTTP GET request. If the modifier "check_post" is used, then an HTTP POST request entity will be searched for the parameter argument, when it is not found in a query string after a question mark ('?') in the URL. Optionally, specify a number of octets to wait for before attempting to search the message body. If the entity can not be searched, then round robin is used for each request. For instance, if your clients always send the LB parameter in the first 128 bytes, then specify that. The default is 48. The entity data will not be scanned until the required number of octets have arrived at the gateway, this is the minimum of: (default/max_wait, Content-Length or first chunk length). If Content-Length is missing or zero, it does not need to wait for more data than the client promised to send. When Content-Length is present and larger than , then waiting is limited to and it is assumed that this will be enough data to search for the presence of the parameter. In the unlikely event that Transfer-Encoding: chunked is used, only the first chunk is scanned. Parameter values separated by a chunk boundary, may be randomly balanced if at all. 
If the parameter is found followed by an equal sign ('=') and a value, then the value is hashed and divided by the total weight of the running servers. The result designates which server will receive the request. This is used to track user identifiers in requests and ensure that a same user ID will always be sent to the same server as long as no server goes up or down. If no value is found or if the parameter is not found, then a round robin algorithm is applied. Note that this algorithm may only be used in an HTTP backend. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "hash-type". hdr() The HTTP header will be looked up in each HTTP request. Just as with the equivalent ACL 'hdr()' function, the header name in parenthesis is not case sensitive. If the header is absent or if it does not contain any value, the roundrobin algorithm is applied instead. An optional 'use_domain_only' parameter is available, for reducing the hash algorithm to the main domain part with some specific headers such as 'Host'. For instance, in the Host value "haproxy.1wt.eu", only "1wt" will be considered. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "hash-type". rdp-cookie rdp-cookie(name) The RDP cookie (or "mstshash" if omitted) will be looked up and hashed for each incoming TCP request. Just as with the equivalent ACL 'req_rdp_cookie()' function, the name is not case-sensitive. This mechanism is useful as a degraded persistence mode, as it makes it possible to always send the same user (or the same session ID) to the same server. If the cookie is not found, the normal roundrobin algorithm is used instead. Note that for this to work, the frontend must ensure that an RDP cookie is already present in the request buffer. For this you must use 'tcp-request content accept' rule combined with a 'req_rdp_cookie_cnt' ACL. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "hash-type". is an optional list of arguments which may be needed by some algorithms. Right now, only "url_param" and "uri" support an optional argument. balance uri [len ] [depth ] balance url_param [check_post []] The load balancing algorithm of a backend is set to roundrobin when no other algorithm, mode nor option have been set. The algorithm may only be set once for each backend. Examples : balance roundrobin balance url_param userid balance url_param session_id check_post 64 balance hdr(User-Agent) balance hdr(host) balance hdr(Host) use_domain_only Note: the following caveats and limitations on using the "check_post" extension with "url_param" must be considered : - all POST requests are eligible for consideration, because there is no way to determine if the parameters will be found in the body or entity which may contain binary data. Therefore another method may be required to restrict consideration of POST requests that have no URL parameters in the body. (see acl reqideny http_end) - using a value larger than the request buffer size does not make sense and is useless. The buffer size is set at build time, and defaults to 16 kB. - Content-Encoding is not supported, the parameter search will probably fail; and load balancing will fall back to Round Robin. - Expect: 100-continue is not supported, load balancing will fall back to Round Robin. 
  - Transfer-Encoding (RFC2616 3.6.1) is only supported in the first chunk.
    If the entire parameter value is not present in the first chunk, the
    selection of server is undefined (actually, defined by how little
    actually appeared in the first chunk).

  - This feature does not support generation of a 100, 411 or 501 response.

  - In some cases, requesting "check_post" MAY attempt to scan the entire
    contents of a message body. Scanning normally terminates when linear
    white space or control characters are found, indicating the end of what
    might be a URL parameter list. This is probably not a concern with SGML
    type message bodies.

  See also : "dispatch", "cookie", "appsession", "transparent", "hash-type"
             and "http_proxy".
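  As a complement to the one-line examples above, the sketch below shows a
  balancing algorithm declared in a complete backend; the server names and
  addresses are assumptions :

        backend bk_static
            mode http
            balance uri depth 3
            hash-type consistent
            server stat1 192.168.2.21:80 check
            server stat2 192.168.2.22:80 check

  Combining "balance uri" with "hash-type consistent" limits the amount of
  traffic that is redistributed to other servers when one server goes up or
  down.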
bind [<address>]:<port_range> [, ...]
bind [<address>]:<port_range> [, ...] interface <interface>
bind [<address>]:<port_range> [, ...] mss <maxseg>
bind [<address>]:<port_range> [, ...] transparent
bind [<address>]:<port_range> [, ...] id <id>
bind [<address>]:<port_range> [, ...] name <name>
bind [<address>]:<port_range> [, ...] defer-accept
  Define one or several listening addresses and/or ports in a frontend.
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    yes   |   yes  |   no
  Arguments :
    <address>     is optional and can be a host name, an IPv4 address, an
                  IPv6 address, or '*'. It designates the address the
                  frontend will listen on. If unset, all IPv4 addresses of
                  the system will be listened on. The same will apply for '*'
                  or the system's special address "0.0.0.0".

    <port_range>  is either a unique TCP port, or a port range for which the
                  proxy will accept connections for the IP address specified
                  above. The port is mandatory. Note that in the case of an
                  IPv6 address, the port is always the number after the last
                  colon (':'). A range can either be :
                   - a numerical port (ex: '80')
                   - a dash-delimited ports range explicitly stating the
                     lower and upper bounds (ex: '2000-2100') which are
                     included in the range.

                  Particular care must be taken against port ranges, because
                  every <address:port> couple consumes one socket (= a file
                  descriptor), so it's easy to consume lots of descriptors
                  with a simple range, and to run out of sockets. Also, each
                  <address:port> couple must be used only once among all
                  instances running on a same system. Please note that
                  binding to ports lower than 1024 generally requires
                  particular privileges to start the program, which are
                  independent of the 'uid' parameter.

    <interface>   is an optional physical interface name. This is currently
                  only supported on Linux. The interface must be a physical
                  interface, not an aliased interface. When specified, all
                  addresses on the same line will only be accepted if the
                  incoming packet physically comes through the designated
                  interface. It is also possible to bind multiple frontends
                  to the same address if they are bound to different
                  interfaces. Note that binding to a physical interface
                  requires root privileges.

    <maxseg>      is an optional TCP Maximum Segment Size (MSS) value to be
                  advertised on incoming connections. This can be used to
                  force a lower MSS for certain specific ports, for instance
                  for connections passing through a VPN. Note that this
                  relies on a kernel feature which is theoretically supported
                  under Linux but was buggy in all versions prior to 2.6.28.
                  It may or may not work on other operating systems. The
                  commonly advertised value on Ethernet networks is
                  1460 = 1500(MTU) - 40(IP+TCP).

    <id>          is a persistent value for socket ID. Must be positive and
                  unique in the proxy. An unused value will automatically be
                  assigned if unset. Can only be used when defining only a
                  single socket.

    <name>        is an optional name provided for stats.

    transparent   is an optional keyword which is supported only on certain
                  Linux kernels. It indicates that the addresses will be
                  bound even if they do not belong to the local machine. Any
                  packet targeting any of these addresses will be caught just
                  as if the address was locally configured. This normally
                  requires that IP forwarding is enabled. Caution! do not use
                  this with the default address '*', as it would redirect any
                  traffic for the specified port. This keyword is available
                  only when HAProxy is built with USE_LINUX_TPROXY=1.

    defer-accept  is an optional keyword which is supported only on certain
                  Linux kernels. It states that a connection will only be
                  accepted once some data arrive on it, or at worst after the
                  first retransmit. This should be used only on protocols for
                  which the client talks first (eg: HTTP). It can slightly
                  improve performance by ensuring that most of the request is
                  already available when the connection is accepted. On the
                  other hand, it will not be able to detect connections which
                  don't talk. It is important to note that this option is
                  broken in all kernels up to 2.6.31, as the connection is
                  never accepted until the client talks.
This can cause issues with front firewalls which would see an established connection while the proxy will only see it in SYN_RECV. It is possible to specify a list of address:port combinations delimited by commas. The frontend will then listen on all of these addresses. There is no fixed limit to the number of addresses and ports which can be listened on in a frontend, as well as there is no limit to the number of "bind" statements in a frontend. Example : listen http_proxy bind :80,:443 bind 10.0.0.1:10080,10.0.0.1:10443 See also : "source". bind-process [ all | odd | even | ] ... Limit visibility of an instance to a certain set of processes numbers. May be used in sections : defaults | frontend | listen | backend yes | yes | yes | yes Arguments : all All process will see this instance. This is the default. It may be used to override a default value. odd This instance will be enabled on processes 1,3,5,...31. This option may be combined with other numbers. even This instance will be enabled on processes 2,4,6,...32. This option may be combined with other numbers. Do not use it with less than 2 processes otherwise some instances might be missing from all processes. number The instance will be enabled on this process number, between 1 and 32. You must be careful not to reference a process number greater than the configured global.nbproc, otherwise some instances might be missing from all processes. This keyword limits binding of certain instances to certain processes. This is useful in order not to have too many processes listening to the same ports. For instance, on a dual-core machine, it might make sense to set 'nbproc 2' in the global section, then distributes the listeners among 'odd' and 'even' instances. At the moment, it is not possible to reference more than 32 processes using this keyword, but this should be more than enough for most setups. Please note that 'all' really means all processes and is not limited to the first 32. If some backends are referenced by frontends bound to other processes, the backend automatically inherits the frontend's processes. Example : listen app_ip1 bind 10.0.0.1:80 bind-process odd listen app_ip2 bind 10.0.0.2:80 bind-process even listen management bind 10.0.0.3:80 bind-process 1 2 3 4 See also : "nbproc" in global section. block { if | unless } Block a layer 7 request if/unless a condition is matched May be used in sections : defaults | frontend | listen | backend no | yes | yes | yes The HTTP request will be blocked very early in the layer 7 processing if/unless is matched. A 403 error will be returned if the request is blocked. The condition has to reference ACLs (see section 7). This is typically used to deny access to certain sensitive resources if some conditions are met or not met. There is no fixed limit to the number of "block" statements per instance. Example: acl invalid_src src 0.0.0.0/7 224.0.0.0/3 acl invalid_src src_port 0:1023 acl local_dst hdr(host) -i localhost block if invalid_src || local_dst See section 7 about ACL usage. capture cookie len Capture and log a cookie in the request and in the response. May be used in sections : defaults | frontend | listen | backend no | yes | yes | no Arguments : is the beginning of the name of the cookie to capture. In order to match the exact name, simply suffix the name with an equal sign ('='). The full name will appear in the logs, which is useful with application servers which adjust both the cookie name and value (eg: ASPSESSIONXXXXX). 
    <length>  is the maximum number of characters to report in the logs, which
              include the cookie name, the equal sign and the value, all in
              the standard "name=value" form. The string will be truncated on
              the right if it exceeds <length>.

  Only the first cookie is captured. Both the "cookie" request headers and the
  "set-cookie" response headers are monitored. This is particularly useful to
  check for application bugs causing session crossing or stealing between
  users, because generally the user's cookies can only change on a login page.

  When the cookie was not presented by the client, the associated log column
  will report "-". When a request does not cause a cookie to be assigned by
  the server, a "-" is reported in the response column.

  The capture is performed in the frontend only because it is necessary that
  the log format does not change for a given frontend depending on the
  backends. This may change in the future. Note that there can be only one
  "capture cookie" statement in a frontend. The maximum capture length is
  configured in the sources by default to 64 characters. It is not possible to
  specify a capture in a "defaults" section.

  Example:
        capture cookie ASPSESSION len 32

  See also : "capture request header", "capture response header" as well as
  section 8 about logging.


capture request header <name> len <length>
  Capture and log the first occurrence of the specified request header.
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    yes   |   yes  |   no
  Arguments :
    <name>    is the name of the header to capture. The header names are not
              case-sensitive, but it is a common practice to write them as
              they appear in the requests, with the first letter of each word
              in upper case. The header name will not appear in the logs, only
              the value is reported, but the position in the logs is
              respected.

    <length>  is the maximum number of characters to extract from the value
              and report in the logs. The string will be truncated on the
              right if it exceeds <length>.

  Only the first value of the last occurrence of the header is captured. The
  value will be added to the logs between braces ('{}'). If multiple headers
  are captured, they will be delimited by a vertical bar ('|') and will appear
  in the same order they were declared in the configuration. Non-existent
  headers will be logged as an empty string. Common uses for request header
  captures include the "Host" field in virtual hosting environments, the
  "Content-length" when uploads are supported, "User-agent" to quickly
  differentiate between real users and robots, and "X-Forwarded-For" in
  proxied environments to find where the request came from.

  Note that when capturing headers such as "User-agent", some spaces may be
  logged, making the log analysis more difficult. Thus be careful about what
  you log if you know your log parser is not smart enough to rely on the
  braces.

  There is no limit to the number of captured request headers, but each
  capture is limited to 64 characters. In order to keep log format consistent
  for a given frontend, header captures can only be declared in a frontend. It
  is not possible to specify a capture in a "defaults" section.

  Example:
        capture request header Host len 15
        capture request header X-Forwarded-For len 15
        capture request header Referer len 15

  See also : "capture cookie", "capture response header" as well as section 8
  about logging.


capture response header <name> len <length>
  Capture and log the first occurrence of the specified response header.
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    yes   |   yes  |   no
  Arguments :
    <name>    is the name of the header to capture.
              The header names are not case-sensitive, but it is a common
              practice to write them as they appear in the response, with the
              first letter of each word in upper case. The header name will
              not appear in the logs, only the value is reported, but the
              position in the logs is respected.

    <length>  is the maximum number of characters to extract from the value
              and report in the logs. The string will be truncated on the
              right if it exceeds <length>.

  Only the first value of the last occurrence of the header is captured. The
  result will be added to the logs between braces ('{}') after the captured
  request headers. If multiple headers are captured, they will be delimited by
  a vertical bar ('|') and will appear in the same order they were declared in
  the configuration. Non-existent headers will be logged as an empty string.
  Common uses for response header captures include the "Content-length" header
  which indicates how many bytes are expected to be returned, and the
  "Location" header to track redirections.

  There is no limit to the number of captured response headers, but each
  capture is limited to 64 characters. In order to keep log format consistent
  for a given frontend, header captures can only be declared in a frontend. It
  is not possible to specify a capture in a "defaults" section.

  Example:
        capture response header Content-length len 9
        capture response header Location len 15

  See also : "capture cookie", "capture request header" as well as section 8
  about logging.


clitimeout <timeout> (deprecated)
  Set the maximum inactivity time on the client side.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   no
  Arguments :
    <timeout> is the timeout value specified in milliseconds by default, but
              can be in any other unit if the number is suffixed by the unit,
              as explained at the top of this document.

  The inactivity timeout applies when the client is expected to acknowledge or
  send data. In HTTP mode, this timeout is particularly important to consider
  during the first phase, when the client sends the request, and during the
  response while it is reading data sent by the server. The value is specified
  in milliseconds by default, but can be in any other unit if the number is
  suffixed by the unit, as specified at the top of this document.

  In TCP mode (and to a lesser extent, in HTTP mode), it is highly recommended
  that the client timeout remains equal to the server timeout in order to
  avoid situations which are complex to debug. It is a good practice to cover
  one or several TCP packet losses by specifying timeouts that are slightly
  above multiples of 3 seconds (eg: 4 or 5 seconds).

  This parameter is specific to frontends, but can be specified once for all
  in "defaults" sections. This is in fact one of the easiest ways not to
  forget about it. An unspecified timeout results in an infinite timeout,
  which is not recommended. Such a usage is accepted and works but reports a
  warning during startup because it may result in an accumulation of expired
  sessions in the system if the system's timeouts are not configured either.

  This parameter is provided for compatibility but is currently deprecated.
  Please use "timeout client" instead.
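  As an illustration, the two settings below are equivalent; the second form
  is the one that should be used in new configurations. The 30 second value is
  only an example.

  Example (sketch) :
        # deprecated syntax
        clitimeout 30000

        # recommended equivalent
        timeout client 30s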
  See also : "timeout client", "timeout http-request", "timeout server", and
  "srvtimeout".


contimeout <timeout> (deprecated)
  Set the maximum time to wait for a connection attempt to a server to
  succeed.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    no    |   yes  |   yes
  Arguments :
    <timeout> is the timeout value specified in milliseconds by default, but
              can be in any other unit if the number is suffixed by the unit,
              as explained at the top of this document.

  If the server is located on the same LAN as haproxy, the connection should
  be immediate (less than a few milliseconds). Anyway, it is a good practice
  to cover one or several TCP packet losses by specifying timeouts that are
  slightly above multiples of 3 seconds (eg: 4 or 5 seconds). By default, the
  connect timeout also presets the queue timeout to the same value if this one
  has not been specified. Historically, the contimeout was also used to set
  the tarpit timeout in a listen section, which is not possible in a pure
  frontend.

  This parameter is specific to backends, but can be specified once for all in
  "defaults" sections. This is in fact one of the easiest ways not to forget
  about it. An unspecified timeout results in an infinite timeout, which is
  not recommended. Such a usage is accepted and works but reports a warning
  during startup because it may result in an accumulation of failed sessions
  in the system if the system's timeouts are not configured either.

  This parameter is provided for backwards compatibility but is currently
  deprecated. Please use "timeout connect", "timeout queue" or
  "timeout tarpit" instead.

  See also : "timeout connect", "timeout queue", "timeout tarpit", and
  "timeout server".


cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ]
              [ postonly ] [ preserve ] [ httponly ] [ secure ]
              [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ]
  Enable cookie-based persistence in a backend.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    no    |   yes  |   yes
  Arguments :
    <name>    is the name of the cookie which will be monitored, modified or
              inserted in order to bring persistence. This cookie is sent to
              the client via a "Set-Cookie" header in the response, and is
              brought back by the client in a "Cookie" header in all requests.
              Special care should be taken to choose a name which does not
              conflict with any likely application cookie. Also, if the same
              backends are subject to be used by the same clients (eg:
              HTTP/HTTPS), care should be taken to use different cookie names
              between all backends if persistence between them is not desired.

    rewrite   This keyword indicates that the cookie will be provided by the
              server and that haproxy will have to modify its value to set the
              server's identifier in it. This mode is handy when the
              management of complex combinations of "Set-cookie" and
              "Cache-control" headers is left to the application. The
              application can then decide whether or not it is appropriate to
              emit a persistence cookie. Since all responses should be
              monitored, this mode only works in HTTP close mode. Unless the
              application behaviour is very complex and/or broken, it is
              advised not to start with this mode for new deployments. This
              keyword is incompatible with "insert" and "prefix".

    insert    This keyword indicates that the persistence cookie will have to
              be inserted by haproxy in server responses if the client did not
              already have a cookie that would have permitted it to access
              this server. When used without the "preserve" option, if the
              server emits a cookie with the same name, it will be removed
              before processing. For this reason, this mode can be used to
              upgrade existing configurations running in the "rewrite" mode.
              The cookie will only be a session cookie and will not be stored
              on the client's disk.
              By default, unless the "indirect" option is added, the server
              will see the cookies emitted by the client. Due to caching
              effects, it is generally wise to add the "nocache" or "postonly"
              keywords (see below). The "insert" keyword is not compatible
              with "rewrite" and "prefix".

    prefix    This keyword indicates that instead of relying on a dedicated
              cookie for the persistence, an existing one will be completed.
              This may be needed in some specific environments where the
              client does not support more than one single cookie and the
              application already needs it. In this case, whenever the server
              sets a cookie named <name>, it will be prefixed with the
              server's identifier and a delimiter. The prefix will be removed
              from all client requests so that the server still finds the
              cookie it emitted. Since all requests and responses are subject
              to being modified, this mode requires the HTTP close mode. The
              "prefix" keyword is not compatible with "rewrite" and "insert".

    indirect  When this option is specified, no cookie will be emitted to a
              client which already has a valid one for the server which has
              processed the request. If the server sets such a cookie itself,
              it will be removed, unless the "preserve" option is also set. In
              "insert" mode, this will additionally remove cookies from the
              requests transmitted to the server, making the persistence
              mechanism totally transparent from an application point of view.

    nocache   This option is recommended in conjunction with the insert mode
              when there is a cache between the client and HAProxy, as it
              ensures that a cacheable response will be tagged non-cacheable
              if a cookie needs to be inserted. This is important because if
              all persistence cookies are added on a cacheable home page for
              instance, then all customers will then fetch the page from an
              outer cache and will all share the same persistence cookie,
              leading to one server receiving much more traffic than others.
              See also the "insert" and "postonly" options.

    postonly  This option ensures that cookie insertion will only be performed
              on responses to POST requests. It is an alternative to the
              "nocache" option, because POST responses are not cacheable, so
              this ensures that the persistence cookie will never get cached.
              Since most sites do not need any sort of persistence before the
              first POST which generally is a login request, this is a very
              efficient method to optimize caching without risking finding a
              persistence cookie in the cache. See also the "insert" and
              "nocache" options.

    preserve  This option may only be used with "insert" and/or "indirect". It
              allows the server to emit the persistence cookie itself. In this
              case, if a cookie is found in the response, haproxy will leave
              it untouched. This is useful in order to end persistence after a
              logout request for instance. For this, the server just has to
              emit a cookie with an invalid value (eg: empty) or with a date
              in the past. By combining this mechanism with the
              "disable-on-404" check option, it is possible to perform a
              completely graceful shutdown because users will definitely leave
              the server after they log out.

    httponly  This option tells haproxy to add an "HttpOnly" cookie attribute
              when a cookie is inserted. This attribute is used so that a user
              agent doesn't share the cookie with non-HTTP components. Please
              check RFC6265 for more information on this attribute.

    secure    This option tells haproxy to add a "Secure" cookie attribute
              when a cookie is inserted.
              This attribute is used so that a user agent never emits this
              cookie over non-secure channels, which means that a cookie
              learned with this flag will be presented only over SSL/TLS
              connections. Please check RFC6265 for more information on this
              attribute.

    domain    This option allows specifying the domain at which a cookie is
              inserted. It requires exactly one parameter: a valid domain
              name. If the domain begins with a dot, the browser is allowed to
              use it for any host ending with that name. It is also possible
              to specify several domain names by invoking this option multiple
              times. Some browsers might have small limits on the number of
              domains, so be careful when doing that. For the record, sending
              10 domains to MSIE 6 or Firefox 2 works as expected.

    maxidle   This option allows inserted cookies to be ignored after some
              idle time. It only works with insert-mode cookies. When a cookie
              is sent to the client, the date this cookie was emitted is sent
              too. Upon further presentations of this cookie, if the date is
              older than the delay indicated by the parameter (in seconds), it
              will be ignored. Otherwise, it will be refreshed if needed when
              the response is sent to the client. This is particularly useful
              to prevent users who never close their browsers from remaining
              for too long on the same server (eg: after a farm size change).
              When this option is set and a cookie has no date, it is always
              accepted, but gets refreshed in the response. This maintains the
              ability for admins to access their sites. Cookies that have a
              date further than 24 hours in the future are ignored. Doing so
              lets admins fix timezone issues without risking kicking users
              off the site.

    maxlife   This option allows inserted cookies to be ignored after some
              life time, whether they're in use or not. It only works with
              insert mode cookies. When a cookie is first sent to the client,
              the date this cookie was emitted is sent too. Upon further
              presentations of this cookie, if the date is older than the
              delay indicated by the parameter (in seconds), it will be
              ignored. If the cookie in the request has no date, it is
              accepted and a date will be set. Cookies that have a date
              further than 24 hours in the future are ignored. Doing so lets
              admins fix timezone issues without risking kicking users off the
              site. Contrary to maxidle, this value is not refreshed, only the
              first visit date counts. Both maxidle and maxlife may be used at
              the same time. This is particularly useful to prevent users who
              never close their browsers from remaining for too long on the
              same server (eg: after a farm size change). This is stronger
              than the maxidle method in that it forces a redispatch after
              some absolute delay.

  There can be only one persistence cookie per HTTP backend, and it can be
  declared in a defaults section. The value of the cookie will be the value
  indicated after the "cookie" keyword in a "server" statement. If no cookie
  is declared for a given server, the cookie is not set.

  Examples :
        cookie JSESSIONID prefix
        cookie SRV insert indirect nocache
        cookie SRV insert postonly indirect
        cookie SRV insert indirect nocache maxidle 30m maxlife 8h
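  As a more complete illustration, the hypothetical backend below combines
  cookie insertion with the per-server cookie values set on the "server" lines
  (see section 5). Server names and addresses are examples only.

  Example (sketch) :
        backend app
            cookie SRV insert indirect nocache
            server app1 192.168.1.11:80 cookie s1 check
            server app2 192.168.1.12:80 cookie s2 check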
  See also : "appsession", "balance source", "capture cookie", "server" and
  "ignore-persist".


default-server [param*]
  Change default options for a server in a backend
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    no    |   yes  |   yes
  Arguments:
    <param*>  is a list of parameters for this server. The "default-server"
              keyword accepts a large number of options and has a complete
              section dedicated to it. Please refer to section 5 for more
              details.

  Example :
        default-server inter 1000 weight 13

  See also: "server" and section 5 about server options


default_backend <backend>
  Specify the backend to use when no "use_backend" rule has been matched.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   no
  Arguments :
    <backend> is the name of the backend to use.

  When doing content-switching between frontends and backends using the
  "use_backend" keyword, it is often useful to indicate which backend will be
  used when no rule has matched. It generally is the dynamic backend which
  will catch all undetermined requests.

  Example :
        use_backend     dynamic  if  url_dyn
        use_backend     static   if  url_css url_img extension_img
        default_backend dynamic

  See also : "use_backend", "reqsetbe", "reqisetbe"


disabled
  Disable a proxy, frontend or backend.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   yes
  Arguments : none

  The "disabled" keyword is used to disable an instance, mainly in order to
  free a listening port or to temporarily disable a service. The instance will
  still be created and its configuration will be checked, but it will be
  created in the "stopped" state and will appear as such in the statistics. It
  will not receive any traffic nor will it send any health-checks or logs. It
  is possible to disable many instances at once by adding the "disabled"
  keyword in a "defaults" section.
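  The following hypothetical sketch shows a service kept in the configuration
  but temporarily stopped; the name and port are examples only. Conversely,
  the "enabled" keyword described below may be used to re-enable a single
  instance when "disabled" was inherited from a "defaults" section.

  Example (sketch) :
        listen old_service
            bind :8080
            disabled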
  See also : "enabled"


dispatch <address>:<port>
  Set a default server address
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    no    |   yes  |   yes
  Arguments :
    <address> is the IPv4 address of the default server. Alternatively, a
              resolvable hostname is supported, but this name will be resolved
              during start-up.

    <port>    is a mandatory port specification. All connections will be sent
              to this port, and it is not permitted to use port offsets as is
              possible with normal servers.

  The "dispatch" keyword designates a default server for use when no other
  server can take the connection. In the past it was used to forward
  non-persistent connections to an auxiliary load balancer. Due to its simple
  syntax, it has also been used for simple TCP relays. For clarity, it is
  recommended not to use it, and to use the "server" directive instead.
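  A minimal sketch of this legacy usage is shown below with hypothetical
  addresses; the commented line shows the recommended equivalent using the
  "server" directive.

  Example (sketch) :
        listen legacy_relay
            bind :8000
            dispatch 192.168.0.10:8000
            # preferred modern equivalent :
            # server relay1 192.168.0.10:8000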
  See also : "server"


enabled
  Enable a proxy, frontend or backend.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   yes
  Arguments : none

  The "enabled" keyword is used to explicitly enable an instance, when the
  defaults section has been set to "disabled". This is very rarely used.

  See also : "disabled"


errorfile <code> <file>
  Return a file's contents instead of errors generated by HAProxy
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   yes
  Arguments :
    <code>    is the HTTP status code. Currently, HAProxy is capable of
              generating codes 200, 400, 403, 408, 500, 502, 503, and 504.

    <file>    designates a file containing the full HTTP response. It is
              recommended to follow the common practice of appending ".http"
              to the filename so that people do not confuse the response with
              HTML error pages, and to use absolute paths, since files are
              read before any chroot is performed.

  It is important to understand that this keyword is not meant to rewrite
  errors returned by the server, but errors detected and returned by HAProxy.
  This is why the list of supported errors is limited to a small set. Code 200
  is emitted in response to requests matching a "monitor-uri" rule.

  The files are returned verbatim on the TCP socket. This allows any trick
  such as redirections to another URL or site, as well as tricks to clean
  cookies, force enable or disable caching, etc... The package provides
  default error files returning the same contents as default errors.

  The files should not exceed the configured buffer size (BUFSIZE), which
  generally is 8 or 16 kB, otherwise they will be truncated. It is also wise
  not to put any reference to local contents (eg: images) in order to avoid
  loops between the client and HAProxy when all servers are down, causing an
  error to be returned instead of an image. For better HTTP compliance, it is
  recommended that all header lines end with CR-LF and not LF alone.

  The files are read at the same time as the configuration and kept in memory.
  For this reason, the errors continue to be returned even when the process is
  chrooted, and no file change is considered while the process is running. A
  simple method for developing those files consists of associating them with
  the 403 status code and requesting a blocked URL.

  See also : "errorloc", "errorloc302", "errorloc303"

  Example :
        errorfile 400 /etc/haproxy/errorfiles/400badreq.http
        errorfile 403 /etc/haproxy/errorfiles/403forbid.http
        errorfile 503 /etc/haproxy/errorfiles/503sorry.http


errorloc <code> <url>
errorloc302 <code> <url>
  Return an HTTP redirection to a URL instead of errors generated by HAProxy
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   yes
  Arguments :
    <code>    is the HTTP status code. Currently, HAProxy is capable of
              generating codes 200, 400, 403, 408, 500, 502, 503, and 504.

    <url>     is the exact contents of the "Location" header. It may contain
              either a relative URI to an error page hosted on the same site,
              or an absolute URI designating an error page on another site.
              Special care should be given to relative URIs to avoid redirect
              loops if the URI itself may generate the same error (eg: 500).

  It is important to understand that this keyword is not meant to rewrite
  errors returned by the server, but errors detected and returned by HAProxy.
  This is why the list of supported errors is limited to a small set. Code 200
  is emitted in response to requests matching a "monitor-uri" rule.

  Note that both keywords return the HTTP 302 status code, which tells the
  client to fetch the designated URL using the same HTTP method. This can be
  quite problematic in case of non-GET methods such as POST, because the URL
  sent to the client might not be allowed for something other than GET. To
  work around this problem, please use "errorloc303", which sends the HTTP 303
  status code, indicating to the client that the URL must be fetched with a
  GET request.

  See also : "errorfile", "errorloc303"


errorloc303 <code> <url>
  Return an HTTP redirection to a URL instead of errors generated by HAProxy
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   yes
  Arguments :
    <code>    is the HTTP status code. Currently, HAProxy is capable of
              generating codes 400, 403, 408, 500, 502, 503, and 504.

    <url>     is the exact contents of the "Location" header. It may contain
              either a relative URI to an error page hosted on the same site,
              or an absolute URI designating an error page on another site.
              Special care should be given to relative URIs to avoid redirect
              loops if the URI itself may generate the same error (eg: 500).

  It is important to understand that this keyword is not meant to rewrite
  errors returned by the server, but errors detected and returned by HAProxy.
  This is why the list of supported errors is limited to a small set. Code 200
  is emitted in response to requests matching a "monitor-uri" rule.

  Note that this keyword returns the HTTP 303 status code, which tells the
  client to fetch the designated URL using a GET request. This solves the
  usual problems associated with "errorloc" and the 302 code. It is possible
  that some very old browsers designed before HTTP/1.1 do not support it, but
  no such problem has been reported so far.
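  A hypothetical sketch is shown below; the status codes and URLs are examples
  only. Because the error page is always fetched with a GET request,
  "errorloc303" is generally the safer choice for sites using POST forms.

  Example (sketch) :
        errorloc303 503 http://static.example.com/errors/503.html
        errorloc303 502 /sorry.html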
  See also : "errorfile", "errorloc", "errorloc302"


force-persist { if | unless } <condition>
  Declare a condition to force persistence on down servers
  May be used in sections :   defaults | frontend | listen | backend
                                 no    |    yes   |   yes  |   yes

  By default, requests are not dispatched to down servers. It is possible to
  force this using "option persist", but it is unconditional and redispatches
  to a valid server if "option redispatch" is set. That leaves very few
  possibilities to force some requests to reach a server which is artificially
  marked down for maintenance operations.

  The "force-persist" statement allows one to declare various ACL-based
  conditions which, when met, will cause a request to ignore the down status
  of a server and still try to connect to it. That makes it possible to start
  a server, still replying an error to the health checks, and run a specially
  configured browser to test the service. Among the handy methods, one could
  use a specific source IP address, or a specific cookie. The cookie also has
  the advantage that it can easily be added/removed on the browser from a test
  page. Once the service is validated, it is then possible to open the service
  to the world by returning a valid response to health checks.

  The forced persistence is enabled when an "if" condition is met, or unless
  an "unless" condition is met. The final redispatch is always disabled when
  this is used.

  See also : "option redispatch", "ignore-persist", "persist", and section 7
  about ACL usage.


fullconn <conns>
  Specify at what backend load the servers will reach their maxconn
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    no    |   yes  |   yes
  Arguments :
    <conns>   is the number of connections on the backend which will make the
              servers use the maximal number of connections.

  When a server has a "maxconn" parameter specified, it means that its number
  of concurrent connections will never go higher. Additionally, if it has a
  "minconn" parameter, it indicates a dynamic limit following the backend's
  load. The server will then always accept at least <minconn> connections,
  never more than <maxconn>, and the limit will be on the ramp between both
  values when the backend has less than <fullconn> concurrent connections.
  This makes it possible to limit the load on the servers during normal loads,
  but to push it further for heavier loads, without overloading the servers
  during exceptional loads.

  Example :
     # The servers will accept between 100 and 1000 concurrent connections
     # each and the maximum of 1000 will be reached when the backend reaches
     # 10000 connections.
     backend dynamic
        fullconn   10000
        server     srv1   dyn1:80 minconn 100 maxconn 1000
        server     srv2   dyn2:80 minconn 100 maxconn 1000

  See also : "maxconn", "server"


grace