NAME
qsfilter2 - a utility to generate mod_qos request line rules from existing
access/audit log data.
SYNOPSIS
qsfilter2 -i <path> [-c <path>] [-d <num>] [-h] [-b <num>] [-p|-s|-m|-o] [-l <len>] [-n]
[-e] [-u 'uni'] [-k <prefix>] [-t] [-f <path>] [-v 0|1|2]
DESCRIPTION
mod_qos implements a request filter which validates each request line. The module supports
both the negative and the positive security model. The QS_Deny* directives are used to
specify request line patterns which are not allowed to access the server (negative
security model / blacklist). These rules are used to restrict access to certain resources
which should not be available to users or to protect the server from malicious patterns.
The QS_Permit* rules implement a positive security model (whitelist). These directives are
used to define allowed request line patterns. Requests which do not match any of these
patterns are not allowed to access the server.
qsfilter2 is an audit log analyzer used to generate filter rules (perl compatible regular
expressions) which may be used by mod_qos to deny access to suspect requests
(QS_PermitUri rules). It parses existing audit log files in order to generate request
patterns covering all allowed requests.
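For example, a minimal sketch of a single generation run (the file names loc.txt,
httpd.conf and rules.out are placeholders, not something qsfilter2 prescribes):
# generate whitelist rules from the collected request URIs, honouring the
# manual QS_PermitUri/QS_Deny* rules defined in httpd.conf
./qsfilter2 -i loc.txt -c httpd.conf -m -e > rules.out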
OPTIONS
-i <path>
Input file containing request URIs. The URIs for this file have to be extracted
from the server's access logs. Each line of the input file contains a request URI
consisting of a path and a query.
Example:
/aaa/index.do
/aaa/edit?image=1.jpg
/aaa/image/1.jpg
/aaa/view?page=1
/aaa/edit?document=1
The access log data must include not only current request URIs but also the
request lines from previous rule generation steps, as well as the request lines
which cover manually generated rules.
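For example (a minimal sketch; the log path, the awk field number, and the file
name loc.txt are assumptions based on the Apache "combined" log format):
# append path and query of every request found in the access log
awk '{print $7}' /var/log/apache2/access.log | sort -u >> loc.txt
# de-duplicate, keeping request lines from previous generation steps
sort -u -o loc.txt loc.txt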
-c <path>
mod_qos configuration file defining QS_DenyRequestLine and QS_PermitUri directives.
qsfilter2 generates rules from access log data automatically. Manually generated
rules (QS_PermitUri) may be provided by this file. Note: each manual rule must be
represented by a request URI in the input data (-i) in order to ensure that it is
not deleted by the rule optimisation algorithm. QS_Deny* rules from this file are
used to filter out request lines which should not be used for whitelist rule
generation.
Example:
# manually defined whitelist rule:
QS_PermitUri +view deny "^[/a-zA-Z0-9]+/view\?(page=[0-9]+)?$"
# filter unwanted request line patterns:
QS_DenyRequestLine +printable deny ".*[\x00-\x19].*"
-d <num>
Depth (number of sub-locations) of the path string which is defined as a literal
string. Default is 1.
-h Always use a string representing the handler name in the path even if the URL does
not have a query. See also the -d option.
-b <num>
Replaces the URL pattern by a regular expression when a base64/hex encoded string
is detected. The detection sensitivity is defined by a numeric value. You should
use values higher than 5 (default) or 0 to disable this function.
-p Represents the query by pcre only (no literal strings).
-s Uses one single pcre for the whole query string.
-m Uses one pcre for multiple query values (recommended mode).
-o Ignores the order of the query parameters.
-l <len>
Adds the defined value to the allowed query length ({0,size+len}), default is 10.
-n Disables redundant rules elimination.
-e Exit on error.
-u 'uni'
Enables additional decoding methods. Use the same settings as you have used for the
QS_Decoding directive.
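For example (a sketch; loc.txt and rules.out are placeholders), if the server
configuration enables 'uni' decoding, rule generation should use the same setting:
# httpd.conf contains: QS_Decoding uni
./qsfilter2 -i loc.txt -m -e -u uni > rules.out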
-k <prefix>
Prefix used to generate rule identifiers (QSF by default).
-t Calculates the maximal latency per request (worst case) using the generated rules,
i.e. it determines the worst case performance of the generated whitelist by applying
each rule to every request line (the output is the real time filter duration per
request line in milliseconds).
-f <path>
Filters the input by the provided path (prefix), processing only matching lines.
-v <level>
Verbose mode (0=silent, 1=rule source, 2=detailed). Default is 1. Never use rules
without having checked the request data which has been used to generate them! Level
1 is highly recommended (unless you have created the log data using your own web
crawler).
OUTPUT
The output of qsfilter2 is written to stdout. The output contains the generated
QS_PermitUri directives as well as information about the source data which has been
used to generate these rules. It is very important to check the validity of each request
line
which has been used to calculate the QS_PermitUri rules. Each request line which has been
used to generate a new rule is shown in the output prefixed by "ADD line <line number>:".
These request lines should be stored and reused at any later rule generation (add them to
the URI input file). The subsequent line shows the generated rule. At the end of data
processing a list of all generated QS_PermitUri rules is shown. These directives may be
used within the configuration file used by mod_qos.
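For example (a sketch assuming the output has been saved to rules.out), the request
lines marked with "ADD line" can be carried over to the URI input file for the next
generation run:
grep '^# ADD line ' rules.out | sed 's/^# ADD line [0-9]*: //' >> loc.txt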
EXAMPLE
./qsfilter2 -i loc.txt -c httpd.conf -m -e
...
# ADD line 1: /aaa/index.do
# 003 ^(/[a-zA-Z0-9\-_]+)+[/]?\.?[a-zA-Z]{0,4}$
# ADD line 3: /aaa/view?page=1
# --- ^[/a-zA-Z0-9]+/view\?(page=[0-9]+)?$
# ADD line 4: /aaa/edit?document=1
# 004 ^[/a-zA-Z]+/edit\?((document)(=[0-9]*)*[&]?)*$
# ADD line 5: /aaa/edit?image=1.jpg
# 005 ^[/a-zA-Z]+/edit\?((image)(=[0-9\.a-zA-Z]*)*[&]?)*$
...
QS_PermitUri +QSF001 deny "^[/a-zA-Z]+/edit\?((document|image)(=[0-9\.a-zA-Z]*)*[&]?)*$"
QS_PermitUri +QSF002 deny "^[/a-zA-Z0-9]+/view\?(page=[0-9]+)?$"
QS_PermitUri +QSF003 deny "^(/[a-zA-Z0-9\-_]+)+[/]?\.?[a-zA-Z]{0,4}$"
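The generated directives may then be extracted into a separate file and loaded by the
web server, for example (a sketch; the file name and Include path are placeholders):
grep '^QS_PermitUri ' rules.out > conf/qos_permit.conf
# httpd.conf:
#   Include conf/qos_permit.conf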