Installation

Environment Setup

[1] https://github.com/ntop/PF_RING/blob/dev/doc/README.hugepages.md

  1. Dependencies

    yum -y install epel-release
    yum -y install "@Development tools" python-devel libpcap-devel dkms glib2-devel pcre-devel zlib-devel openssl-devel
    yum install kernel-devel-$(uname -r)
    
  2. Enable 2MB THP

    echo always > /sys/kernel/mm/transparent_hugepage/enabled
    echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /mnt/hugepages
    mount -t hugetlbfs nodev /mnt/hugepages
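
    Neither the page reservation nor the mount survives a reboot. Optionally (an added note, not part of the original walkthrough), persist both:

    echo "vm.nr_hugepages = 8192" > /etc/sysctl.d/hugepages.conf
    echo "nodev /mnt/hugepages hugetlbfs defaults 0 0" >> /etc/fstab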
    
  3. Validate THP

    grep Huge /proc/meminfo
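
    With the settings above (8192 x 2 MB pages) the output should look roughly like the following; the Free and Rsvd counts will vary:

    HugePages_Total:    8192
    HugePages_Free:     8192
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB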
    
  4. Install Librdkafka

    wget https://github.com/edenhill/librdkafka/archive/v0.9.4.tar.gz -O - | tar -xz
    cd librdkafka-0.9.4/
    ./configure --prefix=/usr
    make
    make install
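
    A quick sanity check (optional, not part of the original steps) that librdkafka landed under the /usr prefix:

    ldconfig
    ls -l /usr/lib/librdkafka*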
    

Install PF Ring

  1. Build and install.

    wget https://github.com/ntop/PF_RING/archive/6.6.0.tar.gz -O - | tar -xz
    cd PF_RING-6.6.0
    
    cd kernel
    make
    make install
    
    cd ../userland/lib
    ./configure --prefix=/usr/local/pfring
    make 
    make install
    
    cd ../libpcap
    ./configure --prefix=/usr/local/pfring
    make
    make install
    
    cd ../tcpdump-4.1.1
    ./configure --prefix=/usr/local/pfring
    make
    make install
    
  2. Load the kernel module.

    modprobe pf_ring
    
  3. Validate

    $ lsmod | grep pf_ring
    pf_ring              1234009  0
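
    The module is not reloaded automatically after a reboot. Optionally (an added note, not part of the original steps), list it in modules-load.d so systemd loads it at boot:

    echo "pf_ring" > /etc/modules-load.d/pf_ring.conf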
    
  4. Build the ZC driver.

    cd ~/PF_RING-6.6.0/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src
    make
    
  5. Load the ZC driver

    rmmod ixgbe
    insmod ~/PF_RING-6.6.0/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src/ixgbe.ko
    
  6. Validate the ZC driver.

    lsmod | grep ixgbe
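
    After reloading the driver the interface typically needs to be brought back up, and ethtool -i confirms which driver now backs it (the interface name is a placeholder; this is an added note, not part of the original steps):

    ethtool -i <tap-interface-1>
    ip link set <tap-interface-1> up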
    
  7. Create the license. PF_RING ZC requires a per-interface license from ntop; install it on the host before continuing.

  8. Build the ZC tools.

    cd userland/examples_zc/
    make
    
  9. Validate the license.

    [root@localhost examples_zc]# ./zcount -i <tap-interface-1> -C
    License OK
    
  10. Sanity check

    [root@localhost examples_zc]# ./zsanitycheck
    Writing data..
    Reading data..
    Test completed, 1024 buffers inspected
    
  11. Add PF Ring's libpcap to the dynamic library load path.

    echo "/usr/local/pfring/lib/" >> /etc/ld.so.conf.d/pfring.conf
    ldconfig -v
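
    To verify that the PF_RING libpcap is now resolvable by the dynamic linker:

    ldconfig -p | grep -i pcap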
    

Install Bro

[1] https://www.bro.org/documentation/load-balancing.html

  1. Install Bro on the host where it will run (y137).

    cd
    wget https://www.bro.org/downloads/release/bro-2.4.1.tar.gz  -O - | tar -xz
    cd bro-2.4.1
    ./configure --prefix=/usr --with-pcap=/usr/local/pfring
    make
    make install
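
    A quick check (optional) that Bro installed under the /usr prefix:

    bro --version
    which broctl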
    
  2. Configure Bro to listen on the TAP interface.

    sed -i 's/eth0/<tap-interface-1>/g' /usr/etc/node.cfg
    
  3. Configure load balancer; edit /usr/etc/node.cfg to look similar to the following.

    [manager]
    type=manager
    host=localhost
    
    [proxy-1]
    type=proxy
    host=localhost
    
    [worker-1]
    type=worker
    host=localhost
    interface=<tap-interface-1>
    lb_method=pf_ring
    lb_procs=4
    pin_cpus=0,1,2,3
    
  4. Install config changes.

    broctl install
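
    Optionally, run broctl check first to verify that the node and policy configuration is valid before installing it:

    broctl check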
    
  5. Configure logs at /usr/etc/broctl.cfg. Replace /metron1 with the desired mount point for the data storage disks.

    # Rotation interval in seconds for log files on manager (or standalone) node.
    # A value of 0 disables log rotation.
    LogRotationInterval = 3600
    
    # Expiration interval for archived log files in LogDir.  Files older than this
    # will be deleted by "broctl cron".  The interval is an integer followed by
    # one of these time units:  day, hr, min.  A value of 0 means that logs
    # never expire.
    LogExpireInterval = 7 day
    
    # Location of the log directory where log files will be archived each rotation
    # interval.
    LogDir = /metron1/bro/logs
    
    # Location of the spool directory where files and data that are currently being
    # written are stored.
    SpoolDir = /metron1/bro/spool
    
  6. Install the Bro Plugin on the host where it will run (y137).

    wget https://github.com/apache/metron/archive/master.zip
    unzip master.zip
    cd metron-master/metron-sensors/bro-plugin-kafka
    ./configure --bro-dist=/root/bro-2.4.1 --install-root=/usr/lib/bro/plugins/ --with-librdkafka=/usr
    make
    make install
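
    To confirm that Bro can see the plugin (the version string in the output will differ):

    bro -N Bro::Kafka
    # expect a line similar to: Bro::Kafka - Writes logs to Kafka (dynamic, version ...)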
    
  7. Add the following to /usr/share/bro/site/local.bro. Add the appropriate Kafka broker and Kerberos information.

    @load Bro/Kafka/logs-to-kafka.bro
    redef Kafka::logs_to_send = set(HTTP::LOG, DNS::LOG);
    redef Kafka::topic_name = "bro";
    redef Kafka::tag_json = T;
    redef Kafka::kafka_conf = table( ["metadata.broker.list"] = "<kafka-broker-list>"
                                   , ["security.protocol"] = "SASL_PLAINTEXT"
                                   , ["sasl.kerberos.keytab"] = "<path-to-kerberos-keytab>"
                                   , ["sasl.kerberos.principal"] = "<kerberos-principal>"
                                   , ["debug"] = "metadata"
                                   );
    
  8. Make sure the changes are installed.

    broctl install
    
  9. Start Bro.

    broctl deploy
    
  10. Ensure that Bro is producing telemetry.

    ls -ltr /metron1/bro/logs/current
    
  11. If there is telemetry in the logs, then validate that it is also landing in the Kafka topic.
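
    One way to do that is with the console consumer (a sketch, mirroring the Kafka commands in the Management section below; the ZooKeeper quorum is a placeholder):

    export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/metron/0.4.0/client_jaas.conf"
    export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
    kafka-console-consumer.sh --zookeeper <zookeeper-quorum> --topic bro --security-protocol SASL_PLAINTEXT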

Install YAF

  1. Install libfixbuf.

    cd
    wget http://tools.netsa.cert.org/releases/libfixbuf-1.7.1.tar.gz  -O - | tar -xz
    cd libfixbuf-1.7.1/
    ./configure
    make
    make install
    
  2. Build YAF. Double-check the path to PF_RING.

    cd
    wget http://tools.netsa.cert.org/releases/yaf-2.8.0.tar.gz -O - | tar -xz
    cd yaf-2.8.0/
    ./configure --enable-applabel --enable-plugins --disable-airframe --with-pfring=/usr/local/pfring/
    make
    make install
    
  3. Add YAF's lib path to the dynamic library load path.

    echo "/usr/local/lib/" >> /etc/ld.so.conf.d/yaf.conf
    ldconfig -v
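
    To double-check that yaf linked against the PF_RING libraries (assuming YAF's default /usr/local install prefix):

    ldd /usr/local/bin/yaf | grep -i -e pfring -e pcap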
    
  4. Start YAF by following the instructions in the Management section below.

Install Fastcapa

  1. Enable 1G THP by following these instructions.

  2. Install DPDK following these instructions.

  3. Grab the source code for Fastcapa.

    wget https://github.com/nickwallen/metron/archive/X520-POC.zip
    unzip X520-POC.zip
    cd metron-X520-POC/metron-sensors/fastcapa/
    
  4. Build Fastcapa following these instructions.

  5. Start with an example Fastcapa configuration file like the following.

    #
    # kafka global settings
    #
    [kafka-global]
    
    #debug = broker,topic,msg
    
    # Protocol used to communicate with brokers. 
    # Type: enum value { plaintext, ssl, sasl_plaintext, sasl_ssl }
    security.protocol = SASL_PLAINTEXT
    
    # Broker service name
    #sasl.kerberos.service.name=kafka
    
    # Client keytab location
    sasl.kerberos.keytab=<path-to-kerberos-keytab>
    
    # sasl.kerberos.principal
    sasl.kerberos.principal=<kerberos-principal>
    
    # Initial list of brokers as a CSV list of broker host or host:port. 
    # Type: string
    metadata.broker.list=kafka1:9092,kafka2:9092,kafka3:9092
    
    # Client identifier. 
    # Type: string
    client.id = fastcapa-ens3f0
    
    # Maximum number of messages allowed on the producer queue. 
    # Type: integer
    # Default: 100000
    queue.buffering.max.messages = 5000000
    
    # Maximum total message size sum allowed on the producer queue. 
    # Type: integer
    #queue.buffering.max.kbytes = 2097151
    
    # Maximum time, in milliseconds, for buffering data on the producer queue. 
    # Type: integer
    # Default: 1000
    queue.buffering.max.ms = 20000
    
    # Maximum size for message to be copied to buffer. Messages larger than this will be 
    # passed by reference (zero-copy) at the expense of larger iovecs.  
    # Type: integer
    # Default: 65535
    #message.copy.max.bytes = 65535
    
    # Compression codec to use for compressing message sets. This is the default value 
    # for all topics, may be overridden by the topic configuration property compression.codec. 
    # Type: enum value { none, gzip, snappy, lz4 }
    # Default: none
    compression.codec = snappy
    
    # Maximum number of messages batched in one MessageSet. The total MessageSet size is 
    # also limited by message.max.bytes. 
    # Increase for better compression.
    # Type: integer
    batch.num.messages = 100000
    
    # Maximum transmit message size. 
    # Type: integer
    # Default: 1000000
    message.max.bytes = 10000000 
    
    # How many times to retry sending a failing MessageSet. Note: retrying may cause reordering. 
    # Type: integer
    message.send.max.retries = 5
    
    # The backoff time in milliseconds before retrying a message send. 
    # Type: integer
    # Default: 100
    retry.backoff.ms = 500
    
    # how often statistics are emitted; 0 = never
    # Statistics emit interval. The application also needs to register a stats callback 
    # using rd_kafka_conf_set_stats_cb(). The granularity is 1000ms. A value of 0 disables statistics. 
    # Type: integer
    # Default: 0
    statistics.interval.ms = 5000
    
    socket.timeout.ms = 10000
    
    # Only provide delivery reports for failed messages. 
    # Type: boolean
    # Default: false
    delivery.report.only.error = false
    
    #
    # kafka topic settings
    #
    [kafka-topic]
    
    # This field indicates how many acknowledgements the leader broker must receive from ISR brokers 
    # before responding to the request: 
    #   0=Broker does not send any response/ack to client, 
    #   1=Only the leader broker will need to ack the message, 
    #  -1 or all=broker will block until message is committed by all in sync replicas (ISRs) or broker's in.sync.replicas setting before sending response. 
    # Type: integer
    request.required.acks = 1
    
    # Local message timeout. This value is only enforced locally and limits the time a produced message 
    # waits for successful delivery. A time of 0 is infinite. 
    # Type: integer
    # Default: 300000
    message.timeout.ms = 900000
    
    # Report offset of produced message back to application. The application must use the 
    # dr_msg_cb to retrieve the offset from rd_kafka_message_t.offset. 
    # Type: boolean
    # Default: false
    #produce.offset.report = false
    
  6. Update the metadata.broker.list in the configuration file.

  7. Update the following Kerberos properties in the configuration file under the [kafka-global] header.

    sasl.kerberos.keytab=/etc/security/keytabs/metron.service.keytab
    sasl.kerberos.principal=<kerberos-principal>
    
  8. Add the build location of Fastcapa to the PATH.

    export PATH=$PATH:/root/metron-X520-POC/metron-sensors/fastcapa/build/app
    
  9. Run Fastcapa. Edit settings such as the topic and lcores to match the environment.

    screen -S fastcapa
    
    TOPIC=pcap
    CONFIG=/root/fastcapa.conf
    THP_MNT=/mnt/huge_1GB
    
    fastcapa -l 8-15,24 \
        --huge-dir $THP_MNT -- \
        -t $TOPIC \
        -c $CONFIG \
        -b 192 \
        -x 262144 \
        -q 4 \
        -r 4096
    

Management

Bro

(Q) How do I start Bro?

Note: This will not pick up any configuration changes. For that use install or deploy.

broctl start

(Q) How do I start Bro after making a configuration change?

broctl deploy

(Q) How do I check the status of Bro?

broctl status

(Q) How do I stop Bro?

broctl stop

(Q) How do I diagnose a problem starting Bro?

broctl diag

(Q) Is Bro actually working?

BRO_LOGS=/metron1/bro/logs

The http.log and dns.log produced by Bro should be very active.

ls -ltr $BRO_LOGS/current/

Look for any connectivity or authorization issues.

cat $BRO_LOGS/current/stderr.out
cat $BRO_LOGS/current/stdout.out

(Q) How do I change the topic Bro is writing to?

Alter the topic name in the local.bro script.

$ cat /usr/share/bro/site/local.bro
...
redef Kafka::topic_name = "bro";

Then restart Bro. Make sure to install or deploy it. Otherwise the configuration change will not take effect.

broctl stop
broctl deploy

YAF

(Q) How do I run YAF with one worker?

KAFKA_BROKERS=kafka1:9092,kafka2:9092
YAF_TOPIC=yaf

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/metron/0.4.0/client_jaas.conf"
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin:/usr/local/bin

yaf --in ens2f0 --live pcap | \
  yafscii --tabular | \
  kafka-console-producer.sh \
	  --broker-list $KAFKA_BROKERS \
	  --topic $YAF_TOPIC \
	  --security-protocol SASL_PLAINTEXT

(Q) How do I start YAF with a load balancer?

  1. Start the load balancer.

       SNIFF_IFACE=ens2f0
       YAF_TOPIC=yafpoc
       CLUSTER_NUM=99
       LOG_DIR=/var/log/yaf
       NUM_WORKERS=2
    
       mkdir -p $LOG_DIR
       modprobe pf_ring
       yafzcbalance --in=$SNIFF_IFACE \
           --cluster $CLUSTER_NUM \
           --num $NUM_WORKERS \
           --stats 15 \
           --daemon \
           --log $LOG_DIR/yafzcbalance.log
    
  2. Start the workers. Increment the $WORKER_NUM to start multiple workers. There should be at least 2 workers running.

       WORKER_NUM=0
       screen -S yaf-$WORKER_NUM
    
       WORKER_NUM=0
       CLUSTER_NUM=99
       KAFKA_BROKERS=kafka1:9092,kafka2:9092
       YAF_TOPIC=yaf
    
       export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/metron/0.4.0/client_jaas.conf"
       export PATH=$PATH:/usr/hdp/current/kafka-broker/bin:/usr/local/bin
    
       yaf --in $CLUSTER_NUM:$WORKER_NUM --live zc | \
       yafscii --tabular | \
       kafka-console-producer.sh --broker-list $KAFKA_BROKERS --topic $YAF_TOPIC --security-protocol SASL_PLAINTEXT
    

(Q) How do I stop YAF?

killall yafzcbalance
killall yaf

(Q) How do I check if the load balancer is working?

First start yafzcbalance. Then attach a worker that will simply consume and print the YAF output.

WORKER_NUM=0
screen -S yaf-$WORKER_NUM

WORKER_NUM=0
CLUSTER_NUM=99
KAFKA_BROKERS=kafka1:9092,kafka2:9092
YAF_TOPIC=yaf

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/metron/0.4.0/client_jaas.conf"
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin:/usr/local/bin

yaf --in $CLUSTER_NUM:$WORKER_NUM --live zc | yafscii --tabular

(Q) The load balancer is not working. What should I do?

Be sure to completely shut down the load balancer and workers. Then restart them, but change the $CLUSTER_NUM. This will often fix oddities with PF_RING's ring buffers.

Fastcapa

(Q) How do I start Fastcapa?

  1. Ensure that the NIC is bound to DPDK

    export PATH=$PATH:/usr/local/dpdk/sbin
    dpdk-devbind --status
    

    If it is not bound, bind it.

    ifdown ens3f0
    modprobe uio_pci_generic
    dpdk-devbind --bind=uio_pci_generic "81:00.0"
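
    After binding, the device should appear under the DPDK-compatible section of dpdk-devbind --status; the output looks roughly like this (the device description varies by NIC):

    Network devices using DPDK-compatible driver
    ============================================
    0000:81:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=uio_pci_generic unused=ixgbe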
    
  2. Ensure that THPs are available

    $ grep -e "^Huge" /proc/meminfo
    HugePages_Total:      16
    HugePages_Free:       16
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:    1048576 kB
    
  3. Ensure the THPs are mounted

    $ mount | grep hugetlbfs
    nodev on /mnt/huge_1GB type hugetlbfs (rw,relatime,pagesize=1GB)
    

    Mount them if they are not.

    umount -a -t hugetlbfs
    mount -t hugetlbfs -o pagesize=1GB nodev /mnt/huge_1GB
    
  4. Start Fastcapa in its own screen session.

    screen -S fastcapa
    
    TOPIC=pcappoc
    CONFIG=/root/fastcapa.conf
    THP_MNT=/mnt/huge_1GB
    
    fastcapa -l 8-15,24 --huge-dir $THP_MNT -- -t $TOPIC -c $CONFIG -b 192 -x 262144 -q 4 -r 4096
    

(Q) How do I stop Fastcapa?

killall fastcapa

(Q) How do I change the topic Fastcapa is writing to?

Change the topic name passed to Fastcapa with the -t command-line switch.

(Q) What are these numbers coming out of Fastcapa?

https://github.com/apache/metron/tree/master/metron-sensors/fastcapa#output

(Q) What is this error?

https://github.com/apache/metron/tree/master/metron-sensors/fastcapa#faqs

Kafka

(Q) How do I list all Kafka ACLs?

export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer \
	--authorizer-properties zookeeper.connect=$ZOOKEEPER \
	--list
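
To grant a principal access to produce to a topic (a sketch; the principal and topic names are placeholders):

kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer \
	--authorizer-properties zookeeper.connect=$ZOOKEEPER \
	--add --allow-principal User:<principal> \
	--producer --topic <topic>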

(Q) How do I consume data from a topic?

Run the following command as the metron user. The metron user already has the JAAS configuration set up.

export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
kafka-console-consumer.sh --zookeeper $ZOOKEEPER \
	--topic $TOPIC \
	--security-protocol SASL_PLAINTEXT