pmacct (Promiscuous mode IP Accounting package)
pmacct is Copyright (C) 2003-2016 by Paolo Lucente
TABLE OF CONTENTS:
I. Plugins included with pmacct distribution
II. Configuring pmacct for compilation and installing
III. Brief SQL (MySQL, PostgreSQL, SQLite 3.x) and noSQL (MongoDB) setup examples
IV. Running the libpcap-based daemon (pmacctd)
V. Running the NetFlow and sFlow daemons (nfacctd/sfacctd)
VI. Running the NFLOG-based daemon (uacctd)
VII. Running the pmacct client (pmacct)
VIII. Running the RabbitMQ/AMQP plugin
IX. Running the Kafka plugin
X. Internal buffering and queueing
XI. Quickstart guide to packet/stream classifiers
XII. Quickstart guide to setup a NetFlow agent/probe
XIII. Quickstart guide to setup a sFlow agent/probe
XIV. Quickstart guide to setup the BGP daemon
XV. Quickstart guide to setup a NetFlow/sFlow replicator
XVI. Quickstart guide to setup the IS-IS daemon
XVII. Quickstart guide to setup the BMP daemon
XVIII. Running the print plugin to write to flat-files
XIX. Quickstart guide to setup GeoIP lookups
XX. Using pmacct as traffic/event logger
XXI. Miscellaneous notes and troubleshooting tips
I. Plugins included with pmacct distribution
Given its open and pluggable architecture, pmacct is easily extensible with new
plugins. Here is a list of plugins included in the official pmacct distribution:
'memory': data is stored in a memory table and can be fetched via the pmacct
command-line client tool, 'pmacct'. This plugin also makes it easy to
inject data into 3rd party tools like GNUplot, RRDtool or a Net-SNMP
server. The plugin is good for prototype solutions and smaller-scale
environments.
'mysql': a working MySQL installation can be used for data storage.
'pgsql': a working PostgreSQL installation can be used for data storage.
'sqlite3': a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the SQLite
API) installation can be used for data storage.
'print': data is printed at regular intervals to flat-files or standard output
in tab-spaced, CSV and JSON formats.
'mongodb': a working MongoDB installation can be used for data storage. Installing
the MongoDB C driver is required.
'amqp': data is sent to a RabbitMQ message exchange, running AMQP protocol,
for delivery to consumer applications or tools. Popular consumers
are ElasticSearch, Cassandra and CouchDB.
'kafka': data is sent to a Kafka broker for delivery to consumer applications
or tools.
'tee': applies to nfacctd and sfacctd daemons only. It's a featureful packet
replicator for NetFlow/IPFIX/sFlow data.
'nfprobe': applies to pmacctd and uacctd daemons only. Exports collected data via
NetFlow v5/v9 or IPFIX.
'sfprobe': applies to pmacctd and uacctd daemons only. Exports collected data via
sFlow v5.
II. Configuring pmacct for compilation and installing
The simplest way to configure the package for compilation is to let the configure
script probe default headers and libraries for you. A first round of guessing is
done via pkg-config; then, for some libraries, "typical" default locations are
checked, ie. /usr/local/lib. Switches you are likely to want are already enabled,
ie. 64-bit counters and multi-threading (pre-requisite for the BGP, BMP and IGP
daemon codes). SQL plugins and IPv6 support are instead disabled by default. A few
examples will follow; as usual, to get the list of available switches, you can use
the following command-line:
shell> ./configure --help
Examples on how to enable the support for (1) MySQL, (2) PostgreSQL, (3) SQLite,
(4) MongoDB and any (5) mixed compilation:
(1) shell> ./configure --enable-mysql
(2) shell> ./configure --enable-pgsql
(3) shell> ./configure --enable-sqlite3
(4) shell> ./configure --enable-mongodb
(5) shell> ./configure --enable-mysql --enable-pgsql
Then, to compile and install, simply type:
shell> make; make install
But, for example, should you want to compile pmacct with PostgreSQL support and
have installed PostgreSQL in /usr/local/postgresql and pkg-config is unable to
help, you can supply this non-default location as follows (assuming you are
running the bash shell):
shell> export PGSQL_LIBS="-L/usr/local/postgresql/lib -lpq"
shell> export PGSQL_CFLAGS="-I/usr/local/postgresql/include"
shell> ./configure --enable-pgsql
Once daemons are installed you can check:
* how to instrument each daemon via its usage help page:
shell> pmacctd -h
* review version and build details:
shell> sfacctd -V
* traffic aggregation primitives supported by the daemon, and their description:
shell> nfacctd -a
III. Brief SQL and noSQL setup examples
RDBMS require a table schema to manage data. pmacct offers two options: use one
of the few pre-determined table schemas available (sections IIIa, b and c) or
compose a custom schema to fit your needs (section IIId). If you are unfamiliar with
SQL the former approach is recommended, although it can pose scalability issues in
larger deployments; if you know some SQL the latter is definitely the way to go.
Scripts for setting up RDBMS are located in the 'sql/' tree of the pmacct
distribution tarball. For further guidance read the relevant README files in
that directory. One of the crucial concepts to deal with, when using default
table schemas, is table versioning: please read more about this topic in the
FAQS document (Q16).
IIIa. MySQL examples
shell> cd sql/
- To create v1 tables:
shell> mysql -u root -p < pmacct-create-db_v1.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql
Data will be available in 'acct' table of 'pmacct' DB.
- To create v2 tables:
shell> mysql -u root -p < pmacct-create-db_v2.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql
Data will be available in 'acct_v2' table of 'pmacct' DB.
... And so on for the newer versions.
IIIb. PostgreSQL examples
Which user has to execute the following two scripts and how to authenticate with the
PostgreSQL server depends upon your current configuration. Keep in mind that both
scripts need postgres superuser permissions to execute some commands successfully:
shell> cp -p *.pgsql /tmp
shell> su - postgres
To create v1 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql
To create v2 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v2.pgsql
... And so on for the newer versions.
A few tables will be created into 'pmacct' DB. 'acct' ('acct_v2' or 'acct_v3') table is
the default table where data will be written when in 'typed' mode (see 'sql_data' option
in CONFIG-KEYS document; default value is 'typed'); 'acct_uni' ('acct_uni_v2' or
'acct_uni_v3') is the default table where data will be written when in 'unified' mode.
Since v6, PostgreSQL tables are greatly simplified: unified mode is no longer supported
and a single table ('acct_v6', for example) is created instead.
IIIc. SQLite examples
shell> cd sql/
- To create v1 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table.sqlite3
Data will be available in 'acct' table of '/tmp/pmacct.db' DB. Of course, you can change
the database filename based on your preferences.
- To create v2 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table_v2.sqlite3
Data will be available in 'acct_v2' table of '/tmp/pmacct.db' DB.
... And so on for the newer versions.
IIId. Custom SQL tables
Custom tables can be built by creating your own SQL schema and indexes. This
allows you to mix and match the primitives relevant to your accounting scenario.
To flag the intention to build a custom table, the sql_optimize_clauses directive
must be set to true, ie.:
sql_optimize_clauses: true
sql_table: <table name>
aggregate: <aggregation primitives list>
How to build the custom schema? Let's say the aggregation method of choice
(aggregate directive) is "vlan, in_iface, out_iface, etype", the table name is
"acct" and the database of choice is MySQL. The SQL schema is composed of four
main parts, explained below:
1) A fixed skeleton needed by pmacct logic:
CREATE TABLE <table_name> (
packets INT UNSIGNED NOT NULL,
bytes BIGINT UNSIGNED NOT NULL,
stamp_inserted DATETIME NOT NULL,
stamp_updated DATETIME
);
2) Indexing: primary key (of your choice, this is only an example) plus
any additional index you may find relevant.
3) Primitives enabled in pmacct, in this specific example the ones below; should
one need more or different ones, they can be looked up in the sql/README.mysql
file, in the section named "Aggregation primitives to SQL schema mapping":
vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,
4) Any additional fields, ignored by pmacct, that can be of use: for example for
lookup purposes, auto-increment, etc.; these can of course also be part of the
indexing you might choose.
Putting the pieces together, the resulting SQL schema is below along with the
required statements to create the database:
DROP DATABASE IF EXISTS pmacct;
CREATE DATABASE pmacct;
USE pmacct;
DROP TABLE IF EXISTS acct;
CREATE TABLE acct (
vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,
packets INT UNSIGNED NOT NULL,
bytes BIGINT UNSIGNED NOT NULL,
stamp_inserted DATETIME NOT NULL,
stamp_updated DATETIME,
PRIMARY KEY (vlan, iface_in, iface_out, etype, stamp_inserted)
);
To grant default pmacct user permission to write into the database look at the
file sql/pmacct-grant-db.mysql
IIIe. Historical accounting
Enabling historical accounting allows aggregating data over time (ie. 5 mins, hourly,
daily) in a flexible and fully configurable way. Timestamps are lodged into two fields:
'stamp_inserted', which represents the basetime of the timeslot, and 'stamp_updated',
which says when a given timeslot was updated for the last time. Below is a pretty
standard configuration fragment to slice data into nicely aligned (or rounded-off)
5-minute timeslots:
sql_history: 5m
sql_history_roundoff: m
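For example, with the fragment above a packet captured at 14:07:32 is accounted
to the timeslot whose stamp_inserted is 14:05:00, while stamp_updated records
when that timeslot was last written to. A hypothetical retrieval of a single
timeslot, assuming a MySQL v1 table schema, could then be:
shell> mysql -u pmacct -p -e "SELECT ip_src, ip_dst, packets, bytes FROM pmacct.acct WHERE stamp_inserted='2016-01-01 14:05:00'"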
IIIf. INSERTs-only
UPDATE queries are demanding in terms of resources; this is why, even if they are
supported by pmacct, a savvy approach is to cache data for longer times in memory and
write them off once per timeslot (sql_history): this produces a much lighter INSERTs-
only environment. This is an example based on 5-minute timeslots:
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
sql_dont_try_update: true
Note that sql_refresh_time is always expressed in seconds. An alternative approach
for cases where sql_refresh_time must be kept shorter than sql_history (for example
because a) sql_history periods are long, ie. hours or days, and/or b) a near
real-time data feed is a requirement) is to set up a synthetic auto-increment 'id'
field: it successfully prevents duplicates but comes at the expense of GROUP BY
queries when retrieving data.
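With such an 'id' field in place, multiple rows may exist per timeslot for the
same aggregate, hence retrieval queries have to sum them up. A sketch, assuming
a MySQL table aggregating by src_host/dst_host (ip_src, ip_dst fields):
SELECT ip_src, ip_dst, SUM(packets), SUM(bytes)
FROM acct
WHERE stamp_inserted = '2016-01-01 14:00:00'
GROUP BY ip_src, ip_dst;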
IIIg. MongoDB examples
MongoDB is a document-oriented noSQL database. The defining feature of document-oriented
databases is that they are schemaless, hence this section only needs to focus on a
simple configuration with historical accounting support:
...
plugins: mongodb
aggregate: ...
mongo_history: 5m
mongo_history_roundoff: m
mongo_refresh_time: 300
mongo_table: pmacct.acct
...
MongoDB release >= 2.2.0 is recommended. Installation of the MongoDB C driver 0.8,
also referred to as legacy, is required. Version 0.9 of the driver and later (also
referred to as current) is not supported (yet). The legacy driver can be downloaded
at the following URL: https://github.com/mongodb/mongo-c-driver-legacy .
IV. Running the libpcap-based daemon (pmacctd)
pmacctd, like the other daemons, can be run with commandline options, using a config
file or a mix of the two. Sample configuration files are in the examples/ tree. Note also
that most of the new features are available only as configuration directives. To be
aware of the existing configuration directives, please read the CONFIG-KEYS document.
Show all available pmacctd commandline switches:
shell> pmacctd -h
Run pmacctd reading configuration from a specified file (see examples/ tree for a brief
list of some commonly used keys; divert your eyes to CONFIG-KEYS for the full list).
This example applies to all daemons:
shell> pmacctd -f pmacctd.conf
Daemonize the process; listen on eth0; aggregate data by src_host/dst_host; write to a
MySQL server; limit traffic matching only source ip network 10.0.0.0/16; note that
filters work the same as tcpdump. So, refer to libpcap/tcpdump man pages for examples
and further reading.
shell> pmacctd -D -c src_host,dst_host -i eth0 -P mysql src net 10.0.0.0/16
Or written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: src_host, dst_host
interface: eth0
pcap_filter: src net 10.0.0.0/16
! ...
Print collected traffic data aggregated by src_host/dst_host over the screen; refresh
data every 30 seconds and listen on eth0.
shell> pmacctd -P print -r 30 -i eth0 -c src_host,dst_host
Or written the configuration way:
!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
interface: eth0
! ...
Daemonize the process; let pmacct aggregate traffic in order to show in vs out traffic
for network 192.168.0.0/16; send data to a PostgreSQL server. This configuration is not
possible via commandline switches; the corresponding configuration follows:
!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
! ...
The previous example looks nice! But how to make data historical? Simple enough: let's
suppose you want to split traffic by hour and write data into the DB every 60 seconds.
!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
! ...
Let's now translate the same example into the memory plugin world. Its use is valuable
especially when it's required to feed bytes/packets/flows counters to external programs.
Examples about the client program will follow later in this document. Now, note that
each memory table needs its own pipe file in order to be correctly contacted by the
client:
!
daemonize: true
plugins: memory[in], memory[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe
! ...
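Each memory table can then be queried by pointing the pmacct client to the
corresponding pipe file, ie.:
shell> pmacct -s -p /tmp/pmacct_in.pipe
shell> pmacct -s -p /tmp/pmacct_out.pipe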
As a further note, check the CONFIG-KEYS document for more imt_* directives, as they
will help in the task of fine-tuning the size and boundaries of memory tables, should
the default values not fit your setup.
Now, fire multiple instances of pmacctd, each on a different interface; again, because
each instance will have its own memory table, it will require its own pipe file for
client queries as well (as explained in the previous examples):
shell> pmacctd -D -i eth0 -m 8 -s 65535 -p /tmp/pipe.eth0
shell> pmacctd -D -i ppp0 -m 0 -s 32768 -p /tmp/pipe.ppp0
Run pmacctd, logging what happens to syslog with the "local2" facility:
shell> pmacctd -c src_host,dst_host -S local2
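To route these messages to a dedicated file, a line along the following lines can
be added to the syslog configuration (assuming a traditional syslogd or rsyslog
setup):
local2.* /var/log/pmacct.log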
NOTE: superuser privileges are needed to execute pmacctd correctly.
V. Running the NetFlow and sFlow daemons (nfacctd/sfacctd)
All examples about pmacctd are also valid for nfacctd and sfacctd with the exception
of directives that apply exclusively to libpcap. If you've skipped examples in section
'IV', please read them before continuing. All configuration keys available are in the
CONFIG-KEYS document. Some examples:
Run nfacctd reading configuration from a specified file.
shell> nfacctd -f nfacctd.conf
Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on port 5678 for incoming NetFlow
datagrams (from one or multiple NetFlow agents). Let's make pmacct refresh data every
two minutes and let's make data historical, divided into timeslots of 10 minutes each.
Finally, let's make use of a SQL table, version 4.
shell> nfacctd -D -c sum_host -P mysql -l 5678
And now written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: sum_host
nfacctd_port: 5678
sql_refresh_time: 120
sql_history: 10m
sql_history_roundoff: mh
sql_table_version: 4
! ...
Va. NetFlow daemon & accounting NetFlow v9/IPFIX options
NetFlow v9/IPFIX can export option records in addition to flow records; these are
typically used to send a collector mappings of interface SNMP ifIndexes to interface
names, or of VRF IDs to VRF names. nfacctd_account_options enables accounting of
option records; these should then be split from regular flow records. Below is a
sample config:
nfacctd_time_new: true
nfacctd_account_options: true
!
plugins: print[data], print[data_options]
!
pre_tag_filter[data]: 100
aggregate[data]: peer_src_ip, in_iface, out_iface, tos, vrf_id_ingress, vrf_id_egress
print_refresh_time[data]: 300
print_history[data]: 300
print_history_roundoff[data]: m
print_output_file_append[data]: true
print_output_file[data]: /path/to/flow_%s
print_output[data]: csv
!
pre_tag_filter[data_options]: 200
aggregate[data_options]: vrf_id_ingress, vrf_name
print_refresh_time[data_options]: 300
print_history[data_options]: 300
print_history_roundoff[data_options]: m
print_output_file_append[data_options]: true
print_output_file[data_options]: /path/to/options_%s
print_output[data_options]: event_csv
!
aggregate_primitives: /path/to/primitives.lst
pre_tag_map: /path/to/pretag.map
maps_refresh: true
Below is the referenced pretag.map:
set_tag=100 ip=0.0.0.0/0 sample_type=flow
set_tag=200 ip=0.0.0.0/0 sample_type=option
Below is the referenced primitives.lst:
name=vrf_id_ingress field_type=234 len=4 semantics=u_int
name=vrf_id_egress field_type=235 len=4 semantics=u_int
name=vrf_name field_type=236 len=32 semantics=str
VI. Running the NFLOG-based daemon (uacctd)
All examples about pmacctd are also valid for uacctd with the exception of directives
that apply exclusively to libpcap. If you've skipped examples in section 'IV', please
read them before continuing. All configuration keys available are in the CONFIG-KEYS
document.
The daemon depends on the package libnetfilter-log-dev (in Debian/Ubuntu, or the
equivalent in your preferred Linux distribution). The Linux NFLOG infrastructure
requires a couple of parameters in order to work properly: the NFLOG multicast group
(uacctd_group) to which captured packets have to be sent, and the Netlink buffer size
(uacctd_nl_size). The default buffer setting (128KB) typically works OK for small
environments. Traffic is captured with an iptables rule, for example in one of the
following ways:
* iptables -t mangle -I POSTROUTING -j NFLOG --nflog-group 5
* iptables -t raw -I PREROUTING -j NFLOG --nflog-group 5
Apart from determining how and what traffic to capture with iptables, which is a topic
outside the scope of this document, the most relevant point is that the "--nflog-group"
iptables setting has to match the "uacctd_group" setting of uacctd.
A couple of examples follow:
Run uacctd reading configuration from a specified file.
shell> uacctd -f uacctd.conf
Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on NFLOG multicast group #5. Let's make
pmacct divide data into historical time-bins of 5 minutes. Let's disable UPDATE queries
and hence align refresh time with the timeslot length. Finally, let's make use of a SQL
table, version 4:
!
uacctd_group: 5
daemonize: true
plugins: mysql
aggregate: sum_host
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: mh
sql_table_version: 4
sql_dont_try_update: true
! ...
VII. Running the pmacct client (pmacct)
The pmacct client is used to retrieve data from memory tables. Requests and answers
are exchanged via a pipe file: authorization is strictly connected to permissions on
the pipe file. Note: when writing queries on the commandline, you may need characters
with a special meaning for the shell itself (ie. ';' or '*'). Mind to either escape
them ( \; or \* ) or put them in quotes ( " ).
Show all available pmacct client commandline switches:
shell> pmacct -h
Fetch data stored into the memory table:
shell> pmacct -s
Match data between source IP 192.168.0.10 and destination IP 192.168.0.3 and return
a formatted output; display all fields (-a), so the output is easy to parse with
tools like awk/sed; each unused field will be zero-filled:
shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -a
Similar to the previous example, but a reset of the matched entries is also requested:
the server will return the current counters to the client, then reset them:
shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -r
Fetch data for IP address dst_host 10.0.1.200; we also ask for a 'counter only' output
('-N') suitable, this time, for injecting data into tools like MRTG or RRDtool (sample
scripts are in the examples/ tree). The bytes counter will be returned (but the '-n'
switch also allows selecting which counter to display). If multiple entries match the
request (ie. because the query is based on dst_host but the daemon is actually
aggregating traffic as "src_host, dst_host") their counters will be summed:
shell> pmacct -c dst_host -N 10.0.1.200
Another query; this time let's contact the server listening on pipe file /tmp/pipe.eth0:
shell> pmacct -c sum_port -N 80 -p /tmp/pipe.eth0
Find all data matching host 192.168.84.133 as either their source or destination address.
In particular, this example shows how to use wildcards and how to spawn multiple queries
(each separated by the ';' symbol). Take care to follow the same order when specifying
the primitive name (-c) and its actual value ('-M' or '-N'):
shell> pmacct -c src_host,dst_host -N "192.168.84.133,*;*,192.168.84.133"
Find all web and smtp traffic; we are interested in just the total of such traffic
(for example, to split legal network usage from the total); the output will be a unique
counter, the sum of the partial values coming from each query:
shell> pmacct -c src_port,dst_port -N "25,*;*,25;80,*;*,80" -S
Show traffic between the specified hosts; this aims to be a simple example of a batch
query; note that the value of both the '-N' and '-M' switches can be supplied in a form
like 'file:/home/paolo/queries.list': actual values will then be read from the specified
file (and they need to be written into it, one per line) instead of the commandline:
shell> pmacct -c src_host,dst_host -N "10.0.0.10,10.0.0.1;10.0.0.9,10.0.0.1;10.0.0.8,10.0.0.1"
shell> pmacct -c src_host,dst_host -N "file:/home/paolo/queries.list"
VIII. Running the RabbitMQ/AMQP plugin
The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business
messages between applications. RabbitMQ is a messaging broker, an intermediary for
messaging, which implements AMQP. The pmacct RabbitMQ/AMQP plugin is designed to send
aggregated network traffic data, in JSON format, through a RabbitMQ server to 3rd
party applications. Requirements to use the plugin are:
* A working RabbitMQ server: http://www.rabbitmq.com/
* RabbitMQ C API, rabbitmq-c: https://github.com/alanxz/rabbitmq-c/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/
Once these elements are installed, pmacct can be configured for compiling. pmacct
makes use of pkg-config to find library and header locations and checks some
"typical" default locations, ie. /usr/local/lib and /usr/local/include. So all
you should have to do is:
./configure --enable-rabbitmq --enable-jansson
But, for example, should you have installed RabbitMQ in /usr/local/rabbitmq and
pkg-config is unable to help, you can supply this non-default location as follows
(assuming you are running the bash shell):
export RABBITMQ_LIBS="-L/usr/local/rabbitmq/lib -lrabbitmq"
export RABBITMQ_CFLAGS="-I/usr/local/rabbitmq/include"
./configure --enable-rabbitmq --enable-jansson
Then "make; make install" as usual. Following a configuration snippet showing a
basic RabbitMQ/AMQP plugin configuration (assumes: RabbitMQ server is available
at localhost; look all configurable directives up in the CONFIG-KEYS document):
! ..
plugins: amqp
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
amqp_history: 5m
amqp_history_roundoff: m
! ..
pmacct will only declare a message exchange and provide a routing key, ie. it
will not get involved with queues at all. A basic consumer script, in Python,
is provided as a sample; it declares a queue, binds the queue to the exchange
and shows consumed data on the screen. The script is located in the pmacct
default distribution tarball in examples/amqp/amqp_receiver.py and requires the
pika Python module. Should this not be available, the following page explains
how to get it installed:
http://www.rabbitmq.com/tutorials/tutorial-one-python.html
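For orientation only, below is a minimal consumer sketch along the lines of the
bundled script, written against the configuration above and assuming the pika
module version >= 1.0 (names and parameters are illustrative):

import pika

# connect to the RabbitMQ broker, assumed reachable at localhost
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# declare the exchange pmacct publishes to (amqp_exchange: pmacct)
channel.exchange_declare(exchange='pmacct', exchange_type='direct')

# declare a private queue and bind it using the routing key (amqp_routing_key: acct)
result = channel.queue_declare(queue='', exclusive=True)
channel.queue_bind(exchange='pmacct', queue=result.method.queue,
                   routing_key='acct')

# print every consumed JSON message to the screen
def callback(ch, method, properties, body):
    print(body)

channel.basic_consume(queue=result.method.queue, on_message_callback=callback,
                      auto_ack=True)
channel.start_consuming()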
Improvements to the basic Python script provided and/or examples in different
languages are very welcome at this stage.
IX. Running the Kafka plugin
Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.
Its qualities are: fast, scalable, durable and distributed by design. The pmacct
Kafka plugin is designed to send aggregated network traffic data, in JSON format,
through a Kafka broker to 3rd party applications. Requirements to use the plugin
are:
* A working Kafka broker (and Zookeeper server): http://kafka.apache.org/
* Librdkafka: https://github.com/edenhill/librdkafka/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/
Once these elements are installed, pmacct can be configured for compiling. pmacct
makes use of pkg-config to find library and header locations and checks some
"typical" default locations, ie. /usr/local/lib and /usr/local/include. So all
you should have to do is:
./configure --enable-kafka --enable-jansson
But, for example, should you have installed Kafka in /usr/local/kafka and pkg-
config is unable to help, you can supply this non-default location as follows
(assuming you are running the bash shell):
export KAFKA_LIBS="-L/usr/local/kafka/lib -lrdkafka"
export KAFKA_CFLAGS="-I/usr/local/kafka/include"
./configure --enable-kafka --enable-jansson
Then "make; make install" as usual. Following a configuration snippet showing a
basic Kafka plugin configuration (assumes: Kafka broker is available at 127.0.0.1
on port 9092; look all configurable directives up in the CONFIG-KEYS document):
! ..
plugins: kafka
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
kafka_topic: pmacct.acct
kafka_refresh_time: 300
kafka_history: 5m
kafka_history_roundoff: m
! ..
A basic consumer script, in Python, is provided as a sample; it declares a group_id,
binds it to the topic and shows consumed data on the screen. The script is located
in the pmacct default distribution tarball in examples/kafka/kafka_consumer.py and
requires the python-kafka Python module. Should this not be available, the following
page explains how to get it installed:
http://kafka-python.readthedocs.org/
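For orientation only, below is a minimal consumer sketch written against the
configuration above and assuming the kafka-python module (names and parameters
are illustrative):

from kafka import KafkaConsumer

# subscribe to the topic pmacct publishes to (kafka_topic: pmacct.acct)
consumer = KafkaConsumer('pmacct.acct',
                         bootstrap_servers='127.0.0.1:9092',
                         group_id='pmacct')

# print every consumed JSON message to the screen
for message in consumer:
    print(message.value)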
X. Internal buffering and queueing
Two options are provided for internal buffering and queueing: 1) a home-grown circular
queue implementation, available since day one of pmacct (configured via plugin_pipe_size
and documented in docs/INTERNALS), and 2) from release 1.5.2, a RabbitMQ broker used
for queueing purposes (configured via plugin_pipe_amqp and plugin_pipe_amqp_* directives).
For a quick comparison: while relying on a RabbitMQ broker for queueing introduces an
external dependency (rabbitmq-c library, RabbitMQ server, etc.), it reduces the amount
of fine-tuning needed by the home-grown circular queue implementation, for example
trial-and-error tasks like determining a value for plugin_pipe_size and finding a
viable ratio between plugin_pipe_size and plugin_buffer_size.
The home-grown circular queue has no external dependencies and is configured, for
example, as:
plugins: print[blabla]
plugin_buffer_size[blabla]: 10240
plugin_pipe_size[blabla]: 1024000
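In this example the ratio between the two values means the circular queue can hold
up to 100 buffers (1024000 / 10240) at any one time; this is an illustrative
computation only, see docs/INTERNALS for the exact semantics.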
For more information about the home-grown circular queue, consult plugin_buffer_size
and plugin_pipe_size entries in CONFIG-KEYS and docs/INTERNALS "Communications between
core process and plugins" chapter.
The RabbitMQ queue has the same dependencies as the AMQP plugin; consult the "Running
the RabbitMQ/AMQP plugin" chapter in this document for where to download the required
packages/libraries and how to compile pmacct against them. When plugin_pipe_amqp is
set to true, the following is how data exchange via a RabbitMQ broker is configured
under default settings:
plugins: print[blabla]
plugin_buffer_size[blabla]: 10240
!
plugin_pipe_amqp[blabla]: true
plugin_pipe_amqp_user[blabla]: guest
plugin_pipe_amqp_passwd[blabla]: guest
plugin_pipe_amqp_exchange[blabla]: pmacct
plugin_pipe_amqp_host[blabla]: localhost
plugin_pipe_amqp_vhost[blabla]: "/"
plugin_pipe_amqp_routing_key[blabla]: blabla-print
plugin_pipe_amqp_retry[blabla]: 60
XI. Quickstart guide to packet classifiers
pmacct 0.10.0 sees the introduction of a packet classification feature. The approach
is fully extensible: classification patterns are based on regular expressions (RE),
must be placed into a common directory and have a .pat file extension. Patterns for
well-known protocols are available and are just a click away. Furthermore, you can
write your own patterns (and share them with the active L7-filter project's community).
Below is the quickstart guide:
a) download pmacct
shell> wget http://www.pmacct.net/pmacct-x.y.z.tar.gz
b) compile pmacct
shell> cd pmacct-x.y.z; ./configure && make && make install
c-1) download regular expression (RE) classifiers as-you-need them: you just need to
point your browser to http://l7-filter.sourceforge.net/protocols/ then:
shell> cd /path/to/classifiers/
shell> wget http://l7-filter.sourceforge.net/layer7-protocols/protocols/[ protocol ].pat
c-2) download all the RE classifiers available: you just need to point your browser to
http://sourceforge.net/projects/l7-filter (and take the latest L7-protocol
definitions tarball). Pay attention to remove potential catch-all patterns which
might be part of the downloaded package (ie. unknown.pat and unset.pat).
c-3) download shared object (SO) classifiers (written in C) as-you-need them: you just
need to point your browser to http://www.pmacct.net/classification/ , download the
available package, extract files and compile things following the INSTALL instructions.
When everything is finished, install the produced shared objects:
shell> mv *.so /path/to/classifiers/
d-1) build pmacct configuration, a memory table example:
!
daemonize: true
interface: eth0
aggregate: flows, class
plugins: memory
classifiers: /path/to/classifiers/
snaplen: 700
!...
d-2) build pmacct configuration, a SQL example:
!
daemonize: true
interface: eth0
aggregate: flows, class
plugins: mysql
classifiers: /path/to/classifiers/
snaplen: 700
sql_history: 1h
sql_history_roundoff: h
sql_table_version: 5
sql_aggressive_classification: true
!...
e) Ok, we are done! Fire the pmacct collector daemon:
shell> pmacctd -f /path/to/configuration/file
You can now play with the SQL or pmacct client; furthermore, you can add/remove/write
patterns and load them by restarting the pmacct daemon. If using the memory plugin
you can check out the list of loaded classifiers with 'pmacct -C'. Don't underestimate
the importance of the 'snaplen', 'pmacctd_flow_buffer_size' and 'pmacctd_flow_buffer_buckets'
values; take the time to read about them in the CONFIG-KEYS document.
XII. Quickstart guide to setup a NetFlow agent/probe
pmacct 0.11.0 sees the introduction of traffic data export capabilities, through both
the NetFlow and sFlow protocols. While NetFlow v5 is fixed by nature, v9 adds flexibility
by allowing the transport of custom information (for example, L7-classification tags to a
remote collector). Below is the quickstart guide:
a) usual initial steps: download pmacct, unpack it, compile it.
b) build NetFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver: 1.2.3.4:2100
nfprobe_version: 9
! nfprobe_engine: 1:1
! nfprobe_timeouts: tcp=120:maxlife=3600
!
! networks_file: /path/to/networks.lst
!...
This is a basic working configuration. Additional probe features include:
1) generate ASNs by using a networks_file pointing to a valid Networks File (see
examples/ directory) and adding src_as, dst_as primitives to the 'aggregate'
directive; alternatively, as of release 0.12.0rc2, it's possible to generate ASNs
from the pmacctd BGP thread. The following fragment can be added to the config
above:
pmacctd_as: bgp
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_agent_map: /path/to/agent_to_peer.map
bgp_daemon_port: 17917
The bgp_daemon_port can be changed from the standard BGP port (179/TCP) in order to
co-exist with other BGP routing software which might be running on the same host.
Furthermore, they can safely peer each other by using 127.0.0.1 as bgp_daemon_ip.
In pmacctd, bgp_agent_map does the trick of mapping 0.0.0.0 to the IP address of
the BGP peer (ie. 127.0.0.1: 'set_tag=127.0.0.1 ip=0.0.0.0'); this setup, while
generic, was tested working in conjunction with Quagga 0.99.14. Following a relevant
fragment of the Quagga configuration:
router bgp Y
bgp router-id X.X.X.X
neighbor 127.0.0.1 remote-as Y
neighbor 127.0.0.1 port 17917
neighbor 127.0.0.1 update-source X.X.X.X
!
NOTE: if configuring a BGP neighbor over localhost via Quagga CLI the following
message is returned: "% Can not configure the local system as neighbor". This
is not returned when configuring the neighborship directly in the bgpd config
file.
2) encode flow classification information in NetFlow v9 like Cisco does with its
NBAR/NetFlow v9 integration. This can be done by introducing the 'class' primitive
into the aforementioned 'aggregate' and adding the extra configuration directives:
aggregate: class, src_host, dst_host, src_port, dst_port, proto, tos
classifiers: /path/to/classifiers/
snaplen: 700
Further information on this topic can be found in the section of this document about
stream classification.
3) add direction (ingress, egress) awareness to measured IP traffic flows. Direction
can be defined statically (in, out) or inferred dynamically (tag, tag2) via the use
of the nfprobe_direction directive. Let's look at a dynamic example using tag2;
first, add the following lines to the daemon configuration:
nfprobe_direction[plugin_name]: tag2
pre_tag_map: /path/to/pretag.map
then edit the tag map as follows. A return value of '1' means ingress while '2' is
translated to egress. It is possible to define L2 and/or L3 addresses to recognize
flow directions. The 'set_tag2' primitive (tag2) will be used to carry the return
value:
set_tag2=1 filter='dst host XXX.XXX.XXX.XXX'
set_tag2=2 filter='src host XXX.XXX.XXX.XXX'
set_tag2=1 filter='ether src XX:XX:XX:XX:XX:XX'
set_tag2=2 filter='ether dst XX:XX:XX:XX:XX:XX'
Indeed, in such a case the 'set_tag' primitive (tag) can be leveraged for other uses
(ie. filtering a sub-set of the traffic for flow export);
4) add interface (input, output) awareness to measured IP traffic flows. Interfaces
can be defined only in addition to direction. Interfaces can either be defined
statically (<1-4294967295>) or inferred dynamically (tag, tag2) with the use of the
nfprobe_ifindex directive. Let's look at a dynamic example using tag; first add the
following lines to the daemon config:
nfprobe_direction[plugin_name]: tag
nfprobe_ifindex[plugin_name]: tag2
pre_tag_map: /path/to/pretag.map
then edit the tag map as follows:
set_tag=1 filter='dst net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=2 filter='src net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=1 filter='dst net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=2 filter='src net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=1 filter='ether src YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=2 filter='ether dst YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=999 filter='net 0.0.0.0/0'
!
set_tag2=100 filter='dst host XXX.XXX.XXX.XXX' label=eval_ifindexes
set_tag2=100 filter='src host XXX.XXX.XXX.XXX'
set_tag2=200 filter='dst host YYY.YYY.YYY.YYY'
set_tag2=200 filter='src host YYY.YYY.YYY.YYY'
set_tag2=200 filter='ether src YY:YY:YY:YY:YY:YY'
set_tag2=200 filter='ether dst YY:YY:YY:YY:YY:YY'
The set_tag=999 works as a catch-all for undefined L2/L3 addresses, so as to
prevent searching further in the map. In the example above direction is set
first; then, if found, interfaces are set, using the jeq/label pre_tag_map
construct.
c) build NetFlow collector configuration, using nfacctd:
!
daemonize: true
nfacctd_ip: 1.2.3.4
nfacctd_port: 2100
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
!
! classifiers: /path/to/classifiers
d) Ok, we are done! Now fire both daemons:
shell a> pmacctd -f /path/to/configuration/pmacctd-nfprobe.conf
shell b> nfacctd -f /path/to/configuration/nfacctd-memory.conf
XIII. Quickstart guide to setup a sFlow agent/probe
pmacct 0.11.0 sees the introduction of traffic data export capabilities via sFlow; this
protocol is quite different from NetFlow: in short, it works by exporting portions of
sampled packets rather than building uni-directional flows as happens in NetFlow; this
less-stateful approach makes sFlow a light export protocol, well-tailored for high-
speed networks. Further, sFlow v5 can be extended much like NetFlow v9: meaning, ie.,
L7 classification or basic Extended Gateway information (ie. src_as, dst_as) can be
embedded in the record structure being exported. Below is the quickstart guide (step
'a', downloading and compiling pmacct, is the same as in the previous section):
b) build sFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
plugins: sfprobe
sampling_rate: 20
sfprobe_agentsubid: 1402
sfprobe_receiver: 1.2.3.4:6343
!
! networks_file: /path/to/networks.lst
! classifiers: /path/to/classifiers/
! snaplen: 700
!...
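c) build an sFlow collector configuration, using sfacctd; below is a sketch
mirroring the NetFlow collector example of the previous section (6343 being the
standard sFlow port and the sfacctd default):
!
daemonize: true
sfacctd_ip: 1.2.3.4
sfacctd_port: 6343
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
!...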
XIV. Quickstart guide to setup the BGP daemon
The BGP daemon is run as a thread within the collector core process. The idea is
to receive data-plane information, ie. via NetFlow, sFlow, etc., and control-
plane information, ie. full routing tables via BGP, from edge routers. Per-peer
BGP RIBs are maintained to ensure local views of the network, a behaviour close
to that of a BGP route-server. In the case of routers with default-only or partial
BGP views, the default route can be followed up (bgp_default_follow); it might
also be desirable in certain situations, for example trading off resources for
accuracy, to entirely map one or a set of agents to a BGP peer (bgp_agent_map).
A pre-requisite is that the pmacct package is configured for compiling with
threading support. Nowadays this is the default setting, hence the following
line will do:
shell> ./configure
The following configuration fragment alone is sufficient to set up a BGP daemon
which will bind to an IP address and support up to a maximum of 100 peers. Once
PE routers start sending telemetry data and peer up, it should be possible to
see the BGP-related fields, ie. as_path, peer_as_dst, local_pref, med, etc.,
correctly populated while querying the memory table:
bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
nfacctd_as: bgp
[ ... ]
plugins: memory
aggregate: src_as, dst_as, local_pref, med, as_path, peer_dst_as
The BGP daemon reads the remote ASN upon receipt of a BGP OPEN message and dynamically
presents itself as part of the same Autonomous System - to ensure an iBGP relationship
is established at all times. Also, the BGP daemon acts as a passive BGP neighbor and
hence will never try to re-establish a fallen peering session.
For debugging purposes related to the BGP feed(s), bgp_daemon_msglog_* configuration
directives can be enabled in order to log BGP messaging.
XIVa. Limiting AS-PATH and BGP community attributes length
AS-PATHs and BGP communities can by nature easily get long when represented as strings.
Sometimes only a small portion of their content is relevant to the accounting task,
hence a filtering layer was developed to take special care of these attributes. The
bgp_aspath_radius directive cuts the AS-PATH down after the specified number of hops,
whereas bgp_stdcomm_pattern does a simple sub-string match against standard BGP
communities, filtering in only those that match (optionally, for better precision, a
pre-defined number of characters can be wildcarded by employing the '.' symbol, like
in regular expressions). See a typical usage example below:
bgp_aspath_radius: 3
bgp_stdcomm_pattern: 12345:
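With these settings, an AS-PATH like "65501 65502 65503 65504 65505" would be cut
down to "65501 65502 65503", and out of the standard communities "12345:10 64512:20"
only "12345:10" would be filtered in (illustrative values).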
A detailed description of these configuration directives is, as usual, included in
the CONFIG-KEYS document.
XIVb. The source peer AS case
The peer_src_as primitive adds useful insight into where traffic enters the observed
routing domain; but asymmetric routing impacts the accuracy delivered by devices
configured with either NetFlow or sFlow and the peer-as feature (which only performs
a reverse lookup, ie. a lookup on the source IP address, in the BGP table, hence
telling where the device would route such traffic back). pmacct offers a few ways to
perform some mapping to tackle this issue and easily model both private and public
peerings, whether bi-lateral or multi-lateral. Find below how to use a map, reloadable
at runtime, and its contents (for full syntax guidelines, please see the
'peers.map.example' file within the examples section):
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map
[/path/to/peers.map]