Compare commits

...

87 Commits

Author SHA1 Message Date
fumingwei
20b3719fbe Modify the table_info entries referenced by voip in main.yml 2021-05-19 18:19:05 +08:00
fumingwei
e15494d7e4 Add a reboot step after kernel installation 2021-05-06 15:02:39 +08:00
fumingwei
628b0bbf04 Rename the log_minio variable to pangu_pxy.log_cache 2021-04-27 18:42:21 +08:00
fumingwei
b0dc10d139 Add the mesa_sip, rtp and fw_voip_plug plugins to the sapp conflist 2021-04-27 10:14:06 +08:00
fumingwei
f64240fcbf Add package-dump monitoring 2021-04-26 18:13:53 +08:00
fumingwei
ff90a94d4b Modify the configuration file used by packet_dump_server 2021-04-26 11:45:09 +08:00
fumingwei
fb1c66c76c 1. Add dump_rtp_pcap installation 2. Consolidate configuration variables 2021-04-25 17:19:23 +08:00
fumingwei
6e495828f0 Modify and add sapp configuration files 2021-04-23 18:07:30 +08:00
fumingwei
2c58349922 21.04 version update; changelog: https://docs.geedge.net/pages/viewpage.action?pageId=28803144 2021-04-22 19:45:01 +08:00
fumingwei
04cea8afd4 Merge branch 'tsg-version21.04-deploy' of https://git.mesalab.cn/tsg/tsg-scripts into tsg-version21.04-deploy 2021-04-17 10:09:35 +08:00
fumingwei
9dcd0cfbdd Modify the ATCA VXLAN traffic-attribute access configuration 2021-04-17 10:08:40 +08:00
fumingwei
8338693e40 Update the hos-client-cpp rpm package 2021-04-16 17:54:39 +08:00
fumingwei
88664464f9 Fix 21.03 bugs; related link: https://docs.geedge.net/pages/viewpage.action?pageId=30869129 2021-04-16 15:52:37 +08:00
fumingwei
6a98bc17b8 Modify the ATCA VXLAN traffic-attribute access configuration 2021-04-13 09:42:11 +08:00
fumingwei
1ed3568b7f Add rpm packages supporting the hos common library 2021-03-23 16:49:12 +08:00
fumingwei
0a16f4dc3d Enable the tfe-env-tun-mode service when running in tun mode 2021-03-23 09:30:22 +08:00
fumingwei
131bb95a1e Update mesa_ip to the latest version 2021-03-19 15:14:14 +08:00
fumingwei
14b3be388a 21.03 version update; for details see: https://docs.geedge.net/pages/viewpage.action?pageId=23042804 2021-03-19 14:24:42 +08:00
fumingwei
f8d24abd4c Change the location of the self-check deployment script 2021-02-08 09:23:58 +08:00
fumingwei
bd3bcd1e91 Add self-check installation and deployment 2021-02-08 09:21:47 +08:00
fumingwei
41f8a0c8da Update the tsg_master, sapp, libmaatframe, tfe, app_control_plug and app_master rpm packages 2021-02-07 19:47:38 +08:00
fumingwei
6dfaf41870 20.11.rc3 rebase version 20.11 2021-01-31 22:50:33 +08:00
fumingwei
bcf5049ecb Improve the server deployment mode 2021-01-29 19:41:26 +08:00
fumingwei
5267b73590 Upload tsg scripts version 20.11 2021-01-29 18:03:04 +08:00
刘学利
8beaf16134 Update conflist.inf.j2; update conflist.inf and adjust the plugin mount order 2020-10-20 16:36:58 +08:00
fumingwei
43d1a13cde tsg-diagnose auto-deployment script: append to the tfe trusted certificate file 2020-10-20 16:26:33 +08:00
fumingwei
5349fd24fb 1. Add tsg_master_entrance_id 2. Modify sapp configlist.inf 3. Force-install the kni rpm 2020-10-19 21:56:57 +08:00
fumingwei
344c734f70 Merge branch 'tsg-version20.11.rc1-deploy' of https://git.mesalab.cn/tsg/tsg-scripts into tsg-version20.11.rc1-deploy 2020-10-19 20:55:59 +08:00
fumingwei
ed6f5c3d3b Merge branch 'tsg-version20.11.rc1-deploy-firewall' into tsg-version20.11.rc1-deploy
# Conflicts:
#	roles/sapp/tasks/main.yml

Update firewall-related RPM packages
2020-10-19 20:55:52 +08:00
liuxueli
93fc4a94b8 Update rpms 2020-10-19 20:50:35 +08:00
fengweihao
aeee8afab9 Upgrade app-sketch-global 2020-10-19 20:43:17 +08:00
fengweihao
67ae52725b Upgrade certstore 2020-10-19 20:42:42 +08:00
fumingwei
b0c9ea045b Update kni 2020-10-19 19:33:31 +08:00
luwenpeng
9d9b8ad83c Upgrade tfe to 4.3.14 2020-10-19 18:30:26 +08:00
fumingwei
1c5ea5b740 1. Add memory limit 2020-10-19 14:52:08 +08:00
fumingwei
7800356765 Fix telegraf installation failure 2020-10-17 18:04:44 +08:00
liuxueli
18410aa84a Fix the app_proto_identify installation path 2020-10-17 18:03:27 +08:00
fumingwei
11bf3dfa8e Merge branch 'tsg-version20.11.rc1-deploy-firewall' into tsg-version20.11.rc1-deploy 2020-10-17 14:12:48 +08:00
liuxueli
a517b99219 Update app_proto_identify and packet_dump 2020-10-17 14:03:11 +08:00
fumingwei
3fdae02a52 1. Add telegraf collect deployment 2. Modify the telegraf configuration file 2020-10-17 13:59:56 +08:00
liuxueli
2b2cbf4113 Update the RPM packages for tsg_master and packet_dump 2020-10-17 10:36:19 +08:00
fumingwei
f0725b0e02 Rename clotho to package_dump 2020-10-16 16:57:20 +08:00
liuxueli
0f2b89512f Merge branch 'tsg-version20.11.rc1-deploy' into tsg-version20.11.rc1-deploy-firewall 2020-10-16 13:58:58 +08:00
liuxueli
924df3f5fd Update packet_dump installation 2020-10-16 13:57:59 +08:00
liuxueli
0aaff59a37 Merge branch 'tsg-version20.11.rc1-deploy-firewall' of https://git.mesalab.cn/tsg/tsg-scripts into tsg-version20.11.rc1-deploy-firewall 2020-10-16 13:13:16 +08:00
fumingwei
451677775d Merge branches 2020-10-16 10:28:48 +08:00
fumingwei
0fe01beaf5 1. Add libbreakpad_mini installation 2. Modify kni deployment 2020-10-16 10:22:16 +08:00
liuxueli
dc050b2e79 Update the sapp configuration file template; update the pcapng storage program 2020-10-16 10:16:01 +08:00
liuxueli
a6a13adc07 Update the fw_ssl_plug RPM 2020-10-16 10:16:01 +08:00
luwenpeng
470194eb2d Upgrade tfe to 4.3.12 2020-10-16 10:12:27 +08:00
liuxueli
9c1e8fb655 Update the sapp configuration file template; update the pcapng storage program 2020-10-16 10:08:11 +08:00
liuxueli
27f242ec8f Merge branch 'tsg-version20.11.rc1-deploy-firewall' of https://git.mesalab.cn/tsg/tsg-scripts into tsg-version20.11.rc1-deploy-firewall
# Conflicts:
#	roles/firewall/tasks/main.yml
2020-10-16 09:55:34 +08:00
liuxueli
b2c9836677 Update the fw_ssl_plug RPM 2020-10-16 09:53:46 +08:00
fumingwei
f1f5f29fe1 Modify the kni deployment script 2020-10-15 18:29:05 +08:00
fumingwei
deeb575b7b 1. Make breakpad_upload_url a global variable 2. Modify the self-check rpm package and deployment script 2020-10-15 16:52:08 +08:00
liuxueli
44885b6f02 Release firewall version 20.11 2020-10-15 16:40:56 +08:00
liuxueli
1a173bddcf Release firewall version 20.11 2020-10-15 15:50:19 +08:00
fengweihao
fe5852ce1c Update app-sketch-global 2020-10-14 19:05:55 +08:00
fengweihao
f49bc21400 Add zlog template 2020-10-14 19:05:37 +08:00
fengweihao
88d6fda48f Update RPM installation packages; modify configuration files 2020-10-14 19:05:37 +08:00
luwenpeng
de0992db4d Update the TFE 20.11 configuration file; upgrade tfe to 4.3.11 2020-10-14 17:45:06 +08:00
fumingwei
fcb6118c31 1. Update telegraf_statistic to the latest version 2020-10-13 16:25:16 +08:00
fumingwei
d9ebec0f1c Add telegraf collect deployment 2020-10-10 17:43:33 +08:00
fumingwei
381ef27011 Update the self-check program to 20.10 2020-10-10 14:13:57 +08:00
luqiuwen
da9b09ad08 Upgrade mrzcpd to 4.3.28 2020-10-09 20:32:03 -07:00
zhangzhihan
4ae7c7e329 update 2020-09-28 21:55:04 +08:00
zhangzhihan
c9abe87819 update 2020-09-28 20:41:30 +08:00
zhangzhihan
ac1e11b722 update 2020-09-25 16:05:10 +08:00
zhangzhihan
03b37a86d8 update 2020-09-25 15:24:41 +08:00
zhangzhihan
5aba47de31 update 2020-09-25 15:10:14 +08:00
zhangzhihan
b57e742be8 update 2020-09-25 12:12:25 +08:00
zhangzhihan
4177c779ef update 2020-09-24 15:36:49 +08:00
zhangzhihan
e522e090b5 update 2020-09-23 15:27:49 +08:00
zhangzhihan
92ed83217a update 2020-09-23 14:56:28 +08:00
zhangzhihan
c84cf9fa02 update 2020-09-23 14:07:56 +08:00
zhangzhihan
37dab8e842 update 2020-09-21 23:14:14 +08:00
zhangzhihan
05b56cb4ec update 2020-09-21 18:33:10 +08:00
zhangzhihan
27d3231a6e update 2020-09-14 21:55:36 +08:00
zhangzhihan
b4735332f4 update 2020-09-14 21:48:27 +08:00
zhangzhihan
f70cf73628 update 2020-09-10 20:19:30 +08:00
zhangzhihan
1d0943fdb0 update 2020-09-10 20:12:17 +08:00
zhangzhihan
1d210d18c4 update new 20.08 2020-09-10 03:22:39 +08:00
zhangzhihan
e088bc922b update 2020-09-04 10:55:01 +08:00
zhangzhihan
845a73e69f update 2020-09-03 20:20:04 +08:00
zhangzhihan
0f1d3dac47 update dpi 20.08 2020-09-01 10:59:05 +08:00
zhangzhihan
198f0ab8a0 20.07 2020-07-28 14:55:32 +08:00
zhangzhihan
4ea95f7201 20.07.rc1 2020-07-24 16:06:23 +08:00
266 changed files with 6515 additions and 906 deletions

adc_deploy.yml Normal file
View File

@@ -0,0 +1,100 @@
- hosts: adc_mxn
remote_user: root
roles:
- {role: adc_exporter, tags: adc_exporter}
- {role: adc_exporter_proxy, tags: adc_exporter_proxy}
# - {role: switch_rule, tags: switch_rule}
- hosts: adc_mcn0
remote_user: root
vars_files:
- install_config/group_vars/adc_global.yml
- install_config/group_vars/adc_mcn0.yml
roles:
- {role: framework, tags: framework}
- {role: kernel-ml, tags: kernel-ml}
- {role: mrzcpd, tags: mrzcpd}
- {role: sapp, tags: sapp}
- {role: tsg_master, tags: tsg_master}
- {role: kni, tags: kni}
- {role: firewall, tags: firewall}
- {role: tsg_app, tags: tsg_app}
- {role: http_healthcheck, tags: http_healthcheck}
- {role: redis, tags: redis}
- {role: cert-redis, tags: cert-redis}
- {role: maat-redis, tags: maat-redis, when: deploy_mode == "cluster"}
- {role: certstore, tags: certstore}
- {role: telegraf_statistic, tags: telegraf_statistic}
- {role: adc_exporter, tags: adc_exporter}
# - {role: switch_control, tags: switch_control}
- {role: tsg-env-patch, tags: tsg-env-patch}
- {role: docker-env, tags: docker-env}
- {role: tsg-diagnose, tags: tsg-diagnose}
- hosts: adc_mcn1
remote_user: root
vars_files:
- install_config/group_vars/adc_global.yml
- install_config/group_vars/adc_mcn1.yml
roles:
# - tsg-env-mcn1
- {role: framework, tags: framework}
- {role: kernel-ml, tags: kernel-ml}
- {role: mrzcpd, tags: mrzcpd}
- {role: tfe, tags: tfe}
- {role: adc_exporter, tags: adc_exporter}
# - {role: switch_control, tags: switch_control}
- {role: tsg-env-patch, tags: tsg-env-patch}
- {role: tsg-diagnose_sync_ca, tags: tsg-diagnose_sync_ca}
- hosts: adc_mcn2
remote_user: root
vars_files:
- install_config/group_vars/adc_global.yml
- install_config/group_vars/adc_mcn2.yml
roles:
# - tsg-env-mcn2
- {role: framework, tags: framework}
- {role: kernel-ml, tags: kernel-ml}
- {role: mrzcpd, tags: mrzcpd}
- {role: tfe, tags: tfe}
- {role: adc_exporter, tags: adc_exporter}
# - {role: switch_control, tags: switch_control}
- {role: tsg-env-patch, tags: tsg-env-patch}
- {role: tsg-diagnose_sync_ca, tags: tsg-diagnose_sync_ca}
- hosts: adc_mcn3
remote_user: root
vars_files:
- install_config/group_vars/adc_global.yml
- install_config/group_vars/adc_mcn3.yml
roles:
- {role: framework, tags: framework}
- {role: kernel-ml, tags: kernel-ml}
- {role: mrzcpd, tags: mrzcpd}
- {role: tfe, tags: tfe}
# - {role: adc_exporter, tags: adc_exporter}
- {role: switch_control, tags: switch_control}
- {role: tsg-env-patch, tags: tsg-env-patch}
- {role: tsg-diagnose_sync_ca, tags: tsg-diagnose_sync_ca}
- hosts: adc_mcn0
remote_user: root
roles:
- {role: tsg-diagnose_stop_sync, tags: tsg-diagnose_stop_sync}
- hosts: packet_dump_server
remote_user: root
vars_files:
- install_config/group_vars/adc_global.yml
roles:
- {role: framework, tags: framework}
- {role: packet_dump, tags: packet_dump}
- {role: dump_rtp_pcap, tags: dump_rtp_pcap}
- hosts: app_global
remote_user: root
vars_files:
- install_config/group_vars/app_global.yml
roles:
- {role: app_global, tags: app_global}
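
The playbook above drives the whole ADC deployment play by play. A minimal invocation sketch — the inventory path is an assumption (the group names come from the hosts example later in this diff):

# run the full deployment against the example inventory (path assumed)
ansible-playbook -i install_config/hosts adc_deploy.yml

# re-run selected roles on the mcn0 blades only, using the tags defined above
ansible-playbook -i install_config/hosts adc_deploy.yml --limit adc_mcn0 --tags "sapp,kni,firewall"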

View File

@@ -1,58 +0,0 @@
- hosts: Functional_Host
roles:
- framework
- kernel-ml
- hosts: blade-00
roles:
# - tsg-env-mcn0
- mrzcpd
- sapp
- tsg_master
- kni
- firewall
- http_healthcheck
- clotho
- certstore
- cert-redis
- telegraf_statistic
- hosts: blade-01
roles:
# - tsg-env-mcn1
- mrzcpd
- tfe
- hosts: blade-02
roles:
# - tsg-env-mcn2
- mrzcpd
- tfe
- hosts: blade-03
roles:
# - tsg-env-mcn3
- mrzcpd
- tfe
- hosts: blade-mxn
roles:
# - tsg-env-mxn
- hosts: pc-as-tun-mode
roles:
- kernel-ml
- framework
- mrzcpd
- tsg-env-tun-mode
- sapp
- tsg_master
- kni
- firewall
- http_healthcheck
- clotho
- certstore
- cert-redis
- tfe
- telegraf_statistic
- proxy_status

View File

@@ -0,0 +1,155 @@
#########################################
#####1: Inline_device; 2: Allot; 3: ADC_Tun_mode;
tsg_access_type: 2
#####2: ADC;
tsg_running_type: 2
#####deploy mode: cluster, single
deploy_mode: "cluster"
########################################
#Deploy_finished_reboot
Deploy_finished_reboot: 0
########################################
#IP Config
maat_redis_city_server:
address: "10.4.62.253"
port: 7002
maat_redis_server:
address: "192.168.100.1"
port: 7002
port_num: 1
db: 0
dynamic_maat_redis_server:
address: "192.168.100.1"
port: 7002
port_num: 1
db: 1
cert_store_server:
address: "192.168.100.1"
port: 9991
log_kafkabrokers:
address: ['1.1.1.1:9092','2.2.2.2:9092']
#log_minio:
# address: "10.4.62.253"
# port: 9090
pangu_pxy:
log_cache:
address: "10.9.62.253"
port: 9090
#########################################
#Log Level Config
#Log level: 10:DEBUG 20:INFO 30:FATAL
fw_voip_log_level: 10
fw_ftp_log_level: 10
fw_mail_log_level: 10
fw_http_log_level: 10
fw_dns_log_level: 10
fw_quic_log_level: 10
app_control_log_level: 10
capture_packet_log_level: 10
tsg_log_level: 10
tsg_master_log_level: 10
kni_log_level: 10
#Log level: DEBUG INFO FATAL
tfe_log_level: FATAL
tfe_http_log_level: FATAL
pangu_log_level: FATAL
doh_log_level: FATAL
certstore_log_level: FATAL
packet_dump_log_level: 10
#######################################
#Sapp Performance Config
#When sapp runs on ADC compute board 0, the 30+8 configuration below is recommended for higher processing performance
sapp:
worker_threads: 42
send_only_threads_max: 1
bind_mask: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43
inbound_route_dir: 1
prometheus_enable: 1
prometheus_port: 9273
prometheus_url_path: "/metrics"
########################################
#Kni Config
kni:
global:
tfe_node_count: 3
watch_dog:
switch: 1
maat:
readconf_mode: 2
send_logger:
switch: 1
tfe_nodes:
tfe0_enabled: 1
tfe1_enabled: 1
tfe2_enabled: 1
########################################
#Tfe Config
tfe:
nr_threads: 32
mirror_enable: 1
########################################
#Marsio Config
#When marsio runs on the ADC compute boards, the configuration below is recommended for higher processing performance
mcn0_mrzcpd:
iocore: 52,53,54,55
mcn123_mrzcpd:
iocore: 54,55
mrtunnat:
lcore_id: 48,49,50,51
#########################################
#Tsg_app
tsg_app:
enable: 0
breakpad_upload_url: http://10.4.63.4:9000/api/2/minidump/?sentry_key=3203b43fd5384a7dbe6a48ecb1f3c595
data_center: Kyzylorda
tsg_master_entrance_id: 9
nic_mgr:
name: em1
firewall:
hos_serverip: "192.168.40.223"
hos_serverport: 9098
hos_accesskeyid: "default"
hos_secretkey: "default"
hos_poolsize: 100
hos_thread_sum: 32
hos_cache_size: 102400
hos_fs2_serverip: "127.0.0.1"
hos_fs2_serverport: 10086
APP_SKETCH_LOG_LEVEL: 10
APP_SKETCH_LOG_PATH: "./tsglog/app_sketch_local/app_sketch_local"
APP_SKETCH_L7_PROTOCOL_LABEL: "BASIC_PROTO_LABEL"
APP_SKETCH_QOS: 1
APP_SKETCH_PUBLISH_TOPIC: "APP_SIGNATURE_ID"
APP_SKETCH_BROKER_LIST: "tcp://192.168.40.161:1883"
dump_rtp_pcap:
aws_access_key_id: "default"
aws_secret_access_key: "default"
aws_session_token: "c21f969b5f03d33d43e04f8f136e7682"
consume_bootstrap_servers: ['192.168.44.14:9092']
endpoint_url: "http://192.168.44.67:9098/hos/"
produce_bootstrap_servers: "192.168.44.14:9092"
queue_size: 5000000
coroutine_max_num: 200
coroutine_num: 100
qfull_mode: 0
qfull_interval: 5

View File

@@ -0,0 +1,41 @@
#########################################
#Mcn0 management NIC name
nic_mgr:
name: ens1f3
#########################################
#Mcn0 traffic-ingress NIC (fixed configuration)
nic_data_incoming:
name: ens1f4
#########################################
#Mcn0 other data-port NIC names (fixed configuration)
nic_inner_ctrl:
name: ens1.100
nic_to_tfe:
tfe0:
name: ens1f5
tfe1:
name: ens1f6
tfe2:
name: ens1f7
#########################################
#Inline device access configuration
inline_device_config:
keepalive_ip: 192.168.1.30
keepalive_mask: 255.255.255.252
#########################################
#Allot access configuration
AllotAccess:
#virturlInterface_1: ens1f2.103
#virturlInterface_2: ens1f2.104
virturlID_1: 1201
virturlID_2: 1202
virturlID_3: 1301
virturlID_4: 1302
#vvipv4_mask: 24
#vvipv6_mask: 64
bladename: mcn0

View File

@@ -0,0 +1,19 @@
#########################################
#Mcn1 management NIC name
nic_mgr:
name: ens1f3
#########################################
#Mcn1 traffic-ingress NIC (fixed configuration)
nic_data_incoming:
name: ens1f1
#########################################
#Mcn1 other data-port NIC names (fixed configuration)
nic_inner_ctrl:
name: ens1.100
nic_traffic_mirror:
name: ens1f2
use_mrzcpd: 1
bladename: mcn1

View File

@@ -0,0 +1,19 @@
#########################################
#Mcn2 management NIC name
nic_mgr:
name: ens8f3
#########################################
#Mcn2 traffic-ingress NIC (fixed configuration)
nic_data_incoming:
name: ens8f1
#########################################
#Mcn2 other data-port NIC names (fixed configuration)
nic_inner_ctrl:
name: ens8.100
nic_traffic_mirror:
name: ens8f2
use_mrzcpd: 1
bladename: mcn2

View File

@@ -0,0 +1,19 @@
#########################################
#Mcn3 management NIC name
nic_mgr:
name: ens8f3
#########################################
#Mcn3 traffic-ingress NIC (fixed configuration)
nic_data_incoming:
name: ens8f1
#########################################
#Mcn3 other data-port NIC names (fixed configuration)
nic_inner_ctrl:
name: ens8.100
nic_traffic_mirror:
name: ens8f2
use_mrzcpd: 1
bladename: mcn3

View File

@@ -1,90 +0,0 @@
#########################################
#####0: Pcap; 1: Inline_device; 2: Allot; 3: ADC_Tun_mode; 4: ATCA;
tsg_access_type: 4
#####0: Tun_mode; 1: normal; 2: ADC;
tsg_running_type: 1
########################################
maat_redis_server:
address: "192.168.40.168"
port: 7002
db: 0
dynamic_maat_redis_server:
address: "192.168.40.168"
port: 7002
db: 0
cert_store_server:
address: "192.168.100.1"
port: 9991
log_kafkabrokers:
address: "1.1.1.1:9092,2.2.2.2:9092"
log_minio:
address: "192.168.40.168;"
port: 9090
fs_remote:
switch: 1
address: "192.168.100.1"
port: 58125
########################################
sapp:
worker_threads: 16
send_only_threads_max: 8
bind_mask: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16
inbound_route_dir: 1
########################################
kni:
global:
log_level: 30
tfe_node_count: 3
watch_dog:
switch: 1
maat:
readconf_mode: 2
send_logger:
switch: 1
tfe_nodes:
tfe0_enabled: 1
tfe1_enabled: 1
tfe2_enabled: 1
########################################
tfe:
nr_threads: 32
mc_cache_eth: lo
keykeeper:
mode: "normal"
no_cache: 0
########################################
mrzcpd:
iocore: 39
mrtunnat:
lcore_id: 38
nic_data_incoming:
ethname: enp1s0
vf0_name: enp1s2
vf1_name: enp1s2f1
vf2_name: enp1s2f2
VlanFlipping:
vlanID_1: 100
vlanID_2: 101
vlanID_3: 103
vlanID_4: 104
########################################
server:
ethname: eth0
tun_name: eth0.100
internal_interface: "eth2"
external_interface: "eth3"

View File

@@ -0,0 +1,10 @@
#########################################
app_sketch_global_log_level: 10
maat_redis_server:
address: "192.168.40.168"
port: 7002
db: 0
file_stat_ip: "1.1.1.1"

View File

@@ -1,23 +0,0 @@
nic_mgr:
name: enp6s0
nic_data_incoming:
name: ens1f4
ip: 192.168.1.30
mask: 255.255.255.252
nic_inner_ctrl:
name: ens1.100
nic_to_tfe:
tfe0:
name: ens1f5
tfe1:
name: ens1f6
tfe2:
name: ens1f7
AllotAccess:
virturlInterface_1: ens1f2.103
virturlInterface_2: ens1f2.104
virturlID_1: 103
virturlID_2: 104
vvipv4_mask: 24
vvipv6_mask: 64

View File

@@ -1,11 +0,0 @@
nic_mgr:
name: enp6s0
nic_data_incoming:
name: ens1f1
mac: AA:BB:CC:DD:EE:FF
address: 127.0.0.1
nic_inner_ctrl:
name: ens1.100
nic_traffic_mirror:
name: ens1f2
use_mrzcpd: 1

View File

@@ -1,10 +0,0 @@
nic_mgr:
name: enp6s0
nic_data_incoming:
name: ens8f1
mac: AA:BB:CC:DD:EE:FF
nic_inner_ctrl:
name: ens8.100
nic_traffic_mirror:
name: ens8f2
use_mrzcpd: 1

View File

@@ -1,10 +0,0 @@
nic_mgr:
name: enp6s0
nic_data_incoming:
name: ens8f1
mac: AA:BB:CC:DD:EE:FF
nic_inner_ctrl:
name: ens8.100
nic_traffic_mirror:
name: ens8f2
use_mrzcpd: 1

View File

@@ -0,0 +1,200 @@
#########################################
#####0: Pcap; 1: Inline_device; 5:ATCA_VXLAN;
tsg_access_type: 0
#####0: Tun_mode; 1: normal;
tsg_running_type: 0
#####deploy mode: cluster, single
deploy_mode: "single"
########################################
#Deploy_finished_reboot
Deploy_finished_reboot: 0
########################################
#Server Basic Config
nic_mgr:
name: eth0
nic_inner_ctrl:
name: eth0.100
#########################################
#IP Config
#The maat_redis_city_server settings are only used in cluster deployment mode
maat_redis_city_server:
address: ""
port:
maat_redis_server:
address: "#Bifang IP#"
port: 7002
port_num: 1
db: 0
dynamic_maat_redis_server:
address: "#Bifang IP#"
port: 7002
port_num: 1
db: 1
cert_store_server:
address: "192.168.100.1"
port: 9991
log_kafkabrokers:
address: ['1.1.1.1:9092','2.2.2.2:9092']
#log_minio:
# address: "10.9.62.253"
# port: 9090
#########################################
#Log Level Config
#Log level: 10:DEBUG 20:INFO 30:FATAL
fw_voip_log_level: 10
fw_ftp_log_level: 10
fw_mail_log_level: 10
fw_http_log_level: 10
fw_dns_log_level: 10
fw_quic_log_level: 10
app_control_log_level: 10
capture_packet_log_level: 10
tsg_log_level: 10
tsg_master_log_level: 10
kni_log_level: 10
#Log level: DEBUG INFO FATAL
tfe_log_level: FATAL
tfe_http_log_level: FATAL
pangu_log_level: FATAL
doh_log_level: FATAL
certstore_log_level: 10
packet_dump_log_level: 10
#########################################
#Sapp Performance Config
#If tsg_access_type=0 (sapp runs in pcap mode), the settings below can be ignored
sapp:
worker_threads: 23
send_only_threads_max: 1
bind_mask: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24
inbound_route_dir: 1
prometheus_enable: 1
prometheus_port: 9273
prometheus_url_path: "/metrics"
#########################################
#Sapp Double-Arm Config
packet_io:
internal_interface: eth2
external_interface: eth3
#########################################
#Kni Config
kni:
global:
tfe_node_count: 1
watch_dog:
switch: 1
maat:
readconf_mode: 2
send_logger:
switch: 1
tfe_nodes:
tfe0_enabled: 1
tfe1_enabled: 0
tfe2_enabled: 0
#########################################
#Tfe Config
tfe:
nr_threads: 32
mirror_enable: 1
#########################################
#Marsio Config
mrzcpd:
iocore: 39
mrtunnat:
lcore_id: 38
#########################################
#Tsg_app
tsg_app:
enable: 1
#########################################
#ATCA Config
#The settings below only take effect when tsg_access_type=4 or 5
ATCA_data_incoming:
ethname: enp1s0
vf0_name: enp1s2
vf1_name: enp1s2f1
vf2_name: enp1s2f2
ATCA_VlanFlipping:
vlanID_1: 100
vlanID_2: 101
vlanID_3: 103
vlanID_4: 104
#The settings below only take effect when tsg_access_type=5
ATCA_VXLAN:
keepalive_ip: "10.254.19.1"
keepalive_mask: "255.255.255.252"
#########################################
#Inline Device Config
inline_device_config:
keepalive_ip: 192.168.1.30
keepalive_mask: 255.255.255.252
data_incoming: eth5
#########################################
#Newly added settings; all are defaults and do not need to be changed
breakpad_upload_url: http://127.0.0.1:9000/api/2/minidump/?sentry_key=3556bac347c74585a994eb6823faf5c6
data_center: Beijing
tsg_master_entrance_id: 0
pangu_pxy:
log_cache:
address: "10.9.62.253"
port: 9090
firewall:
hos_serverip: "192.168.40.223"
hos_serverport: 9098
hos_accesskeyid: "default"
hos_secretkey: "default"
hos_poolsize: 100
hos_thread_sum: 32
hos_cache_size: 102400
hos_fs2_serverip: "127.0.0.1"
hos_fs2_serverport: 10086
APP_SKETCH_LOG_LEVEL: 10
APP_SKETCH_LOG_PATH: "./tsglog/app_sketch_local/app_sketch_local"
APP_SKETCH_L7_PROTOCOL_LABEL: "BASIC_PROTO_LABEL"
APP_SKETCH_QOS: 1
APP_SKETCH_PUBLISH_TOPIC: "APP_SIGNATURE_ID"
APP_SKETCH_BROKER_LIST: "tcp://192.168.40.161:1883"
dump_rtp_pcap:
aws_access_key_id: "default"
aws_secret_access_key: "default"
aws_session_token: "c21f969b5f03d33d43e04f8f136e7682"
consume_bootstrap_servers: ['192.168.44.14:9092']
endpoint_url: "http://192.168.44.67:9098/hos/"
produce_bootstrap_servers: "192.168.44.14:9092"
queue_size: 5000000
coroutine_max_num: 200
coroutine_num: 100
qfull_mode: 0
qfull_interval: 5

View File

@@ -1,26 +1,45 @@
[all:vars]
ansible_user=root
package_source=local
###################
# For example #
###################
#Set the device_id variable according to the device index
#The vvipv4_1, vvipv4_2, vvipv6_1 and vvipv6_2 variables are Allot-specific; in other environments they can be left empty or removed
#
#APP deployment added in version 20.09
#[app_global]
#0.0.0.0
[pc-as-tun-mode]
#[server_as_tun_mode]
#1.1.1.1 device_id=device_1
#
#[adc_mxn]
#10.3.72.1
#10.3.72.2
#
#[adc_mcn0]
#10.3.73.1 device_id=device_1 vvipv4_1=10.3.61.1 vvipv4_2=10.3.62.1 vvipv6_1=fc00::61:1 vvipv6_2=fc00::62:1
#10.3.73.2 device_id=device_2 vvipv4_1=10.3.61.2 vvipv4_2=10.3.62.2 vvipv6_1=fc00::61:2 vvipv6_2=fc00::62:2
#
#[adc_mcn1]
#10.3.74.1 device_id=device_1
#10.3.74.2 device_id=device_2
#
#[adc_mcn2]
#10.3.75.1 device_id=device_1
#10.3.75.2 device_id=device_2
#
#[adc_mcn3]
#10.3.76.1 device_id=device_1
#10.3.76.2 device_id=device_2
[blade-mxn]
192.168.40.170
#[app_global]
#[server_as_tun_mode]
#broken warning:
#10.4.52.71
[adc_mcn0]
[adc_mcn1]
[adc_mcn2]
[adc_mcn3]
[app_global]
[server_as_tun_mode]
[blade-00]
192.168.40.166 vvipv4_1= vvipv4_2= vvipv6_1= vvipv6_2=
[blade-01]
192.168.40.167
[blade-02]
192.168.40.168
[blade-03]
192.168.40.169
[Functional_Host:children]
blade-00
blade-01
blade-02
blade-03

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,72 @@
- name: "copy freeipmi tools"
copy:
src: '{{ role_path }}/files/freeipmi-1.5.7-3.el7.x86_64.rpm'
dest: /tmp/ansible_deploy/
- name: "Install freeipmi rpm package"
yum:
name:
- "/tmp/ansible_deploy/freeipmi-1.5.7-3.el7.x86_64.rpm"
state: present
- name: "mkdir /opt/adc-exporter/"
file:
path: /opt/adc-exporter/
state: directory
- name: "copy node_exporter"
copy:
src: '{{ role_path }}/files/node_exporter'
dest: /opt/adc-exporter/node_exporter
mode: 0755
- name: "copy systemd_exporter"
copy:
src: '{{ role_path }}/files/systemd_exporter'
dest: /opt/adc-exporter/systemd_exporter
mode: 0755
- name: "copy ipmi_exporter"
copy:
src: '{{ role_path }}/files/ipmi_exporter'
dest: /opt/adc-exporter/ipmi_exporter
mode: 0755
- name: "templates adc-exporter-node.service"
template:
src: "{{role_path}}/templates/adc-exporter-node.service.j2"
dest: /usr/lib/systemd/system/adc-exporter-node.service
tags: template
- name: "templates adc-exporter-systemd.service"
template:
src: "{{role_path}}/templates/adc-exporter-systemd.service.j2"
dest: /usr/lib/systemd/system/adc-exporter-systemd.service
tags: template
- name: "templates adc-exporter-ipmi.service"
template:
src: "{{role_path}}/templates/adc-exporter-ipmi.service.j2"
dest: /usr/lib/systemd/system/adc-exporter-ipmi.service
tags: template
- name: 'adc-exporter-node service start'
systemd:
name: adc-exporter-node
enabled: yes
daemon_reload: yes
state: started
- name: 'adc-exporter-systemd service start'
systemd:
name: adc-exporter-systemd
enabled: yes
daemon_reload: yes
state: restarted
- name: 'adc-exporter-ipmi service start'
systemd:
name: adc-exporter-ipmi
enabled: yes
daemon_reload: yes
state: restarted

View File

@@ -0,0 +1,11 @@
[Unit]
Description=IPMI Exporter
After=network.target
[Service]
Type=simple
ExecStart=/opt/adc-exporter/ipmi_exporter
Restart=always
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Node Exporter
After=network.target
[Service]
Type=simple
ExecStart=/opt/adc-exporter/node_exporter
Restart=always
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Systemd Exporter
After=network.target
[Service]
Type=simple
ExecStart=/opt/adc-exporter/systemd_exporter --web.disable-exporter-metrics
Restart=always
[Install]
WantedBy=multi-user.target

Binary file not shown.

View File

@@ -0,0 +1,23 @@
- name: "mkdir /opt/adc-exporter/"
file:
path: /opt/adc-exporter/
state: directory
- name: "copy ping_exporter"
copy:
src: '{{ role_path }}/files/ping_exporter'
dest: /opt/adc-exporter/ping_exporter
mode: 0755
- name: "templates ping_exporter.service"
template:
src: "{{role_path}}/templates/adc-exporter-ping.service.j2"
dest: /usr/lib/systemd/system/adc-exporter-ping.service
tags: template
- name: 'adc-exporter-ping service start'
systemd:
name: adc-exporter-ping
enabled: yes
daemon_reload: yes
state: restarted

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Ping Exporter
After=network.target
[Service]
Type=simple
ExecStart=/opt/adc-exporter/ping_exporter {{ ping_test.target|join(" ")}} --ping.size=512 --ping.interval=0.5s
Restart=always
[Install]
WantedBy=multi-user.target
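
The unit template above interpolates ping_test.target, which is not defined in any group_vars file shown in this diff. A hedged sketch of the variable shape the template appears to expect, with placeholder targets:

# assumed group_vars entry (addresses are placeholders)
ping_test:
  target:
    - 192.168.100.2
    - 192.168.100.3

# which Jinja would render roughly as:
# ExecStart=/opt/adc-exporter/ping_exporter 192.168.100.2 192.168.100.3 --ping.size=512 --ping.interval=0.5s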

View File

@@ -0,0 +1,34 @@
- name: "mkdir /opt/adc-exporter-proxy/"
file:
path: /opt/adc-exporter-proxy/
state: directory
- name: "copy file to device"
copy:
src: '{{ role_path }}/files/'
dest: /tmp/ansible_deploy/
- name: "unarchive adc-exporter-proxy(NGINX)"
unarchive:
src: /tmp/ansible_deploy/adc_exporter_proxy.tar.gz
dest: /opt/adc-exporter-proxy
remote_src: yes
- name: "templates adc-exporter-proxy.service"
template:
src: "{{role_path}}/templates/adc-exporter-proxy.service.j2"
dest: /usr/lib/systemd/system/adc-exporter-proxy.service
tags: template
- name: "template nginx.conf"
template:
src: "{{role_path}}/templates/nginx.conf.j2"
dest: /opt/adc-exporter-proxy/adc-exporter-proxy/conf/nginx.conf
tags: template
- name: 'adc-exporter-proxy service start'
systemd:
name: adc-exporter-proxy
enabled: yes
daemon_reload: yes
state: restarted

View File

@@ -0,0 +1,12 @@
[Unit]
Description=ADC Exporter Proxy (NGINX) for NEZHA
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/opt/adc-exporter-proxy/adc-exporter-proxy/sbin/nginx -p /opt/adc-exporter-proxy/adc-exporter-proxy
ExecReload=/opt/adc-exporter-proxy/adc-exporter-proxy/sbin/nginx -p /opt/adc-exporter-proxy/adc-exporter-proxy -s reload
ExecStop=/opt/adc-exporter-proxy/adc-exporter-proxy/sbin/nginx -p /opt/adc-exporter-proxy/adc-exporter-proxy -s stop
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,152 @@
user nobody;
worker_processes 1;
daemon off;
error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;
pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
gzip on;
server {
listen 9000;
server_name localhost;
location /metrics/blade/mcn0/node_exporter {
proxy_pass http://192.168.100.1:9100/metrics;
}
location /metrics/blade/mcn1/node_exporter {
proxy_pass http://192.168.100.2:9100/metrics;
}
location /metrics/blade/mcn2/node_exporter {
proxy_pass http://192.168.100.3:9100/metrics;
}
location /metrics/blade/mcn3/node_exporter {
proxy_pass http://192.168.100.4:9100/metrics;
}
location /metrics/blade/mxn/node_exporter {
proxy_pass http://192.168.100.5:9100/metrics;
}
location /metrics/blade/mcn0/systemd_exporter {
proxy_pass http://192.168.100.1:9558/metrics;
}
location /metrics/blade/mcn1/systemd_exporter {
proxy_pass http://192.168.100.2:9558/metrics;
}
location /metrics/blade/mcn2/systemd_exporter {
proxy_pass http://192.168.100.3:9558/metrics;
}
location /metrics/blade/mcn3/systemd_exporter {
proxy_pass http://192.168.100.4:9558/metrics;
}
location /metrics/blade/mcn0/ipmi_exporter {
proxy_pass http://192.168.100.1:9290/metrics;
}
location /metrics/blade/mcn1/ipmi_exporter {
proxy_pass http://192.168.100.2:9290/metrics;
}
location /metrics/blade/mcn2/ipmi_exporter {
proxy_pass http://192.168.100.3:9290/metrics;
}
location /metrics/blade/mcn3/ipmi_exporter {
proxy_pass http://192.168.100.4:9290/metrics;
}
location /metrics/blade/mxn/ipmi_exporter {
proxy_pass http://192.168.100.5:9290/metrics;
}
location /metrics/blade/mcn0/certstore {
proxy_pass http://192.168.100.1:9002/metrics;
}
location /metrics/blade/mcn1/tfe {
proxy_pass http://192.168.100.2:9001/metrics;
}
location /metrics/blade/mcn2/tfe {
proxy_pass http://192.168.100.3:9001/metrics;
}
location /metrics/blade/mcn3/tfe {
proxy_pass http://192.168.100.4:9001/metrics;
}
location /metrics/blade/mcn0/sapp {
proxy_pass http://192.168.100.1:9273/metrics;
}
location /metrics/blade/mcn0/mrapm_device {
proxy_pass http://192.168.100.1:8901/metrics;
}
location /metrics/blade/mcn0/mrapm_stream {
proxy_pass http://192.168.100.1:8902/metrics;
}
location /metrics/blade/mcn1/mrapm_device {
proxy_pass http://192.168.100.2:8901/metrics;
}
location /metrics/blade/mcn1/mrapm_stream {
proxy_pass http://192.168.100.2:8902/metrics;
}
location /metrics/blade/mcn2/mrapm_device {
proxy_pass http://192.168.100.3:8901/metrics;
}
location /metrics/blade/mcn2/mrapm_stream {
proxy_pass http://192.168.100.3:8902/metrics;
}
location /metrics/blade/mcn3/mrapm_device {
proxy_pass http://192.168.100.4:8901/metrics;
}
location /metrics/blade/mcn3/mrapm_stream {
proxy_pass http://192.168.100.4:8902/metrics;
}
location /metrics/blade/mcn0/maat_redis {
proxy_pass http://192.168.100.1:9121/metrics;
}
location /metrics/blade/mcn0/ping_exporter {
proxy_pass http://192.168.100.1:9427/metrics;
}
}
}
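
With this proxy deployed by the adc_exporter_proxy role, every blade-local exporter is reachable through the single listener on port 9000. A usage sketch, assuming the proxy host's management address (placeholder below):

# scrape mcn0's node_exporter through the adc-exporter-proxy (management IP is a placeholder)
curl http://<mxn-mgmt-ip>:9000/metrics/blade/mcn0/node_exporter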

Binary file not shown.

View File

@@ -0,0 +1,36 @@
- name: "copy app_global rpm to destination server"
copy:
src: "{{ role_path }}/files/"
dest: /tmp/ansible_deploy/
- name: "install app rpms from localhost"
yum:
name:
- /tmp/ansible_deploy/emqx-centos7-v4.1.2.x86_64.rpm
- /tmp/ansible_deploy/app-sketch-global-1.0.3.202010.a7b2e40-1.el7.x86_64.rpm
state: present
- name: "template the app_sketch_global.conf"
template:
src: "{{ role_path }}/templates/app_sketch_global.conf.j2"
dest: /opt/tsg/app-sketch-global/conf/app_sketch_global.conf
- name: "template the zlog.conf"
template:
src: "{{ role_path }}/templates/zlog.conf.j2"
dest: /opt/tsg/app-sketch-global/conf/zlog.conf
- name: "Start emqx"
systemd:
name: emqx.service
state: started
enabled: yes
daemon_reload: yes
- name: "Start app-sketch-global"
systemd:
name: app-sketch-global.service
state: started
enabled: yes
daemon_reload: yes

View File

@@ -0,0 +1,41 @@
[SYSTEM]
#1:print on screen, 0:don't
DEBUG_SWITCH = 1
RUN_LOG_PATH = "conf/zlog.conf"
[breakpad]
disable_coredump=0
enable_breakpad=1
breakpad_minidump_dir=/tmp/app-sketch-global/crashreport
enable_breakpad_upload=0
breakpad_upload_url={{ breakpad_upload_url }}
[CONFIG]
#Number of running threads
thread-nu = 1
timeout = 3600
address="tcp://127.0.0.1:1883"
topic_name="APP_SIGNATURE_ID"
client_name="ExampleClientSub"
[maat]
# 0:json 1: redis 2: iris
maat_input_mode=1
table_info=./resource/table_info.conf
json_cfg_file=./resource/gtest.json
stat_file=logs/verify-policy.status
full_cfg_dir=verify-policy/
inc_cfg_dir=verify-policy/
maat_redis_server={{ maat_redis_server.address }}
maat_redis_port_range={{ maat_redis_server.port }}
maat_redis_db_index={{ maat_redis_server.db }}
effect_interval_s=1
accept_tags={"tags":[{"tag":"location","value":"Astana"}]}
[stat]
statsd_server={{ file_stat_ip }}
statsd_port=8100
statsd_cycle=5
# FS_OUTPUT_STATSD=1, FS_OUTPUT_INFLUX_LINE=2
statsd_format=2

View File

@@ -0,0 +1,12 @@
[global]
default format = "%d(%c), %V, %F, %U, %m%n"
[levels]
DEBUG=10
INFO=20
FATAL=30
[rules]
*.fatal "./logs/error.log.%d(%F)";
*.{{ app_sketch_global_log_level }} "./logs/app_sketch_global.log.%d(%F)"

View File

@@ -160,7 +160,7 @@ loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile "/home/tsg/cert-redis/6379/6379.log"
#logfile "/opt/tsg/cert-redis/6379/6379.log"
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
@@ -244,7 +244,7 @@ dbfilename dump.rdb
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /home/tsg/cert-redis/6379/
#dir /opt/tsg/cert-redis/6379/
################################# REPLICATION #################################

View File

@@ -0,0 +1,12 @@
[Unit]
Description=Redis persistent key-value database
After=network.target
[Service]
ExecStart=/usr/bin/redis-server /etc/cert-redis.conf --supervised systemd
ExecStop=/usr/libexec/redis-shutdown cert-redis
Type=notify
[Install]
WantedBy=multi-user.target

View File

@@ -1,16 +0,0 @@
[Unit]
Description=Redis persistent key-value database
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/usr/local/bin/start-cert-redis
ExecStop=killall redis-server
Type=forking
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
[Install]
WantedBy=multi-user.target

View File

@@ -1,6 +0,0 @@
#!/bin/bash
#
cp -rf redis-server /usr/local/bin/
cp -rf redis-cli /usr/local/bin
cp -rf cert-redis.service /usr/lib/systemd/system/
cp -rf start-cert-redis /usr/local/bin

View File

@@ -1,4 +0,0 @@
#!/bin/bash
#
/usr/local/bin/redis-server /home/tsg/cert-redis/6379/6379.conf

View File

@@ -1,11 +1,11 @@
- name: "copy cert-redis to destination server"
- name: "copy cert-redis file to dest"
copy:
src: "{{ role_path }}/files/"
dest: /home/tsg
mode: 0755
- name: "install cert-redis"
shell: cd /home/tsg/cert-redis;sh install.sh
dest: "{{ item.dest }}"
mode: "{{ item.mode }}"
with_items:
- { src: "cert-redis.conf" , dest: "/etc" , mode: "0644" }
- { src: "cert-redis.service" , dest: "/usr/lib/systemd/system" , mode: "0644" }
- name: "start cert-redis"
systemd:

View File

@@ -0,0 +1,3 @@
[Service]
MemoryLimit=16G
ExecStartPost=/bin/bash -c "echo 16G > /sys/fs/cgroup/memory/system.slice/certstore.service/memory.memsw.limit_in_bytes"

View File

@@ -3,20 +3,31 @@
src: "{{ role_path }}/files/"
dest: "/tmp/ansible_deploy/"
- name: Ensures /home/tsg exists
file: path=/home/tsg state=directory
- name: Ensures /opt/tsg exists
file: path=/opt/tsg state=directory
tags: mkdir
- name: install certstore
yum:
name:
- /tmp/ansible_deploy/certstore-v20.05.0f61dde-1.el7.centos.x86_64.rpm
- /tmp/ansible_deploy/certstore-2.1.7.20210422.3f0c7ed-1.el7.x86_64.rpm
state: present
- name: template certstore configure file
template:
src: "{{ role_path }}/templates/cert_store.ini.j2"
dest: /home/tsg/certstore/conf/cert_store.ini
dest: /opt/tsg/certstore/conf/cert_store.ini
- name: template certstore zlog file
template:
src: "{{ role_path }}/templates/zlog.conf.j2"
dest: /opt/tsg/certstore/conf/zlog.conf
- name: "copy memory limit file to certstore.service.d"
copy:
src: "{{ role_path }}/files/memory.conf"
dest: /etc/systemd/system/certstore.service.d/
mode: 0644
- name: "start certstore"
systemd:

View File

@@ -1,9 +1,15 @@
[SYSTEM]
#1:print on screen, 0:don't
DEBUG_SWITCH = 1
#10:DEBUG, 20:INFO, 30:FATAL
RUN_LOG_LEVEL = 10
RUN_LOG_PATH = ./logs
RUN_LOG_PATH = "conf/zlog.conf"
[breakpad]
disable_coredump=0
enable_breakpad=1
breakpad_minidump_dir=/tmp/certstore/crashreport
enable_breakpad_upload=1
breakpad_upload_url= {{ breakpad_upload_url }}
[CONFIG]
#Number of running threads
thread-nu = 4
@@ -14,7 +20,8 @@ expire_after = 30
#Local default root certificate path
local_debug = 1
ca_path = ./cert/tango-ca-v3-trust-ca.pem
untrusted_ca_path = ./cert/mesalab-ca-untrust.pem
untrusted_ca_path = ./cert/tango-ca-v3-untrust-ca.pem
[MAAT]
#Configure the load mode,
#0: using the configuration distribution network
@@ -31,18 +38,23 @@ inc_cfg_dir=./rule/inc/index
full_cfg_dir=./rule/full/index
#Json file path when json schema is used
pxy_obj_keyring=./conf/pxy_obj_keyring.json
[LIBEVENT]
#Local monitor port number, default is 9991
port = 9991
[CERTSTORE_REDIS]
#The Redis server IP address and port number where the certificate is stored locally
ip = 127.0.0.1
port = 6379
[MAAT_REDIS]
#The Maat Redis server IP address and port number
ip = {{ maat_redis_server.address }}
port = {{ maat_redis_server.port }}
dbindex = {{ maat_redis_server.db }}
[stat]
statsd_server=192.168.100.1
statsd_port=8126
statsd_server=127.0.0.1
statsd_port=8100
statsd_set_prometheus_port=9002
statsd_set_prometheus_url_path=/metrics

View File

@@ -0,0 +1,10 @@
[global]
default format = "%d(%c), %V, %F, %U, %m%n"
[levels]
DEBUG=10
INFO=20
FATAL=30
[rules]
*.fatal "./logs/error.log.%d(%F)";
*.{{ certstore_log_level }} "./logs/certstore.log.%d(%F)"

View File

@@ -1,13 +0,0 @@
[Unit]
Description=clotho
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/home/mesasoft/clotho/clotho
ExecStop=killall clotho
Type=forking
[Install]
WantedBy=multi-user.target

View File

@@ -1,29 +0,0 @@
- name: "copy clotho rpm to destination server"
copy:
src: "{{ role_path }}/files/clotho-debug-1.0.0.-1.el7.x86_64.rpm"
dest: /tmp/ansible_deploy/
- name: "copy clotho.service to destination server"
copy:
src: "{{ role_path }}/files/clotho.service"
dest: /usr/lib/systemd/system
mode: 0755
- name: "install clotho rpm from localhost"
yum:
name:
- /tmp/ansible_deploy/clotho-debug-1.0.0.-1.el7.x86_64.rpm
state: present
- name: "Template the clotho.conf"
template:
src: "{{ role_path }}/templates/clotho.conf.j2"
dest: /home/mesasoft/clotho/conf/clotho.conf
tags: template
- name: "start clotho"
systemd:
name: clotho.service
enabled: yes
daemon_reload: yes

View File

@@ -1,11 +0,0 @@
[KAFKA]
BROKER_LIST={{ log_kafkabrokers.address }}
[SYSTEM]
{% if tsg_running_type == 0 or 1 %}
NIC_NAME={{ server.ethname }}
{% else %}
NIC_NAME={{ nic_mgr.name }}
{% endif %}
LOG_LEVEL=10
LOG_PATH=log/clotho

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,38 @@
---
- name: "docker-ce: copy docker-ce.zip to dest device"
copy:
src: '{{ role_path }}/files/docker-ce.zip'
dest: /tmp/ansible_deploy/
- name: "docker-ce: unarchive docker-ce.zip"
unarchive:
src: /tmp/ansible_deploy/docker-ce.zip
dest: /tmp/ansible_deploy/
remote_src: yes
- name: "docker-ce: install docker-ce rpm package and dependencies"
yum:
name:
- /tmp/ansible_deploy/docker-ce/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
- /tmp/ansible_deploy/docker-ce/docker-ce-19.03.13-3.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/docker-ce-cli-19.03.13-3.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/containerd.io-1.3.7-3.1.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/selinux-policy-targeted-3.13.1-266.el7_8.1.noarch.rpm
- /tmp/ansible_deploy/docker-ce/selinux-policy-3.13.1-266.el7_8.1.noarch.rpm
- /tmp/ansible_deploy/docker-ce/policycoreutils-python-2.5-34.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/policycoreutils-2.5-34.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/libselinux-utils-2.5-15.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/libselinux-python-2.5-15.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/libselinux-2.5-15.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/setools-libs-3.3.8-4.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/libsepol-2.5-10.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/libsemanage-python-2.5-14.el7.x86_64.rpm
- /tmp/ansible_deploy/docker-ce/libsemanage-2.5-14.el7.x86_64.rpm
state: present
- name: "docker-ce: systemctl start docker and enabled docker"
systemd:
name: docker
enabled: yes
daemon_reload: yes
state: started

View File

@@ -0,0 +1,18 @@
---
- name: "docker-compose: copy docker-compose.zip to dest device"
copy:
src: '{{ role_path }}/files/docker-compose.zip'
dest: /tmp/ansible_deploy/
- name: "docker-compose: unarchive docker-compose.zip"
unarchive:
src: /tmp/ansible_deploy/docker-compose.zip
dest: /tmp/ansible_deploy/
remote_src: yes
- name: "docker-compose: install docker-compose using pip3"
pip:
requirements: /tmp/ansible_deploy/docker-compose/requirements.txt
extra_args: "--no-index --find-links=file:///tmp/ansible_deploy/docker-compose"
state: forcereinstall
executable: pip3

View File

@@ -0,0 +1,4 @@
---
- include: docker-ce.yml
- include: python3.yml
- include: docker-compose.yml

View File

@@ -0,0 +1,21 @@
---
- name: "python3: copy python3.zip to dest device"
copy:
src: '{{ role_path }}/files/python3.zip'
dest: /tmp/ansible_deploy/
- name: "python3: unarchive python3.zip"
unarchive:
src: /tmp/ansible_deploy/python3.zip
dest: /tmp/ansible_deploy/
remote_src: yes
- name: "python3: install python3 rpm package and dependencies"
yum:
name:
- /tmp/ansible_deploy/python3/python3-libs-3.6.8-13.el7.x86_64.rpm
- /tmp/ansible_deploy/python3/python3-3.6.8-13.el7.x86_64.rpm
- /tmp/ansible_deploy/python3/python3-pip-9.0.3-7.el7_7.noarch.rpm
- /tmp/ansible_deploy/python3/python3-setuptools-39.2.0-10.el7.noarch.rpm
- /tmp/ansible_deploy/python3/libtirpc-0.2.4-0.16.el7.x86_64.rpm
state: present

View File

@@ -0,0 +1,22 @@
- name: "dump-rtp-pcap: copy dump-rtp-pcap rpm package to destination"
copy:
src: "{{ role_path }}/files/"
dest: /tmp/ansible_deploy/
- name: "dump-rtp-pcap: install dump-rtp-pcap rpm from localhost"
yum:
name:
- /tmp/ansible_deploy/dump_rtp_pcap-1.0.2.445da24-2.el7.x86_64.rpm
state: present
- name: "dump-rtp-pcap: Template the dump_rtp_pcap.json"
template:
src: "{{ role_path }}/templates/dump_rtp_pcap.json.j2"
dest: /home/mesasoft/dump_rtp_pcap/dump_rtp_pcap.json
tags: template
- name: "start dump_rtp_pcap"
systemd:
name: dump_rtp_pcap.service
enabled: yes
daemon_reload: yes

View File

@@ -0,0 +1,23 @@
{
"endian":"little",
"aws_access_key_id": "{{ dump_rtp_pcap.aws_access_key_id }}",
"aws_secret_access_key": "{{ dump_rtp_pcap.aws_secret_access_key }}",
"aws_session_token": "{{ dump_rtp_pcap.aws_session_token }}",
"bucket_name": "rtp-log",
"consume_auto_offset_reset":"latest",
"consume_bootstrap_servers": ["{{ dump_rtp_pcap.consume_bootstrap_servers | join("\",\"") }}"],
"consume_topic": "INTERNAL-RTP-LOG",
"endpoint_url": "{{ dump_rtp_pcap.endpoint_url }}",
"file_prefix":"rtp_log",
"group_id": "rtp-log-1",
"produce_bootstrap_servers": "{{ dump_rtp_pcap.produce_bootstrap_servers }}",
"produce_topic": "VOIP-RECORD-LOG",
"region_name": "us-east-1",
"save_speed_emit_interval":30,
"upload_speed_emit_interval":30,
"queue_size":{{ dump_rtp_pcap.queue_size }},
"coroutine_max_num":{{ dump_rtp_pcap.coroutine_max_num }},
"coroutine_num":{{ dump_rtp_pcap.coroutine_num }},
"qfull_mode":{{ dump_rtp_pcap.qfull_mode }},
"qfull_interval":{{ dump_rtp_pcap.qfull_interval }}
}

View File

@@ -11,21 +11,25 @@
skip_broken: yes
vars:
fw_packages:
- /tmp/ansible_deploy/dns-2.0.2.5effe72-2.el7.x86_64.rpm
- /tmp/ansible_deploy/ftp-1.0.4.5d3a283-2.el7.x86_64.rpm
- /tmp/ansible_deploy/http-2.0.1.e8f12ee-2.el7.x86_64.rpm
- /tmp/ansible_deploy/mail-1.0.3.cbc6034-2.el7.x86_64.rpm
- /tmp/ansible_deploy/ssl-1.0.0.73e5273-2.el7.x86_64.rpm
- /tmp/ansible_deploy/tsg_conn_record-1.0.0.2155660-1.el7.centos.x86_64.rpm
- /tmp/ansible_deploy/fw_dns_plug-debug-1.0.3.ea8e0f6-1.el7.centos.x86_64.rpm
- /tmp/ansible_deploy/fw_ftp_plug-1.1.0.74c9a05-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_ssl_plug-1.0.3.30fcf35-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_mail_plug-1.1.0.a42c5a0-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_http_plug-1.2.0.a7e63c0-2.el7.x86_64.rpm
- /tmp/ansible_deploy/capture_packet_plug-debug-1.0.0.-1.el7.x86_64.rpm
- /tmp/ansible_deploy/clotho-debug-1.0.0.-1.el7.x86_64.rpm
- /tmp/ansible_deploy/quic-1.1.4.9c2e0ba-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_quic_plug-1.0.1.e8cded4-2.el7.x86_64.rpm
- /tmp/ansible_deploy/capture_packet_plug-3.0.6.a2db4a4-2.el7.x86_64.rpm
- /tmp/ansible_deploy/conn_telemetry-1.0.2.8d6da43-2.el7.x86_64.rpm
- /tmp/ansible_deploy/dns-2.0.11.2265b5c-2.el7.x86_64.rpm
- /tmp/ansible_deploy/ftp-1.0.8.13d5fda-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_dns_plug-3.0.5.2a25c20-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_ftp_plug-3.0.1.0a78573-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_http_plug-3.1.11.b0f7b8f-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_mail_plug-3.0.9.d496513-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_quic_plug-3.0.4.947ef77-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_ssl_plug-3.0.6.a121701-2.el7.x86_64.rpm
- /tmp/ansible_deploy/http-2.0.5.c61ad9a-2.el7.x86_64.rpm
- /tmp/ansible_deploy/mail-1.0.9.c1d3bde-2.el7.x86_64.rpm
- /tmp/ansible_deploy/quic-1.1.17.8c22b4d-2.el7.x86_64.rpm
- /tmp/ansible_deploy/ssl-1.0.12.16b8fb5-2.el7.x86_64.rpm
- /tmp/ansible_deploy/tsg_conn_sketch-2.1.16.7b1b2d5-2.el7.x86_64.rpm
- /tmp/ansible_deploy/rtp-1.0.4.91b4ab7-2.el7.x86_64.rpm
- /tmp/ansible_deploy/mesa_sip-1.0.15.77d2f1a-2.el7.x86_64.rpm
- /tmp/ansible_deploy/fw_voip_plug-1.0.6.341fe83-2.el7.x86_64.rpm
- /tmp/ansible_deploy/app_proto_identify-1.0.10.1eeff1d-2.el7.x86_64.rpm
- name: "Template the tsgconf/main.conf"
template:
@@ -45,3 +49,15 @@
src: "{{ role_path }}/templates/capture_packet_plug.conf.j2"
dest: /home/mesasoft/sapp_run/conf/capture_packet_plug.conf
tags: template
- name: "Template the tsgconf/app_l7_proto_id.conf"
template:
src: "{{ role_path }}/templates/app_l7_proto_id.conf.j2"
dest: /home/mesasoft/sapp_run/tsgconf/app_l7_proto_id.conf
- name: "Template the /home/mesasoft/sapp_run/plug/business/tsg_conn_sketch/tsg_conn_sketch.inf"
template:
src: "{{ role_path }}/templates/tsg_conn_sketch.inf.j2"
dest: /home/mesasoft/sapp_run/plug/business/tsg_conn_sketch/tsg_conn_sketch.inf
tags: template

Some files were not shown because too many files have changed in this diff.