用mysqlslap对MySQL进行压力测试

MySQL从5.1.4版开始带有一个压力测试工具mysqlslap,通过模拟多个并发客户端访问mysql来执行测试。

[root@test-db data]# mysqlslap -a --concurrency=10000 --number-of-queries 10000 --iterations=10 --engine=innodb --debug-info -uroot -pyueworldtest
mysqlslap: [Warning] Using a password on the command line interface can be insecure.
Benchmark
        Running for engine innodb
        Average number of seconds to run all queries: 6.451 seconds
        Minimum number of seconds to run all queries: 1.963 seconds
        Maximum number of seconds to run all queries: 24.031 seconds
        Number of clients running queries: 10000
        Average number of queries per client: 1

对不同存储引擎的性能进行对比:

[root@ecs-98c4 ~]# mysqlslap -a --concurrency=1000,5000 --number-of-queries 5000 --iterations=10000 --engine=myisam,innodb  -uroot -pxxxxxx

1000和5000个并发分别得到一次测试结果(Benchmark),并发数越多,执行完所有查询的时间越长。

[root@ecs-98c4 ~]# mysqlslap -a --concurrency=1000,5000 --number-of-queries 5000 --iterations=10000 -uroot -pxxxxxx

参数说明:

--auto-generate-sql, -a
自动生成测试表和数据
--auto-generate-sql-load-type=type
测试语句的类型。取值包括:read,key,write,update和mixed(默认)。
--number-char-cols=N, -x N
自动生成的测试表中包含的字符类型列数,默认1
--number-int-cols=N, -y N
自动生成的测试表中包含的数字类型列数,默认1
--number-of-queries=N
总的测试查询次数(并发客户数×每客户查询次数)
--query=name, -q
使用自定义脚本执行测试,例如可以调用自定义的存储过程或者SQL语句来执行测试。
--create-schema
测试的schema,MySQL中schema也就是database
--commit=N
执行多少条DML后提交一次
--compress, -C
如果服务器和客户端都支持压缩,则压缩传输的信息
--concurrency=N, -c N
并发量,也就是模拟多少个客户端同时执行select。可指定多个值,以逗号或者--delimiter参数指定的值作为分隔符
--engine=engine_name, -e engine_name
创建测试表所使用的存储引擎,可指定多个
--iterations=N, -i N
测试执行的迭代次数
--detach=N
执行N条语句后断开重连
--debug-info, -T
打印内存和CPU的信息
--only-print
只打印测试语句而不实际执行
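
结合上面的参数,下面给出一个压测自定义SQL的示例(仅为用法示意,其中的test库、t1表和查询语句均为假设,请按实际环境替换):

#先用--only-print检查mysqlslap实际会执行的语句,不会真正运行
mysqlslap -a --concurrency=50 --number-of-queries=1000 --only-print -uroot -p

#对自定义SQL进行压测:100并发,共执行10000次查询,迭代5次
mysqlslap --concurrency=100 --number-of-queries=10000 --iterations=5 --create-schema=test --query="select * from t1 where id=100" -uroot -p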

iostat命令参数说明

Linux系统中的 iostat是I/O statistics(输入/输出统计)的缩写,iostat工具将对系统的磁盘操作活动进行监视。它的特点是汇报磁盘活动统计情况,同时也会汇报出CPU使用情况。同vmstat一样,iostat也有一个弱点,就是它不能对某个进程进行深入分析,仅对系统的整体情况进行分析。iostat属于sysstat软件包。
用yum install sysstat 直接安装。

1.命令格式:
iostat [参数] [时间] [次数]

2.命令功能:
通过iostat可以方便地查看CPU、网卡、tty设备、磁盘、CD-ROM等设备的活动情况和负载信息。

3.命令参数:

-C 显示CPU使用情况
-d 显示磁盘使用情况
-k 以 KB 为单位显示
-m 以 M 为单位显示
-N 显示磁盘阵列(LVM) 信息
-n 显示NFS 使用情况
-p[磁盘] 显示磁盘和分区的情况
-t 显示终端和CPU的信息
-x 显示详细信息
-V 显示版本信息

[root@huafadb1 ~]# iostat 1
Linux 2.6.32-642.el6.x86_64 (huafadb1) 05/14/2018 x86_64 (64 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.01    0.00    0.01    0.00    0.00   99.98

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.44         1.42         5.86     376234    1550788
dm-0              0.80         1.30         5.86     343586    1550704
dm-1              0.00         0.01         0.00       2600          0
up-1              2.93         0.74        41.03     195418   10859128
sdb               5.86         1.51        81.92     399706   21683632
up-3              2.93         0.77        40.90     204288   10824504
dm-2              0.00         0.01         0.00       3298         24

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.02    0.00    0.03    0.02    0.00   99.94
参数说明:

rrqm/s:每秒这个设备相关的读取请求有多少被Merge了(当系统调用需要读取数据的时候,VFS将请求发到各个FS,如果FS发现不同的读取请求读取的是相同Block的数据,FS会将这个请求合并Merge)
wrqm/s:每秒这个设备相关的写入请求有多少被Merge了。
rsec/s:The number of sectors read from the device per second.
wsec/s:The number of sectors written to the device per second.
rKB/s:The number of kilobytes read from the device per second.
wKB/s:The number of kilobytes written to the device per second.
avgrq-sz:平均请求扇区的大小,The average size (in sectors) of the requests that were issued to the device.
avgqu-sz:是平均请求队列的长度。毫无疑问,队列长度越短越好,The average queue length of the requests that were issued to the device.
await:每一个IO请求的处理的平均时间(单位是毫秒)。这里可以理解为IO的响应时间,一般地系统IO响应时间应该低于5ms,如果大于10ms就比较大了。
这个时间包括了队列时间和服务时间,也就是说,一般情况下,await大于svctm,它们的差值越小,则说明队列时间越短,反之差值越大,队列时间越长,说明系统出了问题。
svctm:表示平均每次设备I/O操作的服务时间(以毫秒为单位)。如果svctm的值与await很接近,表示几乎没有I/O等待,磁盘性能很好。
如果await的值远高于svctm的值,则表示I/O队列等待太长,系统上运行的应用程序将变慢。
%util: 在统计时间内所有处理IO时间,除以总共统计时间。例如,如果统计间隔1秒,该设备有0.8秒在处理IO,而0.2秒闲置,那么该设备的%util = 0.8/1 = 80%,
所以该参数暗示了设备的繁忙程度,一般地,如果该参数是100%表示磁盘设备已经接近满负荷运行了(当然如果是多磁盘,即使%util是100%,因为磁盘的并发能力,所以磁盘使用未必就到了瓶颈)。
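
上面提到的rrqm/s、await、svctm、%util等字段需要加-x选项才会输出,常见用法示例如下(sda为示例设备名,按实际环境替换):

iostat -dxk 2 5        #以KB为单位,每2秒输出一次扩展的磁盘统计,共采样5次
iostat -xk -p sda 2 5  #只观察sda及其各个分区的情况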

使用FIO测试云主机IOPS及写入读取速度

先安装fio工具:

yum install fio -y

fio参数说明:

filename=/dev/emcpowerb 支持文件系统或者裸设备,-filename=/dev/sda2或-filename=/dev/sdb
direct=1                 测试过程绕过机器自带的buffer,使测试结果更真实
rw=randread              测试随机读的I/O
rw=randwrite             测试随机写的I/O
rw=randrw                测试随机混合写和读的I/O
rw=read                  测试顺序读的I/O
rw=write                 测试顺序写的I/O
rw=rw                    测试顺序混合写和读的I/O
bs=4k                    单次io的块文件大小为4k
bsrange=512-2048         同上,指定数据块的大小范围
size=5g                  本次的测试文件大小为5g,以每次4k的io进行测试
numjobs=30               本次的测试线程为30
runtime=1000             测试时间为1000秒,如果不写则一直将5g文件分4k每次写完为止
ioengine=psync           io引擎使用psync方式,如果要使用libaio引擎,需要yum install libaio-devel包
rwmixwrite=30            在混合读写的模式下,写占30%
group_reporting          关于显示结果的,汇总每个进程的信息
此外
lockmem=1g               只使用1g内存进行测试
zero_buffers             用0初始化系统buffer
nrfiles=8                每个进程生成文件的数量
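
结合上述参数,下面是一个混合读写测试的草稿示例(仅为用法示意,文件路径/tmp/fio.test及各参数取值均为假设;对裸设备或分区做写测试会破坏数据,务必谨慎):

#70%读30%写的随机混合测试,块大小在512B到4KB之间随机
fio -filename=/tmp/fio.test -direct=1 -rw=randrw -rwmixwrite=30 -bsrange=512-4k -size=1G -numjobs=4 -runtime=60 -ioengine=psync -group_reporting -name=mix_test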

测试命令(创建100G容量大小的文件)

fio -direct=1 -iodepth=128 -rw=write -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=1000 -group_reporting -name=test -filename=/data/test111
运行结果:
test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
fio-2.0.13
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 102400MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/129.3M/0K /s] [0 /33.1K/0  iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=594: Mon May 14 10:27:54 2018
  write: io=102400MB, bw=129763KB/s, iops=32440 , runt=808070msec
    slat (usec): min=0 , max=80322 , avg=12.06, stdev=24.73
    clat (usec): min=222 , max=254410 , avg=3932.47, stdev=5007.90
     lat (usec): min=622 , max=254416 , avg=3944.82, stdev=5007.26
    clat percentiles (usec):
     |  1.00th=[ 1288],  5.00th=[ 1560], 10.00th=[ 1736], 20.00th=[ 1992],
     | 30.00th=[ 2224], 40.00th=[ 2480], 50.00th=[ 2736], 60.00th=[ 3088],
     | 70.00th=[ 3536], 80.00th=[ 4320], 90.00th=[ 6624], 95.00th=[ 9664],
     | 99.00th=[21120], 99.50th=[37632], 99.90th=[54528], 99.95th=[78336],
     | 99.99th=[177152]
    bw (KB/s)  : min=20720, max=215424, per=100.00%, avg=129801.50, stdev=19878.75
    lat (usec) : 250=0.01%, 750=0.01%, 1000=0.09%
    lat (msec) : 2=20.47%, 4=56.01%, 10=18.78%, 20=3.20%, 50=1.33%
    lat (msec) : 100=0.09%, 250=0.02%, 500=0.01%
  cpu          : usr=4.88%, sys=36.64%, ctx=9409837, majf=0, minf=23
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued    : total=r=0/w=26214400/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=102400MB, aggrb=129763KB/s, minb=129763KB/s, maxb=129763KB/s, mint=808070msec, maxt=808070msec

Disk stats (read/write):
  vdb: ios=0/26203032, merge=0/3962, ticks=0/90681446, in_queue=90672667, util=100.00%

100%随机,100%读,4K

[root@test-db data]# fio -filename=/dev/vdb1 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=4k -size=100G -numjobs=50 -runtime=180 -group_reporting -name=rand_100read_4k
运行结果:
rand_100read_4k: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
rand_100read_4k: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.0.13
Starting 50 threads
Jobs: 50 (f=50): [rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr] [100.0% done] [47544K/0K/0K /s] [11.9K/0 /0  iops] [eta 00m:00s]
rand_100read_4k: (groupid=0, jobs=50): err= 0: pid=25884: Mon May 14 14:59:44 2018
  read : io=8454.3MB, bw=48091KB/s, iops=12022 , runt=180018msec
    clat (usec): min=318 , max=167471 , avg=4156.60, stdev=8363.94
     lat (usec): min=318 , max=167472 , avg=4156.89, stdev=8363.95
    clat percentiles (usec):
     |  1.00th=[ 1032],  5.00th=[ 1176], 10.00th=[ 1240], 20.00th=[ 1352],
     | 30.00th=[ 1464], 40.00th=[ 1688], 50.00th=[ 1832], 60.00th=[ 1944],
     | 70.00th=[ 2064], 80.00th=[ 2288], 90.00th=[15808], 95.00th=[18560],
     | 99.00th=[20608], 99.50th=[39168], 99.90th=[134144], 99.95th=[144384],
     | 99.99th=[156672]
    bw (KB/s)  : min=  231, max= 7248, per=2.00%, avg=961.78, stdev=384.46
    lat (usec) : 500=0.05%, 750=0.20%, 1000=0.51%
    lat (msec) : 2=64.37%, 4=19.62%, 10=2.88%, 20=10.62%, 50=1.52%
    lat (msec) : 100=0.02%, 250=0.23%
  cpu          : usr=0.00%, sys=0.02%, ctx=1813752, majf=18446744073709550866, minf=18446744073698419008
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2164296/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=8454.3MB, aggrb=48090KB/s, minb=48090KB/s, maxb=48090KB/s, mint=180018msec, maxt=180018msec

Disk stats (read/write):
  vdb: ios=2163559/42, merge=23/10, ticks=8925036/223, in_queue=8922540, util=99.94%

100%随机,100%写, 4K

[root@test-db data]# fio -filename=/dev/vdb1 -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=4k -size=100G -numjobs=50 -runtime=180 -group_reporting -name=rand_100write_4k
运行结果:
rand_100write_4k: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
rand_100write_4k: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.0.13
Starting 50 threads
Jobs: 50 (f=50): [wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww] [100.0% done] [0K/47996K/0K /s] [0 /11.1K/0  iops] [eta 00m:00s]
rand_100write_4k: (groupid=0, jobs=50): err= 0: pid=25963: Mon May 14 15:08:43 2018
  write: io=8445.5MB, bw=48039KB/s, iops=12009 , runt=180024msec
    clat (usec): min=558 , max=248717 , avg=4160.45, stdev=8315.33
     lat (usec): min=559 , max=248717 , avg=4161.04, stdev=8315.34
    clat percentiles (usec):
     |  1.00th=[ 1256],  5.00th=[ 1528], 10.00th=[ 1656], 20.00th=[ 1816],
     | 30.00th=[ 1928], 40.00th=[ 2024], 50.00th=[ 2128], 60.00th=[ 2224],
     | 70.00th=[ 2384], 80.00th=[ 2704], 90.00th=[ 8768], 95.00th=[17792],
     | 99.00th=[24448], 99.50th=[38656], 99.90th=[130560], 99.95th=[191488],
     | 99.99th=[226304]
    bw (KB/s)  : min=  220, max= 4272, per=2.00%, avg=960.02, stdev=343.97
    lat (usec) : 750=0.04%, 1000=0.28%
    lat (msec) : 2=36.58%, 4=48.13%, 10=5.45%, 20=6.74%, 50=2.59%
    lat (msec) : 100=0.05%, 250=0.14%
  cpu          : usr=0.00%, sys=0.08%, ctx=1907994, majf=0, minf=18446744073699280913
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2162044/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=8445.5MB, aggrb=48039KB/s, minb=48039KB/s, maxb=48039KB/s, mint=180024msec, maxt=180024msec

Disk stats (read/write):
  vdb: ios=148/2158380, merge=22/10, ticks=256/8928184, in_queue=8926408, util=99.96%

100%顺序,100%读 ,4K

fio -filename=/dev/vdb1 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=4k -size=100G -numjobs=50 -runtime=180 -group_reporting -name=sqe_100read_4k
运行结果:
sqe_100read_4k: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.0.13
Starting 50 threads
Jobs: 50 (f=50): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [100.0% done] [59996K/0K/0K /s] [14.1K/0 /0  iops] [eta 00m:00s]
sqe_100read_4k: (groupid=0, jobs=50): err= 0: pid=26047: Mon May 14 15:13:02 2018
  read : io=10599MB, bw=60295KB/s, iops=15073 , runt=180006msec
    clat (usec): min=253 , max=550591 , avg=3315.70, stdev=26406.30
     lat (usec): min=254 , max=550591 , avg=3315.95, stdev=26406.30
    clat percentiles (usec):
     |  1.00th=[ 1176],  5.00th=[ 1240], 10.00th=[ 1256], 20.00th=[ 1288],
     | 30.00th=[ 1336], 40.00th=[ 1368], 50.00th=[ 1416], 60.00th=[ 1480],
     | 70.00th=[ 1528], 80.00th=[ 1592], 90.00th=[ 1736], 95.00th=[ 2800],
     | 99.00th=[17792], 99.50th=[18816], 99.90th=[522240], 99.95th=[528384],
     | 99.99th=[536576]
    bw (KB/s)  : min=   15, max= 2475, per=2.00%, avg=1206.47, stdev=634.32
    lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04%
    lat (msec) : 2=92.64%, 4=3.87%, 10=1.05%, 20=2.03%, 50=0.01%
    lat (msec) : 250=0.01%, 500=0.17%, 750=0.15%
  cpu          : usr=0.05%, sys=0.23%, ctx=2697397, majf=0, minf=18446744073708900853
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2713362/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=10599MB, aggrb=60294KB/s, minb=60294KB/s, maxb=60294KB/s, mint=180006msec, maxt=180006msec

Disk stats (read/write):
  vdb: ios=2711304/55, merge=1805/10, ticks=8899022/202, in_queue=8898860, util=100.00%

100%顺序,100%写 ,4K

fio -filename=/dev/vdb1 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=100G -numjobs=50 -runtime=180 -group_reporting -name=sqe_100write_4k
运行结果:
sqe_100write_4k: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
sqe_100write_4k: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.0.13
Starting 50 threads
Jobs: 50 (f=50): [WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW] [100.0% done] [0K/55788K/0K /s] [0 /13.1K/0  iops] [eta 00m:00s]
sqe_100write_4k: (groupid=0, jobs=50): err= 0: pid=26100: Mon May 14 15:17:24 2018
  write: io=10002MB, bw=56896KB/s, iops=14224 , runt=180019msec
    clat (usec): min=583 , max=457330 , avg=3513.01, stdev=9958.13
     lat (usec): min=584 , max=457331 , avg=3513.60, stdev=9958.13
    clat percentiles (usec):
     |  1.00th=[ 1224],  5.00th=[ 1384], 10.00th=[ 1480], 20.00th=[ 1592],
     | 30.00th=[ 1672], 40.00th=[ 1752], 50.00th=[ 1832], 60.00th=[ 1928],
     | 70.00th=[ 2096], 80.00th=[ 2480], 90.00th=[ 6368], 95.00th=[12480],
     | 99.00th=[20608], 99.50th=[37120], 99.90th=[166912], 99.95th=[268288],
     | 99.99th=[346112]
    bw (KB/s)  : min=  152, max= 2432, per=2.01%, avg=1141.03, stdev=401.10
    lat (usec) : 750=0.01%, 1000=0.11%
    lat (msec) : 2=65.06%, 4=21.62%, 10=7.36%, 20=4.40%, 50=1.24%
    lat (msec) : 100=0.08%, 250=0.06%, 500=0.07%
  cpu          : usr=0.05%, sys=0.38%, ctx=2547790, majf=0, minf=18446744073708899536
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2560604/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=10002MB, aggrb=56896KB/s, minb=56896KB/s, maxb=56896KB/s, mint=180019msec, maxt=180019msec

Disk stats (read/write):
  vdb: ios=63/2558949, merge=0/275, ticks=12/8897256, in_queue=8895411, util=100.00%

100%随机,70%读,30%写 4K

fio -filename=/dev/vdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=4k -size=100G -numjobs=50 -runtime=180 -group_reporting -name=randrw_70read_4k
运行结果:
randrw_70read_4k: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
randrw_70read_4k: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.0.13
Starting 50 threads
Jobs: 50 (f=50): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [48012K/20800K/0K /s] [12.3K/5200 /0  iops] [eta 00m:00s]
randrw_70read_4k: (groupid=0, jobs=50): err= 0: pid=26162: Mon May 14 15:29:18 2018
  read : io=8430.4MB, bw=47954KB/s, iops=11988 , runt=180020msec
    clat (usec): min=304 , max=177969 , avg=3045.60, stdev=5103.08
     lat (usec): min=304 , max=177970 , avg=3045.87, stdev=5103.08
    clat percentiles (usec):
     |  1.00th=[  860],  5.00th=[ 1048], 10.00th=[ 1160], 20.00th=[ 1320],
     | 30.00th=[ 1448], 40.00th=[ 1560], 50.00th=[ 1656], 60.00th=[ 1768],
     | 70.00th=[ 1896], 80.00th=[ 2128], 90.00th=[ 4384], 95.00th=[17792],
     | 99.00th=[20352], 99.50th=[28544], 99.90th=[41216], 99.95th=[57088],
     | 99.99th=[127488]
    bw (KB/s)  : min=  206, max= 4424, per=2.00%, avg=959.77, stdev=408.18
  write: io=3613.8MB, bw=20556KB/s, iops=5138 , runt=180020msec
    clat (usec): min=569 , max=157336 , avg=2617.23, stdev=2821.20
     lat (usec): min=570 , max=157337 , avg=2617.81, stdev=2821.20
    clat percentiles (usec):
     |  1.00th=[ 1096],  5.00th=[ 1432], 10.00th=[ 1592], 20.00th=[ 1768],
     | 30.00th=[ 1912], 40.00th=[ 2040], 50.00th=[ 2160], 60.00th=[ 2288],
     | 70.00th=[ 2416], 80.00th=[ 2576], 90.00th=[ 2864], 95.00th=[ 3472],
     | 99.00th=[19328], 99.50th=[20352], 99.90th=[22400], 99.95th=[37632],
     | 99.99th=[52992]
    bw (KB/s)  : min=   62, max= 1888, per=2.00%, avg=411.23, stdev=178.61
    lat (usec) : 500=0.06%, 750=0.22%, 1000=2.28%
    lat (msec) : 2=61.13%, 4=27.68%, 10=3.02%, 20=4.48%, 50=1.07%
    lat (msec) : 100=0.04%, 250=0.01%
  cpu          : usr=0.01%, sys=0.08%, ctx=2743663, majf=0, minf=18446744073698904078
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2158165/w=925122/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=8430.4MB, aggrb=47953KB/s, minb=47953KB/s, maxb=47953KB/s, mint=180020msec, maxt=180020msec
  WRITE: io=3613.8MB, aggrb=20555KB/s, minb=20555KB/s, maxb=20555KB/s, mint=180020msec, maxt=180020msec

Disk stats (read/write):
  vdb: ios=2154562/923535, merge=0/0, ticks=6503566/2394699, in_queue=8896553, util=99.93%

执行结果说明:

io=执行了多少M的IO

bw=平均IO带宽
iops=IOPS
runt=线程运行时间
slat=提交延迟
clat=完成延迟
lat=响应时间
bw=带宽
cpu=利用率
IO depths=io队列
IO submit=单个IO提交要提交的IO数
IO complete=Like the above submit number, but for completions instead.
IO issued=The number of read/write requests issued, and how many of them were short.
IO latencies=IO完成延迟的分布

io=总共执行了多少size的IO
aggrb=group总带宽
minb=最小平均带宽
maxb=最大平均带宽
mint=group中线程的最短运行时间
maxt=group中线程的最长运行时间

ios=所有group总共执行的IO数
merge=总共发生的IO合并数
ticks=Number of ticks we kept the disk busy(磁盘保持忙碌的tick数)
io_queue=花费在队列上的总共时间
util=磁盘利用率

Nginx请求报Not Allowed 405解决方法

nginx默认不允许向静态文件提交POST方式的请求,否则会返回“HTTP/1.1 405 Method not allowed”错误。
解决方法有三种:
一、重定向405错误码到200
在nginx server{}里面添加以下内容,root为站点的根目录

   location ~ (.*\.json) {
        root  html;
        error_page 405 =200 $1;
   }

最后reload nginx即可。
二、转换静态文件接收的POST请求到GET方法

upstream static80 {
    server localhost:80;
}

server {
    listen 80;
    ...

    error_page 405 =200 @405;
    location @405 {
        root  html;
        proxy_method GET;
        proxy_pass http://static80;
    }
}

三、安装编译的时候修改源码(不推荐此方法)
源码文件位于/nginx源码目录/src/http/modules/ngx_http_static_module.c,找到如下代码:

if (r->method & NGX_HTTP_POST) {
     return NGX_HTTP_NOT_ALLOWED;
}

整段注释掉,然后重新编译make(不需要make install),把编译生成的nginx二进制文件覆盖sbin下的nginx文件,重启nginx即可。
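
无论采用哪种方法,修改并reload后都可以用curl模拟POST请求进行验证(URL为示例):

#向静态文件发起POST请求,观察返回的状态码
curl -i -X POST http://127.0.0.1/test.json -d 'k=v'
#修复前返回 HTTP/1.1 405 Not Allowed,修复后应返回200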

ORACLE内核参数说明

服务器为16核16G虚拟机配置,oracle11gR2推荐的参数设置为:

kernel.shmmax = 4294967296
//公式:4G*1024*1024*1024=4294967296(字节) 
//表示最大共享内存,如果小的话可以按实际情况而定(单位:字节) 

kernel.shmall = 2097152
//公式:8G*1024*1024/4K = 2097152(页) 
//表示系统可以使用的共享内存总量(单位:页),一般设置为物理内存的一半

kernel.shmmni = 4096
//表示整个系统共享内存段的最大数目,一般固定为4096

net.ipv4.ip_local_port_range = 9000 65500
//ip_local_port_range表示端口的范围,为指定的内容 

net.core.wmem_max = 1048576
//最大的TCP数据发送窗口大小(字节)

kernel.sem = 250 32000 100 128
//4个参数依次是SEMMSL:每个信号量集的最大信号量数,SEMMNS:系统信号量最大数,SEMOPM:每次semop系统调用的最大操作数,SEMMNI:系统信号量集的最大数目。这4个参数为固定的推荐值

fs.file-max = 6815744
//系统最大文件句柄数,Oracle 11gR2推荐设置为6815744

net.core.rmem_default = 262144
//默认的TCP数据接收窗口大小(字节)

net.core.wmem_default = 262144
//默认的TCP数据发送窗口大小(字节)

net.core.rmem_max = 4194304
//最大的TCP数据接收窗口大小(字节)

fs.aio-max-nr = 1048576
//aio最大值
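
以上参数写入/etc/sysctl.conf后,可以用sysctl命令使其立即生效并验证:

sysctl -p                                    #加载/etc/sysctl.conf使参数生效
sysctl kernel.shmmax kernel.sem fs.file-max  #抽查个别参数是否已生效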

Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set.

CentOS7.4安装Oracle11GR2的时候,执行 ./runInstaller 安装时报错:

Checking Temp space: must be greater than 120 MB.   Actual 179056 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 15359 MB    Passed
Checking monitor: must be configured to display at least 256 colors    Failed <<<<
    >>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set.

Some requirement checks failed. You must fulfill these requirements before

continuing with the installation,Continue? (y/n) [n] 

>>> Ignoring required pre-requisite failures. Continuing...
解决方法:

使用root登陆VNC窗口,打开终端:

sh-4.2# su -l root    #切换到root账号下
xhost +SI:localuser:oracle   #执行xhost
su - oracle  #切换oracle
export DISPLAY=:1   #设置DISPLAY
./runInstaller  #安装即可

使用gotop查看系统负载情况

Gotop 是一个 TUI 图形活动监视器,使用 Go 语言编写。它是完全免费、开源的,受到了 gtop 和 vtop 的启发。
在此简要的指南中,我们将讨论如何安装和使用 Gotop 来监视 Linux 系统的活动。
Gotop 是用 Go 编写的,所以需要先安装 Go 语言环境。
安装 Go 之后,使用以下命令下载最新的 Gotop 二进制文件。
安装:

sh -c "$(curl https://raw.githubusercontent.com/cjbassi/gotop/master/download.sh)"

将下载的二进制文件移动到您的 $PATH 中:

cp gotop /usr/local/bin

赋予执行权限

chmod +x /usr/local/bin/gotop

从终端直接运行gotop命令即可:
命令参数:

c – CPU
m – 内存
p – PID

对于进程浏览,请使用以下键。

上/下 箭头或者 j/k 键用于上移下移。
Ctrl-d 和 Ctrl-u – 上移和下移半页。
Ctrl-f 和 Ctrl-b – 上移和下移整页。
gg 和 G – 跳转顶部和底部。

按下 TAB 切换进程分组。要杀死选定的进程或进程组,请输入 dd。要选择一个进程,只需点击它。要向下/向上滚动,请使用鼠标滚动按钮。要放大和缩小 CPU 和内存的图形,请使用 h 和 l。要显示帮助菜单,只需按 ?。

via: https://www.ostechnix.com/gotop-yet-another-tui-graphical-activity-monitor-written-in-go/
github: https://github.com/cjbassi/gotop

mysqldump导出报错-Got error: 1449错误解决办法

mysqldump -uroot -pPasswd DBName > /home/lsf/DB_Backup.sql

报错,显示:

Got error: 1449: The user specified as a definer ('xxx'@'') does not exist when using LOCK TABLES

或者直接报:

the user specified as a definer ('xxx'@'') does not exist

解决办法:

给xxx用户添加一个对全部host都可以访问的权限:

mysql -uroot -pPasswd
mysql >grant all privileges on *.* to xxx@"%" identified by "Passwd";
mysql >flush privileges;
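
如果不想给该用户开放全部权限,也可以先在information_schema中找出引用了这个definer的对象,再针对性处理,排查思路大致如下('xxx'替换为报错中的用户名):

#查找以'xxx'为definer的视图、存储过程/函数和触发器
mysql -uroot -p -e "
SELECT TABLE_SCHEMA, TABLE_NAME, DEFINER FROM information_schema.VIEWS WHERE DEFINER LIKE 'xxx@%';
SELECT ROUTINE_SCHEMA, ROUTINE_NAME, DEFINER FROM information_schema.ROUTINES WHERE DEFINER LIKE 'xxx@%';
SELECT TRIGGER_SCHEMA, TRIGGER_NAME, DEFINER FROM information_schema.TRIGGERS WHERE DEFINER LIKE 'xxx@%';"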

oem启动报错解决方法及dbconsole的重新配置

Win10下安装的ORACLE11g R2,修改了主机名重启笔记本以后,oem启动报错:

C:\app\ice\product\11.2.0\dbhome_1\BIN>emctl start dbconsole
Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.

解决方法:
根据提示先设置ORACLE_UNQNAME,然后再尝试启动

SET ORACLE_UNQNAME=ORCL(orcl为SID)
emctl start dbconsole

如果启动成功,即可使用https://localhost:1158/em登录oem管理面板了;
如果不行可以尝试重新创建DBCONSOLE:

emca -config dbcontrol db -repos recreate
根据提示,先输入SID,再输入Y继续;
输入端口1521,输入SYS密码,输入DBSNMP密码,输入SYSMAN 密码,输入Y继续
完成。
C:\app\ice\product\11.2.0\dbhome_1\BIN>emca -config dbcontrol db -repos recreate

EMCA 开始于 2018-5-6 16:21:22
EM Configuration Assistant, 11.2.0.0.2 正式版
版权所有 (c) 2003, 2005, Oracle。保留所有权利。

输入以下信息:
数据库 SID: unixsodb
监听程序端口号: 1521
监听程序 ORACLE_HOME [ c:\app\ice\product\11.2.0\dbhome_1 ]:
SYS 用户的口令:
DBSNMP 用户的口令:
SYSMAN 用户的口令:
通知的电子邮件地址 (可选):
通知的发件 (SMTP) 服务器 (可选):
-----------------------------------------------------------------

已指定以下设置

数据库 ORACLE_HOME ................ c:\app\ice\product\11.2.0\dbhome_1

本地主机名 ................ ice110
监听程序 ORACLE_HOME ................ c:\app\ice\product\11.2.0\dbhome_1
监听程序端口号 ................ 1521
数据库 SID ................ unixsodb
通知的电子邮件地址 ...............
通知的发件 (SMTP) 服务器 ...............

-----------------------------------------------------------------
是否继续? [是(Y)/否(N)]: y
2018-5-6 16:22:24 oracle.sysman.emcp.EMConfig perform
信息: 正在将此操作记录到 c:\app\ice\cfgtoollogs\emca\unixsodb\emca_2018_05_06_16_21_22.log。
2018-5-6 16:22:25 oracle.sysman.emcp.EMReposConfig invoke
信息: 正在删除 EM 资料档案库 (此操作可能需要一段时间)...
2018-5-6 16:23:06 oracle.sysman.emcp.EMReposConfig invoke
信息: 已成功删除资料档案库
2018-5-6 16:23:27 oracle.sysman.emcp.EMReposConfig createRepository
信息: 正在创建 EM 资料档案库 (此操作可能需要一段时间)...
2018-5-6 16:26:00 oracle.sysman.emcp.EMReposConfig invoke
信息: 已成功创建资料档案库
2018-5-6 16:26:02 oracle.sysman.emcp.EMReposConfig uploadConfigDataToRepository
信息: 正在将配置数据上载到 EM 资料档案库 (此操作可能需要一段时间)...
2018-5-6 16:26:23 oracle.sysman.emcp.EMReposConfig invoke
信息: 已成功上载配置数据
2018-5-6 16:26:32 oracle.sysman.emcp.util.DBControlUtil configureSoftwareLib
信息: 软件库已配置成功。
2018-5-6 16:26:32 oracle.sysman.emcp.EMDBPostConfig configureSoftwareLibrary
信息: 正在部署预配档案...
2018-5-6 16:26:58 oracle.sysman.emcp.EMDBPostConfig configureSoftwareLibrary
信息: 预配档案部署成功。
2018-5-6 16:26:58 oracle.sysman.emcp.util.DBControlUtil secureDBConsole
信息: 正在保护 Database Control (此操作可能需要一段时间)...
2018-5-6 16:27:04 oracle.sysman.emcp.util.DBControlUtil secureDBConsole
信息: 已成功保护 Database Control。
2018-5-6 16:27:04 oracle.sysman.emcp.util.DBControlUtil startOMS
信息: 正在启动 Database Control (此操作可能需要一段时间)...
2018-5-6 16:27:50 oracle.sysman.emcp.EMDBPostConfig performConfiguration
信息: 已成功启动 Database Control
2018-5-6 16:27:50 oracle.sysman.emcp.EMDBPostConfig performConfiguration
信息: >>>>>>>>>>> Database Control URL 为 https://ice110:5500/em <<<<<<<<<<<
2018-5-6 16:27:52 oracle.sysman.emcp.EMDBPostConfig invoke
警告:
************************  WARNING  ************************

管理资料档案库已置于安全模式下, 在此模式下将对 Enterprise Manager 数据进行加密。加密密钥已放置在文件 c:/app/ice/product/11.2.0/dbhome_1/ice110_unixsodb/sysman/config/emkey.ora 中。请务必备份此文件, 因为如果此文件丢失, 则加密数据将不可用。

***********************************************************
已成功完成 Enterprise Manager 的配置
EMCA 结束于 2018-5-6 16:27:52

C:\app\ice\product\11.2.0\dbhome_1\BIN>

查看状态:

emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ice110:5500/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory c:\app\ice\product\11.2.0\dbhome_1/ice110_unixsodb/sysman/log

注意重建以后oem的端口变成了5500,默认端口是1158
emca常用命令:

创建一个EM资料库
emca -repos create

重建一个EM资料库
emca -repos recreate

删除一个EM资料库
emca -repos drop

配置数据库的 Database Control
emca -config dbcontrol db

删除数据库的 Database Control配置
emca -deconfig dbcontrol db

重新配置db control的端口,默认端口在1158
emca -reconfig ports
emca -reconfig ports -dbcontrol_http_port 1160
emca -reconfig ports -agent_port 3940

启动EM console服务
SET ORACLE_UNQNAME=ORCL(orcl为SID)
emctl start dbconsole

停止EM console服务
emctl stop dbconsole

查看EM console服务的状态
emctl status dbconsole

配置dbconsole的步骤
emca -repos create
emca -config dbcontrol db
emctl start dbconsole

重新配置dbconsole的步骤
emca -repos drop
emca -repos create
emca -config dbcontrol db
emctl start dbconsole

SVN Skipped 'xxx' -- Node remains in conflict svn文件冲突解决方法

开发在执行svn up更新静态文件的时候报错,

# svn up
Updating '.':
Skipped 'xxx' -- Node remains in conflict
At revision 2635.
Summary of conflicts:
  Skipped paths: 1

xxx为文件名

处理方式:

svn remove --force filename
svn resolve --accept=working  filename
svn up

一般这样就可以了;如果还是不行,可以把目录mv掉,重新全量拉取一次。
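
“重新全量拉取”的操作大致如下(仓库地址和目录名为示例,按实际项目替换):

mv project project.bak
svn checkout http://svn.example.com/repo/project project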

Kafka主要配置文件参数详解

官方文档地址:http://kafka.apache.org/documentation.html

############################# System #############################
#唯一标识在集群中的ID,要求是正数。
broker.id=0
#服务端口,默认9092
port=9092
#监听地址,不设置则监听所有地址
host.name=debugo01

# 处理网络请求的最大线程数
num.network.threads=2
# 处理磁盘I/O的线程数
num.io.threads=8
# 一些后台线程数
background.threads = 4
# 等待IO线程处理的请求队列最大数
queued.max.requests = 500

#  socket的发送缓冲区(SO_SNDBUF)
socket.send.buffer.bytes=1048576
# socket的接收缓冲区 (SO_RCVBUF) 
socket.receive.buffer.bytes=1048576
# socket请求的最大字节数。为了防止内存溢出,message.max.bytes必然要小于socket.request.max.bytes
socket.request.max.bytes = 104857600

############################# Topic #############################
# 每个topic的分区个数,更多的partition会产生更多的segment file
num.partitions=2
# 是否允许自动创建topic ,若是false,就需要通过命令创建topic
auto.create.topics.enable =true
# 一个topic ,默认分区的replication个数 ,不能大于集群中broker的个数。
default.replication.factor =1
# 消息体的最大大小,单位是字节
message.max.bytes = 1000000

############################# ZooKeeper #############################
# Zookeeper quorum设置。如果有多个使用逗号分割
zookeeper.connect=debugo01:2181,debugo02,debugo03
# 连接zk的超时时间
zookeeper.connection.timeout.ms=1000000
# ZooKeeper集群中leader和follower之间的同步时间
zookeeper.sync.time.ms = 2000

############################# Log #############################
#日志存放目录,多个目录使用逗号分割
log.dirs=/var/log/kafka

# 当达到下面的消息数量时,会将数据flush到日志文件中。默认10000
#log.flush.interval.messages=10000
# 当达到下面的时间(ms)时,执行一次强制的flush操作。interval.ms和interval.messages无论哪个达到,都会flush。默认3000ms
#log.flush.interval.ms=1000
# 检查是否需要将日志flush的时间间隔
log.flush.scheduler.interval.ms = 3000

# 日志清理策略(delete|compact)
log.cleanup.policy = delete
# 日志保存时间 (hours|minutes),默认为7天(168小时)。超过这个时间会根据policy处理数据。bytes和minutes无论哪个先达到都会触发。
log.retention.hours=168
# 日志数据存储的最大字节数。超过这个大小会根据policy处理数据。
#log.retention.bytes=1073741824

# 控制日志segment文件的大小,超出该大小则追加到一个新的日志segment文件中(-1表示没有限制)
log.segment.bytes=536870912
# 当达到下面时间,会强制新建一个segment
log.roll.hours = 24*7
# 日志片段文件的检查周期,查看它们是否达到了删除策略的设置(log.retention.hours或log.retention.bytes)
log.retention.check.interval.ms=60000

# 是否开启压缩
log.cleaner.enable=false
# 对于压缩的日志保留的最长时间
log.cleaner.delete.retention.ms = 1 day

# 对于segment日志的索引文件大小限制
log.index.size.max.bytes = 10 * 1024 * 1024
# 索引计算的一个缓冲区,一般不需要设置。
log.index.interval.bytes = 4096

############################# replica #############################
# partition management controller 与replicas之间通讯的超时时间
controller.socket.timeout.ms = 30000
# controller-to-broker-channels消息队列的尺寸大小
controller.message.queue.size=10
# replicas响应leader的最长等待时间,若是超过这个时间,就将replicas排除在管理之外
replica.lag.time.max.ms = 10000
# 是否允许控制器关闭broker ,若是设置为true,会关闭所有在这个broker上的leader,并转移到其他broker
controlled.shutdown.enable = false
# 控制器关闭的尝试次数
controlled.shutdown.max.retries = 3
# 每次关闭尝试的时间间隔
controlled.shutdown.retry.backoff.ms = 5000

# 如果replicas落后太多,将会认为此partition的replicas已经失效。而一般情况下,因为网络延迟等原因,总会导致replicas中消息同步滞后。如果消息严重滞后,leader将认为此replicas网络延迟较大或者消息吞吐能力有限。在broker数量较少,或者网络不足的环境中,建议提高此值。
replica.lag.max.messages = 4000
#leader与replicas的socket超时时间
replica.socket.timeout.ms= 30 * 1000
# leader复制的socket缓存大小
replica.socket.receive.buffer.bytes=64 * 1024
# replicas每次获取数据的最大字节数
replica.fetch.max.bytes = 1024 * 1024
# replicas同leader之间通信的最大等待时间,失败了会重试
replica.fetch.wait.max.ms = 500
# 每一个fetch操作的最小数据尺寸,如果leader中尚未同步的数据不足此值,将会等待直到数据达到这个大小
replica.fetch.min.bytes =1
# leader中进行复制的线程数,增大这个数值会增加replica的IO
num.replica.fetchers = 1
# 每个replica将最高水位进行flush的时间间隔
replica.high.watermark.checkpoint.interval.ms = 5000
 
# 是否自动平衡broker之间的分配策略
auto.leader.rebalance.enable = false
# leader的不平衡比例,若是超过这个数值,会对分区进行重新的平衡
leader.imbalance.per.broker.percentage = 10
# 检查leader是否不平衡的时间间隔
leader.imbalance.check.interval.seconds = 300
# 客户端保留offset信息的最大空间大小
offset.metadata.max.bytes = 1024

#############################Consumer #############################
# Consumer端核心的配置是group.id、zookeeper.connect
# 决定该Consumer归属的唯一组ID,By setting the same group id multiple processes indicate that they are all part of the same consumer group.
group.id
# 消费者的ID,若是没有设置的话,会自增
consumer.id
# 一个用于跟踪调查的ID ,最好同group.id相同
client.id = <group_id>
 
# 对于zookeeper集群的指定,必须和broker使用同样的zk配置
zookeeper.connect=debugo01:2182,debugo02:2182,debugo03:2182
# zookeeper的心跳超时时间,超过这个时间就认为是无效的消费者
zookeeper.session.timeout.ms = 6000
# zookeeper的等待连接时间
zookeeper.connection.timeout.ms = 6000
# zookeeper的follower同leader的同步时间
zookeeper.sync.time.ms = 2000
# 当zookeeper中没有初始的offset时,或者超出offset上限时的处理方式 。
# smallest :重置为最小值 
# largest:重置为最大值 
# anything else:抛出异常给consumer
auto.offset.reset = largest

# socket的超时时间,实际的超时时间为max.fetch.wait + socket.timeout.ms.
socket.timeout.ms= 30 * 1000
# socket的接收缓存空间大小
socket.receive.buffer.bytes=64 * 1024
#从每个分区fetch的消息大小限制
fetch.message.max.bytes = 1024 * 1024
 
# true时,Consumer会在消费消息后将offset同步到zookeeper,这样当Consumer失败后,新的consumer就能从zookeeper获取最新的offset
auto.commit.enable = true
# 自动提交的时间间隔
auto.commit.interval.ms = 60 * 1000
 
# 用于消费的最大数量的消息块缓冲大小,每个块可以等同于fetch.message.max.bytes中数值
queued.max.message.chunks = 10

# 当有新的consumer加入到group时,将尝试reblance,将partitions的消费端迁移到新的consumer中, 该设置是尝试的次数
rebalance.max.retries = 4
# 每次reblance的时间间隔
rebalance.backoff.ms = 2000
# 每次重新选举leader的时间
refresh.leader.backoff.ms
 
# server发送到消费端的最小数据,若是不满足这个数值则会等待直到满足指定大小。默认为1表示立即接收。
fetch.min.bytes = 1
# 若是不满足fetch.min.bytes时,等待消费端请求的最长等待时间
fetch.wait.max.ms = 100
# 如果指定时间内没有新消息可用于消费,就抛出异常,默认-1表示不受限
consumer.timeout.ms = -1

#############################Producer#############################
# 核心的配置包括:
# metadata.broker.list
# request.required.acks
# producer.type
# serializer.class

# 生产者获取消息元信息(topics, partitions and replicas)的地址,配置格式是:host1:port1,host2:port2,也可以在外面设置一个vip
metadata.broker.list
 
#消息的确认模式
# 0:不保证消息的到达确认,只管发送,低延迟但是会出现消息的丢失,在某个server失败的情况下,有点像TCP
# 1:发送消息,并会等待leader收到确认后才返回,有一定的可靠性
# -1:发送消息,等待leader收到确认,并进行复制操作后,才返回,最高的可靠性
request.required.acks = 0
 
# 消息发送的最长等待时间
request.timeout.ms = 10000
# socket的缓存大小
send.buffer.bytes=100*1024
# key的序列化方式,若是没有设置,同serializer.class
key.serializer.class
# 分区的策略,默认是取模
partitioner.class=kafka.producer.DefaultPartitioner
# 消息的压缩模式,默认是none,可以有gzip和snappy
compression.codec = none
# 可以针对某些特定的topic进行压缩
compressed.topics=null
# 消息发送失败后的重试次数
message.send.max.retries = 3
# 每次失败后的间隔时间
retry.backoff.ms = 100
# 生产者定时更新topic元信息的时间间隔 ,若是设置为0,那么会在每个消息发送后都去更新数据
topic.metadata.refresh.interval.ms = 600 * 1000
# 用户随意指定,但是不能重复,主要用于跟踪记录消息
client.id=""
 
# 异步模式下缓冲数据的最大时间。例如设置为100则会集合100ms内的消息后发送,这样会提高吞吐量,但是会增加消息发送的延时
queue.buffering.max.ms = 5000
# 异步模式下缓冲的最大消息数,同上
queue.buffering.max.messages = 10000
# 异步模式下,消息进入队列的等待时间。若是设置为0,则消息不等待,如果进入不了队列,则直接被抛弃
queue.enqueue.timeout.ms = -1
# 异步模式下,每次发送的消息数,当queue.buffering.max.messages或queue.buffering.max.ms满足条件之一时producer会触发发送。
batch.num.messages=200
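
配置完成后,可以在Kafka安装目录下用自带脚本快速验证集群是否可用(以下命令基于依赖ZooKeeper的旧版本Kafka,主机名沿用上文的debugo01,topic名为示例):

#创建一个2分区、1副本的测试topic
bin/kafka-topics.sh --create --zookeeper debugo01:2181 --partitions 2 --replication-factor 1 --topic test
#查看topic详情,确认分区和leader分布
bin/kafka-topics.sh --describe --zookeeper debugo01:2181 --topic test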

MySQL创建函数-存储过程报“ERROR 1418 ”错误 解决方法

MySQL创建函数或存储过程的时候报error 1418错误:

This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

是因为log_bin_trust_function_creators参数在起作用:
当二进制日志启用后,这个变量就会启用。它控制是否可以信任存储函数创建者,不会创建写入二进制日志引起不安全事件的存储函数。
如果设置为0(默认值),用户不得创建或修改存储函数,除非它们具有除CREATE ROUTINE或ALTER ROUTINE特权之外的SUPER权限。 设置为0还强制使用DETERMINISTIC特性或READS SQL DATA或NO SQL特性声明函数的限制。 如果变量设置为1,MySQL不会对创建存储函数实施这些限制,此变量也适用于触发器的创建。

mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
1 row in set (0.00 sec)
 
mysql>  show variables like '%log_bin_trust_function_creators%';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| log_bin_trust_function_creators | OFF   |
+---------------------------------+-------+
1 row in set (0.00 sec)

如果数据库没有使用主从复制,那么就可以将参数log_bin_trust_function_creators设置为1。

mysql> set global log_bin_trust_function_creators=1;

这个动态设置的方式会在服务重启后失效,所以我们还必须在my.cnf中设置,加上

log_bin_trust_function_creators=1

这样就会永久生效。
注意:如果开启了主从复制,同时又打开了log_bin_trust_function_creators参数,虽然可以创建函数、存储过程,但可能会引起主从复制故障。
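
另一种不改全局参数的做法,是在创建函数时显式声明DETERMINISTIC、NO SQL或READS SQL DATA特性,示例如下(仅为示意,test库和函数f_demo均为假设):

mysql -uroot -p test -e "
CREATE FUNCTION f_demo(x INT) RETURNS INT
DETERMINISTIC
RETURN x * 2;"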

解决virt-manager启动管理器出错:unsupported format character

virt-manager出错,报错信息如下:

启动管理器出错:unsupported format character (0xffffffef) at index 30


系统版本:CentOS release 6.9 (Final)
解决方法如下:
先卸载0.9.0-34版本:

yum remove virt-manager

找到virt-manager-0.9.0-31的CentOS版本,安装就可以了

wget http://vault.centos.org/6.7/cr/x86_64/Packages/virt-manager-0.9.0-31.el6.x86_64.rpm
rpm -ivh virt-manager-0.9.0-31.el6.x86_64.rpm

即可解决。

使用gprecoverseg修复Segment节点

greenplum环境中测试的时候, segment节点sdw2由于硬盘空间不足,显示宕机了,重新启动的时候节点报错,启动不了;
使用gpstate -m查看节点状态显示sdw2节点失败:

[gpadmin@dw01 gpmaster]$ gpstate -m

gpstate:dw01:gpadmin-[INFO]:-Starting gpstate with args: -m
gpstate:dw01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
gpstate:dw01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12'
gpstate:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:dw01:gpadmin-[INFO]:--Current GPDB mirror list and status
gpstate:dw01:gpadmin-[INFO]:--Type = Group
gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:dw01:gpadmin-[INFO]:-   Mirror   Datadir                        Port    Status    Data Status    
gpstate:dw01:gpadmin-[WARNING]:-sdw2     /data/gpdata/gpdatam1/gpseg0   50000   Failed                   <<<<<<<<
gpstate:dw01:gpadmin-[WARNING]:-sdw2     /data/gpdata/gpdatam1/gpseg1   50001   Failed                   <<<<<<<<
gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg2   50000   Passive   Synchronized
gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg3   50001   Passive   Synchronized
gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:dw01:gpadmin-[WARNING]:-2 segment(s) configured as mirror(s) have failed

gprecoverseg参数选项

-a (不提示)
不要提示用户确认。
-B parallel_processes
并行恢复的Segment数。如果未指定,则实用程序将启动最多四个并行进程,具体取决于需要恢复多少个Segment实例。
-d master_data_directory
可选。Master主机的数据目录。如果未指定,则使用为$MASTER_DATA_DIRECTORY设置的值。
-F (完全恢复)
可选。执行活动Segment实例的完整副本以恢复出现故障的Segment。 默认情况下,仅复制Segment关闭时发生的增量更改。
-i recover_config_file
指定文件的名称以及有关失效Segment要恢复的详细信息。文件中的每一行都是以下格式。SPACE关键字表示所需空间的位置。不要添加额外的空间。
filespaceOrder=[filespace1_fsname[, filespace2_fsname[, ...]]
<failed_host_address>:<port>:<data_directory>SPACE 
<recovery_host_address>:<port>:<replication_port>:<data_directory>
[:<fselocation>:...]

恢复所有失效的Segment实例:

gprecoverseg
恢复后,可以重新平衡Greenplum数据库系统,将所有Segment重置为其首选角色;执行前先确认所有Segment已启动并完成同步,参考下面的示例。
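
gpstate -m         #确认所有Segment已启动且同步完成
gprecoverseg -r    #将Segment重置为首选角色(会触发主备角色切换)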

将任何失效的Segment实例恢复到新配置的空闲Segment主机:

$ gprecoverseg -i recover_config_file

本例使用gprecoverseg修复:

20180420_172.28.95.255038[gpadmin@dw01 pg_log]$ gprecoverseg
20180420_172.28.95.25503820180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Starting gprecoverseg with args: 
20180420_172.28.95.25503820180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20180420_172.28.95.25503820180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12'
20180420_172.28.95.25503820180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Checking if segments are ready to connect
20180420_172.28.95.25503820180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
20180420_172.28.95.25503820180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Greenplum instance recovery parameters
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Recovery type              = Standard
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Recovery 1 of 2
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Synchronization mode                        = Incremental
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance host                        = dw04
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance address                     = sdw2
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance directory                   = /data/gpdata/gpdatam1/gpseg0
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance port                        = 50000
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance replication port            = 51000
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance host               = dw03
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance address            = sdw1
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance directory          = /data/gpdata/gpdatap1/gpseg0
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance port               = 40000
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance replication port   = 41000
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Target                             = in-place
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Recovery 2 of 2
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Synchronization mode                        = Incremental
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance host                        = dw04
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance address                     = sdw2
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance directory                   = /data/gpdata/gpdatam1/gpseg1
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance port                        = 50001
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance replication port            = 51001
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance host               = dw03
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance address            = sdw1
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance directory          = /data/gpdata/gpdatap1/gpseg1
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance port               = 40001
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance replication port   = 41001
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Target                             = in-place
20180420_172.28.95.25503920180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420_172.28.95.255039
20180420_172.28.95.255039Continue with segment recovery procedure Yy|Nn (default=N):
20180420_172.28.95.255041> y
20180420_172.28.95.25504120180420:21:50:40:002098 gprecoverseg:dw01:gpadmin-[INFO]:-2 segment(s) to recover
20180420_172.28.95.25504120180420:21:50:40:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Ensuring 2 failed segment(s) are stopped
20180420_172.28.95.255042 
20180420_172.28.95.25504220180420:21:50:41:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Ensuring that shared memory is cleaned up for stopped segments
20180420_172.28.95.255047updating flat files
20180420_172.28.95.25504720180420:21:50:46:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating configuration with new mirrors
20180420_172.28.95.25504720180420:21:50:46:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating mirrors
20180420_172.28.95.255048. 
20180420_172.28.95.25504820180420:21:50:47:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Starting mirrors
20180420_172.28.95.25504820180420:21:50:48:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
20180420_172.28.95.255052.... 
20180420_172.28.95.25505220180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Process results...
20180420_172.28.95.25505220180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating configuration to mark mirrors up
20180420_172.28.95.25505220180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating primaries
20180420_172.28.95.25505220180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Commencing parallel primary conversion of 2 segments, please wait...
20180420_172.28.95.255054.. 
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Process results...
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Done updating primaries
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-******************************************************************
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating segments for resynchronization is completed.
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-For segments updated successfully, resynchronization will continue in the background.
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Use  gpstate -s  to check the resynchronization progress.
20180420_172.28.95.25505420180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-******************************************************************

修复完成查看节点状态:

20180420_172.28.95.255110[gpadmin@dw01 pg_log]$ gpstate -m
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-Starting gpstate with args: -m
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12'
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--Current GPDB mirror list and status
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--Type = Group
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   Mirror   Datadir                        Port    Status    Data Status       
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw2     /data/gpdata/gpdatam1/gpseg0   50000   Passive   Resynchronizing
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw2     /data/gpdata/gpdatam1/gpseg1   50001   Passive   Resynchronizing
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg2   50000   Passive   Synchronized
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg3   50001   Passive   Synchronized
20180420_172.28.95.25511120180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------

节点全部启动,sdw2节点正在重新同步;同步耗时根据数据量大小而定,一般几分钟即可完成。
参考文档:
https://gp-docs-cn.github.io/docs/utility_guide/admin_utilities/gprecoverseg.html
http://mysql.taobao.org/monthly/2016/04/03/

jira7.x饼图中文乱码解决

jira7.x安装成功后,在导出选择饼图的时候,中文字符不显示。
这是因为系统缺少中文字体,安装字体以后,重启jira即可

yum -y install fonts-chinese fonts-ISO8859*

如果yum提示找不到,直接安装fonts-chinese-3.02-12.el5.noarch.rpm和fonts-ISO8859-2-75dpi-1.0-17.1.noarch.rpm包,随后重启jira即可:

rpm -ivh --force --nodeps fonts*.rpm

MySQL忽略区分大小写

在MySQL中,数据库对应数据目录中的目录。数据库中的每个表至少对应数据库目录中的一个文件(也可能是多个,取决于存储引擎)。因此,所使用操作系统的大小写敏感性决定了数据库名和表名的大小写敏感性。

在大多数Unix中数据库名和表名对大小写敏感,而在Windows中对大小写不敏感。一个显著的例外情况是Mac OS X,它基于Unix但使用默认文件系统类型(HFS+),对大小写不敏感。然而,Mac OS X也支持UFS卷,该卷对大小写敏感,就像Unix一样。

变量lower_case_file_system说明是否数据目录所在的文件系统对文件名的大小写敏感。

ON说明对文件名的大小写不敏感,OFF表示敏感。

一般线上不建议忽略大小写,仅在一些特殊场景下适用。

大小写区分规则

    linux下:
    数据库名与表名是严格区分大小写的;
    表的别名是严格区分大小写的;
    列名与列的别名在所有的情况下均是忽略大小写的;
    变量名也是严格区分大小写的;
    windows下:
    都不区分大小写
    Mac OS下(非UFS卷):
    都不区分大小写

MySQL默认是区分大小写的:

mysql> show variables like 'lower%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| lower_case_file_system | OFF   |
| lower_case_table_names | 0     |
+------------------------+-------+
2 rows in set (0.01 sec)
lower_case_table_names = 0时,mysql会根据表名直接操作,大小写敏感。 
lower_case_table_names = 1时,mysql会先把表名转为小写,再执行操作。 

由大小写敏感转换为不敏感方法:

    如果原来所建立库及表都是对大小写敏感的,想要转换为对大小写不敏感,主要需要进行如下3步:
    1.将数据库数据通过mysqldump导出。
    2.在my.cnf中更改lower_case_tables_name = 1,并重启mysql数据库。
    3.将导出的数据导入mysql数据库。

mysqldump导出所有库,修改my.cnf, 在[mysqld]下加入一行:

lower_case_table_names=1
/etc/init.d/mysql restart

重新查询

mysql> show variables like 'lower%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| lower_case_file_system | OFF   |
| lower_case_table_names | 1     |
+------------------------+-------+
2 rows in set (0.01 sec)

source进刚才备份的数据;

MySQL错误ERROR 1786 (HY000)解决

业务上需要支持create table XXX as select * from XXX; 这种创建表的语法,但是MySQL5.7.x版本里面gtid是开启的,会报错

ERROR 1786 (HY000):Statement violates GTID consistency: CREATE TABLE ... SELECT.

官方说明:https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-restrictions.html

CREATE TABLE ... SELECT statements.  CREATE TABLE ... SELECT is not safe for statement-based replication. When using row-based replication, this statement is actually logged as two separate events—one for the creation of the table, and another for the insertion of rows from the source table into the new table just created. When this statement is executed within a transaction, it is possible in some cases for these two events to receive the same transaction identifier, which means that the transaction containing the inserts is skipped by the slave. Therefore, CREATE TABLE ... SELECT is not supported when using GTID-based replication.

解决办法是关闭GTID模式:
my.cnf里面修改参数为:

gtid_mode = OFF
enforce_gtid_consistency = OFF

重启MySQL,再次创建成功:

mysql> show variables like '%gtid_mode%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| gtid_mode     | OFF   |
+---------------+-------+
1 row in set (0.01 sec)

mysql> show variables like '%enforce_gtid_consistency%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| enforce_gtid_consistency | OFF   |
+--------------------------+-------+
1 row in set (0.01 sec)

mysql> create table t1 as select * from BS_CONT;
Query OK, 0 rows affected (0.12 sec)
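
如果不方便关闭GTID(例如线上开启了基于GTID的复制),也可以把CREATE TABLE ... SELECT拆成两条语句来绕开这个限制(表名沿用上面的例子):

mysql> create table t1 like BS_CONT;
mysql> insert into t1 select * from BS_CONT;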

FastDFS配置参数tracker.conf、storage.conf详解

启动命令:

/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf
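
启动后可以用fdfs_monitor确认storage是否已注册到tracker(前提是/etc/fdfs/client.conf中已配置好tracker_server):

/usr/bin/fdfs_monitor /etc/fdfs/client.conf
#输出中storage的状态为ACTIVE即表示正常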

文件位置:/etc/fdfs/storage.conf

基本配置(基础配置,不考虑性能调优情况下)
group_name=group1                          #组名,指定此storage server所在的组(卷)
port=23000                                 #存储服务端口
base_path=/data/fastdfs-storage            #设置storage数据文件和日志目录,需预先创建
                                           #目录下会自动生成两个子目录:data(存储信息)和logs(日志信息)
store_path_count=1                         #存储路径个数,需要和store_path个数匹配
store_path0=/data/fastdfs-storage          #存储路径
#store_path1=/data/fastdfs-storage2
tracker_server=192.168.116.145:22122       #tracker服务器的IP地址和端口号(storage主动连接tracker_server)
                                           #如果是单机搭建,IP不要写127.0.0.1,否则启动不成功
http.server_port=8888                      #设置http端口号


storage.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled=false        false为生效,true不生效

# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configed correctly.
group_name=group1
# 指定 此 storage server 所在 组(卷)

#如果 注释或者删除这个参数,而从tracker那里获取分组信息
#  则  use_storage_id参数必须设为true 在 tracker.conf 配置文件中
#  storage_ids.conf文件必须正确配置


# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
# 绑定ip地址,空值为主机上全部地址

# if bind an address of this host when connect to other servers 
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port     存储服务端口
port=23000

# connect timeout in seconds          连接超时时间,针对socket套接字函数connect
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60
#  storage server 网络超时时间,单位为秒。发送或接收数据时,
# 如果在超时时间后还不能发送或接收数据,则本次网络通信失败

# heart beat interval in seconds
heart_beat_interval=30
# 心跳间隔时间,单位为秒 (这里是指主动向tracker server 发送心跳)

# disk usage report interval in seconds
stat_report_interval=60
# storage server向tracker server报告磁盘剩余空间的时间间隔,单位为秒

# the base path to store data and log files
base_path=/home/yuqing/fastdfs
# base_path 目录地址,根目录必须存在  子目录会自动生成 
#(注 :这里不是上传的文件存放的地址,之前是的,在某个版本后更改了)

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256
# 服务器支持的最大并发连接
# 默认为256
# 更大的值 意味着需要使用更大的内存

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB
# 发送/接收 数据的缓冲区大小
# 此参数必须大于8KB
# 设置队列结点的buffer大小。工作队列消耗的内存大小 = buff_size * max_connections
# 设置得大一些,系统整体性能会有所提升。
# 消耗的内存请不要超过系统物理内存大小。另外,对于32位系统,请注意使用到的内存不要超过3GB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1
# 接受(accept)连接的线程数,默认为1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4
# V2.0引入的这个参数,工作线程数 <=max_connections
# 此参数 处理 网络的 I/O
# 通常设置为CPU数

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true
# 磁盘是否 读写分离
# false为读写混合  true为读写分离
# 默认读写分离

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# 针对每个存储路径的读线程数,缺省值为1
# 读写混合时 此值设为0
# 读写分离时,系统中的读线程数 = disk_reader_threads * store_path_count
# 读写混合时,系统中的读写线程数 = (disk_reader_threads + disk_writer_threads) * store_path_count

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1
# 针对每个存储路径的写线程数,缺省值为1
# 读写混合时 此值设为0
# 读写分离时,系统中的写线程数 = disk_writer_threads * store_path_count
# 读写混合时,系统中的读写线程数 = (disk_reader_threads + disk_writer_threads) * store_path_count

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50
# 同步文件时,如果从binlog中没有读到要同步的文件,休眠N毫秒后重新读取。0表示不休眠,立即再次尝试读取。
# 出于CPU消耗考虑,不建议设置为0。如何希望同步尽可能快一些,可以将本参数设置得小一些,比如设置为10ms

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0
# 同步上一个文件后,再同步下一个文件的时间间隔,单位为毫秒,0表示不休眠,直接同步下一个文件

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00
# 存储每天同步的开始时间 (默认是00:00开始) 
# 一般用于避免高峰同步产生一些问题而设定

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59
# 存储每天同步的结束时间 (默认是23:59结束) 
# 一般用于避免高峰同步产生一些问题而设定

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500
# 同步完N个文件后,把storage的mark文件同步到磁盘
# 注:如果mark文件内容没有变化,则不会同步

# path(disk or mount point) count, default value is 1
store_path_count=1
# 存放文件时storage server支持多个路径(例如磁盘)
# 这里配置存放文件的基路径数目,通常只配一个目录

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs2
# 逐一配置store_path的路径,索引号基于0。注意配置方法后面有0,1,2 ......,需要配置store_path0到store_path{store_path_count-1}。
# 如果不配置store_path0,那么它就和base_path对应的路径一样。
# 路径必须存在

# subdir_count  * subdir_count directories will be auto created under each 
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256
# N * N directories are created automatically under each store path (disk)
# value can be 1 to 256; default is 256
# FastDFS stores files under a two-level directory tree; this sets the fan-out per level
# e.g. with N = 256, the storage server creates 256 * 256 file subdirectories on first run
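
A sketch of the resulting on-disk layout, assuming the store_path0 configured above and the hex directory naming FastDFS uses (00 through FF per level):

/home/yuqing/fastdfs/data/00/00
/home/yuqing/fastdfs/data/00/01
...
/home/yuqing/fastdfs/data/FF/FF

ls /home/yuqing/fastdfs/data | wc -l    # should print 256 after first startup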

# tracker_server can occur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=192.168.209.121:22122
# list of tracker_server addresses; the port is required (again: the storage server actively connects to the tracker)
# with multiple tracker servers, write one tracker_server line per server
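
For example, a two-tracker deployment would list both (the second address is illustrative):

tracker_server=192.168.209.121:22122
tracker_server=192.168.209.122:22122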

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# log level

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=
# unix group that runs FastDFS (empty means the group of whichever user starts the process)

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
# unix user that runs FastDFS (empty means whichever user starts the process)

# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*
# list of IP addresses allowed to connect to this storage server (connections to the embedded HTTP server are not covered)
# can appear multiple times; every line takes effect

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0
# how files are distributed across the data directories:
# 0: round robin; after a directory receives the number of files set by file_distribute_rotate_count, move on to the next directory
# 1: random, distributed by the hash code of the file name

# valid when file_distribute_path_mode is set to 0 (round robin), 
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100
# only effective when file_distribute_path_mode above is 0 (round robin)
# once a directory has received this many files, subsequent uploads go to the next directory

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0
# when writing a large file, call fsync() to force the content to disk after every N bytes written; 0 means never call fsync
# any other value is the byte threshold between fsync calls

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10
# interval, in seconds, for flushing log buffers to disk
# note: the storage server does not write log entries to disk immediately; they are buffered in memory first

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10
# interval, in seconds, for flushing the binlog (update log) buffer to disk
# this affects how quickly newly uploaded files begin syncing to other servers

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300
# interval, in seconds, for writing the storage stat file to disk
# note: if the stat file content has not changed, it is not written

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB
# thread stack size; the FastDFS server is threaded
# for V1.x the storage server stack should be at least 512KB; for V2.0+, 128KB or more is enough
# a larger stack means each thread consumes more system resources
# on V1.x, to run more threads (a higher max_connections), consider lowering this value

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10
# this storage server's priority as an upload source
# may be negative; the lower the value, the higher the priority
# this pairs with store_server=2 in tracker.conf

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=
# NIC alias prefix, e.g. eth on Linux; you can list them with ifconfig -a
# separate multiple aliases with commas
# empty means auto-detect based on OS type



FastDHT file deduplication settings:
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0
# whether to check if an uploaded file already exists; if it does, the content is not stored again
# and FastDFS creates a symbolic link via FastDHT to save disk space
# this feature depends on FastDHT, so install FastDHT before enabling it
# 1 or yes: check; 0 or no: do not check

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash
# file content signature method used for duplicate checking:
## hash: four 32-bit hash codes
## md5: MD5 signature

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS
# namespace for storing the file symlink indexes (key-value pairs)
# must be set when check_file_duplicate is true / on

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0
# connection mode to the FastDHT servers (persistent or not); default is 0 (short connections)
# persistent connections are worth considering if the FastDHT servers can spare the connections

# you can use "#include filename" (not include double quotes) directive to 
# load FastDHT server list, when the filename is a relative path such as 
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
# you can use an "#include filename" directive (without the double quotes) to load the FastDHT server list
# when filename is a relative path, such as a bare file name, it is resolved against the base path of this config file
# the FastDHT server list must be set when check_file_duplicate is true / on
# see the FastDHT INSTALL document for details
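
A minimal sketch of how that include fits together; the path and server address are illustrative, and the server-list layout follows the format described in the FastDHT INSTALL document:

#include /etc/fdfs/fdht_servers.conf

where fdht_servers.conf would contain something like:

group_count = 1
group0 = 192.168.209.130:11411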


Logging settings:
# if log to access log
# default value is false
# since V4.00
use_access_log = false
# whether to record file operations in an access log

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false
# whether to rotate the access log periodically; currently only once-a-day rotation is supported

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00
# time of day at which the access log is rotated; only effective when rotate_access_log is true

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# whether to rotate the error log periodically; currently only once-a-day rotation is supported

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
# time of day at which the error log is rotated; only effective when rotate_error_log is true

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0
# rotate the access log by file size
# 0 means never rotate by size; otherwise the log rotates to a new file once it reaches this size

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# rotate the error log by file size
# 0 means never rotate by size; otherwise the log rotates to a new file once it reaches this size

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0
# how many days of log files to keep
# 0 means old log files are never deleted

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false
# whether to skip invalid binlog records during file sync

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# whether to use a connection pool

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# connections idle longer than this are closed
# default is 3600 seconds

# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
http.domain_name=
# if domain_name is empty, the storage server's IP address is used;
# otherwise this domain name appears in the URLs the tracker server redirects to
# typically set only when a dedicated web server runs on the storage server,
# so files on it can be addressed by domain name in the URL


# the port of the web server on this storage server
http.server_port=8888
# web server port on this storage server
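
With a domain configured, a redirected file URL would look roughly like this (the domain and the group/path portion are illustrative):

http://files.example.com:8888/group1/M00/00/00/example.jpg

With http.domain_name left empty, the host part falls back to the storage server's IP address.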

File location: /etc/fdfs/tracker.conf

# is this config file disabled
# false for enabled
# true for disabled
# whether this config file is disabled (false = it takes effect)
disabled=false

# bind an address of this host
# empty for bind all addresses of this host
# bind address
# the address to bind to, commonly used when the server has several IPs
# but should serve on only one; empty means bind all addresses (usually fine)
bind_addr=

# the tracker server port
# tracker server service port
port=22122

# connect timeout in seconds
# default value is 30s
# connect timeout in seconds
connect_timeout=30

# network timeout in seconds
# default value is 30s
# network timeout in seconds
network_timeout=60

# the base path to store data and log files
# tracker data/log directory
# ${base_path}
#     |__data
#     |     |__storage_groups.dat: storage group information
#     |     |__storage_servers.dat: storage server list
#     |__logs
#           |__trackerd.log: tracker server log file
base_path=/usr/local/src/fastdfs/tracker

# records in storage_groups.dat and storage_servers.dat are separated by newlines (\n), fields by commas (,)
# fields in storage_groups.dat, in order:
#   1. group_name: group name
#   2. storage_port: storage server port
# 
# storage_servers.dat holds storage server details; fields in order:
#   1. group_name: name of the group the server belongs to
#   2. ip_addr: IP address
#   3. status: status
#   4. sync_src_ip_addr: source server that syncs existing data files to this storage server
#   5. sync_until_timestamp: cut-off time for syncing existing data files (UNIX timestamp)
#   6. stat.total_upload_count: number of file uploads
#   7. stat.success_upload_count: number of successful file uploads
#   8. stat.total_set_meta_count: number of meta data changes
#   9. stat.success_set_meta_count: number of successful meta data changes
#   10. stat.total_delete_count: number of file deletions
#   11. stat.success_delete_count: number of successful file deletions
#   12. stat.total_download_count: number of file downloads
#   13. stat.success_download_count: number of successful file downloads
#   14. stat.total_get_meta_count: number of meta data reads
#   15. stat.success_get_meta_count: number of successful meta data reads
#   16. stat.last_source_update: time of the most recent source update (an update coming from a client)
#   17. stat.last_sync_update: time of the most recent sync update (an update coming from another storage server's sync)

# max concurrent connections this server supported
# maximum concurrent connections
max_connections=100

# accept thread count
# default value is 1
# since V4.07
# accept thread count; default is 1
accept_threads=1

# work thread count, should <= max_connections
# default value is 4
# since V2.00
# usually set to the number of CPU cores
work_threads=8

# the method of selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
# how the tracker selects a group for uploads; bypassed if the application layer names a fixed group
# 0: round robin
# 1: a specified group
# 2: load balancing (the group with the most free space)
store_lookup=0

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
# the group to upload to; ignored if the application layer names a group,
# and also ignored when store_lookup is 0 or 2
store_group=group2
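
Note that with store_lookup=0 as set here, the store_group=group2 line is inert; to actually pin uploads to group2 the two must be paired:

store_lookup=1
store_group=group2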

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# which storage server within the group receives uploads
# the server that receives an upload becomes that file's source storage server,
# and pushes the file to the other storage servers in the group to sync it
# 0: round robin (default)
# 1: the first server ordered by IP address (the lowest IP)
# 2: the first server ordered by priority (set per storage server via upload_priority; lower value = higher priority)
store_server=0

# which path(means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
# which store path (base path, i.e. disk or mount point) on the storage server receives uploads;
# a storage server can have several base paths (think: several disks)
# 0: round robin; the directories take files in turn
# 2: the path with the most free space (note: free space changes over time, so the chosen path/disk can change too)
store_path=0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
# which storage server serves downloads
# 0: round robin; any storage server holding the file may serve it
# 1: only the source storage server, i.e. the one the file was originally uploaded to
download_server=0

# space reserved on each storage server for the OS and other applications
# can be an absolute value or a percentage (percentages are supported since V4)
# within a group, the server with the smallest disk is what counts
# reserved storage space for system or other applications.
# if the free (available) space of any storage server in 
# a group <= reserved_storage_space, 
# no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
reserved_storage_space = 10%
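
As a worked example of the percentage form: if the smallest storage server in a group has a 1TB disk, reserved_storage_space = 10% means uploads to that group stop once its free space drops to 100GB or below.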

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
# log level
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
# unix group that runs this program (empty means the group of whichever user starts the process)
run_by_group=root

#unix username to run this program,
#not set (empty) means run by current user
# unix user that runs this program (empty means whichever user starts the process)
run_by_user=root

# IP ranges allowed to connect to this tracker server (applies to all connection types, including clients and storage servers)
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*

# sync log buff to disk every interval seconds
# default value is 10 seconds
# interval, in seconds, for flushing log buffers to disk
# note: the tracker server buffers log entries in memory before writing them to disk
sync_log_buff_interval = 10

# interval, in seconds, for checking whether storage servers are alive
# storage servers send periodic heartbeats to the tracker; if the tracker receives no
# heartbeat from a storage server within one check_active_interval, it considers that
# server offline, so this value must exceed the storage servers' heartbeat interval,
# typically by a factor of 2 or 3
# check storage server alive interval seconds
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 64KB
# thread stack size; the FastDFS server is threaded
# the stack should be at least 64KB
# a larger stack means each thread consumes more resources; lower this if you need more threads
thread_stack_size = 64KB

# auto adjust when the ip address of the storage server changed
# default value is true
# whether the cluster adjusts automatically when a storage server's IP address changes
# note: the adjustment only completes when the storage server process restarts
storage_ip_changed_auto_adjust = true

# =========================== sync ======================================
# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
# introduced in V2.0: the maximum file sync delay between storage servers; default one day; tune to your environment
# note: this does not affect the sync process itself; it is only a download-time threshold
# (a rule of thumb) for judging whether a file has finished syncing
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
# introduced in V2.0: the maximum time a storage server may take to sync one file; default 300s (5 minutes)
# note: like the previous parameter, this does not affect syncing itself, only the
# download-time judgement of whether the current file has been synced
storage_sync_file_max_time = 300

# =========================== trunk and slot ============================
# if use a trunk file to store several small files
# default value is false
# since V3.00
# introduced in V3.0: whether to merge small files into trunk files; off by default
use_trunk_file = false 

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
# minimum allocation inside a trunk file: even a 16-byte file is allocated slot_min_size bytes
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=  this value
# default value is 16MB
# since V3.00
# only files no larger than this value are merged into trunk files;
# a larger file is stored directly as a standalone file (no merged storage)
slot_max_size = 16MB

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# whether to create trunk files in advance; the three trunk_create_file_* parameters below only take effect when this is true
# if create trunk file advancely
# default value is false
# since V3.06
trunk_create_file_advance = false

# base time of day for advance trunk file creation; 02:00 means the first run happens at 2 a.m.
# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# interval, in seconds, between trunk file creation runs; set to 86400 to create once a day
# the interval of create trunk file, unit: second
# default value is 86400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the free trunk space target for advance creation
# e.g. if this is 20G and 4GB of trunk space is currently free, only 16GB of trunk files need to be created
# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create 
# the trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# whether trunk init checks that the free space it loads is not already occupied
# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: setting this parameter to true will slow the loading of trunk spaces
# at startup; only set it to true when necessary
trunk_init_check_occupying = false

# whether to unconditionally reload free trunk space info from the trunk binlog
# by default FastDFS loads free trunk space from the snapshot file storage_trunk.dat,
# whose first line records the trunk binlog offset, and then replays the binlog from that offset
# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second
# default value is 0, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# since V5.01
trunk_compress_binlog_min_interval = 0

# whether to identify storage servers by server ID instead of IP address
# if use storage ID instead of IP address
# default value is false
# since V4.00
use_storage_id = false

# only needed when use_storage_id is set to true
# the file maps group name, server ID and IP address; see conf/storage_ids.conf in the source tree for an example
# specify storage ids filename, can use relative or absolute path
# since V4.00
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this parameter is valid only when use_storage_id is set to true
# default value is ip
# since V4.03
# the id type embedded in generated file names, one of:
## ip: the storage server's IP address
## id: the storage server's server id
# only effective when use_storage_id is set to true
# default is ip
id_type_in_filename = ip

# whether slave files are stored as symbolic links
# if true, each slave file occupies two files: the original plus a symlink pointing to it
# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# whether to rotate the error log periodically; currently only once-a-day rotation is supported
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = true

# time of day at which the error log is rotated; only effective when rotate_error_log is true
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate the error log by file size
# 0 means never rotate by size; otherwise the log rotates to a new file once it reaches this size
# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# whether to use a connection pool
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections idle longer than this are closed, unit: seconds
# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# =========================== HTTP ======================================
# HTTP port on this tracker server
# HTTP server port on the tracker server
http.server_port=8080

# interval, in seconds, for checking whether the storage HTTP servers are alive
# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30

# how storage HTTP server liveness is checked
# tcp: connect to the storage server's HTTP port only, without sending a request or reading a response
# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only, 
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp

# uri/url used to check whether the storage HTTP server is alive
# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html

Configuring sshd to listen on multiple ports

To make sshd listen on multiple ports, edit sshd_config and add ListenAddress directives, which specify the network addresses to listen on (by default sshd listens on all addresses).
The following formats are accepted:

ListenAddress host|IPv4_addr|IPv6_addr
ListenAddress host|IPv4_addr:port
ListenAddress [host|IPv6_addr]:port

If port is omitted, the value of the Port directive is used. Multiple ListenAddress directives may be given to listen on several addresses.

vi /etc/ssh/sshd_config
and add:
ListenAddress 0.0.0.0:22
ListenAddress 0.0.0.0:18929
ListenAddress 0.0.0.0:10761

sshd now listens on ports 22, 18929, and 10761 (remember to also include the port set by the Port directive).

/etc/init.d/sshd restart

Restart the service for the changes to take effect.
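
Two follow-ups worth doing, sketched here on the assumption of a CentOS system: if SELinux is enforcing, the non-standard ports must be registered before sshd will bind them (semanage comes from the policycoreutils-python package), and afterwards you can confirm all three ports are listening:

semanage port -a -t ssh_port_t -p tcp 18929
semanage port -a -t ssh_port_t -p tcp 10761

netstat -tnlp | grep sshd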

Installing and configuring a VNC server and desktop environment on CentOS 7.x

The steps for CentOS 7.4 are as follows:

Install VNC and the GNOME desktop

yum groupinstall "GNOME Desktop" "Graphical Administration Tools" -y
yum groupinstall "X Window System" "Desktop" -y
yum install tigervnc tigervnc-server -y

Configure VNC
Copy the /lib/systemd/system/vncserver@.service file:

cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@.service

In vncserver@.service, replace <USER> with the account the VNC client will connect as (root here); PIDFile also needs adjusting. The resulting file:

cat /etc/systemd/system/vncserver@.service
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
User=root

ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i -geometry 1280x1024"
PIDFile=/root/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

[Install]
WantedBy=multi-user.target

Set the VNC server password:

vncpasswd

Start the VNC server and enable it at boot; the :1 argument starts the first display:

systemctl start vncserver@:1

systemctl enable vncserver@:1
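
Display :1 corresponds to TCP port 5901 (5900 + display number); once the service is up you can confirm it is listening:

ss -tnlp | grep 5901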

If startup fails with an error like:

Job for vncserver@:1.service failed because the control process exited with error code. See 
"systemctl status vncserver@:1.service" and "journalctl -xe" for details.

delete the /tmp/.X11-unix/ directory and restart the service:

rm /tmp/.X11-unix/ -rf

CentOS 7 firewall rules:

firewall-cmd --permanent --add-service="vnc-server" --zone="public"
firewall-cmd --reload
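
If your firewalld version lacks the vnc-server service definition, or you run additional displays, opening the port directly is an equivalent alternative (5901 matches display :1):

firewall-cmd --permanent --add-port=5901/tcp --zone=public
firewall-cmd --reload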

Boot CentOS 7.x into the graphical interface by default:

systemctl set-default graphical.target (graphical mode)
reboot (restart the system)

Without the graphical default, VNC sessions tend to hang after connecting.
Appendix: booting into text-mode (multi-user) by default

systemctl set-default multi-user.target (text mode)
reboot (restart the system)

Check the current default target:

systemctl get-default
