Kallithea Performance Tuning
Tim Downey
timothy.downey at gmail.com
Mon Jan 26 16:01:30 EST 2015
Hi folks,
I'm running a Kallithea instance with about 100 repositories and a healthy
amount of CI activity across several branches on each repository. I'm
looking for some tips on tuning performance. I'm running under waitress
(at least I think I am) and I see what look like several knobs in the
production.ini file that can be turned, but I'm not really sure what I'm
looking at.
See below for my settings. Can anyone point me towards what I should be
looking at or changing? I appear to be CPU bound, not I/O or memory bound.
Most of the load comes from my CI activity, not from users going through
the web interface.
Here's what top typically looks like. Sorry for the formatting. As you
can see, the single Python process tends to chew up whatever CPU it can
get. I'm not sure whether there's anything I can do to spawn another
Python worker, since the process using the resources is not fully using
both of the available CPUs.
top - 15:59:12 up 98 days, 6:23, 2 users, load average: 1.13, 1.27, 1.47
Tasks: 123 total, 2 running, 121 sleeping, 0 stopped, 0 zombie
Cpu(s): 79.1%us, 4.4%sy, 0.0%ni, 5.0%id, 11.4%wa, 0.0%hi, 0.2%si, 0.0%st
Mem:  4047932k total, 1442812k used, 2605120k free,  29784k buffers
Swap: 4192252k total,  326916k used, 3865336k free, 368712k cached

  PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM TIME+     COMMAND
25761 kallithe 20  0 1419m  811m 2960 S  166 20.5 40107:15  python
 1325 root     20  0 88568  1280  972 S    1  0.0 104:15.11 vmtoolsd
 1142 rabbitmq 20  0 1044m  9792  432 S    0  0.2 86:58.08  beam.smp
    1 root     20  0 24348   732  276 S    0  0.0 0:03.57   init
    2 root     20  0     0     0    0 S    0  0.0 0:00.52   kthreadd
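One thing I've wondered about, though I'm not sure it helps much when the
bottleneck is CPU rather than requests waiting on I/O, is simply raising the
waitress thread count in [server:main]. Something like the following (the
value 10 is just my guess; waitress still runs as a single process, so
Python's GIL keeps CPU-bound work largely on one core):

[server:main]
use = egg:waitress#main
## more worker threads to overlap concurrent requests; this does not add
## extra processes, so CPU-bound work still competes for the GIL
threads = 10
max_request_body_size = 107374182400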
Here's a bit from my production.ini. I can produce the whole file if there
are other things that I should be looking at.
[server:main]
## PASTE ##
#use = egg:Paste#http
## nr of worker threads to spawn
#threadpool_workers = 5
## max request before thread respawn
#threadpool_max_requests = 10
## option to use threads instead of processes
#use_threadpool = true
## WAITRESS ##
use = egg:waitress#main
## number of worker threads
threads = 5
## MAX BODY SIZE 100GB
max_request_body_size = 107374182400
## use poll instead of select, fixes fd limits, may not work on old
## windows systems.
#asyncore_use_poll = True
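I've also seen gunicorn mentioned as an alternative WSGI server for this kind
of setup, since it can run several worker processes and so make use of both
CPUs. I haven't tried it, and the numbers below are only my guesses, but I
understand the [server:main] section would look roughly like this:

[server:main]
## GUNICORN (multiple OS processes instead of a single threaded process) ##
use = egg:gunicorn#main
## rule of thumb from the gunicorn docs: (2 x CPU cores) + 1
workers = 5
## default synchronous worker class
worker_class = sync
## recycle a worker after this many requests to limit memory growth
max_requests = 1000
## seconds a worker may spend on one request before being killed/respawned
timeout = 3600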
Thank you for any help!
Tim