Investigating Kallithea performance issues
Marcin Kuzminski
marcin at python-blog.com
Wed Mar 22 09:22:53 UTC 2017
I think a product like appenlight.com might come in handy. We use it
together with RhodeCode to monitor instance performance, and it works
great. It gives you a performance breakdown of each application layer:
db, template rendering, etc. It can also notify you about slow requests,
exceptions, or log.error calls as they occur.
It works with any Python application and works really well, especially with
Pylons and Pyramid, so I believe it would work fine with Kallithea. Here
are instructions on how to connect it to a Pylons app:
https://getappenlight.com/page/python/pylons-exception-logging.html
You can either connect for free to the SaaS version (appenlight.com) or
download a VM image with the free open-source version to host on your own
premises.
Best,
--
Marcin Kuzminski
RhodeCode, Inc.

On Wed, Mar 22, 2017 at 8:01, Jan Heylen <heyleke at gmail.com> wrote:
Hi,
On Tue, Mar 21, 2017 at 11:11 AM, Adi Kriegisch <adi at cg.tuwien.ac.at> wrote:
Hi!
> Sometimes the performance of our Kallithea instance takes a hit, such that
> page loading can take many many seconds, and it is unclear what causes this.
>
> Could you suggest ways to investigate this?
Step 1: get data. :-)
Actually one of the reasons I use uwsgi for deployment is that I get
execution times 'for free': 'generated 309 bytes in 268 msecs' or
'generated 436 bytes in 56 msecs', including the URI and all kinds of
useful information.
That kind of monitoring is indeed what we do now, with uwsgi:

{address space usage: 1769336832 bytes/1687MB} {rss usage: 1298124800 bytes/1237MB} [pid: 52669|app: 0|req: 349/9564] 1 35.252.28.229 () {40 vars in 1686 bytes} [Wed Mar 22 07:39:38 2017] GET /review/ext/cvpsw-review/pull-request/15006/_/cvp80 => generated 76111 bytes in 924 msecs (HTTP/1.1 200) 5 headers in 149 bytes (1 switches on core 0)

{address space usage: 1687105536 bytes/1608MB} {rss usage: 1218387968 bytes/1161MB} [pid: 49555|app: 0|req: 209/9565] 35.252.28.229 () {42 vars in 1591 bytes} [Wed Mar 22 07:39:40 2017] GET /js/graph.js?ver=0.2.1 => generated 0 bytes in 156 msecs (HTTP/1.1 304) 3 headers in 98 bytes (1 switches on core 0)

2017-03-22 07:40:09.168 INFO [kallithea.RequestWrapper] IP: 135.224.206.40 Request to /review/ms/sw-review/branches-tags time: 66.745s
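(To find such peaks systematically, log lines in that format can be filtered on the 'generated N bytes in M msecs' part. A minimal sketch; the regex only assumes the default uwsgi request-log format quoted above, and the 1000 ms threshold is arbitrary:

```python
import re

# Matches the method/URI/timing portion of a default uwsgi request-log line.
LINE_RE = re.compile(
    r"(GET|POST|PUT|DELETE|HEAD) (\S+) => generated (\d+) bytes in (\d+) msecs"
)

def slow_requests(lines, threshold_ms=1000):
    """Yield (method, uri, msecs) for every request slower than threshold_ms."""
    for line in lines:
        m = LINE_RE.search(line)
        if m and int(m.group(4)) >= threshold_ms:
            yield m.group(1), m.group(2), int(m.group(4))

# Sample lines in the format quoted above:
log = [
    "... GET /js/graph.js?ver=0.2.1 => generated 0 bytes in 156 msecs (HTTP/1.1 304) ...",
    "... GET /review/ms/sw-review/branches-tags => generated 76111 bytes in 66745 msecs (HTTP/1.1 200) ...",
]
print(list(slow_requests(log)))
# prints [('GET', '/review/ms/sw-review/branches-tags', 66745)]
```

Feeding the whole uwsgi log through this and bucketing the results per URI quickly shows whether the slowness is tied to specific pages.)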
But it is these kinds of peaks we want to understand: what exactly was
Kallithea waiting on? Is it pure disk I/O, or a query to the database that
took long (PostgreSQL, monitored with pg_activity)? Is there any Kallithea
debug info or Python profiler output that could tell us something we are not
aware of today, without overloading Kallithea itself with all the debugging?
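(One approach to that, just a sketch and not a Kallithea feature -- `demo_app` and the threshold are made up here -- is a WSGI wrapper that profiles each request with the stdlib cProfile but only prints the stats when the request was slow, so output only accumulates for the requests you care about. cProfile still adds per-call overhead to every request, so it is something to enable temporarily while investigating:

```python
import cProfile
import io
import pstats
import time

def profiling_middleware(app, threshold_s=5.0):
    """Wrap a WSGI app; print profile stats only for slow requests."""
    def wrapped(environ, start_response):
        profiler = cProfile.Profile()
        start = time.time()
        result = profiler.runcall(app, environ, start_response)
        elapsed = time.time() - start
        if elapsed >= threshold_s:
            out = io.StringIO()
            pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(20)
            print("slow request %s (%.1fs):\n%s"
                  % (environ.get("PATH_INFO"), elapsed, out.getvalue()))
        return result
    return wrapped

# Hypothetical usage with a trivial WSGI app; threshold 0.0 forces
# profiling output on every request, for demonstration only.
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

app = profiling_middleware(demo_app, threshold_s=0.0)
```

In a real Pylons/Paste deployment the wrapper would go around the WSGI application object in the ini-configured pipeline.)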
Our database is 'only' 270MB big, so besides inefficient queries, I cannot
understand how database access could be causing hiccups of more than a
minute. Either we configured our database wrong, or I would think PostgreSQL
can keep 270MB in memory (we might need to force it to keep it in memory,
even if other processes consume a lot of memory); the server has 125GB of
memory, with a 'free' count (even with caches) of 20GB (to be clear, the
server is used for other applications as well).
What I do see as queries running long on the database are these (but I
haven't correlated them directly with the above peaks yet, though it could
well be related):

47362 kallithea kallithea 127.0.0.1 0.0 0.0 0.00B 0.00B 00:47.10 N N
SELECT cache_invalidation.cache_id AS cache_invalidation_cache_id,
       cache_invalidation.cache_key AS cache_invalidation_cache_key,
       cache_invalidation.cache_args AS cache_invalidation_cache_args,
       cache_invalidation.cache_active AS cache_invalidation_cache_active
FROM cache_invalidation
WHERE cache_invalidation.cache_key = 'devws048-32241review/ms/sw-review'
As you see, that query had been running for 47 seconds at the moment I
caught it. Can anybody explain what this 'triggers' in the database, as the
table itself is quite small:

public | cache_invalidation              | table    | kallithea | 992 kB
public | cache_invalidation_cache_id_seq | sequence | kallithea | 8192 bytes
I haven't googled yet what this could mean, so forgive me if I'm asking
basic questions ;-)
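(For what it's worth: 47 seconds for a single-row lookup on a 992 kB table is more likely lock contention -- another transaction holding that row, visible in pg_locks/pg_stat_activity -- than actual scan time, but it is also worth checking that cache_key is indexed, since without an index every lookup scans the whole table. A self-contained illustration using the stdlib sqlite3 module; the table and column names are taken from the query above, while the index name and the use of SQLite rather than PostgreSQL are mine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cache_invalidation ("
    " cache_id INTEGER PRIMARY KEY,"
    " cache_key TEXT, cache_args TEXT, cache_active BOOLEAN)"
)

QUERY = "SELECT * FROM cache_invalidation WHERE cache_key = ?"
KEY = ("devws048-32241review/ms/sw-review",)

# Without an index on cache_key, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + QUERY, KEY).fetchall()
print(plan_before[0][-1])  # a SCAN of cache_invalidation

conn.execute("CREATE INDEX idx_cache_key ON cache_invalidation (cache_key)")

# With the index, the same lookup becomes an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + QUERY, KEY).fetchall()
print(plan_after[0][-1])  # a SEARCH using idx_cache_key
```

On the real database, `EXPLAIN (ANALYZE, BUFFERS)` on the exact query, plus a look at pg_locks while the query is stuck, should tell the two cases apart.)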
My setup is similar to this:
https://lists.sfconservancy.org/pipermail/kallithea-general/2015q1/000130.html
You may even monitor execution times, graph them or do some statistics work
on them or -- simply put -- correlate higher latency and execution time with
system or I/O load (even simple tools like sysstat/sadc/sar are sufficient
to gain insights in this regard).
I will have a look at these simple ones already; I must admit I didn't use
them until now (I did use iowait/top/...)
http://www.thegeekstuff.com/2011/03/sar-examples/?utm_source=feedburner
Hope, this helps...
All the best,
Adi
_______________________________________________
kallithea-general mailing list
kallithea-general at sfconservancy.org
https://lists.sfconservancy.org/mailman/listinfo/kallithea-general
Thx!
Jan