Invitation to "adopt pytest month"

Brianna Laugher brianna.laugher at
Mon Apr 6 19:15:41 EDT 2015

Hi, sorry I got distracted by chocolate eggs.

I gave the pytesters directions to this discussion/list and I hope
they will introduce themselves soon. :)


On 6 April 2015 at 20:57, Thomas De Schampheleire
<patrickdepinguin at> wrote:
> On Wed, Apr 1, 2015 at 9:09 PM, Thomas De Schampheleire
> <patrickdepinguin at> wrote:
>> Hi Brianna,
>> On Tue, Mar 31, 2015 at 12:07 AM, Mads Kiilerich <mads at> wrote:
>>> On 03/29/2015 09:43 PM, Thomas De Schampheleire wrote:
>>>>>> I don't think there is a CI server currently.
>>>>>> Mads: are you running one privately?
>>>>> No, nothing continuous. I run the tests locally before deploying
>>>>> internally
>>>>> or pushing upstream. That works ok. Right now I don't see a big need for
>>>>> CI;
>>>>> I don't think it would solve a "real" problem.
>>>> The recent failures of two tests, caused by the Dulwich bump but only
>>>> visible when you actually deploy this new Dulwich, could have been
>>>> detected by a CI server (if they'd run 'python setup.py develop' each
>>>> time, that is).
>>>> But it's a corner case, that's true.
>>> Yes. That push was premature and unintended.
>>> I think the right way to prevent it from happening again is to make sure we
>>> just follow the usual and intended procedure and finish testing and burn-in
>>> before pushing.
>>> Still, no amount of testing can make sure that all combinations of
>>> "supported" versions of external dependencies works correctly. (That is why
>>> I prefer to keep constraints on the dependency versions so we at least know
>>> we have done some amount of testing with all versions.) In this case we had
>>> to "rush" an update of dulwich ... but it should still have been tested more
>>> thoroughly before pushing.
>>>>> I can imagine some valuable milestones / deliverables:
>>>>> * be able to run the existing tests with pytest instead of nosetests
>>>>> * be able to run individual tests - that is currently tricky because of
>>>>> nosetests and/or the way our tests are structured
>>>> Can you give some examples here?
>>>> When you know the names of the test, for example from a previous run,
>>>> you can simply run something like:
>>>> nosetests kallithea/tests/functional/
>>> I have not been able to find a reliable way to construct such magic strings
>>> by looking at a test failure output ... but I assume there is a way. Please
>>> consider updating docs/contributing.rst with some hints ... and perhaps also
>>> move some prose away from kallithea/tests/ .
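For reference, here is a sketch of how single-test selection differs between the two runners. The module, class, and method names below are hypothetical placeholders, not actual Kallithea test paths:

```shell
# nosetests selects one test with a colon-separated specifier:
# file path, then TestClass.test_method.
nosetests kallithea/tests/functional/test_home.py:TestHomeController.test_index

# pytest uses "node ids" with :: separators instead,
pytest kallithea/tests/functional/test_home.py::TestHomeController::test_index

# and can also filter the collected tests by substring match with -k.
pytest -k "test_index"
```

Failure output from pytest includes the node id verbatim, which is what makes re-running a single failing test straightforward after the switch.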
>>>>> * test coverage profiling ... and eventually tests for the areas without
>>>>> coverage
>>>>> * pull requests are one area with bad coverage ... and it is tricky
>>>>> because
>>>>> we want tests for many different repo scenarios with common and special
>>>>> cases
>>>> I would really like more tests in this area too
>>> Absolutely. I don't have much experience with thorough testing or TDD, but it
>>> does not seem feasible with the current test framework ... or with my current
>>> level of familiarity with it.
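On the coverage-profiling milestone, a minimal sketch of how that could look with each runner (this assumes the coverage.py package, and for pytest the pytest-cov plugin, neither of which the project currently depends on):

```shell
# nose has a built-in coverage plugin backed by coverage.py:
nosetests --with-coverage --cover-package=kallithea

# with pytest, the equivalent is usually the pytest-cov plugin;
# term-missing lists the line numbers that no test executed.
pytest --cov=kallithea --cov-report=term-missing kallithea/tests/
```

Either form would give a per-module percentage, which is enough to confirm (or refute) the suspicion that pull requests are the worst-covered area.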
>> To get started, it could be interesting to have a brief presentation
>> of the three pytest contributors?
>> Each of them could maybe have a look at the ideas in this mail thread
>> and on the wiki and give their thoughts?
>> Then the first step would be the change of test runner, something that
>> could be sent as a first patch I guess?
>> Looking forward to the testing improvements this month...
> Friendly bump after the mailing list problems...

They've just been waiting in a mountain for the right moment:

More information about the kallithea-general mailing list