Invitation to "adopt pytest month"
Thomas De Schampheleire
patrickdepinguin at gmail.com
Mon Apr 6 14:57:50 EDT 2015
On Wed, Apr 1, 2015 at 9:09 PM, Thomas De Schampheleire
<patrickdepinguin at gmail.com> wrote:
> Hi Brianna,
> On Tue, Mar 31, 2015 at 12:07 AM, Mads Kiilerich <mads at kiilerich.com> wrote:
>> On 03/29/2015 09:43 PM, Thomas De Schampheleire wrote:
>>>>> I don't think there is a CI server currently.
>>>>> Mads: are you running one privately?
>>>> No, nothing continuous. I run the tests locally before deploying
>>>> or pushing upstream. That works ok. Right now I don't see a big need for
>>>> it; I don't think it would solve a "real" problem.
>>> The recent failures of two tests, caused by the Dulwich bump but only
>>> visible when you actually deploy this new Dulwich, could have been
>>> detected by a CI server (if they'd run 'python setup.py develop' each
>>> time, that is).
>>> But it's a corner case, that's true.
>> Yes. That push was premature and unintended.
>> I think the right way to prevent it from happening again is to make sure we
>> just follow the usual and intended procedure and finish testing and burn-in
>> before pushing.
>> Still, no amount of testing can make sure that all combinations of
>> "supported" versions of external dependencies works correctly. (That is why
>> I prefer to keep constraints on the dependency versions so we at least know
>> we have done some amount of testing with all versions.) In this case we had
>> to "rush" an update of dulwich ... but it should still have been tested more
>> thoroughly before pushing.
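>>
>> For example, something along these lines in setup.py (the exact bounds
>> here are purely illustrative, not the real ones):
>>
>>     install_requires = [
>>         "dulwich>=0.9.9,<0.10",
>>     ]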
>>>> I can imagine some valuable milestones / deliverables:
>>>> * be able to run the existing tests with pytest instead of nosetests
>>>> * be able to run individual tests - that is currently tricky because of
>>>> nosetests and/or the way our tests are structured
>>> Can you give some examples here?
>>> When you know the name of the test, for example from a previous run,
>>> you can simply run something like:
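>>>
>>>     nosetests kallithea.tests.functional.test_admin:TestAdminController.test_index
>>>
>>> (the module, class, and test names above are just an illustration; the
>>> general pattern is <module.path>:<TestClass>.<test_method>)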
>> I have not been able to reliably find a way to construct such magic strings
>> by looking at a test failure output ... but I assume there is a way. Please
>> consider updating docs/contributing.rst with some hints ... and perhaps also
>> move some prose away from kallithea/tests/__init__.py.
>>>> * test coverage profiling ... and eventually tests for the areas without
>>>> coverage (see the example below)
>>>> * pull requests are one area with bad coverage ... and it is tricky:
>>>> we want tests for many different repo scenarios with common and special
>>>> cases
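>>>> (coverage could probably be collected with the nose coverage plugin,
>>>> something like
>>>>
>>>>     nosetests --with-coverage --cover-package=kallithea
>>>>
>>>> - I have not verified the exact flags)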
>>> I would really like more tests in this area too
>> Absolutely. I don't have much experience with thorough testing or TDD, but it
>> does not seem feasible with the current test framework ... or with my current
>> level of familiarity with it.
> To get started, it could be interesting to have a brief introduction
> of the three pytest contributors?
> Each of them could maybe have a look at the ideas in this mail thread
> and on the wiki and give their thoughts?
> Then the first step would be changing the test runner, something that
> could be sent as a first patch, I guess?
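>
> With pytest, running an individual test would then presumably look
> something like this (file and test names are illustrative):
>
>     py.test kallithea/tests/functional/test_admin.py::TestAdminController::test_index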
> Looking forward to the testing improvements this month...
Friendly bump after the mailing list problems...