Upcoming 7.2.0: review which addons to move to core


Re: Upcoming 7.2.0: review which addons to move to core

Moritz Lennert
On 29/09/16 23:49, Blumentrath, Stefan wrote:

> Hi,
>
> This discussion is actually a bit old, but maybe it is not too late to
> consider adding selected addons to trunk?
>
> From my personal user point of view the r.streams.* modules and
> r.geomorphon are indeed top candidates for inclusion in core!
>
> However, also:
>
> i.segment.hierarchical (though manual is not yet complete)

I've been working on trying to understand the exact functioning of the
module and on writing some documentation on this, but this has been
side-tracked by the many other priorities at work...


> v.class.mlpy (drawback: requires mlpy library) or v.class.ml and
>
> r.randomforest
>
> could nicely complement the image classification tools in GRASS.

+1

Along the same lines, it might be nice to add:

i.segment.stats and r.object.geometry.

And possibly also v.class.mlR and i.segment.uspo...

Moritz

Re: Upcoming 7.2.0: review which addons to move to core

ychemin
Hi,

Added my feelings (biased towards remote sensing, I admit):

+1 => r.streams.*
+1 => r.geomorphon
+0 => i.segment.hierarchical (+1 if manual complete)
+0 => v.class.mlpy
+1 => v.class.ml
+1 => r.randomforest
+1 => i.segment.stats
+1 => r.object.geometry
+0 => v.class.mlR
+0 => i.segment.uspo (but +1 if r.neighborhoodmatrix is included in core)
+1 => i.landsat8.*
+1 => i.spec.sam
+1 => i.edge
+1 => i.histo.match



On 30 September 2016 at 09:19, Moritz Lennert <[hidden email]> wrote:
[rest deleted]




--
Yann Chemin
Skype/FB: yann.chemin



Re: Upcoming 7.2.0: review which addons to move to core

NikosAlexandris
* Yann Chemin <[hidden email]> [2016-09-30 10:14:39 +0200]:

>[...]
>+1 => i.histo.match

i.histo.match deserves a fix to account for floats.
Too many to-dos, too little time.

Nikos

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Sören Gebbert
In reply to this post by Markus Neteler
Hi,
I would strongly suggest moving only those addons into core that have good documentation, maintainable code, and Python tests that run in the gunittest framework.

Just my 2c
Sören

2016-07-03 20:09 GMT+02:00 Markus Neteler <[hidden email]>:

Hi,

we may consider moving a few (!) mature addons to core.

Thoughts?

Markus





Re: Upcoming 7.2.0: review which addons to move to core

SBL

Sounds fair enough as requirements for new core modules. “Maintainable code” would in practice mean “the module has undergone a code review by a core developer”?

Those requirements would add to Markus' requirement of “maturity”, which I would interpret as “the module has been tested in practice and options and flags are consolidated” (so no major changes are expected / planned)...?

 

I am afraid it seems only very few of the suggested modules are covered by unit tests. Most of them have good documentation. No idea about the maintainability of the code...

 

How should we proceed with this topic? Should the named modules (and from my point of view Moritz' OBIA modules would be very welcome too) be considered as a kind of “wish list” from the community? Probably more voices would be needed, as we currently have no “download statistics” or similar measures which could tell us something about the popularity or widespread application of a module and give a reason to integrate it into core...

Where should such wishes be collected? A wiki page? Knowing of such interest might be an incentive for an addon-developer to write a test or to improve documentation...

 

Identified candidates could be added to core once they fulfill the requirements above. Would that happen only in minor releases or would that also be possible in point releases?

 

Or is that already too much formality, and should someone who wishes to see an addon in core simply discuss it on the ML?

 

Cheers

Stefan

 

From: grass-dev [mailto:[hidden email]] On Behalf Of Sören Gebbert
Sent: 30. september 2016 22:29
To: Markus Neteler <[hidden email]>
Cc: GRASS developers list <[hidden email]>
Subject: Re: [GRASS-dev] Upcoming 7.2.0: review which addons to move to core

 

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Moritz Lennert
On 01/10/16 21:25, Blumentrath, Stefan wrote:

> Sounds fair enough as requirements for new core modules. “Maintainable
> code” would in practice mean “the module has undergone a code review by a
> core developer”?
>
> Those requirements would add to Markus' requirement of “maturity”, which
> I would interpret as “the module has been tested in practice and options
> and flags are consolidated” (so no major changes are expected /
> planned)...?
>
> I am afraid it seems only very few of the suggested modules are covered
> by unit tests. Most of them have good documentation. No idea about
> the maintainability of the code...
>
> How should we proceed with this topic? Should the named modules (and
> from my point of view Moritz' OBIA modules would be very welcome too)

They definitely do not meet the stated criteria yet. No tests, and
AFAIK most of them have only been used in-house by my colleagues.

So I'm happy to have them live as addons for now.

That said, I think the requirement of tests is something I would like to
see discussed a bit more. It is a pretty heavy requirement, and many
current core modules do not have unit tests...

One thing we could think about is activating the toolbox idea a bit more
and creating a specific OBIA toolbox in addons.

> Identified candidates could be added to core once they fulfill the
> requirements above. Would that happen only in minor releases or would
> that also be possible in point releases?

Adding modules to core is not an API change, so I don't see why they
can't be added at any time. But then again, having a series of new
modules can be sufficient to justify a new minor release ;-)

> Or is that already too much formality and if someone wishes to see an
> addon in core that is simply discussed on ML?

Generally, I would think that discussion on ML is the best way to handle
this.

Moritz


Re: Upcoming 7.2.0: review which addons to move to core

NikosAlexandris
* Moritz Lennert <[hidden email]> [2016-10-02 13:24:41 +0200]:

>On 01/10/16 21:25, Blumentrath, Stefan wrote:
>> [...]
>
>They definitely do not meet the stated criteria yet. No tests, and
>AFAIK most of them have only been used in-house by my colleagues.
>
>So I'm happy to have them live as addons for now.
>
>That said, I think the requirement of tests is something I would like to
>see discussed a bit more. It is a pretty heavy requirement, and many
>current core modules do not have unit tests...

In the long run, GRASS-GIS modules deserve unit tests. I think we
should invest effort in this direction.

In this sense, I will try to integrate unit tests for every (hopefully
useful) piece of code I share in the form of a module.

Nikos


[rest deleted]

--
Nikos Alexandris | Remote Sensing & Geomatics
GPG Key Fingerprint 6F9D4506F3CA28380974D31A9053534B693C4FB3

Re: Upcoming 7.2.0: review which addons to move to core

Sören Gebbert
In reply to this post by SBL
Hi,
In my humble opinion, we should accept new modules in core only if they are covered by gunittests, and this should not only apply to addons. Every new module must have tests.

The consequence of moving addons into core is that the "core" developers have to maintain those modules. If modifications are made to the core C or Python libraries, then all modules have to be tested against these changes.

2016-10-01 21:25 GMT+02:00 Blumentrath, Stefan <[hidden email]>:

> Sounds fair enough as requirements for new core modules. “Maintainable code” would in practice mean “the module has undergone a code review by a core developer”?


Code review by developers is a good idea. Suggestion: only modules positively reviewed by two developers should be added to core. This will enhance the code quality of new modules and will give addon developers the opportunity to show their skills in developing good code.

IMHO, to keep GRASS maintainable, we have to impose this kind of restriction. There is already plenty of hard-to-maintain code in GRASS; we don't need more of it.

However, there is still the very nice and comfortable way to use g.extension to install addons.
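For example, an addon named in this thread can be installed like this (standard g.extension usage):

g.extension extension=r.geomorphon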

Best
Sören
 

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Sören Gebbert
In reply to this post by Moritz Lennert


2016-10-02 13:24 GMT+02:00 Moritz Lennert <[hidden email]>:

>
> On 01/10/16 21:25, Blumentrath, Stefan wrote:
>> [...]
>
> They definitely do not meet the stated criteria yet. No tests, and AFAIK most of them have only been used in-house by my colleagues.
>
> So I'm happy to have them live as addons for now.
>
> That said, I think the requirement of tests is something I would like to see discussed a bit more. It is a pretty heavy requirement, and many current core modules do not have unit tests...
You are very welcome to write the missing tests for core modules.

However, I don't understand the argument that because many core modules have no tests, new modules therefore don't need them. If developers of addon modules are serious about the attempt to make their modules usable and maintainable for others, then they have to implement tests. It's an integral part of the development process, and GRASS has a beautiful test environment that makes writing tests easy. Tests and documentation are part of coding, not something special. I don't think this is a hard requirement.

There is a nice statement that is not far from the truth: Untested code is broken code.

Best
Sören

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Markus Metz-3
On Sun, Oct 2, 2016 at 9:43 PM, Sören Gebbert
<[hidden email]> wrote:

> [...]
>
> You are very welcome to write the missing tests for core modules.
>
> However, I don't understand the argument that because many core modules have
> no tests, new modules therefore don't need them. If developers of addon
> modules are serious about the attempt to make their modules usable and
> maintainable for others, then they have to implement tests. It's an integral
> part of the development process, and GRASS has a beautiful test environment
> that makes writing tests easy. Tests and documentation are part of coding,
> not something special. I don't think this is a hard requirement.
>
> There is a nice statement that is not far from the truth: Untested code is
> broken code.

these gunittests only test if a module output stays the same. This
does not mean that a module output is correct. Tested code means first
of all that the code has been tested with all sorts of input data and
combinations of input data and flags. All these tests, e.g. what I did
for i.segment or r.stream.* (where I am not even the main author)
should IMHO not go into a gunittest framework because then running
gunittests will take a very long time. In short, simply adding
gunittests to addon modules is not enough, code needs to be tested
more thoroughly during development than what can be done with
gunittests.

My guess for the r.stream.* modules is at least 40 man hours of
testing to make sure they work correctly. That includes evaluation of
float usage, handling of NULL data, comparison of results with and
without the -m flag. Testing should be done with both high-res (LIDAR)
and low-res (e.g. SRTM) DEMs.

my2c

Markus M

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Martin Landa
In reply to this post by Sören Gebbert
Hi,

2016-10-02 21:27 GMT+02:00 Sören Gebbert <[hidden email]>:
> In my humble opinion, we should accept new modules in core only if they are
> covered by gunittests, and this should not only apply to addons. Every
> new module must have tests.

we should definitely have some official procedure (requirements) for
"graduating" a module into core, ideally written as a new RFC [1].
Does that make sense to you? Any volunteer to start working on a draft
of RFC #6?

Martin

[1] https://trac.osgeo.org/grass/wiki/RFC

--
Martin Landa
http://geo.fsv.cvut.cz/gwiki/Landa
http://gismentors.cz/mentors/landa

Re: Upcoming 7.2.0: review which addons to move to core

Martin Landa
In reply to this post by Markus Metz-3
Hi Markus,

2016-10-04 16:13 GMT+02:00 Markus Metz <[hidden email]>:
> My guess for the r.stream.* modules is at least 40 man hours of
> testing to make sure they work correctly. That includes evaluation of
> float usage, handling of NULL data, comparison of results with and
> without the -m flag. Testing should be done with both high-res (LIDAR)
> and low-res (e.g. SRTM) DEMs.

About the r.stream.* modules, AFAIR the major blocker for moving them to
core is the memory mode problem, right? I don't remember if there is any
ticket about that.

Martin

--
Martin Landa
http://geo.fsv.cvut.cz/gwiki/Landa
http://gismentors.cz/mentors/landa

Re: Upcoming 7.2.0: review which addons to move to core

Helmut Kudrnovsky
Martin Landa wrote:
> Hi Markus,
>
> 2016-10-04 16:13 GMT+02:00 Markus Metz <[hidden email]>:
>> My guess for the r.stream.* modules is at least 40 man hours of
>> testing to make sure they work correctly. That includes evaluation of
>> float usage, handling of NULL data, comparison of results with and
>> without the -m flag. Testing should be done with both high-res (LIDAR)
>> and low-res (e.g. SRTM) DEMs.
>
> About the r.stream.* modules, AFAIR the major blocker for moving them to
> core is the memory mode problem, right? I don't remember if there is any
> ticket about that.

At least as a comment in this ticket:

https://trac.osgeo.org/grass/ticket/2237#comment:1

Best regards,
Helmut

Re: Upcoming 7.2.0: review which addons to move to core

Sören Gebbert
In reply to this post by Markus Metz-3
Hi,

>> You are very welcome to write the missing tests for core modules.
>>
>> However, I don't understand the argument that because many core modules have
>> no tests, new modules therefore don't need them. If developers of addon
>> modules are serious about the attempt to make their modules usable and
>> maintainable for others, then they have to implement tests. It's an integral
>> part of the development process, and GRASS has a beautiful test environment
>> that makes writing tests easy. Tests and documentation are part of coding,
>> not something special. I don't think this is a hard requirement.
>>
>> There is a nice statement that is not far from the truth: Untested code is
>> broken code.

> these gunittests only test if a module output stays the same. This

This is simply wrong, please read the gunittest documentation.

> does not mean that a module output is correct. Tested code means first
> of all that the code has been tested with all sorts of input data and
> combinations of input data and flags. All these tests, e.g. what I did

The gunittest framework is designed to do exactly that. It has plenty of
methods to validate the output of modules, ranging from key/value
validation, over statistical analysis of the output, to md5 checksum
validation for raster, 3D raster, vector and binary/text file output. It
can test floating point output to a specific precision, to avoid rounding
errors or to account for the variability of a random number based
algorithm like random forest or boosted regression trees.
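For illustration, a minimal sketch of such a gunittest (the module, map
names and reference values below are only illustrative, they are not
taken from an actual testsuite):

# Sketch of a gunittest that validates module output against
# hand-checked statistics, to a given floating point precision.
from grass.gunittest.case import TestCase
from grass.gunittest.main import test


class TestSlopeOutput(TestCase):
    @classmethod
    def setUpClass(cls):
        # use a temporary region so the test does not alter user settings
        cls.use_temp_region()
        cls.runModule("g.region", raster="elevation")

    @classmethod
    def tearDownClass(cls):
        cls.del_temp_region()

    def test_slope_statistics(self):
        # the module must run successfully with these options
        self.assertModule("r.slope.aspect", elevation="elevation",
                          slope="slope", overwrite=True)
        # validate univariate statistics of the output raster
        # (illustrative reference values, validated by hand)
        self.assertRasterFitsUnivar(raster="slope",
                                    reference=dict(min=0.0, max=38.69,
                                                   mean=3.86),
                                    precision=0.01)


if __name__ == "__main__":
    test()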
 
> for i.segment or r.stream.* (where I am not even the main author)
> should IMHO not go into a gunittest framework because then running
> gunittests will take a very long time. In short, simply adding
> gunittests to addon modules is not enough, code needs to be tested
> more thoroughly during development than what can be done with
> gunittests.

The gunittest for the v.stream.order addon is an example of how it's done:
https://trac.osgeo.org/grass/browser/grass-addons/grass7/vector/v.stream.order/testsuite/test_stream_order.py

You can write gunittests that test every flag, every option, their
combinations and any output of a module. I have implemented plenty of
tests that check for correct error handling. Writing tests is effort, but
you have to do it anyway. Why not implement a gunittest for every feature
while developing the module?

> My guess for the r.stream.* modules is at least 40 man hours of
> testing to make sure they work correctly. That includes evaluation of
> float usage, handling of NULL data, comparison of results with and
> without the -m flag. Testing should be done with both high-res (LIDAR)
> and low-res (e.g. SRTM) DEMs.

Tests can be performed on artificial data that tests all aspects of the
algorithm. Tests that show the correctness of the algorithm for specific
small cases should be preferred. However, large data should not be an
obstacle to writing a test.

Best regards
Soeren


[rest deleted]



Re: Upcoming 7.2.0: review which addons to move to core

Markus Metz-3
On Tue, Oct 4, 2016 at 5:42 PM, Sören Gebbert
<[hidden email]> wrote:

> Hi,
>> [...]
>>
>> these gunittests only test if a module output stays the same. This
>
>
> This is simply wrong, please read the gunittest documentation.

but then why does
>
> The gunittest for the v.stream.order addon is an example how its done:
> https://trac.osgeo.org/grass/browser/grass-addons/grass7/vector/v.stream.order/testsuite/test_stream_order.py

assume certain order numbers for features 4 and 7? What if these order
numbers are wrong?

Recently I fixed bugs in r.stream.order, related to stream length
calculations which are in turn used to determine stream orders. The
gunittest did not pick up 1) the bugs, 2) the bug fixes.

>
> You can write gunittests that test every flag, every option, their
> combinations and any output of a module. I have implemented plenty of
> tests that check for correct error handling. Writing tests is effort, but
> you have to do it anyway. Why not implement a gunittest for every feature
> while developing the module?
>>
>>
>> My guess for the r.stream.* modules is at least 40 man hours of
>> testing to make sure they work correctly. That includes evaluation of
>> float usage, handling of NULL data, comparison of results with and
>> without the -m flag. Testing should be done with both high-res (LIDAR)
>> and low-res (e.g. SRTM) DEMs.
>
>
> Tests can be performed on artificial data that tests all aspects of the
> algorithm. Tests that show the correctness of the algorithm for specific
> small cases should be preferred. However, large data should not be an
> obstacle to write a test.

I agree, for tests during development, not for gunittests.

From the examples I read, gunittests expect a specific output. If the
expected output (obtained with an assumed correct version of the
module) is wrong, the gunittest is bogus. gunittests are ok to make
sure the output does not change, but not ok to make sure the output is
correct. Two random examples are r.stream.order and r.univar.

Markus M

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Anna Petrášová
On Tue, Oct 4, 2016 at 4:22 PM, Markus Metz
<[hidden email]> wrote:

> [...]
> From the examples I read, gunittests expect a specific output. If the
> expected output (obtained with an assumed correct version of the
> module) is wrong, the gunittest is bogus. gunittests are ok to make
> sure the output does not change, but not ok to make sure the output is
> correct. Two random examples are r.stream.order and r.univar.


I am not sure why we are discussing this; it's pretty obvious that
gunittests can serve to a) test inputs/outputs, b) catch changes in
results (whether correct or incorrect), and c) test the correctness of
results. It just depends on how you write them, and yes, for some modules
c) is more difficult to implement than for others.

Anna

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Sören Gebbert
In reply to this post by Markus Metz-3


2016-10-04 22:22 GMT+02:00 Markus Metz <[hidden email]>:
> On Tue, Oct 4, 2016 at 5:42 PM, Sören Gebbert <[hidden email]> wrote:
>> [...]
>
> but then why does
>
>> The gunittest for the v.stream.order addon is an example of how it's done:
>> https://trac.osgeo.org/grass/browser/grass-addons/grass7/vector/v.stream.order/testsuite/test_stream_order.py
>
> assume certain order numbers for features 4 and 7? What if these order
> numbers are wrong?

The checked order numbers are validated by hand. The test example is based on artificial data that I have created, for which I know what the correct order numbers are. Hence I can test whether certain features have specific order numbers, since I know the correct solution.

> Recently I fixed bugs in r.stream.order, related to stream length
> calculations which are in turn used to determine stream orders. The
> gunittest did not pick up 1) the bugs, 2) the bug fixes.

Then better test implementations are required that check for correct output.
If a bug was found, a test should be written to check the bugfix.
Have a look at this commit that adds two new tests to validate the provided bugfix:

A one-line bugfix and 50 lines of test code. :)
 

>> [...]

> I agree, for tests during development, not for gunittests.
>
> From the examples I read, gunittests expect a specific output. If the
> expected output (obtained with an assumed correct version of the
> module) is wrong, the gunittest is bogus. gunittests are ok to make
> sure the output does not change, but not ok to make sure the output is
> correct. Two random examples are r.stream.order and r.univar.

I don't understand your argument here, or I have a fundamental problem in understanding the testing topic.

You have to implement a test that checks for correct output; this is the meaning of a test. You have to design a test scenario from which you know what the correct solution is, and then you test for the correct solution. What about r.univar? Create a test map with a specific number of cells with specific values and test if r.univar is able to compute the correct values that you have validated by hand.

-- what is the mean, min and max of 10 cells each with value 5? It's 5! --

The simplest check for that is the raster range check in gunittest. If you know what the range of the resulting raster map has to be, then you can test for it. If this is not enough, you can check against the univariate statistics output of the raster map, since you know for sure what the result has to be for min, mean, median, max and so on. If this is not sufficient, use r.out.ascii and check against the correct solution that you have created beforehand. If even that is not sufficient, use pygrass and inspect each raster cell of the resulting output map.
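A minimal sketch of that r.univar scenario (the map name "const5" is made
up for illustration; assertModuleKeyValue is one of the gunittest
validation methods mentioned above):

# Sketch: create an artificial constant map whose statistics are known
# by construction, then check r.univar against them.
from grass.gunittest.case import TestCase
from grass.gunittest.main import test


class TestUnivarConstantMap(TestCase):
    @classmethod
    def setUpClass(cls):
        cls.use_temp_region()
        # a 1x10 region: 10 cells, each set to the value 5
        cls.runModule("g.region", n=1, s=0, w=0, e=10, res=1)
        cls.runModule("r.mapcalc", expression="const5 = 5.0",
                      overwrite=True)

    @classmethod
    def tearDownClass(cls):
        cls.runModule("g.remove", type="raster", name="const5", flags="f")
        cls.del_temp_region()

    def test_constant_statistics(self):
        # mean, min and max of 10 cells each with value 5 must all be 5
        self.assertModuleKeyValue(
            "r.univar", map="const5", flags="g",
            reference=dict(n=10, min=5, max=5, mean=5, stddev=0),
            precision=1e-9, sep="=")


if __name__ == "__main__":
    test()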

Best regards
Soeren


[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Markus Metz-3
On Tue, Oct 4, 2016 at 11:02 PM, Sören Gebbert
<[hidden email]> wrote:
>
>
> 2016-10-04 22:22 GMT+02:00 Markus Metz <[hidden email]>:
>>
>> Recently I fixed bugs in r.stream.order, related to stream length
>> calculations which are in turn used to determine stream orders. The
>> gunittest did not pick up 1) the bugs, 2) the bug fixes.

Sorry for the confusion: r.stream.order does not have any testsuite,
only v.stream.order has.

>>
>> I agree, for tests during development, not for gunittests.
>>
>> From the examples I read, gunittests expect a specific output. If the
>> expected output (obtained with an assumed correct version of the
>> module) is wrong, the gunittest is bogus. gunittests are ok to make
>> sure the output does not change, but not ok to make sure the output is
>> correct. Two random examples are r.stream.order and r.univar.
>
>
> I don't understand your argument here, or I have a fundamental problem in
> understanding the testing topic.
>
> You have to implement a test that checks for correct output; this is the
> meaning of a test.

Exactly. During development, however, you need to run many more tests
until you are confident that the output is correct. Then you submit
the changes. My point is that modules need to be tested thoroughly
during development (which is not always the case), and a testsuite to
make sure that the output matches expectations is nice to have. In
most cases,

# run the module and abort if it exits with a non-zero status
<module_name> <args>
if [ $? -ne 0 ] ; then
  echo "ERROR: Module <module_name> failed"
  exit 1
fi

should do the job

no offence ;-)

> You have to design a test scenario from which you know
> what the correct solution is, and then you test for the correct solution.
> What about r.univar? Create a test map with a specific number of cells
> with specific values and test if r.univar is able to compute the correct
> values that you have validated by hand.
>
> -- what is the mean, min and max of 10 cells each with value 5? It's 5! --

what is the correct standard deviation? sqrt((1/n) * SUM((x - mu)^2)) or
sqrt((1/(n - 1)) * SUM((x - mu)^2))?

r.univar uses sqrt((1/n) * SUM((x - mu)^2)), but sqrt((1/(n - 1)) *
SUM((x - mu)^2)) might be more appropriate because you could argue that
raster maps are always a sample. Apart from that, r.univar uses a
one-pass method to calculate stddev, which is debatable.
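For the record, the difference in plain Python (nothing GRASS-specific,
just the two textbook definitions applied to the constant-map example
from above):

# population vs. sample standard deviation for the same data
import math

values = [5.0] * 10
n = len(values)
mu = sum(values) / n
ss = sum((x - mu) ** 2 for x in values)

population_stddev = math.sqrt(ss / n)      # the 1/n variant (r.univar)
sample_stddev = math.sqrt(ss / (n - 1))    # the 1/(n - 1) variant

print(population_stddev, sample_stddev)    # both 0.0 for constant input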

Markus M

[rest deleted]

Re: Upcoming 7.2.0: review which addons to move to core

Sören Gebbert


2016-10-06 21:26 GMT+02:00 Markus Metz <[hidden email]>:
> On Tue, Oct 4, 2016 at 11:02 PM, Sören Gebbert <[hidden email]> wrote:
>> [...]
>
> Exactly. During development, however, you need to run many more tests
> until you are confident that the output is correct. Then you submit
> the changes. My point is that modules need to be tested thoroughly
> during development (which is not always the case), and a testsuite to
> make sure that the output matches expectations is nice to have. In
> most cases,

Implement all tests as gunittests while developing a module, and you will have
a testsuite in the end. You have to implement the tests anyway, so why not use
gunittests from the beginning, as part of the development process?

If you implement a Python library, then use doctests to document and check functions and classes while
developing them. These doctests are part of the documentation and excellent examples of how to use
a specific function or class. And by pure magic, you will have a testsuite in the end.
Have a look at PyGRASS: tons of doctests that are code examples and validation tests
at the same time.
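A tiny sketch of that doctest style (the function itself is made up for
illustration, it is not from PyGRASS):

def stream_order_label(order):
    """Return a human-readable label for a stream order.

    >>> stream_order_label(1)
    'order 1'
    >>> stream_order_label(3)
    'order 3'
    """
    return "order %d" % order

if __name__ == "__main__":
    import doctest
    doctest.testmod()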

> # run the module and abort if it exits with a non-zero status
> <module_name> <args>
> if [ $? -ne 0 ] ; then
>   echo "ERROR: Module <module_name> failed"
>   exit 1
> fi
>
> should do the job

Nope. 

> no offence ;-)

>> You have to design a test scenario from which you know
>> what the correct solution is, and then you test for the correct solution.
>> What about r.univar? Create a test map with a specific number of cells
>> with specific values and test if r.univar is able to compute the correct
>> values that you have validated by hand.
>>
>> -- what is the mean, min and max of 10 cells each with value 5? It's 5! --

> what is the correct standard deviation? sqrt((1/n) * SUM((x - mu)^2)) or
> sqrt((1/(n - 1)) * SUM((x - mu)^2))?

If you decide to use the first version, then implement tests for the first version.
If you decide to use the second version, then ... .
If you decide to support both versions, then implement tests for both versions.
 
> r.univar uses sqrt((1/n) * SUM((x - mu)^2)), but sqrt((1/(n - 1)) *
> SUM((x - mu)^2)) might be more appropriate because you could argue that
> raster maps are always a sample. Apart from that, r.univar uses a
> one-pass method to calculate stddev, which is debatable.

If you decide to implement a specific version of stddev, then write a test for it.
Debating which version is more appropriate has nothing to do with the actual software development process.

Best regards
Soeren
 

[rest deleted]

