shared API for double-entry accounting, treating it as a 'math' library.

Bradley M. Kuhn bkuhn at sfconservancy.org
Thu Nov 14 09:02:21 EST 2013


Chris Travers wrote at 05:49 (EST):

> 1. Complexity of input data. Financial transactions are complex, and
> they get more complex in certain kinds of environments. In the
> for-profit world, I would expect a grocery store receipt to add up to
> maybe a hundred GL line items.

So, I agree that the overall picture of a financial transaction is
complex, but think about how much of that is *not* the actual
double-entry accounting data.  Taking your grocery store receipt
example, if we ignore the other parts of the question, it's just (I
apologize for the Ledger-CLI-y syntax):

2013-11-14 John Q. Customer
     Income:Gross Sales                  $-2.99
     Income:Sales Tax Collected          $-0.26
     Expenses:Sales Tax                   $0.26
     Inventory                           1 BoxedCereal {=$2.99}

...repeated for every item bought....

This is a lot of data, no question, but it's not complex.  Obviously,
I've eliminated much of the detail that one might want to keep.  That
"John Q. Customer" should perhaps be a customer ID that (in SQL land)
would index into a record with lots more data.  BoxedCereal might be an
inventory code instead of a moniker like it is in Ledger-CLI.

The point is that *just* the computation of double-entry accounting
can be separated out into something more basic.
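For illustration, here's a minimal sketch of what that more basic core
might look like (all the names here are my own invention, and amounts
are integer cents to sidestep floating-point issues); the one real
invariant is that a transaction's postings sum to zero:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Posting:
    account: str  # opaque identifier; the library never interprets it
    amount: int   # signed amount in minor units (cents)

def is_balanced(postings):
    """The core double-entry invariant: all postings sum to zero."""
    return sum(p.amount for p in postings) == 0

# The boxed-cereal receipt from above, stripped to pure accounting data.
receipt = [
    Posting("Income:Gross Sales", -299),
    Posting("Income:Sales Tax Collected", -26),
    Posting("Expenses:Sales Tax", 26),
    Posting("Inventory", 299),
]
assert is_balanced(receipt)
```

Everything else on the receipt (the customer record, the inventory
detail) stays outside this core and enters only as opaque identifiers.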


> 2.  Pervasiveness of data. Financial transactions touch everything
> and they tend to be connected to everything. Managing that
> connectedness is a significant challenge of a sort of loosely coupled
> approach. An API would need to be able to handle that.

I'm not convinced of that.  In plenty of places, these things would be
identifiers that have to index into something somewhere else, but the
double-entry library just doesn't care about that.

I realize the annoying nature of having to talk through an API to
get to double-entry data....

> 3. Integration of data. Oftentimes it is important to be able to do
> complex reporting.  Oftentimes one needs to tie line items of a
> transaction back to some other data. For example, you might want to
> find out "how has our spending on scalpels we ship to clinics in
> Africa compared to our spending on insulin?"

... and this problem is a real one.  If you "know" it's a database
underneath, why not just do a join instead of going through the API?
So, I can imagine that keeping people "to" the API would be the toughest
problem.
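That temptation is easy to picture: once someone knows the schema, the
scalpels-versus-insulin report is one query away (the table and column
names below are invented purely for illustration):

```python
# A hypothetical reporting query that reaches straight into the
# database, bypassing the double-entry API entirely.
bypass_query = """
SELECT p.name, SUM(l.amount) AS spent
FROM gl_lines AS l
JOIN projects AS p ON p.id = l.project_id
WHERE p.name IN ('scalpels', 'insulin')
GROUP BY p.name
"""
```

Once queries like this exist in application code, the schema is
effectively part of the public interface, API or no API.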

 
> Having said all of this, if you had an API that submitted a JSON
> document and received one back with line identifiers, etc. populated,
> then it should be manageable enough. There are naturally limits to
> this kind of modularity due to the complexity of the data being stored
> and retrieved on both sides.
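The round trip Chris describes might look roughly like this (the field
names here are purely hypothetical, not any existing API):

```python
import json

# Hypothetical submission: a transaction with no server-assigned IDs.
submission = {
    "date": "2013-11-14",
    "description": "John Q. Customer",
    "lines": [
        {"account": "Income:Gross Sales", "amount": "-2.99"},
        {"account": "Assets:Cash", "amount": "2.99"},
    ],
}

# Hypothetical response: the same document with identifiers populated
# by the accounting service.
response = {
    "transaction_id": "txn-1001",
    "lines": [
        {"line_id": "ln-1", "account": "Income:Gross Sales", "amount": "-2.99"},
        {"line_id": "ln-2", "account": "Assets:Cash", "amount": "2.99"},
    ],
}

print(json.dumps(response, indent=2))
```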

See, I think the issue is *volume*, not complexity.  You can do these
complex queries, get a big honking JSON document back, and then parse
through it.  Queries that would take seconds now take minutes.  That's
the real issue I see with the idea, and I don't have an idea of how to
solve it.
-- 
Bradley M. Kuhn, Executive Director, Software Freedom Conservancy

