Here are my comments after a surface scan (I did not test):
1/ First, thanks for restoring compatibility with the original 7.0 API. However, the way price_get_multi() now works is not equivalent to the original code. As I understand it, price_get_multi() worked as follows:
- IN: `pricelist_ids`: list of pricelists to compute
- IN: `products_by_qty_by_partner`: list of (product, qty, partner) tuples to compute, with each product supposed to be provided only once
- OUT: { prod_id1: {pricelist_id1: price, pricelist_id2: price},
         prod_id2: {pricelist_id1: price, pricelist_id2: price},
         (...)
         'item_id': id_of_last_pricelist_line_matched }
where the returned prices match the qty and partner that were provided *in the same tuple*.
Yes this API is nonsense (especially the 'item_id') and due to debatable past refactorings, but that's how it is :-(
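To make the contract above concrete, here is a made-up illustration (ids and prices are invented; the `'item_id'` key is the legacy quirk mentioned above):

```python
# Two input tuples, two pricelists; each product is priced with the
# qty/partner from its own tuple. All values are invented for illustration.
products_by_qty_by_partner = [(12, 1.0, None), (13, 5.0, 42)]
pricelist_ids = [1, 2]

expected_shape = {
    12: {1: 10.0, 2: 9.5},   # product 12 priced for qty=1.0, partner=None
    13: {1: 50.0, 2: 47.5},  # product 13 priced for qty=5.0, partner=42
    'item_id': 7,            # id of the last pricelist line matched (the quirk)
}
```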
With the current state of this merge proposal, it seems that `products_by_qty_by_partner` is only used to get the list of products to compute, while the qty and partner are forced to the ones provided as keyword arguments. So it will only work if the partner and qty are the same for all products being computed.
I understand that it is hard to optimize the query for fetching the `product.pricelist.item`s if the qty and partner can be different for each product, so maybe we should not do it that way.
Could we instead do the following:
- the price_get_batch() method becomes responsible for computing the prices of several products for a single `qty` and `partner`, and all the code moves into it. The signature would be:
  def price_get_batch(self, cr, uid, ids, prod_ids, qty, partner=None, context=None)
- the price_get_multi() method keeps the original 7.0 signature (no extra parameter), but it groups all entries from `products_by_qty_by_partner` with the same `qty` and `partner` and passes all the groups to price_get_batch().
- this way the API still works exactly as before and the optimization works too because the POS will call it with qty=1 and partner=None for all products, so there will only be one group to pass to price_get_batch().
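The grouping step in price_get_multi() could look like this framework-free sketch (the function name is illustrative; the real code would live on the pricelist model and delegate each group to price_get_batch()):

```python
from collections import defaultdict

def group_products_by_qty_and_partner(products_by_qty_by_partner):
    """Group (product_id, qty, partner) tuples by (qty, partner) so that
    each group can be handed to a single price_get_batch() call."""
    groups = defaultdict(list)
    for product_id, qty, partner in products_by_qty_by_partner:
        groups[(qty, partner)].append(product_id)
    return dict(groups)

# POS case: same qty/partner for every product -> exactly one group,
# hence a single price_get_batch() call.
pos_input = [(12, 1.0, None), (13, 1.0, None), (14, 1.0, None)]
assert group_products_by_qty_and_partner(pos_input) == {(1.0, None): [12, 13, 14]}
```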
2/ What is the point of adding the extra argument `pricelist_version_ids` to price_get_batch() and price_get_multi()? It forces you to duplicate the search() for versions in product._product_price() and in pricelist.price_get(). On the other hand, it seems to be computed only once, so it could still be done at the beginning of price_get_multi(). Also, why does price_get() now ignore the `date` in context?
4/ The big `if` with the recursive query should probably be moved to a separate private method that abstracts the logic of computing res_per_prod, i.e.:
  res_per_prod = self._pricelist_table(cr, uid, pricelist_ids, products, partner, qty, ...)
The _pricelist_table() method could then be split into 2 methods _pricelist_table_pg83() and _pricelist_table_pg84() that implement each variant of the query.
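The dispatch itself can be a one-liner, since psycopg2 exposes the server version as an integer (80400 for PostgreSQL 8.4.0) and `WITH RECURSIVE` only exists from 8.4 on. A trivial sketch of the version check (how to fetch the version from the wrapped cursor is left out, and the method names are the ones proposed above):

```python
def pick_pricelist_query_variant(pg_server_version):
    """Return the name of the query method to use, based on psycopg2's
    integer server version (e.g. 80400 for PostgreSQL 8.4.0)."""
    if pg_server_version >= 80400:
        return '_pricelist_table_pg84'  # can use WITH RECURSIVE
    return '_pricelist_table_pg83'      # fallback without recursive CTEs

assert pick_pricelist_query_variant(80401) == '_pricelist_table_pg84'
assert pick_pricelist_query_variant(80306) == '_pricelist_table_pg83'
```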
5/ (not in your code, was already present) l.433-449: the logic to find the proper supplier for a given product should not be hardcoded; the product.product model has a `seller_id` function field that provides this info, so it should be taken from there instead of recomputed. BTW the `uom_id` (l.435) should also come from the `products_dict` cache rather than being read() again.
6/ l.530-557: I'm not sure why select=1 was added to the following field definitions: `price_discount`, `price_min_margin`, `price_max_margin`, `base_pricelist_id`. They don't seem to be used in search() domains or in WHERE clauses, so the index only adds maintenance cost without any visible benefit. On the other hand it seems that `categ_id` and `min_quantity` could be indexed, but I'm really not sure it would make any difference. It's also possible to use a composite or partial index for criteria that are often used together - but that's probably out of scope here.
7/ l.635: product.price_get(): adding the extra `products` parameter is OK, but the code is now very strange, as `product` can refer either to a browse_record or to a dict. This is very error-prone and forces careful handling of all m2o values. This *will* cause bugs in the future. You should perhaps replace the browse() with a read() and always work with dicts. Also be sure to add a big warning in the code to avoid errors in the future - developers will tend to rewrite it with browse_records.
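If you go the dict-only route, a small helper makes the m2o handling explicit: read() returns many2one fields as an (id, name) tuple, or False when unset. Hypothetical sketch with made-up data:

```python
def m2o_id(value):
    """Return the database id of a many2one value as returned by read():
    an (id, name) tuple, a bare id, or False when the field is unset."""
    if not value:
        return False
    if isinstance(value, (tuple, list)):
        return value[0]
    return value  # already a bare id

# Example with invented read() output:
product = {'id': 12, 'uom_id': (3, 'Unit(s)'), 'seller_id': False}
assert m2o_id(product['uom_id']) == 3
assert m2o_id(product['seller_id']) is False
```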
8/ Finally, rewriting this algorithm would be so much easier if we had a unit test that calls price_get_multi() with a list of various (product, qty, partner) tuples and checks that the correct pricelist item is used to compute the results. Too bad we don't have that :-( Perhaps you would save some time by writing a quick Python or YAML test for that?
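A pure-Python skeleton of such a test could look like this (the stub below stands in for the real pricelist model; in an actual OpenERP test the ids would come from demo data and the assertions would check real prices and the matched item):

```python
import unittest

class TestPriceGetMulti(unittest.TestCase):
    def _price_get_multi(self, pricelist_ids, products_by_qty_by_partner):
        # Stand-in for pricelist.price_get_multi(); replace with the real
        # ORM call in an actual test. Returns a flat 1.0 for every price.
        return dict(
            [(prod, dict((pl, 1.0) for pl in pricelist_ids))
             for prod, qty, partner in products_by_qty_by_partner])

    def test_each_product_priced_with_its_own_tuple(self):
        res = self._price_get_multi([1, 2], [(12, 1.0, None), (13, 5.0, 42)])
        # Every requested product must be present, priced on every pricelist.
        self.assertEqual(sorted(res), [12, 13])
        self.assertEqual(set(res[12]), set([1, 2]))
        self.assertEqual(set(res[13]), set([1, 2]))
```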
Misc nit-picking:
- l.109: leftover debugging import
- l.123: leftover docstring change
- l.179: that `for` loop would be much more readable as:
product_ids = [p[0] for p in products_by_qty_by_partner]
- l.200: the type-testing on product_data seems useless, as `products` is always a list?