Bulk operations and iterating over large queries
To reduce the amount of memory used by peewee when iterating over a query, use the iterator() method. When iterating over a large number of rows that contain columns from multiple tables, peewee will reconstruct the model graph for each row returned. For example, if we were selecting a list of tweets along with the username and avatar of each tweet's author, peewee would have to create two objects for each row (a tweet and a user).
In addition to the row types described above, there is a fourth kind of result: an aggregate row, for example one containing the count of tweets for each user.
JPA 2.1 added a number of nice features to the specification.
One of them is the support for bulk update and delete operations in the Criteria API.
From my point of view, this is a small but great enhancement that allows us to use the Criteria API in even more situations. If you want to try it yourself, you can use any JPA 2.1 implementation, such as Hibernate or EclipseLink.
You can find the source code of the examples in my GitHub repo.
SQLite in particular typically has a limit of 999 bound variables per query, so the maximum batch size is roughly 999 divided by the number of columns per row.
Rather than inserting rows one at a time in a loop (do not do this — it is very slow!), write a loop that batches your data into chunks, and it is strongly recommended that you wrap the inserts in a transaction.
The new CriteriaUpdate and CriteriaDelete interfaces add the missing bulk update and delete operations to the Criteria API. The CriteriaUpdate interface can be used to implement bulk update operations. But be careful: these operations are mapped directly to database update operations, and therefore the persistence context is not synchronized with their results. The example of a Criteria Delete operation looks similar to the usage of the Criteria Query known from JPA 2.0 and the Criteria Update operation described above — I don't think it needs much explanation.
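As a sketch of the JPA 2.1 API (the Order entity, its status field, and the EntityManager em are assumptions chosen for illustration, not code from this article), a bulk Criteria Update and a bulk Criteria Delete look like this:

```java
// Assumes an open EntityManager "em" and a mapped entity Order
// with a persistent String field "status".
CriteriaBuilder cb = em.getCriteriaBuilder();

// Bulk update: UPDATE Order SET status = 'shipped' WHERE status = 'paid'
CriteriaUpdate<Order> update = cb.createCriteriaUpdate(Order.class);
Root<Order> o = update.from(Order.class);
update.set(o.get("status"), "shipped");
update.where(cb.equal(o.get("status"), "paid"));
int updated = em.createQuery(update).executeUpdate();

// Bulk delete: DELETE FROM Order WHERE status = 'cancelled'
CriteriaDelete<Order> delete = cb.createCriteriaDelete(Order.class);
Root<Order> d = delete.from(Order.class);
delete.where(cb.equal(d.get("status"), "cancelled"));
int deleted = em.createQuery(delete).executeUpdate();
```

Both statements run directly against the database; entities already loaded into the persistence context are not refreshed, so it is safest to run bulk operations at the start of a transaction or to detach affected entities afterwards.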