I am pleased to announce that cl-mysql 0.2 is ready for use!
Here are some of the highlights of v0.2:
- Connection pooling – Thread-safe allocation and release of connections from a central pool.
- Use result/Store result – Support for mysql_use_result as well as mysql_store_result, which means that CL-MYSQL should be able to process very large datasets without running out of memory.
- Convenience functions/macros – with-rows / nth-row
The main difference between v0.1 and v0.2 is that version 0.1 didn't really manage its connections. I decided that making the user choose between pooled and non-pooled connections was a hassle. It is much better to let users create as many connection pools as they want, and to let them specify the maximum and minimum number of connections each pool can hold. After all, a single connection is simply the special case of a pool that holds only one connection.
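As a sketch of what creating such a pool looks like (the :min-connections and :max-connections keyword names, and the other argument values, are my assumptions about the connect signature rather than quotes from the manual):

```lisp
;; Create a pool of 1..10 connections; keyword names are assumed.
(defvar *pool*
  (connect :host "localhost"
           :user "me"
           :password "secret"
           :database "test"
           :min-connections 1
           :max-connections 10))
```

A single-connection setup is then just the same call with both limits set to 1.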
In theory, however, this could hurt performance when performing a large number of INSERTs or UPDATEs, because every call would require the connection pool to be locked and a connection to be acquired. This can be overcome by making use of the fact that CL-MYSQL will correctly pass multiple statements to the server, so you can concatenate a large string of updates and execute them all at once.
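For example, a batch of inserts can be built up as one multi-statement string and sent in a single call (the table, column, and values here are purely illustrative):

```lisp
;; Batch many INSERTs into one round-trip: build a single
;; semicolon-separated string and hand it to query once.
(query (with-output-to-string (s)
         (dolist (name '("alice" "bob" "carol"))
           (format s "INSERT INTO users (name) VALUES ('~A');"
                   name))))
```

This acquires a pooled connection once for the whole batch instead of once per statement.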
The good news is that the API has changed only very slightly, in the optional arguments it accepts. However, I have changed the way result data comes back from query. Because CL-MYSQL returns multiple result sets, it is necessary to place all of them into a sequence. Additionally, I did not like the way I was placing the column headers into the first item of the result data, because it meant you always had to allow for it. I considered doing it the way that CLSQL does, returning the column data in a value struct, but I find this awkward to manage: every layer of the API (and client code) must multiple-value-bind the columns out and either repackage them as a sequence or create a new value structure to pass them up the call chain.
Therefore I have changed the result sequence structure to be as follows:
query-result ::= (&lt;result-set&gt;*)
result-set   ::= (&lt;result-data&gt; &lt;column-data&gt;)
result-data  ::= (&lt;row&gt;*) | &lt;rows-affected&gt;
row          ::= (&lt;column&gt;*)
column-data  ::= ((&lt;column-name&gt; &lt;column-type&gt; &lt;column-flags&gt;)*)
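Concretely, a single SELECT's result can be picked apart with ordinary list operations. The table, values, and the exact representation of the column names/types below are illustrative, not taken from the library:

```lisp
;; Suppose (query "SELECT id, name FROM users") returned this
;; (shape follows the grammar above; contents are made up):
(defparameter *result*
  '((((1 "alice") (2 "bob"))                    ; result-data: the rows
     ((:id :long 0) (:name :varchar 0)))))      ; column-data

(let* ((result-set  (first *result*))    ; the first <result-set>
       (result-data (first result-set))  ; its rows
       (column-data (second result-set)))
  (first (second result-data))      ; id column of the second row
  (mapcar #'first column-data))     ; the column names
```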
I appreciate that this is a little complex. I did consider turning the result data into a struct, but that complicates how the user processes the data. For this reason I have added with-rows and nth-row to simplify the processing of this result data.
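I am guessing at the exact lambda lists of these helpers, so treat the following as an illustrative sketch rather than documentation:

```lisp
;; Hypothetical usage; the argument order of with-rows and
;; nth-row is assumed here, not quoted from the manual.
(with-rows (row (query "SELECT id, name FROM users"))
  (format t "~A~%" row))

;; Presumably returns the first row of the first result set.
(nth-row (query "SELECT id, name FROM users") 0)
```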
Finally, the whole thing is still only SBCL/x86 Linux compatible, that might change :-).
More information is available here. As always, any feedback is appreciated.