cl-mysql on github

I’ve seen the future and it is git. Some believe Mercurial is the way because it is more accessible, but I’m not so sure.

We’re programmers, we love this esoteric shit. The more hardcore the better. IMHO, the force of numbers, and its pedigree, will probably make the big fat git wade through the opposition like the monster it is.

Anyway, for a few months now I’ve been seeing this github thing linked here and there, so I figured it was about time I checked it out.

I went, I saw, and it was good. Fast, easy to use, and beautifully in the spirit of free software (you only have to pay to use it for private repositories).

So in the spirit of free lovecode I’ve added cl-mysql.

Embedding Lisp in Mono

I have been thinking about a new project that I wanted to write with Mono. One of the things I realised I would need was the ability to execute arbitrary code (user or wizard generated) that might be stored in a database.

Code … data … code.

A very dim light went on in my head telling me that that sounded very much like a Lispy problem.

However, I particularly wanted to use Mono (and not Java, say) for the solution and so I started writing my own interpreter. After about 4 hours of toil I realised that maybe I should finish reading Lisp in Small Pieces before getting carried away. But the urge to create was large, so I went looking for a Scheme interpreter written in .NET and found S#.

10 minutes of fiddling in MonoDevelop later and I had an R4RS interpreter running in .NET on Linux. I heart open source …

stkni@ubuntu:~/source/SSharp-Mono-0.38a/SSharp.Console/bin/Debug$ ./SSharp.Console.exe 

 This is: SharpScheme v0.38 - R4RS compatible Scheme interpreter.
 A pure C# port of Peter Norvig's Scheme for the .NET Framework.
 License here: http://www.norvig.com/license.html 

 Running on: Unix  

>>> (+ 1 2)                   
3

cl-mysql v0.2

I am pleased to announce that cl-mysql 0.2 is ready for use!

Here are some of the highlights of v0.2:

  • Connection pooling – Thread-safe allocation and release of connections from a central pool.
  • Use result/Store result – The ability to use mysql_use_result as well as mysql_store_result. This means that CL-MYSQL should be able to handle the processing of very large datasets without running out of memory.
  • Convenience functions/macros – with-rows / nth-row

The main difference between v0.1 and v0.2 is that version 0.1 didn’t really manage its connections. I decided that making the user choose between pooled and non-pooled connections was a hassle. Much better, then, to allow the user to create as many connection pools as they want, and let them specify the maximum and minimum number of connections that each pool can hold. After all, a single connection is simply a special case of a pool with only one connection.
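
In practice that looks something like the sketch below; the connection details are illustrative and the :min-connections / :max-connections keyword names are assumed from the description above rather than quoted from the API:

;; A sketch only: host/user/database values are made up, and the
;; :min-connections / :max-connections keyword names are assumptions.
(cl-mysql:connect :host "localhost"
                  :user "me"
                  :password "secret"
                  :database "test"
                  :min-connections 1
                  :max-connections 10)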

In theory, however, this could hurt performance when attempting a large number of INSERTs/UPDATEs, because every call would require the connection pool to be locked and a connection to be acquired. This could be overcome, though, by making use of the fact that CL-MYSQL will correctly pass multiple statements to the server: you could concatenate a large string of updates and execute them all at once.
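
For example, something along these lines (the table is hypothetical; this is a sketch, not benchmarked code):

;; Batch many statements into a single query call so that the pool
;; is locked, and a connection acquired, only once.
(cl-mysql:query
 (with-output-to-string (s)
   (dotimes (i 1000)
     (format s "INSERT INTO t (x) VALUES (~D);" i))))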

The good news is that the API has changed only very slightly, in the optional arguments it accepts. However, I have changed the way the result data comes back from query. Because CL-MYSQL returns multiple result sets, it’s necessary to place all of them into a sequence. Additionally, I did not like the way I was placing the column headers into the first item of the result data: it means you always have to allow for it. I considered doing it the way that CLSQL does, returning the column data as a second value, but I find this awkward to manage, because every layer of the API (and client code) must multiple-value-bind the columns out and either repackage them as a sequence or create a new value structure to pass them up the call-chain.

Therefore I have changed the result sequence structure to be as follows:

query-result ::= (<result-set>*)
result-set ::= (<result-data> <column-data>)
result-data ::= (<row>*) | <rows-affected>
row ::= (<column>*)
column-data ::= ((<column-name> <column-type> <column-flags>)*)
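
To make that concrete, here is a minimal sketch of walking the structure by hand (the SELECT statement and table are hypothetical):

;; Walk the nested result structure with plain list operations.
(let* ((result     (cl-mysql:query "SELECT name, age FROM people"))
       (result-set (first result))        ; (<result-data> <column-data>)
       (rows       (first result-set))    ; (<row>*)
       (columns    (second result-set)))  ; ((<name> <type> <flags>)*)
  (format t "Columns: ~A~%" (mapcar #'first columns))
  (dolist (row rows)
    (format t "Row: ~A~%" row)))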

I appreciate that this is a little complex. I did consider turning the result data into a struct, but that complicates how the user processes the data. For this reason I have added with-rows and nth-row to simplify the processing of this result data.

Finally, the whole thing is still only SBCL/x86 Linux compatible; that might change :-).

More information is available here. As always, any feedback is appreciated.

Announce: cl-mysql

After some deliberation I decided to try out what I was talking about in The Not So Super, Super API. The idea of being able to hook a low-level API, so that its functionality could be tweaked later, seemed quite appealing to me. In reality, whilst what I had intended was indeed achievable, the performance sucked so hard as to make me rethink what I had done.

I did, though, in the process create an alternative to CLSQL that has support for stored procedures returning result sets, better (IMHO) type inference, and is far simpler to deploy. So it wasn’t totally in vain. I intend to extend the library a little, to work on the performance and provide a more faithful Common Lisp implementation in the near future.
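
For instance, calling a stored procedure that returns a result set goes straight through query (the procedure name here is hypothetical):

;; A stored procedure that returns result sets can be queried directly.
(cl-mysql:query "CALL get_people()")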

Full details of the cl-mysql library are available on this page.

Thanks to the power of Lisp’s macros I am able to make the low-level API hooking dependent upon a compile-time special variable. That way I can generate the additional code for the API hooking, if I want to, or leave it out whilst I’m working on other aspects of the library. Don’t you just love Lisp?! Hopefully we’ll revisit this topic some time later too.
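
The shape of the trick, as a minimal sketch rather than the library’s actual code, looks something like this:

;; Sketch only. When *generate-hooks* is true at macroexpansion time,
;; the defining macro emits a hook-dispatch wrapper; otherwise it
;; emits a plain call to the low-level function.
(defvar *generate-hooks* nil)

(defmacro def-api-call (name low-level-fn (&rest args))
  (if *generate-hooks*
      `(defun ,name ,args
         (let ((hook (get ',name 'api-hook)))
           (if hook
               (funcall hook ,@args)      ; user-injected replacement
               (,low-level-fn ,@args))))  ; normal path
      `(defun ,name ,args
         (,low-level-fn ,@args))))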

Finally, I realise now that announcing anything on April Fool’s Day is possibly a mistake. It seems that the signal-to-noise ratio in the world has gone down quite significantly in the last few hours :-). I guess that makes me the fool then.

The Not So Super, Super API

Now and again you come across the ‘one size fits all’ of computer programming: the ‘Super API’. It derives its name from the fact that it tries to act as a superset of a bunch of other platform-specific APIs. One good example of this is JDBC/ODBC, but there are many others. JDBC/ODBC allow access to many, many database servers because they’ve normalised the API, so that you can write code that doesn’t care which database it is connected to.

Now and again, though, I find myself requiring features that are not exposed in a Super API. They’re not exposed because the feature is either too niche or not standard across implementations. Creating a Super API is in some ways like plastering: once you’ve created that nice even finish across an entire wall, you might find that you’re missing a few light switches.

I was reminded of this the other day because one particular super SQL API, CLSQL, uses another super API, UFFI; what they do and how they do it is largely irrelevant. The important point is that I’m only concerned with the operation of one combination of technologies (MySQL with SBCL:SB-ALIEN). I’m concerned because there is a particular SBCL feature that I want CLSQL to make use of when it calls SBCL:LOAD-SHARED-OBJECT via UFFI. Confused? Don’t worry, it’s not that important.

In the end I found a (most heinous) way to do what I wanted, but it occurred to me that there has to be a better way. I appreciate that there is no way the library writer could know what I want before I want it, and neither could they anticipate every future change to the underlying APIs. However, what if the library writer gave me the option of registering a per-API first-chance callback function? This function could then perform custom processing for my platform combination and yield control back to the library when it completed. That way I could inject any code I wanted at potentially any supported depth.
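
A minimal sketch of the idea; every name here is illustrative rather than part of any real library:

;; A registry of first-chance callbacks, keyed by the name of the
;; wrapped API function.
(defvar *first-chance-callbacks* (make-hash-table))

(defun register-first-chance (api-name callback)
  (setf (gethash api-name *first-chance-callbacks*) callback))

(defun call-with-first-chance (api-name default-fn &rest args)
  ;; Give any registered callback the first chance at the call; the
  ;; callback receives DEFAULT-FN so it can yield control back to the
  ;; library once its custom processing is done.
  (let ((callback (gethash api-name *first-chance-callbacks*)))
    (if callback
        (apply callback default-fn args)
        (apply default-fn args))))

;; Client code could then inject platform-specific behaviour:
(register-first-chance 'load-shared-object
                       (lambda (default-fn path)
                         ;; ... custom processing here ...
                         (funcall default-fn path)))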

I don’t doubt that it would be difficult to produce an API like this. Certain classes of API (non-reentrant APIs being one that jumps to mind) could potentially break in new and unforeseen ways, and library maintainers would have to be creative about how they detect and cope with errors happening in and around user-injected code.

But if the injected code was simply a one-line replacement for the one line that would otherwise have been called, then it might work. Perhaps it might even be fun to try it.