I am currently dealing with a lot of libraries at work, both third-party ones and libraries written, or currently being written, by me. I absolutely love writing and working with libraries, especially if they present or bring me to a new or different approach to solving a problem, or at least provide a different view.
Over time I noticed, however, that quite regularly we had to decide that we cannot use a third-party library. Often it is for the usual reasons: it does not reach our technical bar of quality. Overly wasteful with resources, or not providing enough control over them. Bad or overly complicated APIs. Added programmer complexity on top of the inherent problem complexity. Requiring libc, the STL or, even worse, Boost. Wrong license. (It may even be worth writing a little bit about this at some point.)
But sometimes you encounter a library that fulfills all your quality standards, and still, almost every time, we cannot use it. Why? Well, the main reason given is always 'not made for our use cases/requirements'. So I always write a custom version for our requirements.
But here is the thing: in 100% of all cases this argument is not true. Most of the time these libraries could easily fulfill even more advanced requirements; most of what we need already exists inside them. The problem always lies in the API design and, to an extent, the resulting library architecture. API design is still considered a black art of programming, even today. Only little information can be found, and mostly you just find small tips I hope every programmer already knows, for example "Don't use bools for parameters, use enums instead". [1] These are mostly fine-grained pieces of advice, quite low-level in the design process.
However, most of them miss the far more important high-level design characteristics [2], which basically make or break an API. These are, in no particular order:
- Granularity (A or BC)
- Redundancy (A or B)
- Coupling (A implies B)
- Retention (A mirrors B)
- Flow Control (A invokes B)
I will not go into detail here and describe and explain each of them at once; instead I will focus on granularity, which, at least for me, seems to be the biggest issue I have encountered in libraries so far. If there is any interest I may write a little bit more about each of the others in a similar post.
Granularity is at its core a simple concept: for every high-level API there needs to be a low-level API to achieve the same goal with more user control. Casey talks about functions at this point instead of APIs, but I try to keep it more abstract.
A different way to look at granularity is the concept of diagonality vs. orthogonality, which in a way fits almost better than high- vs. low-level. An orthogonal API focuses on combining multiple multi-purpose primitives to reach a desired goal. A diagonal API, on the other hand, allows reaching a desired goal in a direct way. Another phrase summarizing these concepts is 'make easy things easy to do and hard things possible'.
In general, the two main characteristics of APIs are control and ease of use. For orthogonal APIs you preferably want:
- Unabstracted control over resources for the library user: by unabstracted I mean the API does not rely on abstractions to obtain resources (templates, allocators, file abstractions, callbacks, ...). Instead, resources are either provided beforehand or can only be requested, but never taken.
- Simple, but not necessarily easy to use: simple and easy are probably two of the most misused words. Simple is an absolute term for how intertwined your design is, while easy is a relative term for how much work it takes to get things running with your library.
While for diagonal APIs it is preferable to have:
- Library control over resources: control over resources ranges from abstracted to completely taken away from the library user.
- Easy to use, but not necessarily simple: the main focus is on making things as easy as possible for new users. Basically, make easy things easy to do.
Important to note here: what I just described is not absolute. It depends heavily on the problem you are solving, so it is more of a pointer, with some compromises to be made. Furthermore, neither orthogonal nor diagonal APIs are "bad" or "good" in any sense of the imagination. They just have different use cases and goals.
Brought to its core, granular/orthogonal API design is about providing combinable primitives. So it almost always makes sense to write the orthogonal API first and then wrap combinations of orthogonal primitives, add additional abstraction, and bundle it all into a diagonal API for ease of use.
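To make this layering concrete with plain libc: fopen, fseek, ftell and fread are orthogonal primitives that leave every decision to the caller, and a convenience wrapper bundles one common combination of them into a single diagonal call (a minimal sketch; 'read_entire_file' is a hypothetical helper name, not from any particular library):

```c
#include <stdio.h>
#include <stdlib.h>

/* Diagonal wrapper built purely from orthogonal primitives: reads a whole
 * file into a freshly malloc'd, zero-terminated buffer. The caller trades
 * control over buffering and allocation for a single easy call. */
static char *read_entire_file(const char *path, size_t *out_size)
{
    FILE *f = fopen(path, "rb");
    char *buf = 0;
    long len;
    if (!f) return 0;
    if (fseek(f, 0, SEEK_END) == 0 && (len = ftell(f)) >= 0) {
        rewind(f);
        buf = malloc((size_t)len + 1);
        if (buf && fread(buf, 1, (size_t)len, f) == (size_t)len) {
            buf[len] = '\0';
            if (out_size) *out_size = (size_t)len;
            fclose(f);
            return buf;
        }
    }
    free(buf);
    fclose(f);
    return 0;
}
```

Anyone needing partial reads, custom allocation or memory mapping simply drops down to the primitives; the wrapper never has to anticipate those cases.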
Since most of what I have written so far is rather theoretical and descriptive, I will provide two actual examples I encountered at work.
First up is a zip archive unpacking API. There are lots of them floating around, but in general most provide an API roughly like this (it does not matter whether it is OO or imperative):
struct unzip {...};
int unzip_open(struct unzip*, const char *path);
int unzip_list(struct unzip*, struct unzip_file_info *array_to_fill);
int unzip_stat(struct unzip*, const char *file, struct unzip_file_info *info);
int unzip_extract(struct unzip*, const char *file, void *mem, size_t size);
int unzip_close(struct unzip*);
I hope most of these functions are self-explanatory. You have functions to open and close a zip file, functions to query file information, and a function to extract a file. You could also add some more functionality, like opening from memory, extracting a file to an OS path, and other extraction variants. Furthermore, you could provide abstractions for file input, memory management and error handling. But I want to keep it simple for now.
I would proclaim that this is a good API. I bet that every somewhat proficient programmer could understand and use it just fine. So what is the problem here? I mean, there must be a reason why I took this example. For me this is a high-level or diagonal API; it at least fits the description I gave at the top of this post.
First of all, resource control is completely taken or abstracted away from the user of this API, for both file handling and memory management. Another question, especially important today, is multithreading: does this library support it, and how easy is it to add? Problems like these are more advanced and often require a complete rewrite of a library. But it does not have to be that way. Next up is my interpretation of an orthogonal API. Once again, this orthogonal API is not meant to replace the diagonal API; instead it provides the basis or foundation to implement it. So, enough talk, here is my low-level API:
union unzip_request {
    int type;
    struct file {
        int type;
        size_t offset;
        size_t available;
    } file;
    struct toc {
        int type;
        size_t size;
        size_t alignment;
    } toc;
    struct memory {
        int type;
        size_t size;
    } mem;
};
struct unzip {
    enum unzip_status err;
    struct file {
        void *mapped;
        size_t offset;
        size_t available;
        size_t total_size;
    } file;
    struct toc {
        void *addr;
        size_t alignment;
        size_t size;
    } toc;
    struct memory {
        void *addr;
        size_t size;
    } mem;
};
int unzip_init(struct unzip*, size_t total_size);
int unzip_init_with_toc(struct unzip*, void *toc, size_t size);
int unzip(struct unzip*, union unzip_request*, int file_index, void **mem, size_t *size, struct unzip_file_info *info);
First things first: this API is obviously a lot harder to understand than the first one. While it has fewer functions, it has more complex data structures and function arguments. Let me try to explain it. Basically, this is what I would describe as a request-based API (also known as a coroutine, state machine or push/pull API). You repeatedly call 'unzip', and each time the API returns to you what is currently needed to process the zip archive. So if you call 'unzip' the first time, you get back a request for file access: 'request.file' contains a file offset and the number of bytes to be mapped. The user fills out 'unzip.file', providing either exactly what was previously requested or more, and finally calls 'unzip' again.
So let's take a look at how the function 'unzip_open' could be implemented that way (another version here [3]):
struct unzip un;
unsigned char *zip_file_memory = ...
size_t zip_file_size = ...
union unzip_request req;
unzip_init(&un, zip_file_size);
while (unzip(&un, &req, 0, 0, 0, 0)) {
    switch (req.type) {
    case UNZIP_REQUEST_FILE_MAPPING: {
        /* request for a file mapping */
        un.file.offset = req.file.offset;
        un.file.available = req.file.available;
        un.file.mapped = zip_file_memory + req.file.offset;
    } break;
    case UNZIP_REQUEST_TOC_MEMORY: {
        /* request for table of contents memory */
        free(un.toc.addr);
        un.toc.addr = malloc(req.toc.size);
        un.toc.alignment = req.toc.alignment;
        un.toc.size = req.toc.size;
    } break;}
}
assert(!un.err);
assert(!un.err);
This is a lot of code on the library user's side. But like I said, low-level or orthogonal code is simpler, but not necessarily easier. In exchange, the user has total control over all resources, in this case both memory and file resources. The library only knows about memory and does not care where it came from, which in turn is a simpler design, but not necessarily easier to use.
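The request-driven shape is not specific to zip archives; any incremental computation can hand resource decisions to its caller this way. Here is a tiny self-contained toy (all names hypothetical) that sums a buffer it never touches on its own: it only issues mapping requests, and the caller decides how to satisfy them:

```c
#include <stddef.h>

enum sum_request_type { SUM_REQUEST_DONE, SUM_REQUEST_DATA };
struct sum_request {
    enum sum_request_type type;
    size_t offset;  /* where the library wants data from */
    size_t len;     /* how many bytes it wants mapped */
};
struct summer {
    const unsigned char *data; /* filled in by the caller per request */
    size_t provided;
    size_t cursor, total;
    unsigned long sum;
};

static void sum_init(struct summer *s, size_t total)
{
    s->data = 0; s->provided = 0;
    s->cursor = 0; s->total = total;
    s->sum = 0;
}

/* Returns 1 while more input is requested, 0 when finished. */
static int sum_step(struct summer *s, struct sum_request *req)
{
    size_t i;
    if (s->data) { /* consume whatever the caller mapped last time */
        for (i = 0; i < s->provided; ++i)
            s->sum += s->data[i];
        s->cursor += s->provided;
        s->data = 0;
    }
    if (s->cursor >= s->total) {
        req->type = SUM_REQUEST_DONE;
        return 0;
    }
    req->type = SUM_REQUEST_DATA;
    req->offset = s->cursor;
    req->len = s->total - s->cursor < 4 ? s->total - s->cursor : 4;
    return 1;
}
```

The driving loop mirrors the unzip example: call sum_step, satisfy the request, repeat. Swapping malloc'd memory for a file mapping or a socket read requires no change inside the library.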
But what about multithreading? Well, as soon as the above code has run correctly, the table of contents will be inside the 'toc' memory block. You can take this memory block, initialize another unzip struct by calling 'unzip_init_with_toc', and load files in parallel. Furthermore, you can even store the toc to disk or send it over a socket. You could even store it directly in a file, load it at runtime, and access everything in a multithreaded fashion from the beginning.
All the other functions declared in the high-level/diagonal API can be implemented with this low-level/orthogonal API. Depending on your use case, one or the other makes more sense. But you can choose, or even transition from the high-level API to the low-level API, by just replacing a single function with its orthogonal counterpart implementation. For bigger libraries it is common, or at least should not be uncommon, to have not just two but multiple APIs of different granularity, each providing a tradeoff between control and ease of use, with an option to transition between them.
Another example is most text file formats, like JSON or XML. Most library APIs for these two formats generally look similar to this:
int json_load(struct json*, const char *path);
int json_load_string(struct json*, const char *str);
struct json_value *json_query(struct json*, const char *path);
Once again, this is a good API: easy to understand and easy to use. Like with the previous example, you could provide additional abstraction: abstract files and memory management, good error handling, and finally a few more utility and "redundant" functions.
So what are the problems with this API? I don't always want to create a DOM; I often just want to parse the file directly. You could even go as far as we did and write a parser generator, which parses the file and fills our program-specific data structures with data. This simply is not possible with the above API. I am now forced to rewrite the whole library because of API, and therefore design, decisions. Once again, this is not to say that these diagonal APIs are bad in any way; but providing only one API, either orthogonal or diagonal, is. Most of the time, however, only the diagonal API is written, for ease of use. Rarely do you get an orthogonal API as well.
Enough talk, let's look at what I would propose as an orthogonal API:
enum json_token_type;
struct json_token {
    enum json_token_type type;
    const char *str;
    size_t size;
};
struct json {
    const char *stream;
    size_t available;
};
int json_read(struct json*, struct json_token*);
This is a simple streaming API which just reads one token after the other and does not care about memory or file resources, as long as it has a stream of text to process. A simple usage example would be:
void *file_memory = ...;
size_t file_size = ...;
struct json js;
js.stream = file_memory;
js.available = file_size;
struct json_token tok;
while (json_read(&js, &tok)) {
    if (tok.type == JSON_OBJECT_BEGIN) {
        /* ... */
    } else if (tok.type == JSON_OBJECT_END) {
        /* ... */
    } else if (...)
}
Once again, this orthogonal API is not as easy to use as a simple call to 'json_load', but it provides a lot more control, allows for different use cases, and can also be used to actually implement the diagonal API from above.
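To show that the streaming sketch is actually implementable, here is one minimal way 'json_read' could look. This is a hedged toy, not production code: no escape decoding, no validation, and numbers/true/false/null are lumped into a single token type. The declarations are repeated so the snippet stands alone, and the token-type names beyond those already used above are my own invention:

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

enum json_token_type {
    JSON_END, JSON_OBJECT_BEGIN, JSON_OBJECT_END,
    JSON_ARRAY_BEGIN, JSON_ARRAY_END,
    JSON_COLON, JSON_COMMA, JSON_STRING, JSON_PRIMITIVE
};
struct json_token {
    enum json_token_type type;
    const char *str;
    size_t size;
};
struct json {
    const char *stream;
    size_t available;
};

/* Returns 1 while a token was produced, 0 at end of input. */
static int json_read(struct json *js, struct json_token *tok)
{
    while (js->available && isspace((unsigned char)*js->stream)) {
        js->stream++; js->available--;
    }
    if (!js->available) {
        tok->type = JSON_END; tok->str = js->stream; tok->size = 0;
        return 0;
    }
    tok->str = js->stream;
    tok->size = 1;
    switch (*js->stream) {
    case '{': tok->type = JSON_OBJECT_BEGIN; break;
    case '}': tok->type = JSON_OBJECT_END; break;
    case '[': tok->type = JSON_ARRAY_BEGIN; break;
    case ']': tok->type = JSON_ARRAY_END; break;
    case ':': tok->type = JSON_COLON; break;
    case ',': tok->type = JSON_COMMA; break;
    case '"': {
        /* scan to the closing quote, skipping escaped characters */
        const char *p = js->stream + 1;
        size_t left = js->available - 1;
        size_t consumed;
        while (left && *p != '"') {
            if (*p == '\\' && left > 1) { p++; left--; }
            p++; left--;
        }
        tok->type = JSON_STRING;
        tok->str = js->stream + 1;
        tok->size = (size_t)(p - tok->str);
        consumed = tok->size + 1 + (left ? 1 : 0);
        js->stream += consumed; js->available -= consumed;
        return 1;
    }
    default: {
        /* number, true, false or null: read until a delimiter */
        size_t n = 0;
        while (n < js->available &&
               !isspace((unsigned char)js->stream[n]) &&
               !strchr(",]}:{[\"", js->stream[n]))
            n++;
        tok->type = JSON_PRIMITIVE;
        tok->size = n;
        js->stream += n; js->available -= n;
        return 1;
    }
    }
    js->stream++; js->available--;
    return 1;
}
```

Note that the tokenizer never allocates and never touches a file: the caller owns the stream, which is exactly the property that makes the parser-generator use case possible.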
To sum this all up, I hope I could provide a small overview of granularity, diagonality and orthogonality in API design. Nothing I proposed here invalidates or diminishes any existing library. Yet it is a huge pain to rewrite an otherwise well-written library just because abstractions were applied too hastily and granularity was not taken into account. Any kind of feedback is welcome, and if there is some interest I may write a few more of these posts on API design.
[1] https://wiki.qt.io/API_Design_Principles
[2] https://www.youtube.com/watch?v=ZQ5_u8Lgvyk (Casey Muratori - Designing and Evaluating Reusable Components - 2004)
[3] https://gist.github.com/pervognsen/d57cdc165e79a21637fe5a721375afba (alternative API - Per Vognsen)