I have tried using Haskell for various smaller projects, such as wishsys and a game that never got very far. But learning a new programming language through hobby projects only works as long as the project stays small and contained. For my part, most hobby projects start out with great ideas and grand designs, but end up as a mess because I am unfamiliar with the programming language.
When using a new programming language, time is spent learning the language rather than developing the project. This in turn means that I end up learning the bare minimum to get the job done, which defeats the purpose of using a project to learn a new language. If the goal is to finish the project, you should use something you already know well and feel productive with. If the goal is to learn a programming language, you should start with a small project instead.
For me, Project Euler is a great way to learn Haskell, because it contains a lot of problems that Haskell (and functional languages in general) is the perfect tool for solving. The projects I mentioned above involve databases, multiple threads and other scary real-world stuff, but I just wanted to learn Haskell. Better yet, once you have solved a problem, chances are you can find someone with an even more elegant solution written in the same programming language you are using. A great way to learn!
This time I thought I'd share our dinner plans for the next week. We take turns creating dinner list every week, and next week is my turn!
- Monday: Tortellini with sun dried tomatoes and mozzarella
- Tuesday: Fish with avocado, ruccola salad and hot mustard
- Wednesday: Fennel soup with chicken
- Thursday: Albondigas
- Friday: Salmon with fennel risotto
- Saturday: Breaded cod with salad and potatoes
- Sunday: Home made tomato soup
Let's hope it tastes as good as I think it will.
I have started teaching myself Haskell using the book "Real World Haskell". I have so far only come to chapter 4, but I am already in love with some of the features:
Its strict static type system, which makes it easy to understand what a function does. Moreover, it makes you think through what your code is going to do, and make the decisions about special cases up front. The following is the definition of a function which compares the length of two lists and returns their ordering (LT, GT or EQ). The definition clearly states that it operates on two lists of any type, and returns a value of type Ordering. Crystal clear!
listCmp :: [a] -> [a] -> Ordering
Partly due to the above point, one can avoid unpleasant bugs later on, because you are forced to decide up front how to handle your input instead of postponing that decision.
Pattern matching. I came across this in the Oz programming language at university, but I didn't really understand how powerful and readable everything becomes until using it in Haskell. The following function takes a separator and a list of lists as arguments, and joins the lists using the separator:
intersperse :: a -> [[a]] -> [a]
intersperse sep []     = []
intersperse sep [x]    = x
intersperse sep (x:xs) = x ++ [sep] ++ intersperse sep xs
I love how you can just look at the patterns to see which cases are covered by the function, rather than digging into some complex nested if statement.
Readability when using 'where' syntax. This is the implementation of the listCmp function:
listCmp lhs rhs | lengthLhs < lengthRhs = LT
                | lengthLhs > lengthRhs = GT
                | otherwise             = EQ
    where lengthLhs = length lhs
          lengthRhs = length rhs
What I like about it is that you can separate the logic performed on values from the function calls, so that when you read the code, you see the actual computation done by the function in the different cases. You can also do this with the let syntax, but I think the above reads really well.
For a while now, I have been using Ubuntu Linux on my desktop, and it has worked really well. In fact, I even installed Windows 7 on my media center (replacing Linux) just to stop bothering with configuring my system all the time. Since I started working at Yahoo!, I did not really feel like doing extra work at home just to keep my computer functioning properly. Moreover, I did not have much time left to work on FreeBSD, so I simply reinstalled my desktop with Linux, and that has worked well for almost a year now.
But recently I have sort of missed working on FreeBSD, so I decided to give it a try again from a user perspective. Many of the things I felt were lacking are still lacking. However, the things that were good are still good. So far, I have been able to install all the software I wanted, but I still feel that we need something better on top of ports to make life easier for users. Hopefully, some of the initiatives I have seen on the mailing list will not die any time soon. Apart from ports, many common tasks are pretty manual too. Configuring the system should be more straightforward than having to guess at and edit what should be in /etc/rc.conf. Though many of the issues I encounter come from the fact that FreeBSD has a very small user base and is simply not prioritized by many companies, there are a lot of things that can be improved regardless of that. If I start doing any more FreeBSD work, it is most likely to be in the "make-it-less-painful-to-use" department.
I just bought two Western Digital 2 TB disks the other day in order to increase my storage capacity, planning to put a ZFS mirror on them. I then discovered that the disks use a new drive format called "Advanced Format", which basically extends the sector size from 512 to 4096 bytes.
The problem is that the disks report their sector size as 512 rather than 4096 in order to work well with existing operating systems. The issues with these disks are discussed here and here.
To summarize, this results in two main problems:
Partitioning tools operate on 512-byte "logical" sectors, which may result in a partition starting at a physical sector that is not aligned to 4096 bytes. With partitioning tools that have not been updated to align partitions to 4k, a single request may then cause a write spanning more than one physical sector.
File systems and other disk consumers think the underlying device has a 512-byte sector size, and issue requests smaller than 4096 bytes. For a write request this is catastrophic, because in order to write only part of a physical sector, the disk has to read the whole sector, modify the part that changed, and write it back (read-modify-write).
Dag-Erling Smørgrav made a tool to benchmark disk performance using aligned and misaligned writes, mentioned in his post above (svn co svn://svn.freebsd.org/base/user/des/phybs). Here are the results:
nobby# ./phybs -w /dev/gpt/storage0
 count    size  offset    step        msec     tps    kBps

131072    1024       0    4096      131771      16     994
131072    1024     512    4096      136005      16     963

 65536    2048       0    8192       74762      14    1753
 65536    2048     512    8192       71407      15    1835
 65536    2048    1024    8192       73432      15    1784

 32768    4096       0   16384       20710     130    6328
 32768    4096     512   16384       61987      43    2114
 32768    4096    1024   16384       62719      43    2089
 32768    4096    2048   16384       61089      44    2145

 16384    8192       0   32768       14238     245    9205
 16384    8192     512   32768       53348      65    2456
 16384    8192    1024   32768       52868      66    2479
 16384    8192    2048   32768       50914      68    2574
Clearly, using blocks smaller than 4k results in bad performance, and misaligning 4k-or-larger blocks costs roughly a 3x slowdown compared to aligned ones.
The way I solved this in FreeBSD was to partition the disk manually with gpart and set each partition start to a multiple of 8 sectors (8 * 512 = 4096), so that every partition on the disk starts at a 4096-byte boundary.
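As a sanity check, a partition's start sector can be tested for 4k alignment with plain shell arithmetic (the start sector used below is just an example value, not from my actual layout):

```shell
# A 512-byte sector is 4k-aligned when its number is a multiple of 8.
start=2048   # example start sector; substitute your partition's start
if [ $((start % 8)) -eq 0 ]; then
    echo "sector $start is 4k-aligned"
else
    echo "sector $start is misaligned"
fi
```

Running it with start=2048 prints "sector 2048 is 4k-aligned".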
ZFS uses variable block sizes for its requests, which can pose a problem when the underlying provider reports a sector size of 512 bytes. To override this, I used gnop(8), which can create a provider on top of another provider with different characteristics:
gnop create -o 4096 -S 4096
The -o parameter makes sure that the new provider does not conflict with the original provider when ZFS tries to detect any filesystems on the disk. The -S parameter sets the sector size of the new provider to 4096, which makes sure that all requests going from ZFS to the disk are in 4k blocks.
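Putting the pieces together, the full sequence looks something like this sketch. The device name ad4p3 and pool name tank are hypothetical examples; gnop(8) names the new provider by appending a .nop suffix, and that provider is what the pool should be created on:

```
# create a 4096-byte-sector provider on top of the partition (example device)
gnop create -o 4096 -S 4096 ad4p3
# create the pool on the .nop provider so ZFS picks up the 4k sector size
zpool create tank /dev/ad4p3.nop
```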
For UFS, the default block size is 16k, so there should be no worries about it using smaller block sizes. Moreover, newfs provides a -S parameter which overrides the reported sector size of the underlying provider. I have not tried UFS on these disks, but I don't see any reason it wouldn't work.
After looking for a long time for the reason why my default locale in gnome changed after a recent upgrade, I finally found out where to change the locale setting. The problem was that gnome did not pick up my system locale settings, and the Norwegian characters in my terminal came up as question marks.
As the gnome login manager (gdm) has been rewritten, there is now no way to change the locale at the login screen unless gdm picks it up. But, as always, reading the documentation helps. After reading it, I discovered that I could just edit the right configuration file and write this:
[Desktop]
Language=en_US.UTF-8
Layout=no
to set the correct locale!
I just learned of sysutils/bsdadminscripts after my previous post about how hard it is to use packages only in FreeBSD. I think I have found a partial solution to my problem: the bsdadminscripts port contains a pkg_upgrade utility, which is able to update your system without a ports tree available, as long as the INDEX file exists on the package server.
I now use this in combination with my ports tinderbox, building the packages I want for my laptop. Then I generate the INDEX file in the tinderbox ports tree and put it into the packages folder of the tinderbox. Voila! I can now use pkg_upgrade -a, and all packages are upgraded to the latest version.
There are a few things that I think could be improved: have the tinderbox scripts automatically generate the INDEX file and put it into the packages directory with a simple command, or just do it on every update of the ports tree. The other thing is what I mentioned in my previous post about keeping the official packages properly up to date.
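For reference, generating the INDEX by hand from a ports tree is already simple; a sketch of what such a step could look like (the paths are hypothetical examples from my setup, not tinderbox defaults, and the generated file may be named INDEX or INDEX-N depending on the release):

```
# build the INDEX from the tinderbox's ports tree and publish it
# next to the packages (paths are placeholder examples)
cd /usr/local/tinderbox/portstrees/default/ports
make index
cp INDEX* /usr/local/tinderbox/packages/8-stable/
```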
I guess I'm not the typical FreeBSD user, because I do not enjoy using ports much, mainly because I also use FreeBSD as a desktop. On a powerful server or workstation, ports is fine: it's super flexible and everything works quite well. And kudos to all the people working on updating and improving it.
However, using ports on my laptop really makes me cry. Why? If I want to install a port, I have to keep a ports tree on my laptop and actually compile everything. Since my laptop is pretty weak in terms of processing power, this takes ages. But of course, I can install packages! The thing with packages, however, is that they work really well for a release, but when upgrading later on, I always end up in trouble if I try to use the official FreeBSD packages.
First of all, the package sets following each release get outdated quickly. Second, if I want to update my packages without using ports, I get into trouble. There is no real package upgrade tool that I know of, but I can install portupgrade, which has a fancy -PP option telling it to use packages only. But there are issues with this: portupgrade seems to require a ports tree to work. In addition, once you have the ports tree, portupgrade will look for packages matching the exact versions in it, and if the package server does not happen to have the same ports tree as you (a single commit updating a port can break this), it fails.
So what is the solution for me, besides writing a pkg_upgrade? Having a ports tinderbox on a different host to build packages for my laptop (I could use official 8-stable packages, for instance, but there always seem to be some packages missing, and some not built). And the upgrade procedure? Move /usr/local and /var/db/pkg away, and reinstall the packages. It works OK, but looking at how well this is handled on other systems, it's a bit silly :/ So maybe I'll just have to look closer at the pkg_upgrade idea :)
So, on to the constructive part of this rant/post. There is no need to change everything for this to work better. A pkg_upgrade tool could probably reuse a lot from the other pkgtools, such as the version and dependency checking. However, the hard part is knowing which version to get from the servers. Luckily, the Latest/ directory contains unversioned tarballs of packages that can be examined to get their version, but that requires downloading the packages first in order to examine them, which is not very bandwidth-friendly. I think a simple approach would be to keep a version list together with the packages, which pkg_upgrade could use to check whether a newer version of a package exists (much like INDEX in /usr/ports, I guess). I haven't thought about the hardest questions yet (how to handle dependencies and package renaming), but I would think one could allow specifying this in the same file.
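To illustrate that the version-list idea is cheap on the server side: a build host could derive name/version pairs from its versioned package files with plain shell parameter expansion, no ports tree needed (the file name here is just an example):

```shell
# split a package file name like "firefox-3.6.3.tbz" into name and version
pkg="firefox-3.6.3.tbz"
name=${pkg%-*}            # strip the trailing "-version.tbz"
version=${pkg##*-}        # keep everything after the last "-"
version=${version%.tbz}   # drop the package suffix
echo "$name $version"     # prints: firefox 3.6.3
```

The same split also handles multi-hyphen names such as xorg-libs-7.4, since only the last "-" separates name from version.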
Update: as I was working against my local package repository, I did not notice that the official package repositories actually contain the INDEX file from the ports tree the packages were built from.
I also think the package building procedures could be changed, because somehow there are always packages missing (at least several GNOME packages last time I tried). I do not know much about this, though, but I would advocate a system where a package is rebuilt on all architectures and supported releases as soon as a commit touches the corresponding port.
There, I feel better now :)
Last year in Japan I bought a Cowon iAudio D2 player, which has proven to be quite good. A few days ago, I thought I'd try to upgrade its firmware. I then discovered that there are four different firmware variants depending on where you bought the player, and as I bought mine in Japan, my firmware was not compatible with the others. The reasons for this are mostly small differences in hardware; in my case, I have the possibility of watching Japanese television (not really useful in Norway).
Therefore, I thought I would try to upgrade to the European firmware (a lot more fixes seem to make it into that one), but I was a bit afraid I would brick the player. I looked around on the iAudiophile forums, and finally found someone who had made the same attempt and succeeded. The procedure was easy, but to be able to use the European firmware files, I had to rename them to the same file names as the Japanese ones in order for the player to pick them up. Luckily, it worked for me too. Phew!
As I usually have a few classes at school which require special software, I wanted to be able to run some of this software on my own computer, as student versions exist for some of it. One of these programs is ModelSim from Mentor Graphics, basically a simulator for hardware designs, which I use to simulate VHDL. Unfortunately, ModelSim only comes for Windows, Linux and Solaris. As I only run FreeBSD on my laptop, no software for me :( But wait, FreeBSD has the linuxulator, which allows Linux binaries to run unmodified on a FreeBSD host (it is basically an implementation of the Linux syscalls within the FreeBSD kernel). The steps I needed to go through to install the Linux version of ModelSim were pretty easy.
First of all, one of the emulators/linux_base* ports needs to be installed. I chose linux_base-fc6, as I'd like Linux 2.6 support (although I'm not sure that is actually needed). After installing the port, a Linux userland appears in /compat/linux. To make sure I don't get any problems with programs needing procfs, I mount linprocfs(5) as well.
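To have linprocfs mounted automatically at boot, an /etc/fstab entry along the lines of the one shown in the linprocfs(5) man page does the trick:

```
linproc  /compat/linux/proc  linprocfs  rw  0  0
```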
There, easy! Ready to run Linux programs. Now, ModelSim comes with its own installer, which needs a few additional files that you can get from their website. However, programs may depend on additional libraries, and this is IMO the trickiest part of the linuxulator. In my case, I got errors complaining about a missing libXsomething. Luckily, there are ports for the most common libraries; in this case, I had to install x11/linux-xorg-libs. Although it is a very old version, I was able to run the ModelSim installer and the installed binaries afterwards. Awesome!
Today, I'm sitting in a café in Oslo, waiting until we leave for Gardermoen and our flight back to the Netherlands after a one-week vacation in Norway. The weather was nice, and I got to do some skiing at least. However, I was actually supposed to be in the Netherlands already. The reason I'm not is that we (my girlfriend and I) missed our flight on Sunday. We missed it by a whole day: we were 100% sure that we were leaving yesterday, so when we showed up at the airport, we were shocked to learn that we were 24 hours late! It was a silly mistake; neither of us had really looked at the date, we just assumed we were leaving on Monday. Unfortunately, to board the flight we assumed was ours, we would have had to pay 3000 NOK extra per ticket! In other words, we had to find another way of getting back. Luckily, we got to stay at my sister's place last night, and got new tickets for today's flight at approximately the same price as our original tickets (700 NOK per ticket). Hopefully, we'll be back in our apartment tonight :)
After Arnar Mar Sig posted his patch set for an initial skeleton of the AVR32 port almost a year ago, things started to pick up speed at the beginning of this year. The work is done in Perforce and is progressing well. Currently, the system boots and recognizes most of the hardware, but linker work is required before init can run.
So far, I've been working on busdma support, grabbing the source from the MIPS port and adjusting it, as well as implementing support for cache operations on the AVR32. It seems to work for now, as Arnar was able to get the ate(4) device driver working with it.
My latest work has been designing and implementing a generic device clock framework. It is meant to be used by devices in an architecture-independent way, so that a device can be associated with a clock without knowing which clock it is (that is assigned internally for each architecture). This is necessary for a few devices to avoid #ifdefs all over the place. For instance, the at91_mci device is identical to the one used on AVR32, but it gets its clock frequency from at91 machine-dependent defines. Another benefit would be the ability to export clocks using this interface to userland (AVR32 has a set of generic clocks as well).
Last weekend I imported gvinum into HEAD, and I hope many users (and old users) of gvinum will try it out, as it has some nice improvements. Moving it into HEAD now means it will also become part of 8.0-RELEASE, coming later this year, and since there are a lot of changes, the intention is to let it sit in HEAD for a while before the release process begins. Among the most interesting updates for users are:
- Support more of the old vinum command set
- Less panics :)
- Rebuilding and synchronizing plexes can be done while mounted.
- Support for growing striped or raid5 plexes while mounted, meaning that you can just add a new disk to your gvinum configuration, and grow it to cover the new disk.
Damn, reading for exams is really not my favorite thing. It's not that the material is very hard; the motivation is the problem. I always tend to get a bit sloppy with classes where the only form of assessment is the exam, and if the class is not very interesting either, it gets hard. However, these kinds of classes are typically very theoretical, and one way I cope is to make them practical. For instance, in this course there are a lot of distributed algorithms that the student is expected to know. Some of them are several pages long, and I'm really not the type to keep all that in my head, and if I did, it would only be because I had memorized it. So instead, I try to implement the algorithms, as it helps with understanding: you can see how they work in action! What I did in this case was to create a node abstraction/class which I could reuse in several algorithms. The node's definition is something like this:
void send(Message, nodeid);  // Send message to a single node
void multicast(Message);     // Multicast message to all neighbouring nodes
void deliver(Message);       // RMI method called by other nodes via their send method
Message receive();           // Blocking receive method to fetch contents from buffer
The node creation itself adds the necessary neighbours, and connections are specified at startup time. The Message class contains most of the necessary info, but is extended in the algorithms that need extra fields. I implemented these algorithms using the interface:
- Ricart-Agrawala's mutex algorithm
- Maekawa's mutex algorithm
- Peterson's election algorithm in a unidirectional ring
Some algorithms are really tricky, and I end up spending more time wondering how to implement them than actually doing it, so I guess this technique is not always a win :)
Phew, the first quarter of my exchange study is almost over. So far, the stay here in the Netherlands has been very exciting. First of all, we did an awesome project creating a quad-rotor controller flown with a joystick. A demo from a previous year's group can be found here, and we were actually able to make it work like in that video. The hardware consists of a Xilinx Spartan 3E starter kit running the X32 CPU core developed here at TU Delft, a PC with a serial link to the FPGA board, a joystick connected to the PC, and the quad rotor itself connected to the FPGA board via a modified serial link. We implemented the control software, signal filtering etc. on the X32 in C, and after optimizations, our cycle time was almost half the requirement, and it flew!
The other course I've been taking is a seminar on wireless sensor networks, which covered nearly all aspects of the topic by having students present a paper on a certain subtopic each week. I presented a paper on reliable energy-aware routing, which was very interesting.
Lastly, I have a course in distributed algorithms, which finishes on April 4th with an exam. The course teaches various distributed algorithms for synchronization, global state detection, deadlock detection, locking etc., and goes through several P2P protocols as well.
After this quarter I'll also go home to Norway for a short vacation, finally :)
The past week I've been using some of my time to set up Ikiwiki, and I was able to import my WordPress FreeBSD blog without too much hassle. I had to manually edit some posts, but other than that, most of the work went into getting the tagging right.
After loader support for ZFS was imported into FreeBSD around a month ago, I've been thinking of installing a ZFS-only system on my laptop. I also decided to try out the GPT layout instead of using disklabels.
The first thing I did was grab a snapshot of FreeBSD CURRENT. However, the loader on the snapshot CD doesn't support ZFS, so you have to build your own FreeBSD CD in order to get a working loader! Look in src/release/Makefile and src/release/i386/mkisoimages.sh for how to do this. Since sysinstall doesn't support setting up ZFS, it can't be used, so one has to use the Fixit environment on the FreeBSD install CD instead. I started out by removing the existing partition table on the disk (just writing zeros to the start of the disk will do).
The next step was to set up the GPT with the partitions I wanted. One partition is needed to contain the initial gptzfsboot loader. In addition, I wanted a swap partition, as well as a partition to hold a zpool for the whole system.
To set up the GPT, I used gpart(8) and followed the examples from the man page: first create the partition table, then add the appropriate partitions.
gpart create -s GPT ad4
gpart add -b 34 -s 128 -t freebsd-boot ad4
gpart add -b 162 -s 5242880 -t freebsd-swap ad4
gpart add -b 5243042 -s 125829120 -t freebsd-zfs ad4
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
This creates the initial GPT and adds three partitions. The first partition contains the gptzfsboot loader, which is able to recognize and boot the loader from a ZFS partition. The second partition is the swap partition (2.5 GB in this case). The third is the partition containing the zpool (60 GB). Sizes and offsets are specified in sectors (one sector is typically 512 bytes). The last command writes the needed bootcode into ad4p1 (freebsd-boot).
With the partitions set up, the hardest part is done. As we are in the Fixit environment, we can now create the zpool as well.
zpool create data /dev/ad4p3
The zpool should now be up and running. I then created the filesystems I wanted in this pool: /usr, /home and /var (I use tmpfs for /tmp).
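A sketch of the corresponding commands, given the pool name data used above (the mountpoints themselves are set further down):

```
zfs create data/usr
zfs create data/var
zfs create data/home
```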
Then, FreeBSD must be installed on the system. I did this by copying all folders from /dist in the Fixit environment into the zpool. In addition, the /dev folder has to be created. For more details on this, you can follow http://wiki.freebsd.org/AppleMacbook. At least /dist/boot should be copied in order to be able to boot.
Then, the boot has to be set up. First, boot/loader.conf has to contain:
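For a pool named data as in this setup, the usual loader.conf lines for a ZFS root are the following (assuming the root filesystem lives directly on the pool, as here):

```
zfs_load="YES"
vfs.root.mountfrom="zfs:data"
```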
Any additional filesystems or swap have to be entered into etc/fstab, in my case:
/dev/ad4p2 none swap sw 0 0
I also entered the following into etc/rc.conf:
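For ZFS, the essential rc.conf line is:

```
zfs_enable="YES"
```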
In addition, boot/zfs/zpool.cache has to exist in order for the zpool to be imported automatically when ZFS loads at system boot. To achieve this, I had to run:
mkdir /boot/zfs
zpool export data && zpool import data
This populates /boot/zfs/zpool.cache in the Fixit environment. Then, I copied zpool.cache to boot/zfs on the zpool:
cp /boot/zfs/zpool.cache /data/boot/zfs
Finally, a basic system should be installed. The last thing to do is to unmount the filesystem(s) and set a few properties:
zfs set mountpoint=legacy data
zfs set mountpoint=/usr data/usr
zfs set mountpoint=/var data/var
zfs set mountpoint=/home data/home
zpool set bootfs=data data
To get all the quirks right, such as permissions, you should do a real install by building world or by using sysinstall once booted into the system. Reboot, and you might be as lucky as me and boot straight into your ZFS-only system :) For further information, take a look at:
http://wiki.freebsd.org/ZFSOnRoot, which contains some information on how to use ZFS as root (but booting from UFS), and http://wiki.freebsd.org/AppleMacbook, which has a nice section on setting up the zpool in a Fixit environment.
When rebuilding FreeBSD after this type of install, it's also important to build with LOADER_ZFS_SUPPORT=YES so that the loader is able to read zpools.
Finally, I have been able to resolve all currently known issues with CVS mode support in csup. I just sent out a new announcement with the patch, and I hope to get some more testing and perhaps some reviews soon, but it is a big patch and few people are familiar with the code base.
The remaining issues with the patch are support for using the status file when reading (not critical at all), as well as rsync support (which only matters for a few files in the FreeBSD repository).
I hope as many as possible are able to test it:
The last couple of weeks I've been very busy with school (and I expected this to be a quiet semester). However, I've found some of the last few bugs lurking in csup:
- Deltas that had a 'hand-hacked' date would have their deltatexts misplaced in the RCS file.
- When adding a new diff, a '..' would be converted to '.' twice, meaning it disappeared.
Now, only these issues are left, and I'm not sure if I really want to fix them:
- Some RCS files have an extra space between desc and the deltas. CVSup handles this by _counting_ the lines and writing them out again when writing the RCS file. I think this is silly, since the extra space doesn't really matter according to the RCS format.
- Some files appear to contain garbage values, such as src/share/examples/kld/firmware/fwimage/firmware.img,v. The garbage disappears for some reason in csup, but I'm not sure how to handle this. Comments are welcome.
- It has quite high memory usage, which might be due to leaks that I've been unable to find. I'll do a more thorough audit of the code and run valgrind to investigate further.
- It does not support md5 checksums of the RCS stream, so it can't detect errors yet.
- Statusfile file attributes might not be correct.
- Some RCS constructs, such as newphrases (see man rcsfile), are not supported yet.
- There are some hardcoded limits that may break it.
- Some things are done in a silly way, such as the sorting and comparing, which I plan to improve later.
So, finally, you can try out patches if you'd like: http://people.freebsd.org/~lulf/patches/csup/cvsmode
Currently, I'm including the pre-generated tokenizer from flex, since the flex file itself can't be compiled as part of the csup build.
It's been a while. Partially because I've become a FreeBSD committer and had more productive things to do than writing in my weblog, and partially because my account was disabled after Google Summer of Code (also, thanks to Google for SoC).
Since last time, I've been working on getting my gvinum work from this summer into the tree, but since it has to be reviewed first, and 7.0-RELEASE is much more important right now, it's sort of on hold. In the meantime I've been working on implementing CVS mode for csup. This is something I've been meaning to do for a long time, but I never found the motivation until my exam period before last Christmas. So, I'll tell you a bit about what I've done here. For those of you unfamiliar with CVSup, it's a network CVS file synchronization tool which is heavily used in FreeBSD. However, it is written in Modula-3, which makes it hard to maintain, and it doesn't integrate very well into the FreeBSD base system. So, Maxime Henrion started a C rewrite of cvsup called csup.
First, a bit on how csup works (or the cvsup protocol). The client runs three threads performing these tasks:
- The lister, which examines the client's files and sends information about them to the server.
- The detailer, which receives commands from the server on what needs to be done ("this file needs updating, send me the details of its revisions").
- The updater, which receives the actual updates from the server ("add this delta to the RCS file").
More details on how the protocol works can be found at http://www.cvsup.org/howsofast.html.
So, what is CVS mode anyway? In csup's normal operation, called checkout mode, csup requests the files of a specific branch. This is the typical way a user would use csup, for instance to fetch the src tree for RELENG_7. However, a developer often wants the entire FreeBSD CVS repository on his local machine, and this is where CVS mode comes in: csup receives the whole repository and fetches updates to the actual RCS files. So far, csup only supports checkout mode.
So, what's needed for CVS mode to work?
- Support for the protocol, so the client is able not only to act correctly on the commands from the server, but also to respond correctly. This involves modifying the detailer and updater parts of csup. This part needs a bit of cleanup, but is in a working state.
- Correctly parsing RCS files. First, I made a lexer with flex and a parser with yacc. Then I found out I needed reentrancy and switched to bison. After realizing that using bison wasn't really nice since bison isn't in base, I rewrote the parser in C.
- The ability to update RCS files. This required an RCS file interface, which is used by both the parser and the updater to import and edit RCS files. Writing this interface is probably what has taken most of my time.
- Writing the RCS files back out with the new updates. This is done internally by the RCS file implementation.
So, this is what I've been working on for the last month or two, and I have most parts working. What's missing is a crucial part of the last item: to write the new RCS files to disk, a correct algorithm for applying diffs and reverse diffs is needed. The algorithm for applying diffs was already written by csup's author, but the reverse diff algorithm is a bit different. The last week or so, I've been studying the algorithm used in CVSup, and I've started implementing something similar, although a bit different in its implementation. Hopefully I'll have this working pretty soon, at least before people start switching over to some new version control system :)