Interoperability is a stone cold *expletive deleted*. When writing a PIM or any other calendar app, you will pick wrong. Just get used to it.
Here is what I mean:
In the calendar world there is only one viable standard for exchanging the information: the iCalendar file format. It can be exchanged in multiple ways, though the
CalDAV protocol (which sits on top of WebDAV, which sits on HTTP) is becoming the exchange method of choice.
But whose should we use? Lots of applications support it. But Microsoft's iCalendar output is not always in sync with Google's, and so on, AND it is not even anyone's fault. The areas where things fall down are often where the spec itself has issues, such as recurring events and events that last all day.
When does an all-day event start? 00:00:01 or 00:00:00?
When does an all-day event end? 23:59:59 or 24:00:00 (oh wait, that's 00:00:00)? Oops.
Those questions just begin to touch on the issues.
Whichever way you handle all-day events, you will have to do accurate conversions to the other way or risk all sorts of "round-trip translation errors". If you have synchronized your PIM with another and back again, you have probably already run into this.
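My own leaning, as a sketch rather than settled saltation code (the class name is invented), is to dodge the question internally by storing an all-day event as a date with half-open timestamp bounds, so the end is an exclusive midnight and 23:59:59 never appears anywhere:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

// Half-open convention for all-day events: the event covers
// [start 00:00, end 00:00 of the NEXT day), so a one-day event on
// June 21 "ends" at June 22 00:00:00, never at 23:59:59.
public class AllDay {
    // Convert an all-day date to its inclusive start timestamp.
    static LocalDateTime startOf(LocalDate day) {
        return day.atStartOfDay();
    }
    // Exclusive end timestamp: midnight of the following day.
    static LocalDateTime endExclusiveOf(LocalDate day) {
        return day.plusDays(1).atStartOfDay();
    }
    // Recover the date from the bounds: the round trip that causes the errors.
    static LocalDate dateOf(LocalDateTime startInclusive) {
        return startInclusive.toLocalDate();
    }

    public static void main(String[] args) {
        LocalDate day = LocalDate.of(2009, 6, 21);
        System.out.println(startOf(day));          // 2009-06-21T00:00
        System.out.println(endExclusiveOf(day));   // 2009-06-22T00:00
        System.out.println(dateOf(startOf(day)));  // 2009-06-21
    }
}
```

The conversion to a 23:59:59-style representation then lives in exactly one place (subtract a second from the exclusive end on export, add it back on import), instead of leaking through the whole model.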
There are conferences and technical groups that spend days just battling with how to specify and manage recurring events. Look for "IIOP Recurring Events".
Most PIMs solve this by closely adhering to the iCalendar specification for their domain model. That's great where the spec really works, but there are some key things that don't appear to be covered or are incompletely specified: tagging and taxonomies, the same events in multiple calendars, hierarchical tasks and events, and so on. I am not yet an expert on iCalendar, but I will become solidly familiar with it over the next few weeks. Luckily, iCalendar provides an extension mechanism so that application-specific information can be captured. Of course, other applications won't use the info, and some of them won't even preserve it (Google's iCalendar support seems to remove tasks from the calendar).
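The extension mechanism is the `X-` property prefix reserved by RFC 2445: any property starting with `X-` must be accepted by a conforming parser. A sketch of how tags might ride along on an event (the `X-SALTATION-TAGS` name is my own invention, not part of any spec):

```
BEGIN:VEVENT
UID:20090621-demo-1@example.com
DTSTART;VALUE=DATE:20090621
SUMMARY:Troop meeting
X-SALTATION-TAGS:scouts,family
END:VEVENT
```

Parsers have to tolerate the unknown property, but as noted above, nothing obliges them to preserve it on the way back out.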
My project for next weekend is to try and do a full round trip between Google calendar and the Saltation domain model using Google's Calendar API and maybe using the icalendar file format.
6.21.09 I wrote this post on 6.19.09 and then posted it today. As I go back and look at what Google actually supports, I am stunned. Trying to push things through the Google API looks like a decent amount of work for little initial return. I think I am going to work first on FULL iCalendar file format support (import and export) next weekend and then look at publishing to and reading from Google directly. At least debugging the process will be simpler.
The name of the blog comes from a quote I heard many years ago: "Give me a wrench." "What size?" "It doesn't matter. I am using it as a hammer." This blog is a discussion of software development tools: how to use them, when to use them, how not to use them, when not to use them, the ones I like, the ones I don't like, etc.
Sunday, June 21, 2009
Progress
I just realized that naming the PIM and the blog the same thing may have been less than optimal. Ah well.
I did a quick clean up of my build scripts for saltation.
I am still wrestling with how to present the information on SourceForge in a way that works. The gods know that I have gotten annoyed often enough with OSS projects while navigating around trying to figure out what is where. I will spend some serious time and thought on laying things out in a way that works.
Wednesday, June 17, 2009
Taking things public
About two weeks ago I realized that I was creating this project as an open source project but not taking it out into the open source community.
Anybody see a small disconnect there?
So, I have started a project (called saltation) on SourceForge and I will be checking in code within 24 hours.
And, I have already run into my first roadblock (or at least a speed bump).
Specifically, it is the build system. I use a common build framework for all of my projects, which I call, appropriately enough, common-build. It uses Ant and Ivy for the build and dependency management. For a little more information on that, see this posting in my other blog: Ant versus Maven.
Both Maven and Ivy use an external repository for storing dependencies such as Java libraries. SourceForge still hasn't worked out all the ways to interact with this, so the bottom line is that one of my first tasks is going to be making the build system require a minimum of setup (i.e., you should be able to build right out of the box).
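One common pattern for that (a sketch, not the actual common-build scripts; the target names and URL are illustrative) is an Ant target that downloads ivy.jar itself on the first build, so a fresh checkout needs nothing installed but Ant:

```xml
<!-- Illustrative bootstrap: fetch ivy.jar on first build so a fresh
     checkout builds out of the box with only Ant installed. -->
<property name="ivy.version" value="2.1.0"/>
<property name="ivy.jar" value="${basedir}/lib/ivy-${ivy.version}.jar"/>
<available file="${ivy.jar}" property="ivy.jar.present"/>

<target name="bootstrap-ivy" unless="ivy.jar.present">
  <mkdir dir="${basedir}/lib"/>
  <get src="https://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.version}/ivy-${ivy.version}.jar"
       dest="${ivy.jar}" usetimestamp="true"/>
</target>

<!-- Make the Ivy Ant tasks available once the jar exists. -->
<target name="init-ivy" depends="bootstrap-ivy">
  <taskdef resource="org/apache/ivy/ant/antlib.xml"
           classpath="${ivy.jar}" uri="antlib:org.apache.ivy.ant"/>
</target>
```

After that, the regular resolve/retrieve targets just depend on init-ivy and the SourceForge checkout carries no binary dependencies at all.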
That will probably take me a day or two counting testing.
Till then,
- Jim
Friday, June 12, 2009
More in the world of unused code detection : Klocwork
As noted in a previous posting, I have a major refactoring task ahead of me with the code base that I am now the owner of.
Recently we had an intermittent problem that may have been the result of a resource leak. Because we were unable to reproduce it, we put some processes in place for the next time, and I did what I always do: look to see if there is some tool out there that will allow me to detect resource leaks in the current branch of the code base.
The two that most people seem to recommend are Coverity and Klocwork. A number of my acquaintances have said that Klocwork was better at detecting resource leaks so I decided to try it.
Here is the good news: the tool mostly does the job.
The bad news : the company doesn't quite have it together.
In one key way it does: they are courteous, efficient and smart. The sheer efficiency and overall effectiveness highlights their marketing/sales deficiencies even more.
I think the key thing that I kept bumping into is that there were limitations and conditions that were not clearly spelled out during the purchasing process. Klocwork had a salesperson and a technician on the phone with me at their request to evaluate my needs. The decision was made for me to go with the Klocwork Solo product rather than the honking big enterprise-licensed behemoth that is their main product line. When I pulled down the demo version of Klocwork Solo, I discovered that the temporary license covered more than a month, but the demo version could only handle 99 files at a pop, and the Solo product could only run under Windows even though it is a Java application (it appears to spawn a Windows executable as part of its analysis process).
Neither of these facts came to the surface in the initial call.
Personally, I would recommend that when you make a trial version of a code analysis tool available, you make the trial period short AND allow it to handle an enterprise-class number of files. After all, you know people are going to use it on their code base, and will need to do so in order to demonstrate its worth to the powers that be.
The 99-file limitation was a pain, but I evaluated the tool enough to determine that it was probably worth the $100 it costs to buy it and try it out on a larger code base.
I purchased the $100 Solo product and discovered that it was limited to 1000 files (but apparently I can call Klocwork and get that expanded). Luckily, I was able to limit the code base I was interested in to 1000 files.
But the other thing I discovered is that the licensing tool appears to talk with the Klocwork mothership every time I start up Eclipse, and the license is only valid for a year. That was another detail that was not presented up front.
I don't want to leave you with the idea that I think Klocwork is intentionally misleading people. I don't think that. But I do think that their license management and sales are oriented toward a different scale of user than a small shop of 3 to 5 developers.
I don't think I will be recommending their tools until they revise their licensing and license management.
- Jim
Tuesday, June 2, 2009
Technology choices revisited
In one of my earlier posts Technology Choices I had looked at what user interface toolkit I should be using. What I've discovered in the intervening time is that this application is going to put a premium on flexibility in presentation. As a result, I am seriously reconsidering the user interface toolkit. The Eclipse RCP is a very powerful platform upon which to build an application. Unfortunately it does have a very strong set of metaphors upon which it is built (Views and Workspaces) which seem too rigid for what I am attempting.
So I am seriously looking at using Qt with its Java bindings.
Any thoughts?
Going to and fro
The other part of the last few months has been looking at importing and exporting and synchronizing.
I spent a great deal of time looking at the iCalendar specification and all of the recurrence rules. And then I read all of the use cases for recurrence-rule interoperability that have been put together by some incredibly diligent groups of people. The people that work on those specs definitely come under my heading of unsung heroes.
As often appears to be the case in the area of calendaring, there are really no clean answers. And going back and forth between the iCalendar format and a more flexible internal model is going to require a great deal of detailed work. But it is very clear that iCalendar is the de facto standard for now, and I see nothing on the horizon yet that will replace it.
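To make the difficulty concrete, here is the kind of rule the spec allows (times and zone are invented for illustration):

```
DTSTART;TZID=America/New_York:20090626T190000
RRULE:FREQ=MONTHLY;BYDAY=-1FR;UNTIL=20100101T000000Z
```

That is "the last Friday of every month at 7 PM Eastern, through the end of the year." The UNTIL value must be expressed in UTC while DTSTART is local, daylight saving shifts the UTC offset twice a year, and two implementations that disagree on any of those details will silently drift apart on round trips.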
And personally, I think it gives the biggest bang for the buck. If I can support CalDAV and iCalendar to any significant degree, I will be able to publish calendars to and from Google Calendar, as well as many other applications.
So my immediate attention is on supporting import/export for iCalendar, and then supporting CalDAV to publish.
- Jim
When things don't fit into a neatly modeled world
I am back after a long hiatus. A combination of work plus a course I'm teaching has kept me very busy.
But always in the background I am thinking about the PIM. That's how we refer to it in my household: "The PIM". I would wonder if I'm a little obsessed, except I'm having so much fun.
I've been coding some command-line utilities to test the data model and, as I expected, I have run into the limitations of my original model. I had originally started with calendars, agendas and so on being first-class objects. What I mean by that is that they have some distinct existence in the real world, and that is why I am modeling them. What I have discovered is that calendars, address books, task lists, journals and so on are all simply semantically themed collections of stuff we care about.
There are few assertions I can make about what a calendar is that are universal enough to be easily recognizable to everyone.
So I stepped back and thought about it for a bit, and this is what I have come up with: we have an ocean of first-class objects that we would like to track (events, tasks, promises, contact information, little notes, and so on). And then we spend our time trying to organize them into multiple collections that give us easy access to what we want, when we want it.
So I am experimenting with having any of these collections simply be taxonomies, just as described in an earlier post. This would mean that many calendars have a relationship to each other. An example would be the calendar I maintain for my children's schedules all together, which could have two related calendars (one for Aidan and one for Trent). Both Trent's and Aidan's calendars would refer to events in the Boy Scout calendar.
The same would be true of address books. I could have an address book that references my friends and a related address book that only has the subset of my college friends.
The same appears to work well for journals, agendas, sets of conversations, and the collection of resources and notes that I refer to as Data Mines or just Mines.
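A minimal sketch of that idea (class and method names are invented for illustration, and it assumes collections never include themselves in a cycle):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// A collection is just a named, themed grouping: it owns some items
// directly and can include other collections, so "the kids' calendar"
// can be assembled from Aidan's and Trent's.
class ThemedCollection {
    final String name;
    final List<String> ownItems = new ArrayList<>();
    final List<ThemedCollection> includes = new ArrayList<>();

    ThemedCollection(String name) { this.name = name; }

    // Everything visible in this collection, including items pulled in
    // from included collections (duplicates collapsed, assumes no cycles).
    Set<String> allItems() {
        Set<String> out = new LinkedHashSet<>(ownItems);
        for (ThemedCollection c : includes) out.addAll(c.allItems());
        return out;
    }
}

public class TaxonomyDemo {
    public static void main(String[] args) {
        ThemedCollection scouts = new ThemedCollection("Boy Scouts");
        scouts.ownItems.add("Troop meeting");

        ThemedCollection aidan = new ThemedCollection("Aidan");
        aidan.ownItems.add("Aidan's recital");
        aidan.includes.add(scouts);

        ThemedCollection trent = new ThemedCollection("Trent");
        trent.ownItems.add("Trent's game");
        trent.includes.add(scouts);

        ThemedCollection kids = new ThemedCollection("Kids");
        kids.includes.add(aidan);
        kids.includes.add(trent);

        System.out.println(kids.allItems());
        // [Aidan's recital, Troop meeting, Trent's game]
    }
}
```

Here an item is just a string; in the real model it would be an event, task, contact or note, and the same object could appear in any number of collections without being copied.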