Tuesday, 18 July 2017

Goodbye to Eclipse - the end of an era as an Eclipse plugin developer.

I've been an Eclipse plug-in developer for over 7 years now, for nearly all of my software engineering career. Notably, I have been the lead developer of 3 open-source Eclipse IDEs: RustDT, Goclipse, and DDT. But with some pain, I've decided to change my career's tech focus, and stop working with Eclipse IDE development altogether.

For several years now, Eclipse has been declining in popularity, and in my opinion also declining in quality compared to other IDEs. I've seen the issues Eclipse has, both external (functionality, UI design, bugs) and internal (code debt, shortage of manpower). I was holding on to the hope it would recover, but now I don't see much of a future for Eclipse other than a legacy existence, and I realize exiting was long overdue.

I'm writing this post for two main reasons: I want to document my career path and explain the reasons for my tech change. And I want to give some clarity and thoughts to the users of my Eclipse IDEs, about the future of those projects, and IDEs in general.

A bit of background

I have roughly 6 years of cumulative FOSS work on Eclipse IDE plug-in development. I started in university with my MSc. thesis, but the bulk of this work occurred from 2011 to 2016[1]. First with DDT, later Goclipse[2], and then RustDT.

It so happened in those early times that I wanted to contribute to the IDE tooling of these new languages, because I was a massive fan of JDT (the Eclipse Java IDE) and all the cool things it could do - code completion, refactoring, etc. - and I wanted to help bring some of that functionality to these newer programming languages.

Also, I was hoping that if any of these languages took off in popularity, I would be in a position of importance of sorts, as the lead developer of a popular IDE for such a language. This was a big, unlikely gamble, I knew that from the start. And indeed it didn't materialize, although for different reasons in two distinct situations:

The first case was with DDT and the D language. It failed to get the popularity I was hoping for, and by 2013 or so I realized it was very unlikely D would ever gain significant adoption.[3]

I almost stopped working on IDE development then, but at the same time a new player had emerged on the scene: Go. Go seemed to have a lot more potential to take off, so I began to generify my DDT code and adapt it to an existing Eclipse Go IDE - Goclipse. A year later I did the same for Rust, with RustDT[4]. These languages were gaining traction, and by 2016 I would say Go had clearly achieved mainstream popularity[5]. Unfortunately, by then the Eclipse IDE had itself declined in popularity so much that having written Go's Eclipse IDE meant little in terms of influence or importance. Most users were using the Go plug-ins for IntelliJ or Visual Studio Code[6].

Moving on

Eventually I ran out of personal funds and had to go back to a normal, full-time, paid job. So in early 2017 I joined Renesas Electronics as an Eclipse plug-in developer, working on extensions to Eclipse and CDT for their embedded/hardware development tools. By then I thought it was unlikely I would do any significant work on those OSS projects again, but career-wise I still saw myself as an Eclipse plug-in developer.

As those initial months of employment passed, I started to question this. First I noticed there was a scarcity of Eclipse RCP/plug-in development jobs available in London and the UK generally. This tech was always a niche, but it looked like there were now even fewer roles in the market than 4 or 5 years ago.

In addition, Ericsson began pulling out their developers from CDT, and more significantly, Google too announced they were pulling out their developers from the core Eclipse team:
"Eclipse usage within Google has dropped to the point where it no longer makes sense to have a team of engineers contributing to the Eclipse project. This means that in the next few months my team will be dismantled and I will be reassigned to something else."
Google abandoning Eclipse was the wake-up call that prompted me to re-evaluate my career path, and it didn't take long for me to decide I too should move on entirely. I didn't know if I wanted to retrain into Android development, Java back-end work, or even JavaFX/Swing UI development, but I knew I didn't want to work with Eclipse any more.


I had already been on the fence about the future of Eclipse for some years, but I guess I still hoped it would recover somehow. That wasn't happening:


They say, of course, that hindsight is 20/20, and one shouldn't beat oneself up over mistakes one did not see coming. But I still think that, at the very least by 2013, I had enough information to realize I should seriously consider quitting. In particular, there was one big, solid sign of trouble ahead for Eclipse. It was this:

Google announces Android Studio: An IDE built for Android (based on IntelliJ)

This was significant because, for those not familiar with the history of Android tooling, by this point Google already had a full-blown, mature IDE for Android based on Eclipse (Eclipse ADT). So what they chose to do was deprecate it and build a new one based on IntelliJ instead. For them to throw away so much existing code and effort, it must have meant IntelliJ was considered by them to be significantly superior, not just marginally so.

I mean, if Google had had no previous Android IDE, and had simply been choosing whether to start one with Eclipse or IntelliJ, then choosing IntelliJ could have come down to just a small margin of preference. But that wasn't the case. They were abandoning an established code base, in favour of writing a new one based on a different IDE platform. This meant the rewrite was preferable to fixing the various issues Eclipse had with platform code, API, UI, and so on... and if Google wasn't going to fix those, even though they had so much invested already, who would? Nobody.

This is why I regret not having left Eclipse earlier: it would have saved me three years of my professional life. And a lot of personal funds.

Where to, then?

Well, for me, since I want to work with Java and potentially other JVM languages (Scala looks nice!), the IDE choice is clear: IntelliJ IDEA.

As for Rust and Go, IntelliJ is still a solid choice, but no longer necessarily the clear winner. Visual Studio Code is quite nice and is becoming more and more popular. In particular, the Language Server Protocol is very innovative, and is getting a lot of initial traction and interest.

For Scala the situation seems similar: a choice between IntelliJ and Visual Studio Code. Sure, there is also the Eclipse-based Scala IDE, which has been professionally developed and very actively maintained, but those guys too are jumping ship:
"Over the years, Eclipse turned out to be no longer the IDE we love because it doesn't help us to make us and our users happy. We therefore looked for alternative editors that could act as a new graphical frontend for Scala IDE and VSC turned out to be the most promising editor that could handle our needs."

The future

All things considered, it looks like the LSP is the future of IDEs. VSCode will likely be at the forefront of this change, but I imagine other IDEs and editors will easily pick that up and not be far behind. VSCode is also developing something similar to LSP, but for debugging: the Debug Protocol. Looks very promising.

I will give a shout-out to the work being done on Eclipse IDE support for LSP, which is nice. But ultimately I don't think it will be enough to keep the Eclipse IDE relevant to a modern audience. Even if Eclipse gets widespread support for new languages through LSP, there are still a lot of fundamental problems with the Eclipse UI that at this stage are too hard to change satisfactorily - they would require a complete redesign of the UI[7].

As for IntelliJ, it will be interesting to see how LSP will affect it, since IntelliJ typically relies on its own internal IDE engine and language framework to provide language support. Sure, IntelliJ can also implement language support using LSP, but can it do so whilst providing the same level of quality and robustness that it does using its own platform? I guess we will have to wait and see. I wouldn't be surprised if JetBrains, the company behind IntelliJ, eventually became a partner in shaping the evolution of LSP.

[1] - As a side note, I want to point out that this FOSS development was actually full-time unpaid work. How I managed to fund myself in London (a very expensive city) is another story, but not for this post.

[2] - Unlike my other IDEs, Goclipse was not started by me, but by a different developer. Rather, I took over an existing codebase once the previous developers retired from contributing (kudos to their work, BTW). However, because Goclipse was gradually rewritten to use my generic IDE code (MelnormeEclipse), after two years Goclipse's code was almost entirely rewritten/replaced.

[3] - This is a very broad subject to comment on, but in summary, in my opinion D struggled to get enough manpower to develop the compiler, core libraries, and other tools (to a sufficient degree for an emerging language). That the main D compiler, DMD, uses its own compiler backend instead of LLVM or GCC illustrates the general failure to integrate with the larger FOSS community and re-use more of other people's code. This approach hampered D's growth, and then when Rust emerged, that sealed D's fate - not only has Rust got vastly more resources behind it, but the language itself was built from the ground up on a superior paradigm: no garbage collection, but instead ownership semantics, something that D is now playing catch-up to. Good luck with that...

[4] - By now it was doable for a single developer to write an IDE for one of these newer languages, because there were people already working on external tools for code completion, find-definition, etc. In other words, an IDE developer only needed to implement the IDE UI - the semantic engine itself was being developed by other people. With Goclipse and RustDT, as the UI elements from DDT were gradually made generic (since they were conceptually language-agnostic), it became possible for a single developer to create a sizeable amount of IDE functionality on their own. In the past, before GitHub and all that, this would have been very hard to achieve.

[5] - What constitutes "mainstream popularity" is imprecise and subjective, but I'm basing this threshold on seeing a small but steady flow of job ads for Go developers in London. I would say a language can roughly be considered mainstream when you can find in any major tech city a job that primarily uses that language.

[6] - Or alternatively, the various editor environments - Vim, Emacs, Sublime, etc. But unlike IntelliJ, I would not consider those editors actual competitors of an Eclipse IDE, because most of their users have a different editor mindset and would not be interested in using a heavyweight IDE, regardless of its quality.

[7] - Not an extensive list, but: Package Explorer vs. Project Explorer dichotomy; Project Explorer issues with nested projects and duplicate resources; No implicit auto-save like IntelliJ; UI based on per-language perspectives instead of a more generic model; Functionality mismatch between external files and workspace files; 
And then, all the thousand paper cuts...

Thursday, 24 September 2015

The FSF/GNU ideology is silly, and we'd be better off without it.

TL;DR: The FSF/GNU ideology is silly, unjustified, and in this day and age the software community would be better off without it. Especially in the context where there are two FOSS projects, one GPL, the other one not, competing with each other. GCC vs. LLVM being a particular case I care about.

These are thoughts I've been meaning to write down for some time now. It's not a new topic for sure - discussions about Free Software have been around for a long time. But recently this topic has regained importance for me, due to my interest in the area of language toolchains. Here, two major projects - GCC and LLVM - undergo substantial development effort, yet they effectively compete with each other because of differing software licensing ideologies[1]. And this kind of competition in the FOSS world is not good: it's a waste of resources. So I'd rather see GCC wither and die, and have more people focus on LLVM, and LLVM-related work.

Before proceeding, let's look at what "Free Software" means:
Free software, software libre, or libre software is computer software that gives users the freedom to run the software for any purpose as well as to study, modify, and distribute the original software and the adapted versions.
- from the Free software Wikipedia entry.
So, we're not talking about gratis (free as in cost) software. A lot of developers are likely already familiar with this distinction.

The first point I want to put forward is: I don't have a problem with Free software itself. Quite the contrary - Free software is good, it's great to have more of that around. I myself am a big FOSS enthusiast. I've chosen to spend a lot of time - in total well over a man-year - doing unpaid development work in FOSS projects, out of my own cash reserves, whereas I could instead be earning a big full-time London salary.

What I have a problem with is the *ideology* of the Free Software Foundation and GNU. For those not already familiar with it: they believe that *all software* should be Free, that such is a fundamental *right* of all software users! And they are not particularly interested in compromises, nor are they friendly to projects and organizations that don't fully share their ideology. This last aspect is quite important, as one should realize it is quite different from merely supporting the idea that there should be more FOSS software around.
Those unfamiliar with FSF/GNU can look at their website for details on their philosophy and their reasoning. Here's a sample quote:

"These [software] freedoms are vitally important. They are essential, not just for the individual users' sake, but for society as a whole because they promote social solidarity—that is, sharing and cooperation."
- http://www.gnu.org/philosophy/open-source-misses-the-point.html

"they promote social solidarity", OMG...

What a subverted justification! The FSF people basically masquerade their own personal interests and values as some sort of ethical principle that should matter to the whole of society. The best example of this is this dystopian short story from Richard Stallman. It's a great showcase of their mindset - and mind you, that short story is not hyperbole; it's the scenario they see themselves fighting against. Yet these principles concern only a few computer/digital nerds; they matter very little to society at large (and not because of technical ignorance). But more on that later on.

Looking back at the articles and writings of the FSF, the more you read them, the more it becomes clear that their whole argument is basically: it would suck to live in a world with so many closed systems and closed software.

Well, I do agree that having so many closed computer systems would suck, and open alternatives would be much more preferable. But, FFS, desire is no basis for something being a right, a "freedom". Just because it sucks a lot not to have something doesn't mean you have a right to have it. It also sucks that I don't have a fancy yacht from which I can sip cocktails off the coast of some Greek island. That sounds like a hyperbolic argument, doesn't it? But it actually isn't: why is software any different from a yacht? Because software can be copied for free, unlike a physical object? But software still costs to *produce* in the first place. It doesn't grow on trees; someone has to *spend effort to design it*.

Furthermore, for the purposes of granting usage rights, even if we considered software to be different from physical objects (because software is information and can be replicated at no cost) - then why doesn't the Free software ideology apply equally to other information works: books, music, movies, etc? Why shouldn't all those be Free too? Why should copyright law apply in any different way to software, than to other information works? Someone explain this inconsistency to me... [4]

Here's another gem, a recent one, from John Sullivan, Executive Director of the FSF:
"I stay involved because I think it's one of the most important social movements in existence,"
- this opensource.com article.

Seriously?... I mean, forget about racial injustice, gay rights, women's rights, free speech issues, drug and prison reform, climate change, the obesity and food crisis, campaign finance reform, gun rights/reform, banking system reform[2]... Hell, what about even other computing issues, like government surveillance, which one could argue are separate from the cause of Free software? No, forget those, one of the most important social movements in existence is certainly making sure that everyone runs GNU (or whatever) on their computing devices... o_o'

Seriously, what sane person thinks the general public is unhappy or disconsolate in any way because they can't freely tinker with the software or hardware of their iPhone, iMac, XBox, Tivo, etc? These devices brought much joy and usefulness to their users, regardless of being closed systems. Yet GNU says "These freedoms are vitally important [...] for society as a whole". Such arrogance!

Rather, could the reality simply be that FSF zealots like RMS are a bunch of douchebag nerds with a selfish, aggrandized sense of what is important for society, based simply on their own personal values and desires, and not other people's? (Hint: yes.)

As the FSF lives on, I suspect many FOSS enthusiasts still don't fully understand the depth of the FSF ideology. For example, not that long ago, when RMS criticized Steve Jobs and his work shortly after Jobs's death, a lot of people were affronted by this. They said Stallman shouldn't have made such comments, that it hurt the Free software cause, etc. But these people were being naive, and were fundamentally failing to understand the ideology of the Free Software Foundation. In the view of those who subscribe to it, Steve Jobs's life's work was a *substantial malignant influence* on the computing world, not a good one. So in that light, Stallman's criticism makes perfect sense.

In part, these are the people I'm trying to get my point across to: the FSF sympathizers, who are not full-blown backers of its ideology, but rather just "kinda" support the FSF's efforts, because it helps bring more FOSS into the world. I'd like to see more of these people stop pandering to the FSF and FSF-backed projects. Because even just within the context of the FOSS world, the FSF can be problematic: it is unwelcoming not only of proprietary software, but also of other open-source software and projects that do not fully cater to its ideology. The most nefarious consequence of this is when open-source development effort gets split into two competing projects, one GPL, the other one not.

I reckon the first time this happened in a big way was Gnome vs. KDE. Likely the most well-known case, it certainly hampered the development of Linux-as-desktop-OS quite a bit. Still does today. Maybe not so much with regards to Gnome/KDE specifically, but the GNU zealotry still impacts other aspects of Linux. As a quick example, Ubuntu, being based on Debian (a Free-software-only distribution), derives its package system mainly from what the Debian maintainers provide. As a consequence, as of this date, no Java 8 package is available in the official repositories, and it is unlikely there will be one soon (see this email for reference). And that's even though OpenJDK is Free software, and Java 8 has been out for several months now. I suspect this is because the Debian maintainers see the Java scene as pro-corporate, pro-proprietary software, and are therefore hostile to it. I didn't seek to confirm this, though - honestly, I gave up on Linux a long time ago (but that's a whole different topic...).

What I do follow, and care about, is the toolchain ecosystem of programming languages: compilers, debuggers, code analyzers, profilers, etc.. And here there is another big duo of FOSS projects battling it out: GCC and LLVM. It used to be that GCC was the king of the scene (as far as non-commercial C compilers go), but gradually LLVM picked itself up, and is now a very mature project, beyond merely the use case of Clang in OS X. For example, Rust and Crystal are languages with compiler toolchains primarily based on LLVM. Some, like Rust, have the potential to gain a lot of traction.

So, I hope more languages follow suit, and have compilers based on LLVM, not GCC.

But even if that happens, that applies only to compilers. There is still the issue that use of the GDB debugger is prevalent. LLDB has also come a long way, and even has work being done to support the Visual Studio debugging format natively, whereas the GCC/GDB project doesn't even provide binaries for Windows[3].

This, of course, is completely natural for a project that is hostile to non-Free OSes (OS X only being supported because it's POSIX compliant). One shouldn't expect any other attitude, which is why it's better to favor a project like LLVM that does aim to support all major OSes, proprietary or not. But the problem here is that even with a mature LLDB, we are still stuck with graphical debugger frontends that only support GDB - Eclipse CDT being a prime example of this (especially for me). I wish quality debugger frontends for LLVM were developed (not just for Eclipse), so we could say good riddance to GDB, and then perhaps the whole GCC toolchain. But I fear that time is quite far away...

So, in conclusion, if you agree with these thoughts, show your support. When you contribute to FOSS projects, prefer permissive Free Software licenses, as opposed to copyleft ones like the GPL. When choosing projects to use (for example Linux distros), prefer those that accommodate permissive licenses. And voice these points if the issue comes up in discussions.

[1] - Yes, maybe some technical differences too, but I reckon there is a likelihood these could have been reconciled, if it wasn't for the licensing disagreements.
[2] - Yes, some of these social issues only apply in a significant way to the USA, but John Sullivan is based in the USA, so I think it's valid to mention them.
[3] - And Cygwin's GDB doesn't work that well either. Rather on Windows one should use Mingw-w64 or TDM-GCC. So, alternatives are available, but they do have limitations.

[4] - Ok, so the point on copyright for non-software is addressed here: http://www.gnu.org/philosophy/misinterpreting-copyright.en.html. I disagree with several of the premises and assumptions in that article, but even setting those aside, it states: "But it may be an acceptable compromise for these freedoms to be universally available only after a delay of two or three years from the program's publication." - which, from my understanding, essentially states that it could be acceptable for commercial software to exist, as long as it does so only with a relatively short copyright life span. This seems pretty much inconsistent, if not downright contradictory, with the FSF's stance as expressed in other articles.

Tuesday, 11 August 2015

For me, one disadvantage of short release iterations...

TL;DR: Short release iterations don't really make it practical to give silly nicknames to the releases of the FOSS projects I develop...

So, I'm sure every professional developer has heard of the benefits of short release iterations and all that (one of the aspects of agile development). I fully agree with said benefits, but on a personal level there's one tiny, silly disadvantage:

If you do short release iterations, then you never or rarely do a big release, and as such, you rarely get to give a big release a "nickname". ;P
Sure, this doesn't even apply to most software development scenarios... really, mostly only if one is developing applications (as opposed to libraries), and doing so in the open-source world. It's a fun thing I like to do once in a while - a way to put a small reference to something personal (a hobby or a fandom), in an otherwise purely technical or professional product.

In some cases, a developer does this by naming the project itself after the thing they like. I did this originally with DDT. DDT wasn't originally named DDT, but rather "Mmrnmhrm". No, that's not random typing on the keyboard, it's the name of a space-faring race in the amazing game Star Control II! :) But... as the project matured, I wanted it to be taken more seriously, and so it needed a serious name. And I didn't choose another fandom reference either, because I wanted Mmrnmhrm to be named in the same style as JDT or CDT. So that was that, and "DDT" was the obvious new name.

But since I continued doing so much work on this project (and nearly entirely pro bono), I still wanted to put in some references once in a while, a bit of the inner geek spilling out. For some time, I did so whenever a big release was out, but as the project evolved, those became more and more rare. I mean, I always tried to do short release iterations, but sometimes a large feature cannot be broken down into smaller chunks that are actually releasable to the user. One such example was re-writing the D parser from scratch (and adding comprehensive tests). That took quite a while. So while internally that task was still broken down into small iterations (say, a grammar rule each time), it was only fully releasable when the whole thing was complete. Until then, the older, deprecated parser was used.

But such big releases are very rare nowadays. And it doesn't feel quite right to name your release "Valar Dohaeris" or something as epic as that, if it's mostly a simple release with only some bugfixes and some minor new functionality. That said, I couldn't help but name this month's DDT/Goclipse/RustDT releases "Candy Kingdom", as they all share a similar big chunk of work. But I doubt that will happen again very often...

Wednesday, 9 April 2014

UI grievances in modern OSes (Death to mouse-over highlights!)

TL;DR: Had to replace aging Windows XP. Tried Windows 8 and found that it sucked (stupid Aero mouse-over highlights). Tried Linux and found that it still sucked (lack of UI polish). Went back to the idea of using Windows 7, and set it to the classic theme. Now content, but fearful for the future.

Classic theme, not even the XP look, but the older Windows 98 look!

So, recently I bought a new laptop, which I use mainly for work - software development. It replaces my 5-year old laptop, still running Windows XP. Time was very much running out on it: Microsoft is stopping support for XP in April this year. That meant, it was time to choose a new OS.

I'm a Windows guy, so I was weighing a choice between Windows 7 and Windows 8. Windows 8 is said to have better performance and better driver support (my own anecdotal experience later on seemed to support that), yet it also has the Metro interface, woefully inadequate for a desktop - and even more so for power users. However, with utilities such as Classic Shell, Metro could easily be disabled (removing the Metro home screen and restoring the Start menu), so I wasn't too worried about it. Overall, I was keen to try out Windows 8.

So I did... but things didn't work out very well. Like I mentioned, the OS seemed leaner, faster, and some drivers worked better out of the box. But crucially, Windows 8 does not have true Classic UI themes/mode, it only supports the Aero theme engine. And personally I really hate Aero themes (even the Metro version). Specifically, I hate all those stupid mouse-over highlights that happen all over the interface. By "mouse-over highlights", I'm referring here to any graphical UI change that happens just because of moving the mouse cursor around.

Objectively speaking, these effects are pure eye-candy: they offer a visual aesthetic of sorts, an artistic effect that might be pleasant to some. But they do nothing to improve the functionality of the UI. Rather, they degrade it, because they introduce a distraction, an unnecessary cognitive load on the user. They may appeal to users who prefer said artistry, but for those who prefer clean functionality (like myself), they are unwelcome. That said, there are some minor mouse-over highlights I can live with, if the effect is subtle. Even the Windows XP interface (Luna) had a few of these already. But the ones in the Aero interface are not subtle at all, and in particular there is one which is an absolute deal-breaker: the list selection highlight:

This highlighting features prominently in the Windows File Explorer, as well as many other applications (Eclipse for example) that use list views, tree views, etc.. This effect is annoying not just because of the distraction during the time you are moving the cursor, but even afterwards, given that the highlight color is very similar to the selection color! This makes it hard to understand at a quick glance which are the selection items and which are not. Ridiculous! :/

This can't be disabled in the Windows UI, but a custom Windows theme might be able to disable it. I searched plenty for a theme that would do that, but found none. Even the many themes I found that purport to imitate the XP interface, aka Luna (my favorite window theme so far), failed to address that. Not to mention they often also had visual bugs, or were unpolished in certain visual aspects. Examples: this, this, and this.

No joy. My second hope was a Windows customization program: WindowBlinds from Stardock software. I gave it a try, and looked for themes that imitated the Luna interface, or at least otherwise clean themes with few mouse-over highlights and other unnecessary effects. I couldn't find any such theme. I then turned to customizing the WindowBlinds theme myself. I had to purchase the full version of WindowBlinds, so I could use SkinStudio, the skin/theme editing software. The software has a very, very extensive collection of customization points... but crucially, there was a blind spot for the customization I wanted! Apparently it couldn't change those mouse-over highlights, at least not in a proper way (this thread in the Stardock forums has more info).

No joy... There was still the option of trying to edit Windows 8 themes myself (using resource file hacking and whatnot), but at this point that was amounting to a huge task. I have better things to do. But still, I was getting desperate. At this point, I even considered using Linux. It had been more than a decade (!) since I last tried using Linux as a main OS. I would never consider it for a desktop/leisure machine, due to all the programs (mostly games) that I want to run, but maybe for a work machine it would be feasible. Of recent distros, I was only familiar with Ubuntu - I had a VM with Ubuntu for testing out programs on Linux. But UI-wise, Ubuntu was going the way of Mac OS (blergh - too much eye candy) and/or of a tablet paradigm (I'm not even sure which). No go.

Linux Mint looked promising, though. I installed it and gave it a go... but still, no good. Things have definitely improved with Linux with regards to hardware setup and configuration: everything worked fine, devices were auto-detected, plug-and-play, all good. It was nice to see that. But the UI was still lacking; there was a minor but clear lack of polish. Just as an example, when I opened a menu, there would be a brief fraction of a second (almost indiscernible) where a black rectangle would be drawn in the whole area where the menu would appear, just before the actual menu was drawn - a sort of flicker. So, even though I managed to find some clean themes (even a Windows XP lookalike theme, including icons!), the overall lack of polish, some visual bugs/artifacts, and certain limited functionality in, say, the file explorer, kept me from embracing Linux usage (I'm not going into detail why, since that would be material for a whole new post).
It was a bit disappointing, but not unexpected at all, that even after a decade the Linux crowd still failed to get their act together, at least with regard to the desktop environment.

So in the end I just got fed up, wiped my disk to install Windows 7, and set it up to use the classic theme (the Windows 98 look). I was quite glad to go back to developing instead of messing around with all this UI grief. My heart was not fully at ease though: while I was quite happy with the Windows 7 UI per se, I feared for the future. I knew I would be fine for some years to come, but after that - when my hardware gets too old, when Windows 7 support runs out and I need a new OS - where will I turn?

Saturday, 17 August 2013

The state and future of Eclipse, as an IDE and a platform.

Doug Schaefer from the Eclipse CDT team just brought up a very interesting post.
This tied in with a lot of things I had been thinking recently about the state and future of Eclipse, as an IDE and as a platform. I hadn't ever brought it up before, so I took the opportunity and made a huge comment on Doug's blog about it (I actually had to split it into three posts due to size limitations). I do hope people read it, especially those influential in the Eclipse development sphere.

I won't repost it here; you can read it at Doug's blog.

TL;DR: In summary, I talk a bit about:
  • How it seems that the technical quality of Eclipse (the platform, and the JDT IDE) has not only failed to evolve but has even declined a bit, more or less since the inception of the 4.x series.
  • It also has not evolved much in terms of framework support for IDEs of other languages based on Eclipse (apart from the promising, but not yet very mature, DLTK project).
  • Other IDEs are catching up (IntelliJ IDEA, MonoDevelop, even Netbeans), and becoming strong competitors.

Tuesday, 15 May 2012

Hazards and pitfalls of equals() on class hierarchies

The problem

I recently got bitten by some very nasty equals() hazards when extending classes. Here is a cautionary tale, which taught me the following: if you extend a class that implements its own equals() method, always carefully check the equals() method of that base class, and make sure to create a new equals() override in the subclass if that is required - and it will be, if you add new structural data to the subclass that affects the logical equality of instances of that class.
But even if your subclass doesn't change anything that affects the notion of logical equality, still remember to check the equals() method of the base class, to avoid any pitfalls. Merely extending a class may break the equals() contract! Recently I fell into one such big pitfall while working with Java on the DDT project. Let us look at the issue in detail. I had this code in the superclass:

class ScriptElementImageDescriptor {
    public boolean equals(Object object) {
        if (object == null || !ScriptElementImageDescriptor.class.equals(object.getClass())) {
            return false;
        }
        ScriptElementImageDescriptor other = (ScriptElementImageDescriptor) object;
        return (fFlags == other.fFlags)
            && (fSize.equals(other.fSize)) && (fBaseImage.equals(other.fBaseImage));
    }
}


I created a somewhat trivial subclass of ScriptElementImageDescriptor, and the problem now was that instances of that subclass were no longer equal to any other instance whatsoever. This is due to the way that ScriptElementImageDescriptor.equals() does the check for the instance's runtime type:
    || !ScriptElementImageDescriptor.class.equals(object.getClass())
This check is not the same as the more common instanceof check. Rather, this check will fail unless the runtime type of object is exactly ScriptElementImageDescriptor. So subclasses will fail the check, just like unrelated classes do, and that's what caused the bug.
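To make the difference concrete, here is a minimal sketch (using hypothetical Base/Derived classes, not the original ScriptElementImageDescriptor code) of how an exact-class check rejects subclass instances that a plain instanceof check would have accepted:

```java
class Base {
    final int value;
    Base(int value) { this.value = value; }

    @Override
    public boolean equals(Object object) {
        // Exact runtime-type check: any subclass instance fails here.
        if (object == null || !Base.class.equals(object.getClass()))
            return false;
        return value == ((Base) object).value;
    }

    @Override
    public int hashCode() { return value; }
}

class Derived extends Base {
    Derived(int value) { super(value); }
}

public class ExactClassCheckDemo {
    public static void main(String[] args) {
        Base base = new Base(1);
        Base derived = new Derived(1);
        System.out.println(base.equals(new Base(1))); // true
        System.out.println(base.equals(derived));     // false: exact-class check rejects the subclass
        System.out.println(derived instanceof Base);  // true: instanceof would have accepted it
    }
}
```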

And this bug was particularly dangerous because that descriptor class was not an actual domain model class for the application - it wasn't used in any direct user/application functionality, but rather was related to internal performance functionality: it was mainly used as a descriptor for caches of image resource objects. Therefore, what this bug caused was a silent resource leak, one much harder to trace than a direct application functionality bug (like a UI bug or crash). Only by sheer luck did I write some other unrelated code (code that was disposing and recreating a certain UI component 100 times more often than normal, to test flickering issues), which caused this bug to manifest clearly and early, very soon after the original change that introduced it. If this bug had shipped, it would have been very, very hard to replicate (it might take hours of normal application usage to hit the leak limits) and to diagnose (since even once the leak is hit, it wouldn't be clear at all what caused it!).

So the horridness of this bug ingrained in my mind that I should always check and remember how a superclass implements equals() (and hashCode()). But my considerations didn't end with the ScriptElementImageDescriptor case - I started to think about this issue in a more general way, considering all sorts of cases. What should I do with my code (assuming I can also modify the superclass) when I create a subclass that may or may not add structural data affecting its notion of equality (sometimes called an aspect)?

The solutions

My first idea to address this issue was to change the superclass equals() to use the more common instanceof runtime type check, like this (I have now simplified the classes for the rest of the article):

class SuperClass {
    int fieldA;
    int fieldB;

    public boolean equals(Object object) {
        if (!(object instanceof SuperClass))
            return false;

        SuperClass other = (SuperClass) object;
        return fieldA == other.fieldA && fieldB == other.fieldB;
    }
}

Now, if SubClass doesn't modify the notion of equality, everything is done. And if it does (say, a new field is added), then a SubClass.equals() override is created which does its own checks, but calls the superclass's equals() method to check the equality specifics relating to the superclass data/state:

class SubClass extends SuperClass {
    int subclassField;

    public boolean equals(Object object) {
        if (!(object instanceof SubClass))
            return false;

        SubClass other = (SubClass) object;
        return subclassField == other.subclassField && super.equals(other);
    }
}

Hold on a second...

...is the above code correct? Well, not quite! That code was my first idea, but it clearly has a problem: if SuperClass and SubClass objects can end up being compared to each other, you will get different (and thus incorrect) results depending on whether equals() is called with a SuperClass receiver and a SubClass argument, or the other way around. So:
    SuperClass superClass = new SuperClass(...);
    SuperClass subClass = new SubClass(...);
    superClass.equals(subClass)
may not return the same value as:
    subClass.equals(superClass)
because different equals() methods will be called. Returning a different value is a clear violation of the symmetry requirement of the equals() contract in Java (and in other languages it's likely to also be a contract violation or, at the very least, an ugly and brittle design). This problem is discussed in more detail in Items 7 and 8 of Joshua Bloch's Effective Java Programming Language Guide. He mentions: "There is simply no way to extend an instantiable class and add an aspect while preserving the equals contract.", but I am not sure that is entirely correct. For example, a potential fix is to change equals() so that it requires that the runtime types of the objects being compared be exactly the same. That would be the second solution:
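The symmetry violation can be demonstrated end-to-end with a small sketch (same SuperClass/SubClass shape as above, condensed into a runnable example):

```java
class SuperClass {
    final int fieldA;
    SuperClass(int fieldA) { this.fieldA = fieldA; }

    @Override
    public boolean equals(Object object) {
        if (!(object instanceof SuperClass))
            return false;
        return fieldA == ((SuperClass) object).fieldA;
    }

    @Override
    public int hashCode() { return fieldA; }
}

class SubClass extends SuperClass {
    final int subclassField;
    SubClass(int fieldA, int subclassField) {
        super(fieldA);
        this.subclassField = subclassField;
    }

    @Override
    public boolean equals(Object object) {
        if (!(object instanceof SubClass))
            return false;
        SubClass other = (SubClass) object;
        return subclassField == other.subclassField && super.equals(other);
    }
}

public class SymmetryDemo {
    public static void main(String[] args) {
        SuperClass superObj = new SuperClass(1);
        SuperClass subObj = new SubClass(1, 2);
        System.out.println(superObj.equals(subObj)); // true: SuperClass.equals only checks fieldA
        System.out.println(subObj.equals(superObj)); // false: superObj fails the instanceof SubClass check
    }
}
```

The two calls disagree, which is exactly the symmetry violation described above.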

class SuperClass {
    int fieldA;
    int fieldB;

    public boolean equals(Object object) {
        if (this == object)
            return true;
        if (object == null || this.getClass() != object.getClass())
            return false;
        return equalsOther((SuperClass) object);
    }
    public boolean equalsOther(SuperClass other) {
        return fieldA == other.fieldA && fieldB == other.fieldB;
    }
}

class SubClass extends SuperClass {
    int subclassField;

    public boolean equalsOther(SuperClass object) {
        SubClass other = (SubClass) object;
        return subclassField == other.subclassField && super.equalsOther(other);
    }
}

This is now a correct implementation of equals(), usable without violating its contract. The if (this == object) check is also added, just as a minor optimization. But this solution still has a significant disadvantage. Because we reverted to a direct class-equality requirement:
    this.getClass() != object.getClass()
other subclasses that *don't* change the notion of logical equality of their parent will not be equal to instances of their superclass, even though they should be logically equal (for those familiar with the concept, we forcibly make *all* subclass instances break the Liskov substitution principle). Whether this limitation is an actual problem or not depends on your code and what you want to do with your subclasses, if you have any at all. This approach can be a significant problem more often than you might at first think, though. For example, even if there are no subclasses of such a base class in your own code, if you use ORM persistence frameworks like Hibernate, these use reflection and byte-code generation to dynamically create proxy subclasses of the classes you are trying to persist! So the above pattern breaks completely if you are using such frameworks, making it unusable there.
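A small sketch (with hypothetical Entity/EntityProxy names) shows the problem: under the exact-class check, even a trivial subclass that adds no state - like an ORM-generated proxy - is never equal to its parent:

```java
class Entity {
    final int id;
    Entity(int id) { this.id = id; }

    @Override
    public boolean equals(Object object) {
        if (this == object)
            return true;
        // Exact-class requirement, as in the second solution.
        if (object == null || this.getClass() != object.getClass())
            return false;
        return id == ((Entity) object).id;
    }

    @Override
    public int hashCode() { return id; }
}

// Stands in for a dynamically generated proxy subclass: same state, same
// intended notion of equality.
class EntityProxy extends Entity {
    EntityProxy(int id) { super(id); }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Entity real = new Entity(42);
        Entity proxy = new EntityProxy(42);
        System.out.println(real.equals(proxy)); // false, even though they are logically equal
    }
}
```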

The final solution

There is a way to address this issue without any drawbacks (well, other than verbose code). Basically, the approach I figured out uses a combination of both instanceof and class comparison. The main type check is instanceof, as in the first solution. However, after the instanceof check, we do another type check to prevent symmetry problems: we ask the other object what is the minimum (most derived) class that it can be equal to. This is done by introducing a new method called getEqualityClassRequirement(). See the code:

class SuperClass {
    int fieldA;
    int fieldB;

    public boolean equals(Object object) {
        if (this == object)
            return true;
        if (!(object instanceof SuperClass))
            return false;
        SuperClass other = (SuperClass) object;
        return other.getEqualityClassRequirement() == this.getEqualityClassRequirement()
            && equalsOther(other);
    }
    protected Class<?> getEqualityClassRequirement() {
        return SuperClass.class;
    }
    public boolean equalsOther(SuperClass other) {
        return fieldA == other.fieldA && fieldB == other.fieldB;
    }
}

Then, as a result, subclasses of SuperClass that modify the notion of logical equality must override getEqualityClassRequirement() to specify their own class as the requirement:

class SubClass extends SuperClass {
    int subclassField;

    protected Class<?> getEqualityClassRequirement() {
        return SubClass.class;
    }
    public boolean equalsOther(SuperClass object) {
        SubClass other = (SubClass) object;
        return subclassField == other.subclassField && super.equalsOther(other);
    }
}

This is the best way I came up with to fully address this issue in the Java language. If you use Eclipse JDT, this could be a good opportunity to create a code template to produce the boilerplate above.
I can see a lot of verbosity-averse programmers cringing at the above code. If you have such aversions, or if you are just feeling particularly lazy while coding a certain class, you can still use the first solution (the lightweight version which just uses instanceof), as long as you make sure your equals() method is final. This turns the first solution into a correct one, although with a severe limitation: no subclass will be able to change the notion of logical equality. That is actually fine for code that is not released externally (i.e., as a library component), because if you ever run into the limitation, you can just change the equals() code to one of the other solutions, which do allow subclasses to change the notion of equality. But this trick of using final is likely not appropriate if you release your code publicly and expect users might derive from your superclass - because that obviously prevents users from changing the equals() code of your superclass if they want to remove the limitation.
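The final-equals variant can be sketched like this (hypothetical Point/LabeledPoint names): equals() uses plain instanceof but is declared final, so no subclass can redefine equality, and subclasses that merely add non-equality-relevant state remain symmetrically equal to their parents:

```java
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // final: subclasses cannot override, so the instanceof check stays safe.
    @Override
    public final boolean equals(Object object) {
        if (!(object instanceof Point))
            return false;
        Point other = (Point) object;
        return x == other.x && y == other.y;
    }

    @Override
    public final int hashCode() { return 31 * x + y; }
}

// Adds behavior-only state, deliberately NOT part of equality.
class LabeledPoint extends Point {
    final String label;
    LabeledPoint(int x, int y, String label) {
        super(x, y);
        this.label = label;
    }
}

public class FinalEqualsDemo {
    public static void main(String[] args) {
        // Symmetric in both directions, since both calls use Point.equals():
        System.out.println(new Point(1, 2).equals(new LabeledPoint(1, 2, "home"))); // true
        System.out.println(new LabeledPoint(1, 2, "home").equals(new Point(1, 2))); // true
    }
}
```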


It should go without saying that if you are overriding equals(), then of course don't forget to update hashCode() as well. Fortunately, extending hashCode() is much simpler, as none of the problems above arise. That's because hashCode() only deals with one object (the receiver), not two objects like equals(), and thus there are no issues with mixed types.
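Extending hashCode() in a subclass can be as simple as combining the new field with the superclass hash - a minimal sketch (same hypothetical SuperClass/SubClass shape as earlier):

```java
class SuperClass {
    final int fieldA;
    SuperClass(int fieldA) { this.fieldA = fieldA; }

    @Override
    public int hashCode() {
        return Integer.hashCode(fieldA);
    }
}

class SubClass extends SuperClass {
    final int subclassField;
    SubClass(int fieldA, int subclassField) {
        super(fieldA);
        this.subclassField = subclassField;
    }

    @Override
    public int hashCode() {
        // No symmetry concerns here: only the receiver is involved.
        return 31 * super.hashCode() + Integer.hashCode(subclassField);
    }
}

public class HashCodeDemo {
    public static void main(String[] args) {
        System.out.println(new SubClass(1, 2).hashCode()); // 31 * 1 + 2 = 33
    }
}
```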

Other languages

I mentioned before that this was the best way I found within the Java language. But that leaves open to consideration how this issue might be addressed, and possibly improved upon, in other languages. D in particular comes to mind, due to its meta-programming and code generation capabilities. Mixins, for example, could be used to alleviate some of the boilerplate that results in the Java case.
Going even further, one could take this issue into the realm of language design. Perhaps the very way one defines equals/hashCode in classes could be changed to address this issue just as safely and comprehensively, but in a more pleasant and concise way (D follows the Java system pretty closely, as most OO languages do). But these are enough considerations for me at the moment. Thinking about this issue as applied to D or other evolving languages is left for another article or discussion in the future, perhaps to be picked up by others.

Thursday, 13 October 2011

An indignant reaction to Stallman's comments on Steve Jobs?

Whoa. It seems a lot of people have expressed indignation for Stallman's comments on Steve Jobs. See for example this post, featured in a Slashdot entry: http://www.readwriteweb.com/enterprise/2011/10/why-fsf-founder-richard-stallm.php. I'm surprised by this reaction from the community, and I find it a bit silly.

Surprised because I do not find it unexpected at all for Stallman to make such comments. In fact, I would be surprised if he *hadn't* made comments similar to those. And I find it silly for the software community to expect a high level of respect from Stallman for Steve Jobs. It must be understood that in the eyes of Stallman, Jobs was a malignant influence on the software world. It does not matter whether you agree that was the case or not - a point which seems lost on some people. For example, the post linked above states:
"He [Stallman] manages to offend common decency by celebrating the absence of a man who contributed enormously to the world of computing, and insult millions of Apple users simultaneously."
But in Stallman's view, the contributions of Jobs to computing were very unsavory ones - "evil deeds", in his own words. It wasn't a disagreement over mere preference, execution, or technical issues, but rather over fundamental ethical issues. Frankly, if I believed the same things about a man as Stallman did, I would also not shy away from uttering the same words he did, political correctness be damned. (Although I can't think of any such person in the software world.)

If one still wants to quarrel with Stallman, it should not be over his latest comments, but over his beliefs and causes - the root cause of all his behavior. Beliefs which, of course, are the same as the FSF's - another point which seems lost on the poster, as evidenced by:
"It's time for free software to find a new voice. Once again, Free Software Foundation founder Richard Stallman is putting his feet firmly in his mouth."

Richard Stallman is still the perfect leader for the FSF, in case the poster was implying otherwise. Rather, the poster seems not to understand the ultimate goal of the Free Software Foundation in the first place: that *all* software should be Free. Not merely some, or most of it, but *all* of it. They hold the absolute ideal that any closed, proprietary software is evil. All of this is very clearly expressed in articles by the FSF and RMS. A lot of developers seem to ignore that, though, perhaps just because they like the FSF's contribution to Free/Open-Source software - they practically started the whole thing, after all, a very good outcome for many people in the software world. But there is a *realm* of difference between merely wanting more Free/Open-Source software out there, and thinking that all software should be Free Software. One should not support the FSF unless one supports the latter idea, simple as that.

PS: is "insulting millions of Apple users simultaneously" against "common decency"? Ehh... isn't that a bit of an exaggeration? I understand the comment might be seen as callous and insulting to Steve Jobs, and perhaps to those close to him, but to millions of Apple users? Apple users are not "part" of Steve Jobs... as much as cultish Apple fanboys might think otherwise.