How Java revolutionized the programming world
Interview with Gaetano Tonello – Datatex Senior Project Manager
Officially announced in 1995, Java was born from research carried out at Sun Microsystems in the early Nineties by a team of developers led by James Gosling. How was the birth of this new programming language received at the time? When did Java start to become successful?
Understanding the history of Java can help us understand what has happened over the last decades and why IT stands where it does today. From what I can see surfing the net, it seems to have been decided that everything that happened before 1995 should fall into oblivion.
In the second half of the Eighties, people began to speak in earnest about “Artificial Intelligence”, with a “discerning” and an “economic” approach.
“Discerning” because – although the concept of AI had been well established in American universities since the first half of the Seventies – no one had yet understood where the border lay between “data processing” and “artificial intelligence”.
“Economic” because, beyond academic exercises, AI produced significant profits only after the year 2000.
Now I am going to tell the real story of Java.
In the second half of the Eighties, Rank Xerox began to release on the market the first commercial applications developed with AI.
At the time, Xerox was a real “golden goose”. In Xerox’s PARC lab (the Palo Alto Research Center), the origin of many technological developments and innovations, the first development environment for AI applications was created.
Back then the commonly used word was never “environment” but “shell”; the programming language at the base of the system was not Java – which did not exist yet – but GCLisp (an acronym for Golden Common Lisp), a dialect of Lisp, a language created at MIT in Boston in the late Fifties and used for the first time in an advanced environment for military purposes developed by IBM.
For the first time, people began to talk widely about object-oriented programming, inference engines, expert systems, machine learning, knowledge engineering, and so on.
At first, the market tried to put its trust in these revolutionary novelties, but at the beginning of the Nineties something happened that no one could have forecast or framed in its proper dimension.
Xerox’s competitors arrived, fiercer than ever.
One of the first was Sun Microsystems, a young, fast-rising company born at the beginning of the Eighties in Santa Clara, in Silicon Valley – at the time the archrival of Palo Alto.
Just to be clear, Sun was the company that offered the Unix system to the world, and with it the broad concept of open source, but from the start it also tried to follow the path of AI.
In secret they created a new shell, which at the beginning aimed to compete with Xerox’s and was later baptized Nexpert Object. Unlike Xerox’s, the project was never officially disclosed and never produced sellable products.
The programming language at its base was an evolution of Smalltalk, itself invented at Xerox PARC.
Sun, finding along the way that – despite having a technologically superior product – it could not keep up with Xerox, took some dramatic decisions that irreparably affected the future of AI in the years to come.
First, Sun decided to build a new proprietary programming language, a sort of mix between C (for the most part) and Smalltalk (for a smaller part), and to this end launched a second secret project, called Oak.
But the most innovative part of the project was the presence, for the first time, of a “virtual machine”: something that would allow programs compiled with Oak to run on any operating system (or at least on any OS supported by Sun’s technology).
The target of the Oak compiler was no longer the operating system of the host machine, but the virtual machine installed on it.
For the first time, the principle “Write once, run anywhere” came to life.
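A minimal sketch of what this means in practice (the class name and the printed properties are illustrative only): the compiler produces platform-neutral bytecode, and any host with a JVM executes it unchanged.

```java
// Hello.java - a minimal illustration of "write once, run anywhere".
// javac compiles this source into platform-neutral bytecode (Hello.class);
// the same .class file then runs unchanged on any operating system
// that has a Java Virtual Machine installed.
public class Hello {
    public static void main(String[] args) {
        // Standard system properties reveal which OS and JVM the bytecode landed on.
        System.out.println("Running on " + System.getProperty("os.name")
                + " with Java " + System.getProperty("java.version"));
    }
}
```

Compiled once with javac Hello.java, the program targets the virtual machine rather than the host: java Hello prints a different OS name on Windows, Linux, or Solaris without any recompilation.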
This choice was a deadly blow to Xerox, which in turn decided to implement and develop all its AI technology on an ad-hoc machine, the 1186 AI – a machine with dedicated hardware, processors, operating system, and compiler – that cost almost $100,000 at the end of the Eighties.
Although “Java” was officially born from a rib of Oak in the mid-Nineties, during the second half of that decade the term “Artificial Intelligence” disappeared. It returned to the collective imagination not because of some new application, but because of the famous Spielberg movie “A.I. Artificial Intelligence”, which reached cinemas in 2001.
Sun, which still owned Java before eventually handing it over to Oracle, woke up and began to offer Java as a sort of starting point for doing everything, on any kind of device and in any context.
To be honest, this awakening was triggered not so much by IBM’s competition as by the big steps forward made by robotics (industrial and otherwise) in Japan, a country that had already made a different choice, relying on technologies based on a completely different programming language: Prolog.
However we want to judge it, Sun’s choice – almost twenty years after the first significant AI applications – was a winning one, even if a little belated.
PS: don’t waste your time looking for confirmation of my story on the net, because you will not find it…
IBM too, in the second half of the Nineties, tried to promote a new Java-based solution (the so-called IBM San Francisco framework), but then decided to dismiss the project. Why? What would have happened if this project had been completed?
The disinvestment from the San Francisco Project is still taught in American universities as a paradigm of what should not have been done in that historical context – between the Nineties and the first decade of this century – in the field of AI.
For many ex-post reviewers, the San Francisco Project showed the arrogance of its creators and developers, who tried in every possible way to follow their instinct to categorize everything, ignoring that the messiness of the real world defies tidy solutions, especially in complex business processes.
For others, the San Francisco Project failed because IBM gradually realized that, no matter how hard its developers worked, they could never reduce everything to properties and granular subsets without running into the problem of infinite regression: a series of propositions/assertions, each requiring another, without end.
For those who, like me, have developed software, it is quite clear that infinite regression sooner or later manifests itself as a failed attempt to define everything in detail: there is always another level of granularity below any existing one. Today, with the neural-network approach, we are trying to overstep this limit, but, to be honest, at present we have not yet succeeded.
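A toy Java sketch of the regression described here – every class name is hypothetical, invented purely for illustration: each time a business object seems fully defined, one of its fields demands a still finer-grained definition of its own.

```java
// Hypothetical illustration of infinite regression in domain modelling:
// every "complete" definition contains a detail that demands its own type,
// and below each new type another level of granularity is always waiting.
class Order        { Customer customer;         /* but what, exactly, is a Customer? */ }
class Customer     { Address address;           /* and what, exactly, is an Address? */ }
class Address      { Street street;             /* a street belongs to a municipality... */ }
class Street       { Municipality municipality; /* ...which has its own structure... */ }
class Municipality { /* ...and so on, without end */ }
```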
At Datatex, we lived through the San Francisco era with the highest level of personal involvement. We believed and invested in the project. In the end, we had to heal our wounds and start over…
I have a view that differs slightly from those mentioned above.
First, there is the historical context.
The project fell between two computer events of absolute planetary importance: the millennium bug and the changeover to the Euro as the new currency of the Eurozone countries.
In this context, the attention and the big investments of software houses were all focused on the issues linked to these two events, which had nothing to do with the technological lineage of the new proposals.
Second, IBM – accustomed, until the first half of the Nineties, to ruling the IT field – was essentially surrounded: by Microsoft on one side for personal computers, by Oracle on the other for databases, and then by Sun where Java was concerned.
Judging in retrospect, IBM perhaps overreached with the San Francisco Project and then – as had happened a few years earlier with OS/2 – did not dare to keep faith in its own capabilities.
Why?
We may know the truth only when those who made certain decisions finally spill the beans.
At present, I still don’t know it…
Today, Java is one of the most popular languages and is also used by Datatex programmers. What are the features and strengths that allowed Java to rule the market and revolutionize the world of programming?
It is not easy to answer this question.
Considering other similar and overlapping situations, I would say that this is a matter of “de facto standard”.
After all, if we think about it, the AI world – like PCs with Windows, mobile devices with Android, and many other things – needed a reference standard. And with Java, that reference standard was achieved.
Java is not an easy programming language at all.
But Java, unlike other object-oriented languages, became the paradigm: most of the new generations of programmers know it and – last but not least – it is complete, able to give a functional answer to a wide variety of needs.
How do you see the future for Java? How do you think this language will evolve during the coming years?
This is another difficult question.
The next stages of AI will focus on neural networks, and today IBM is ahead of everyone else.
I think that Java, as it is now, is bound to evolve and, sooner or later, doomed to extinction, leaving room for something better suited to future needs. But it will remain the standard for at least 10-15 years.
Datatex has believed in Java for more than twenty years and – despite the initial stall with San Francisco – has never been let down. In ten years, Datatex will again take stock of the evolution of languages and make cutting-edge choices, as it always has throughout its history.
As an ultimate risk-mitigation factor, we have also created ABS, which automatically translates business scenarios into Java code; in the future, if and when required, we will generate different, more advanced code while holding on to our business processes and the I.P. embedded within them.
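ABS itself is proprietary and its internals are not described here, so the following is only a minimal, purely hypothetical sketch of the general idea – business rules captured as declarative data and emitted as Java source – with every name invented for illustration:

```java
import java.util.List;

// Hypothetical sketch of scenario-to-code generation (not the actual ABS):
// business rules are held as declarative data and Java source is emitted
// from them; the same scenario could later be re-emitted in another, more
// advanced target language without losing the embedded business I.P.
public class ScenarioGenerator {

    // A business rule captured declaratively: a name, a condition, an action.
    record Rule(String name, String condition, String action) {}

    // Emit one Java method per rule, wrapped in a class named after the scenario.
    static String generate(String scenario, List<Rule> rules) {
        StringBuilder src = new StringBuilder("public class " + scenario + " {\n");
        for (Rule r : rules) {
            src.append("    public void ").append(r.name).append("() {\n")
               .append("        if (").append(r.condition).append(") { ")
               .append(r.action).append("; }\n")
               .append("    }\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        // Example: a fictional textile rule rendered as Java source.
        System.out.println(generate("FabricOrderFlow", List.of(
                new Rule("reserveYarn", "stockKg >= requiredKg", "reserve(requiredKg)"))));
    }
}
```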