The last difference between OpenJDK and Oracle JDK

January 18, 2013 5 comments

Recently I’ve spent a lot of time investigating font rasterization (a great topic which deserves a separate post). Most applications use the font engine that is built into the graphics library or widget toolkit. Only a few cross-platform applications which badly need consistent text layout (Acrobat Reader, for example) use their own font engines (like Adobe CoolType). The Java platform is one of them, since it has its own graphics library. If you are curious, take a look at this article comparing font engines, including the one from the Java platform. From publicly available information I understood that OpenJDK uses the FreeType library. I thought: “That’s great, I have JDK 1.7 installed, so this library must be there, let’s take a look.” But I could not find any traces of freetype.dll in the JDK. I was puzzled and tried to find some answers in the sources of OpenJDK. Imagine my surprise when I found that Oracle JDK still uses the proprietary T2K font library (located in jre/bin/t2k.dll)!

Both Oracle JDK and OpenJDK are built from the same sources, and linking to external libraries happens at runtime. There is logic which checks whether the JDK is running in “OpenJDK” mode or “Oracle JDK” mode and, depending on that, loads either FreeType or T2K (see sun.font.FontScaler.java). I thought: “I was always curious about the remaining differences between OpenJDK and Oracle JDK, and I’ve finally found one!”

An interesting thing is how the JDK determines at runtime whether it is OpenJDK or Oracle JDK. It checks whether a file for the Lucida font is present in the JRE. A long time ago Sun got lots of complaints that the “write once, run anywhere” promise doesn’t actually hold, especially for look and feel. Different systems had different fonts, and sometimes even fonts with the same names had different glyph sizes, leading to inconsistent text layout. To fix this, Sun licensed the Lucida font for distribution with the JRE and made it the default font. This font is absent from the OpenJDK distribution, and, as I said, its absence causes the JDK to link at runtime with FreeType; otherwise it links with T2K. I was surprised, since I expected a global configuration flag, something like “isOpenJDK”.

I thought: “OK, let’s take a look at what other specific hacks other subsystems use to distinguish between OpenJDK and Oracle JDK.” It turned out that there are no other parts where this difference matters. The font subsystem is the last one. So I was lucky to run directly into it.
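
Roughly, the selection boils down to the sketch below. This is not the actual sun.font code; the class name, the method names and the exact Lucida file name here are illustrative only.

    import java.io.File;

    // Hypothetical sketch of the runtime check described above; names are
    // illustrative, not the actual sun.font.FontScaler implementation.
    class FontScalerSelector {

        // Oracle JDK bundles the licensed Lucida fonts, OpenJDK does not.
        static boolean isOpenJDK(File jreLibFontsDir) {
            return !new File(jreLibFontsDir, "LucidaSansRegular.ttf").exists();
        }

        static String pickScaler(File jreLibFontsDir) {
            // OpenJDK links against FreeType at runtime; Oracle JDK loads t2k.dll.
            return isOpenJDK(jreLibFontsDir) ? "FreeTypeFontScaler" : "T2KFontScaler";
        }
    }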

Categories: Java

More compact format for Java classes

October 9, 2012 Leave a comment

I’ve noticed that compiled Java code takes more space than source code. For example, a simple equation solver written by me takes 15 kbytes in source and 30 kbytes in compiled form. The source code compresses to 3 kbytes, and the compiled code to 17 kbytes. Since I have written a library for reading and manipulating .class files, I decided to find out why these files are so big.

For the equation solver I got the following results: the constant pool takes 61% of the total space, and methods take 32%. So the majority of the space is taken by constants. Maybe this could be optimized? There are several types of constants which contain references to other constants: MethodReference, ClassReference, StringReference, FieldReference and NameTypePair. Each reference takes two bytes. My program is composed of small classes, with typical constant pools consisting of 50-80 entries. So for such classes it is possible to store references using only one byte. I invented a new file format, named “slass”, which uses this technique for saving space, and implemented it in my library. Testing with the above-mentioned equation solver gives a 7% improvement on the size of the constant pool, which makes a 4% improvement on the size of the .class file! No, it was not worth inventing.
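
For illustration, here is a minimal sketch of the core trick; the real “slass” writer lives in my library and is more involved, and the names below are made up.

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // The core idea of "slass": if the constant pool has fewer than 256 entries,
    // a reference to another constant fits into one byte instead of the two
    // bytes mandated by the standard .class format.
    class ConstantRefWriter {

        static void writeRef(DataOutputStream out, int index, int poolSize) throws IOException {
            if (poolSize < 256) {
                out.writeByte(index);   // compact form: one byte per reference
            } else {
                out.writeShort(index);  // standard .class form: two bytes
            }
        }

        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            writeRef(new DataOutputStream(buf), 42, 80); // a pool of 80 entries -> 1 byte
            System.out.println("bytes written: " + buf.size()); // prints 1
        }
    }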

Categories: Uncategorized

“Definitional interpreters” by J. Reynolds

October 3, 2012 Leave a comment

I’ve just finished reading the famous paper by J. Reynolds, “Definitional Interpreters for Higher-Order Programming Languages”. This paper is so good that I re-read it every two years. First, it has obvious historical value, since it introduced very important concepts for compiler writing and programming language semantics. Even nowadays it is a great tool for understanding these concepts. It is focused, concise, simple and still very deep (like half of SICP compressed ten times). This combination of depth and conciseness is very thought-provoking. Each time I read this paper I do two passes. The first pass is to strengthen my understanding of the overall ideas, and the second is for reflection and deep digging. This time I decided to write down the results of the second pass. Two years later I’ll do it again and then compare.

The paper is about the operational semantics of programming languages. It shows how the meaning of a programming language can be given by providing an interpreter for this language implemented in another, better-understood language, for example the instruction set of some abstract machine. A simple, purely applicative higher-order language is defined by providing a full operational semantics as a set of rules in natural language (with an abstract machine implied). Then the author sets the task of implementing an interpreter for that language in that language. The given semantics provides us with an understanding both of the tool we use and of the thing we make. The purpose of the investigation is to get a deep understanding of the nature of higher-order functions and of the order of evaluation. There is also a third very important point: the nature of recursion.

As a first step the author provides us with a “direct” interpreter, which uses higher-order techniques. Unfortunately, there is no explanation of how this interpreter is obtained; it is just given. The most striking feature of this interpreter is that it is meta-circular, which means that each feature of the defined language is implemented directly by the same feature of the implementation language. Evaluation of an application expression is performed by application, and evaluation of a lambda expression produces a function. This simplicity is striking because every reader knows very well that production compilers for programming languages are big and complex. The answer is simple: our interpreter is not concerned with the problems of parsing, code generation and optimization. However, the apparent simplicity is deceiving. Detailed explanations of the operational semantics are encoded using higher-order techniques, which makes them less explicit and less obvious. The main mystery is the implementation of recursion, which is not revealed, since letrec is implemented via letrec. On the other hand, this simple interpreter is a good exercise (or a good example) of a program written in a high-level language.
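
To make the meta-circular flavour concrete, here is a tiny sketch of the higher-order style in Java rather than in the paper’s defining language (the names and the toy language are mine): a lambda expression evaluates to a host-language function, and application is performed by host-language application.

    import java.util.function.Function;

    // Sketch only: a higher-order interpreter for a toy applicative language.
    public class HigherOrderEval {
        interface Expr {}
        record Var(String name) implements Expr {}
        record Lam(String param, Expr body) implements Expr {}
        record App(Expr fun, Expr arg) implements Expr {}

        // The environment is itself a function from names to values.
        @SuppressWarnings("unchecked")
        static Object eval(Expr e, Function<String, Object> env) {
            if (e instanceof Var v) {
                return env.apply(v.name());
            } else if (e instanceof Lam lam) {
                // A lambda becomes a real Java function: a delayed computation.
                return (Function<Object, Object>) arg ->
                        eval(lam.body(), name ->
                                name.equals(lam.param()) ? arg : env.apply(name));
            } else if (e instanceof App app) {
                Function<Object, Object> f = (Function<Object, Object>) eval(app.fun(), env);
                return f.apply(eval(app.arg(), env)); // application by application
            }
            throw new IllegalArgumentException("unknown expression: " + e);
        }

        public static void main(String[] args) {
            // ((lambda x. x) greeting) in an environment where greeting = "hello"
            Expr program = new App(new Lam("x", new Var("x")), new Var("greeting"));
            System.out.println(eval(program, name -> "greeting".equals(name) ? "hello" : null));
        }
    }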

As a second step, this interpreter is defunctionalized, and all higher-order functions are removed. Data structures are provided instead. This second interpreter is much closer to the operational semantics given in the beginning. So perhaps a better way to arrange the paper would have been to start with the first-order interpreter and then move to the higher-order one. Someone who is familiar with lambda calculus will also note that the replacement of higher-order functions with data structures is a cheat, because all data structures are syntactic sugar for lambda terms. The apparent difference is that the higher-order interpreter produces functions which already contain computations that are merely delayed until a parameter is supplied. The first-order interpreter breaks this by adding an encode function (which produces a record that is, in fact, a decode function) and by moving the computation to just after the function which interprets the record. I think this re-arrangement of computations is the most important part.
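
And here is the defunctionalized counterpart of the sketch above (again Java, again only a sketch): evaluating a lambda now produces a Closure record, and a separate apply() function interprets that record once the argument arrives.

    // Sketch only: the first-order (defunctionalized) version of the interpreter.
    public class FirstOrderEval {
        interface Expr {}
        record Var(String name) implements Expr {}
        record Lam(String param, Expr body) implements Expr {}
        record App(Expr fun, Expr arg) implements Expr {}

        interface Env {}
        record Empty() implements Env {}
        record Bind(String name, Object value, Env rest) implements Env {}

        static Object lookup(Env env, String name) {
            if (env instanceof Bind b) {
                return b.name().equals(name) ? b.value() : lookup(b.rest(), name);
            }
            throw new IllegalArgumentException("unbound variable: " + name);
        }

        // The "record which is, in fact, a decode function": a captured
        // environment plus the syntactic body, but no computation yet.
        record Closure(String param, Expr body, Env env) {}

        static Object eval(Expr e, Env env) {
            if (e instanceof Var v) return lookup(env, v.name());
            if (e instanceof Lam lam) return new Closure(lam.param(), lam.body(), env);
            if (e instanceof App app) {
                Closure c = (Closure) eval(app.fun(), env);
                return apply(c, eval(app.arg(), env));
            }
            throw new IllegalArgumentException("unknown expression: " + e);
        }

        // The computation that the higher-order version hid inside a Java lambda
        // now happens here, after the record has been interpreted.
        static Object apply(Closure c, Object arg) {
            return eval(c.body(), new Bind(c.param(), arg, c.env()));
        }
    }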

So, the higher-order programming style specifies computations which are delayed until all input is available. Delayed computations are a two-way abstraction facility: they are opaque to the code which will supply the input, and a delayed computation doesn’t have access to the code which invokes it, only to the supplied parameter. Defunctionalization breaks this abstraction: instead of providing a delayed computation, an environment is captured and a representation of the computation is encoded, and the code which supplies the input is now responsible for interpreting that encoded computation. Defunctionalization makes the data flow of the interpreted program more apparent.

A similar trick with delaying happens with recursion. The trickiest part of the explanation is to show that new(e) is the same as e. This works because, instead of packaging the environment as a function (a delayed computation with everything ready except the variable name), the environment contains the syntactic part of the declaration expression (a “thunk”). The environment is now also interpreted by a function get(), which unpacks the thunk and evaluates the expression by providing the same environment, which binds the name to the thunk. This means, for example, that while evaluating the following expression:

letrec fractal = lambda(x). if x == 0 then 1 else fractal(x - 1)
in fractal(200)

each time the body of the lambda encounters the name “fractal”, it first finds a thunk, then creates a new closure based on it, and then passes that closure into the eval function together with the same environment, so the making of a closure from the thunk happens again and again in a loop. This is not very efficient, but it is simple and clear. A more efficient interpreter would use assignment. So, by delaying the evaluation of the recursive definition we don’t have to use cyclic records, but recursion is still implemented somehow. If get() performed actual evaluation, then recursion in the defined language would depend on the recursion get() -> eval() -> get() -> eval() in the defining language. But get() performs the simplest evaluation by returning a closure, and the actual interpretation of it happens in eval(), so recursion depends on eval() calling itself in a loop. Unlike higher-order functions, recursion is not an introduced feature, but a meta-circular one. After understanding this I immediately realized that this result should have been expected. Loops are evaluated by an evaluation loop; that’s the only way.
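
A sketch of that thunk trick, continuing the first-order Java style from above (the names are mine, not the paper’s notation): the recursive environment stores the syntax of the lambda, and every lookup of the recursive name rebuilds a closure over that same environment.

    // Sketch only: an environment for letrec that binds the recursive name to a
    // thunk (the un-evaluated lambda syntax) instead of to a finished closure.
    public class RecEnvSketch {
        interface Expr {}
        record Lam(String param, Expr body) implements Expr {}

        record Closure(String param, Expr body, Env env) {}

        interface Env { Object get(String name); }

        // new(e): the environment produced by `letrec f = lambda ...`.
        record RecEnv(String recName, Lam thunk, Env outer) implements Env {
            public Object get(String name) {
                if (name.equals(recName)) {
                    // Unpack the thunk: build a fresh closure over *this* very
                    // environment, so the recursive name stays visible inside
                    // the body. No cyclic record is ever constructed.
                    return new Closure(thunk.param(), thunk.body(), this);
                }
                return outer.get(name);
            }
        }
    }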

A great thing about the second interpreter is that it makes clear that evaluation is nothing more than manipulation of environments. After all, programs are closed lambda terms, which are just combinators.

The most difficult part of the paper for me is the introduction of continuations. To understand it, one should first clearly understand the problem. We don’t actually need an interpreter which allows us to flexibly change the order of evaluation. We want an interpreter which imposes a declared evaluation strategy on the interpreted program independently of the evaluation order of the defining language. So all this talk about “serious functions” is not about some abstract functions, but about the functions of our interpreter itself. In fact, what we want is that programs which should loop forever under call-by-value actually do loop forever. Yes, we want programs which we expect not to terminate to really not terminate.

After this text was written, I discovered “Definitional Interpreters Revisited” by the same author. Better late than never.

How RMI works

July 17, 2012 Leave a comment

Sometimes the best way to teach someone how to use something is to explain how it works inside. This small explanation of Java RMI was written especially for myself, so I can quickly restore this knowledge in case I forget.

If you have an object and you want to make it accessible to remote parties, you have to “export” it into the RMI subsystem. RMI will generate some sort of identifier for your object and store the binding between your object and this identifier in some internal storage. When a remote party wants to make a call to your object, it makes a connection to your JVM and sends a protocol message containing the object identifier, the name of the method and the parameters in serialized form. The RMI subsystem finds the object by its identifier, deserializes the parameters and then performs the method invocation using reflection.
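
In a very simplified, hypothetical form (the real RMI runtime is much more involved, and these names are made up), the server-side bookkeeping looks like this:

    import java.io.ObjectInputStream;
    import java.lang.reflect.Method;
    import java.security.SecureRandom;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the bookkeeping described above, not real RMI code.
    class MiniExportTable {
        private final Map<Long, Object> exported = new ConcurrentHashMap<>();
        private final SecureRandom random = new SecureRandom();

        // "Export": generate some sort of identifier and remember the binding.
        long export(Object obj) {
            long id = random.nextLong();
            exported.put(id, obj);
            return id;
        }

        // Incoming call: find the object by identifier, deserialize the
        // arguments, then invoke the requested method via reflection.
        Object dispatch(long objectId, String methodName, Class<?>[] paramTypes,
                        ObjectInputStream in) throws Exception {
            Object target = exported.get(objectId);
            Object[] args = new Object[paramTypes.length];
            for (int i = 0; i < args.length; i++) {
                args[i] = in.readObject();
            }
            Method m = target.getClass().getMethod(methodName, paramTypes);
            return m.invoke(target, args);
        }
    }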

The serialized form of the parameters contains their exact classes. So even if the parameters are declared in the method as something abstract, the server first creates instances of their exact classes and only then performs the upcast. This means that the exact classes of the parameters must be on the server’s classpath.

To communicate, a remote party should somehow obtain the identifier of an exported object. This is solved by an additional lookup. A specific object named “Registry” is bound to some static identifier (let’s call it “1”). This object has its own storage and it allows mapping other objects to strings. So to obtain a reference to a “registered” remote object a client should know the string key which was used during the object’s registration. The client constructs a reference to the registry using the static identifier “1”, then asks it to return the identifier of the registered object.

This double referencing seems complex. However, it provides some level of protection. Registered objects are “public”: anyone who knows a name can call them. The names under which objects are registered are not secret, and you can query a registry for a list of all names. A method call to a public object may return a remote reference to some “private” object with a randomly generated id which is hard to guess, so it will be available only to the method’s caller.

If a server wishes to return a remote reference to an object instead of a serialized copy, it should export this object to the RMI subsystem. The same is true for a client if it provides a callback parameter. UnicastRemoteObject is a class whose instances automatically export themselves in the constructor.

Let’s check that we understand everything by describing the process of registering an object. The registry is often started as a standalone process. If a server wants to register an object, it should first construct a remote reference to the registry. The interface itself is known (java.rmi.registry.Registry) and located in the runtime library. The object identifier is also known; it is static. So the server has to provide only the host and port where the RMI registry is running. The server exports its object, then invokes the bind() method. RMI understands that the argument to the remote call was exported, so it sends the object identifier and the names of the classes which are required for the remote interface (the interface itself, all super-interfaces, all declared parameter classes and all declared return classes). The string key is serialized. Now the serialized string, the identifier of the registry object and the info about the registered object are sent to the registry process. The RMI subsystem in the registry creates a remote reference with the object’s identifier which implements the object’s remote interface. Then RMI locates the registry object using the registry’s identifier and invokes the bind() method to store the remote reference together with the key. When a client invokes lookup() it connects to the registry in the same way as the server, and the registry transfers the stored remote reference to the client. Now the client can connect directly to the server and make a call.
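
Here is a minimal end-to-end example of this flow using the real java.rmi API; the host name, the port and the “greeter” key are placeholders:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    class GreeterImpl extends UnicastRemoteObject implements Greeter {
        GreeterImpl() throws RemoteException { super(); } // exports itself here
        public String greet(String name) { return "Hello, " + name; }
    }

    class Server {
        public static void main(String[] args) throws Exception {
            // The registry accepts bind() only from the local host.
            Registry registry = LocateRegistry.getRegistry(1099);
            registry.bind("greeter", new GreeterImpl()); // a remote reference is stored, not a copy
        }
    }

    class Client {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("server-host", 1099);
            Greeter greeter = (Greeter) registry.lookup("greeter"); // gets the stored remote reference
            System.out.println(greeter.greet("world")); // now talks directly to the server
        }
    }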

The bad thing about RMI is that because of serialization the server must be able to create the exact classes of the parameter objects, and the client must be able to create the exact classes of the return values. The registry also has to know a lot about all registered interfaces. This makes systems built on top of RMI not very flexible. However, there is a way for one side to tell the RMI class loader on the other side where to find the class files: the system property java.rmi.server.codebase. To make things easy I’ve written a simple HTTP server which can be deployed in any application that uses RMI, so you can be sure that if it compiles, it works.
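
For example, a server whose class files are served by such an embedded HTTP server would set the property before exporting anything (the URL here is just a placeholder):

    class CodebaseSetup {
        public static void main(String[] args) {
            // Must be set before any object is exported or marshalled; remote
            // RMI class loaders can then fetch unknown classes from this URL
            // (subject to the security settings on the other side).
            System.setProperty("java.rmi.server.codebase", "http://my-host:8080/classes/");
            // ... export objects and bind to the registry after this point ...
        }
    }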

Categories: Java, Technology

To SLEE or not to SLEE?

July 5, 2012 Leave a comment

I was working for a company which makes a product based on the JAIN SLEE specification. I developed several protocol stacks, several protocol frameworks and applications. Sometimes I even studied the code of the SLEE implementation. However, I have to admit that I never read the specification itself. Neither have most of my former colleagues. It is big and boring. I always had the impression that it is based on some very simple technical ideas, and the whole complexity is artificial. I felt that this specification was making the life of developers harder, while it should have made it simpler. However, I understood that producing JAIN SLEE was more of a political and market-oriented decision.

A widespread approach to systems design is to compose solutions from components with well-defined interfaces. This means that each component should not know any details about the other components it interacts with. Unfortunately, component design also often means that the implementation difficulties of each component are considered specific to that component and should be solved at the component level. Architects are tasked with defining functional requirements for components and often don’t care about implementation difficulties. Sometimes implementations could be greatly simplified if someone knew the internals of the communicating components. This problem is called “the cost of abstraction”. Since I have the advantage of understanding both the whole picture and many of the individual parts, I am writing this article. I claim that JAIN SLEE is a too-complex solution to simple technical problems, and this complexity makes it hard for people to see those simple technical problems. I claim that simple technical problems should have simple technical solutions. I also claim that SLEE is simply not enough.

The idea of this article is to show that JAIN SLEE should not be a framework or an application server. Instead it should be an implementation principle of an application server, covering all protocol stacks and protocol frameworks. Instead of being one layer of a solution stack it should be an approach applied to all layers of the solution stack, thus fixing the problems for the whole solution.

The article is big and is split into several parts. The first part is an introduction to multithreaded design; it explains how threads should be used. The second part compares synchronous and asynchronous protocol APIs. The third part introduces SLEE by combining the ideas from the first and second parts. The fourth part explains the layered design of networking software and shows that protocol layers suffer from the same problems as applications. The fifth part explains a new solution which is better than JAIN SLEE. The sixth part explains why JAIN SLEE became such a monster.

JAIN SLEE is implemented in Java, but most of the things I’ll say are not specific to Java and are applicable to any networking system. All examples are written in Java-like pseudocode.

Remote debugging C programs with Eclipse and GDB

December 27, 2011 2 comments

At my daily job I write in Java. My last assignment, however, was to patch an existing networking program written in C. The last time I wrote anything in C was 10 years ago.

My IDE of choice is IntelliJ IDEA, but I’ve tried Eclipse several times, so it’s my backup IDE. I also knew from my colleagues who actually practice C that Eclipse CDT is one of the best free IDEs for C development. So I chose Eclipse CDT as my C IDE. On my desktop PC I use Windows, but the program I was modifying is Unix-only. When I finished writing code I uploaded it via FTP to a Linux host (more on that later) and launched make. Of course I had compilation errors, but after several iterations of editing and uploading I finally got a binary executable. The program is a network proxy. I started it, sent some messages through it, and, as I expected, it was not working correctly. I had to debug it to find out why.

With Java it is very easy to debug programs remotely. You ask the IDE to do a remote debug, and the IDE shows you which command-line options you should pass to the java executable. You run your program, then you simply ask the IDE to attach to it by specifying the remote host and port. All debugging takes place in the JVM, and your IDE is just a debugging front-end. Easy and straightforward.

Not so with Eclipse CDT. I opened the “Debug configurations” window and saw the “C/C++ Remote Application” option. I chose it, but found myself confused by the fields I had to specify: the location of the executable and the location of the debugger. Even after I downloaded the executable from Linux to Windows, I couldn’t start remote debugging. So I decided to abandon this attempt for a while and try manual debugging with GDB.

This debugger was surprisingly easy to learn. It is much less cryptic than other Unix tools: just a dozen simple, memorable commands which you can print on a small sheet of paper for reference. GDB helped me to quickly find my errors. The only inconvenience is that you have to switch between the terminal window where GDB is running and the Eclipse CDT window where the code is. However, I still had the impression that remote debugging is actually possible from Eclipse CDT, so I decided to get it working, or at least to find an explanation of why it is not possible.

And here is what I discovered after a long investigation. When C guys say “remote debug” they mean something very, very different from what Java guys mean. To remotely debug a C program you should link it with a small chunk of code which is able to do basic control of program execution (setting breakpoints, pausing and resuming, …) and basic examination of program state (memory, stack frames, registers). This chunk of code is called the “target side“. It communicates with the main part of the debugger, which is located on another machine. This main part must be able to access a binary image of the program with all its symbols. This symbolic info allows the debugger to perform all the high-level debugging work, like showing the contents of local variables, stepping through code line by line, or setting a breakpoint on a function. This part of the debugger is called the “symbol side + UI“. When you perform a local debug, both sides are in the same gdb process. You can also avoid linking with the target-side code by launching gdbserver, which can attach to any process on a host. The protocol between the target side and the symbol side is called the Remote Serial Protocol. It can work over a serial cable or over TCP/IP.

So, the difference between remote debugging in Java and remote debugging in C is that in the case of Java the target side and the symbol side work on the same machine as the program being debugged, and only the UI is on the remote machine. In the case of C only the target side is located with the program being debugged, and the symbol side together with the UI is on the remote machine. This approach allows debugging programs in cases where a normal gdb would not work in local mode, for example when the program runs on an embedded device which doesn’t have enough memory for a full GDB.

But I want to remotely debug programs with GDB in the Java style. C guys would call it a “remote UI for GDB”. And I believe this is possible. When you debug a program locally with Eclipse CDT, it creates a gdb process and intercepts its input and output. Then it tells GDB that instead of the human-oriented command language it will use a special machine-oriented language called MI (“machine interface”), which is easier to parse. So in order to use Eclipse CDT as a remote UI for GDB I should just launch GDB remotely and remotely attach to its input and output. I’ll try to make a plugin for Eclipse using my knowledge of Java, because a number of questions on Stack Overflow show that such a capability is in demand.

Some links:

http://sourceware.org/gdb/current/onlinedocs/gdb/Remote-Protocol.html#Remote-Protocol

http://sourceware.org/gdb/current/onlinedocs/gdb/GDB_002fMI.html#GDB_002fMI

http://sourceware.org/gdb/current/onlinedocs/gdbint/Overall-Structure.html#Overall-Structure

http://dev.eclipse.org/mhonarc/lists/dsdp-tm-dev/msg01369.html

http://wiki.eclipse.org/TM_and_RSE_FAQ#How_can_I_do_Remote_Debugging_with_CDT.3F

http://www.ibm.com/developerworks/library/os-eclipse-cdt-debug2/index.html

Categories: Java, Technology

Haskell platform pitfalls #1

October 25, 2011 Leave a comment

I’ve started playing with the Haskell Platform and immediately had some issues. Here are some of them, recorded so that if I forget I don’t need to solve them again.

  1. After installing the platform I knew I needed cabal. The Haskell wiki says it is there, but I didn’t know where to look. Finally I found it: in the Haskell Platform directory there is lib/extralibs/bin. It seems I need to read the full platform documentation before starting to use it.
  2. OK, with cabal I tried to install the Snap framework. It stumbled on the ghc-paths component, complaining that it “cannot configure”. I checked and found that this component is used to find the path where the Haskell Platform is installed. I checked my path and there was an entry for the Haskell bin directory. A mystery. I tried to run ghc from the same command line (Far Manager, actually) which I had used to start cabal, and no, ghc was not found. I typed ‘path’, and the entry for the Haskell Platform was missing. Then I remembered that the command line had been started before I installed the Haskell Platform, so changes to the system path were not visible. Restarting the command line solved that problem.
  3. The next package I tried to install was hxt. It complained that it could not determine the version of ghc. It took me a while to figure out that the command line should be started with admin rights.
  4. I tried to create a blank project with Snap: just create a folder and type ‘snap init’. Then I tried to build that project with cabal. Oops, “can’t parse name”. It took me some time to guess that it doesn’t like the symbol ‘-‘ in the name of that folder.

So far so good.

Categories: Haskell