GraalVM is Oracle’s virtual machine effort, built on the HotSpot/OpenJDK JVM and JDK and extended with additional capabilities. According to Oracle, GraalVM has reached production-ready status, and its latest feature release debuted this February.
GraalVM 21.1 is available for download from the GraalVM website, graalvm.org.
The languages supported by GraalVM are shown in the figure below.
Why GraalVM? Why will I be talking about GraalVM, and what exactly can it do for you? GraalVM is a universal virtual machine. Universal, because it can run a number of different languages, and its mission is to run programs written in those languages really fast. It’s quite a big and complex project, and there are many things you can do with it.

The first thing is that you can have high performance for the abstractions of any language. We think that’s important because some of the languages out there do not really have this high performance, and we would like to give it to them. Another point is that performance is sometimes considered a trade-off with abstraction: if you write well-abstracted code, some people think it will not be fast, and if you want it to be fast, you need to write it in a low-level, performance-oriented style. We think that doesn’t have to be the case. You can leave these optimizations to your underlying platform, a VM in our case, and we can do them for you automatically.

Another thing GraalVM can do for you is introduce a new operation mode for Java and JVM applications, in which you compile them ahead of time into native binaries, so that they start really fast and don’t need as much memory as a traditional setup. We’ll be talking about these two things.

Since multiple languages are already implemented on top of the GraalVM platform, you can use different languages in your application. If you have your own custom in-house language, you can also implement it on top of the GraalVM platform and get access to the common tooling and optimizations, plus interop with other languages. GraalVM can also be easily embedded in various environments and, this way, bring these languages and features to those environments. That’s a high-level overview of what GraalVM can offer you. There is quite a lot happening here.
There are multiple languages. Recently, we ran a poll on Twitter asking people which GraalVM language they find the most interesting and exciting. Which of those languages do you think won? It’s just very popular.
Java 16 is the latest release of Java, and we are looking forward to providing support for the upcoming Java 17 LTS release.
Node.js included in GraalVM 21.1 has been updated to 14.16.1, which is recommended for most users.
There’s one more significant change regarding Node in GraalVM. As of GraalVM 21.1, Node.js support is no longer included in the base GraalVM download. It’s now a separate component that you can install with the GraalVM Updater, the `gu` utility.
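If you rely on Node.js, a minimal sketch of installing the component with the GraalVM Updater might look like this (assuming `gu` from a GraalVM 21.1 installation is on your `PATH`):

```shell
# Install the Node.js component into an existing GraalVM installation
gu install nodejs

# Verify that the GraalVM build of Node.js is now available
node --version
```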
Production Ready and Editions
Production readiness was a huge milestone for us; we reached it in May last year. GraalVM originated in our R&D department, Oracle Labs, as a massive scientific research project. Now it’s also production ready, and you can safely use it in your production environment.

GraalVM comes in two editions. There is the Community Edition, which is open source and free to use; development happens on GitHub, so you can track it and see what’s under the hood. There is also the Enterprise Edition, which requires a commercial license to run in production, but is free for evaluation purposes, such as doing a proof of concept. You can get both of them from graalvm.org; that’s the main entry point for all things GraalVM.

This is quite a massive project, and I also wanted to share this infographic with you. These are the different repositories that contribute to the big main repository: the core repository contains the compiler, the language implementation API, and things like that, and the language implementations and tools all feed into it as well.
Talking about performance, I also want to offer one more way to look at it. From a compiler perspective, you might be looking at different metrics. One thing you probably always think about when you optimize for performance is peak throughput; that’s the number one metric most people care about. Another is reduced max latency. One more thing you could be looking at is startup speed, because this might be important in cloud and microservice environments, for example when you want to scale quickly. You can also optimize for reduced memory usage; specifically in cloud environments, that might be what your cloud vendor charges you for. Finally, it’s less of a priority for server-side deployments, but on some devices you might also be looking at a small packaging size for your application. We’ll compare how the different modes of operation of GraalVM can help you optimize for each of those metrics.
Performance in Java Applications with GraalVM
Talking about performance and what GraalVM can do for your Java application: this is the number one decision you need to make if you’re considering migrating to GraalVM. GraalVM can work with your Java application in two modes. It can do JIT, meaning dynamic compilation, as your JVM normally does, or it can compile your Java application into a native image. The two modes have very different performance benefits and trade-offs.
GraalVM Native Images
To compare the two closely, let’s take a look at what a GraalVM Native Image actually is. It’s your Java program, compiled ahead of time into a native executable. Since it no longer needs a Java runtime to run, it has startup and memory benefits. It is also compiled using the same compiler that works in JIT mode; this compiler can really do both things.
Java Dynamic Execution
How do they compare, and why are they so different in terms of performance? When you run your Java application in JIT mode, meaning dynamic compilation, there are quite a lot of things happening. First, your code is loaded. Then it’s verified, interpreted, profiled, and compiled. That’s quite a lot of operations, and they’re quite expensive for your machine to run.
Native Image Build Process
With GraalVM Native Image, things are slightly different. You can think of it as working with your application in two phases: first the build phase, where you build your native image, and then the run phase, once it’s ready. The idea is to move all the heavy lifting, all the expensive operations, to image build time. You do them once, and afterwards your application starts really fast, every single time you actually run it.

How does it work inside, and how is it actually possible to compile Java ahead of time? When you run our native-image command, we start analyzing your application from its main entry point, looking for all the code that is reachable. We also run initializations, and we can perform heap snapshotting, so you can start with a pre-populated version of your heap. We repeat those two operations until a fixed point is reached and we have seen all the code that needs to end up in your application for it to run. Then we optimize that code and compile it into a native executable.
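As a minimal sketch, building a native image from a single class might look like this (assuming GraalVM is installed and the Native Image component has been added with `gu install native-image`; the class name is just an example):

```shell
# Compile the Java source as usual
javac Hello.java

# Analyze the program from its entry point and build a standalone
# native executable; all the heavy lifting happens here, once
native-image Hello

# Run the resulting binary directly; no JVM startup is involved
./hello
```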
Startup Performance – AOT vs. JIT Startup Time
How does this affect startup? When you look at what happens the first time you run your application in JIT mode, quite a lot of those operations take place; they need to happen so that your application can run at all. Of course, there is a trade-off here: once you’ve done all of that, your application can go really fast for quite a long time. If startup is more important for your project, then all this warm-up work might not be that relevant for you. In AOT mode, by contrast, what happens when you start the application? Your executable starts, and you can immediately run code that was already optimized at image build time.
AOT vs. JIT Memory Footprint
The same goes for memory. If you compare how much memory you need in each of those modes, you see that in JIT mode you need to keep a lot of structures around in memory: the JVM executable itself, application data, and metadata. With AOT, you only need to load your application into memory and keep the application data around. That’s basically where the time and memory differences come from. If we go back to our performance metrics and look at how the two compare, you’ll see that a JIT system, meaning the HotSpot VM or GraalVM in JIT mode, is really good at optimizing for peak throughput and low latency. AOT, on the contrary, is quite different; it’s a trade-off. Based on your preferences and priorities, and on what matters most for your project, you can choose which of those metrics you want to optimize for.
Demo: Startup and Memory Footprint
Let’s see how it works. I have a demo application here, and I’m running GraalVM. First, I will run it in JIT mode. It’s a bit warmed up; I think this time it was around 3 seconds, but usually it’s about 2-something seconds in JIT mode. It’s up and running. Let’s see what it actually does: it returns a random conference name. That’s JIT mode. Let me kill it, and make sure that it’s actually dead, because I want to reuse it. I also have the application compiled already; all these files are my compiled native executables. This one is warmed up a bit too; I tried it already, and the cold start was about 100-something milliseconds. Compared to 2 or 3 seconds, that’s much faster. Let’s double-check that it does the same thing. Functionally, they are equal, but there is a significant time difference. We are talking about the time to just start your application.
What if you want to go deeper and understand more about the startup behavior of your application? I can also measure how many resources it needs to start in each scenario. For that, I have these scripts. In JIT mode, the application started, it took some time to do so, and it used a certain amount of CPU time. If I apply the same measurement technique to my compiled native image, hopefully it will be much faster and won’t need as many resources. And it is: it also doesn’t need much CPU, because we already optimized everything at image build time. That’s CPU.
I’ve also talked about memory. How can we compare memory usage for the two modes? For that, I can use a tool called psrecord, which observes a process and outputs its CPU and memory usage. I will send three requests to the application so we have some data and can see how it performs in both modes. We’ve got our server up and running, and we have sent those requests, so let’s look at the image. That’s JIT mode: the red line is CPU usage, and the horizontal blue line is memory usage. Up until about 1.5 seconds, there is quite a lot of work happening: we’re starting our system, loading the code, optimizing the code, doing all of that heavy lifting.
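psrecord is a small Python utility available on PyPI; a sketch of how it might be attached to a running server looks like this (the process-name lookup is an assumption about the demo setup):

```shell
# Install the tool
pip install psrecord

# Attach to the server process, sample every 0.1 s,
# and save a CPU/memory chart to a PNG file
psrecord $(pgrep -f demo-app) --interval 0.1 --plot jit-mode.png
```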
I can just rename the output, use the same script to run the application in AOT mode, and see how different those metrics are. Let’s run it once again: the same thing, getting up and running, serving those few requests, and the image should be ready by now. If you look at the two charts, the one on the right, for AOT mode, is much cleaner, because there isn’t that much work happening anymore. If you take a closer look at the one on the left, you may notice that the operations at the very beginning are quite expensive; we are using a lot of resources. The smaller spikes afterwards are the actually valuable operations, meaning serving those requests. The overhead work, compared to the actual valuable work, is so much bigger and needs far more resources. When you compare these two, you see that for a system that needs to get up from time to time to serve some requests and then go back down, the AOT mode on the right might be much more suitable. That was startup and memory.
Microservice Frameworks: Startup Time
I also wanted to show you this infographic with a few microservice frameworks and how they compare in JIT and AOT mode. I encourage you to look at this as a comparison of different operation modes, not a comparison of the frameworks with one another, because they have different designs and architectures. The light green bars show GraalVM Native Image mode; the orange and red ones are traditional, regular JDKs. In terms of startup time, the picture is basically the same across frameworks, and the same is true for memory usage: with GraalVM Native Image mode, you can really reduce the time and memory required to get the application up and running.
AOT vs. JIT Peak Throughput
What about peak performance? We all know and love Java for its peak performance, so how do the two modes compare there? JIT systems have the benefit that, since they have profile information, they can adapt to your application’s behavior as it runs. They can also make very aggressive and optimistic assumptions about that behavior, because if those assumptions stop holding, they can deoptimize and fall back. With that information, they can optimize your code really well. In AOT mode, that information is not available by default, because we do things in advance, before the application has ever run. To get AOT closer to this good peak performance, we first of all handle all the possible cases in machine code, because by default we cannot predict what will happen at runtime. There are also things that can help. For example, if you’re looking at predictable performance, there are cases where you want to start with fairly good performance and keep it at that level: you’re not interested in warming up, you want good performance from the start and the ability to predict it. In that case, GraalVM Native Image can be really helpful for you.
Another thing that can help is profile-guided optimizations. How does it work? You can observe runtime information about your application and collect profile data. If you give that profile to us at image build time, we can build the image with it in mind and optimize for the cases that are most relevant to your application’s behavior. On this chart, the green line is JDK 8, the Java HotSpot VM; the yellow one is GraalVM Native Image; and the red one is GraalVM Native Image with profile-guided optimizations. GraalVM Native Image starts with fairly high performance in both cases, and sometimes, if you want to get something running quickly, that may be enough for you. If you’re really after peak performance, you may notice that GraalVM Native Image with PGO, the red line, is essentially on par with the JDK HotSpot VM. It’s slightly lower, but in most cases that won’t be a significant difference for you. You can get fast startup, really low memory consumption, and also optimize for very good peak performance. Currently, the JIT-versus-AOT comparison looks like this; we would like GraalVM Native Image to get to the point where you can push all the performance metrics to the maximum and not have to decide which one to focus on. That’s something we’re currently working on.
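As a rough sketch, the PGO workflow uses the `--pgo-instrument` and `--pgo` options of the Enterprise Edition’s `native-image` builder (the application name below is hypothetical):

```shell
# 1. Build an instrumented image that records execution profiles
native-image --pgo-instrument -jar demo-app.jar

# 2. Run the instrumented binary under a representative workload;
#    it writes a profile file (default.iprof) on exit
./demo-app

# 3. Rebuild the image, feeding the collected profiles back in
native-image --pgo=default.iprof -jar demo-app.jar
```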
GraalVM Native Image for Real-World Projects
I also want to talk about how to get started with GraalVM Native Image for a project you may be working on, or about to start. What’s the best way to begin? There are a few helpful things. First of all, features like reflection and JNI can be more challenging with GraalVM Native Image, because it is built under a closed-world assumption. That means that when you use such features, you need to provide a configuration file, so we know which elements you want to access from your native image. You can write this configuration file manually, and we have instructions for that, but perhaps the easiest way is to use our tracing agent, which can observe your application’s behavior and output this configuration for you, so you don’t have to do it manually. Another helpful thing is the Maven plugin, which you can also read about in this blog post. A fairly easy way to get started with GraalVM Native Image is also to go with one of the frameworks that work with it; there are multiple. Here is a quick guide to each of them. You can go with Micronaut; here is a guide for building your first application, if that framework works for you. Another option is Helidon, with its own guide over here. And I don’t need to introduce you to Quarkus.
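As a sketch, the tracing agent is enabled with a JVM flag and writes the configuration files for you (the output directory and application name here are just example values):

```shell
# Run the application on the GraalVM JDK with the tracing agent attached;
# exercise the code paths that use reflection, JNI, resources, etc.
java -agentlib:native-image-agent=config-output-dir=META-INF/native-image \
     -jar demo-app.jar
```

The agent emits files such as `reflect-config.json` and `jni-config.json`, which the `native-image` builder picks up automatically from `META-INF/native-image` on the class path.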
Spring Boot Applications as GraalVM Native Images
One more thing you might be interested in is running Spring Boot applications as GraalVM Native Images. This is a screenshot from a conference talk by the team members, where they presented their current work in progress on that topic; you can search for the video or type in the URL. If you want to try it, you can go to their GitHub repository, where there is an experimental project called Spring Graal Native. You can get it and build it, and it also comes with some sample applications that you can already build and run really fast, because they’re compiled as native images.
GraalVM Native Image vs. GraalVM JIT
Just to summarize this whole JIT-versus-AOT comparison: when should you use each of them? Of course, that’s your decision. What we typically recommend is to use GraalVM Native Image if startup time and memory usage are really important to you; that could be your number one option. If you are looking at long-running applications where you need to optimize for peak performance, then going with GraalVM JIT is likely a better choice, because for those long-running systems, having profile information and dynamic compilation can be really beneficial.
Multiplicative Value-Add of GraalVM Ecosystem
Also, if you are doing R, you might want to look into GraalVM too: we have a high-performance implementation of R called FastR. We also have an implementation of Python 3. Python is more of an early-stage language on top of GraalVM, so not every Python application out there will already run on it. If you are interested in Python specifically, you can try it, and you can also talk to us about your experience if you want to share it.
Do Even More with GraalVM: Cross-Platform Development
We are working with a team called Gluon, so you can write your application in Java and run it on devices like iOS phones. Why is this interesting? The way I understand it, on iOS you cannot really execute dynamically generated code, so the static ahead-of-time compilation that we offer really helps bring Java applications to these platforms. If you are building polyglot applications, you might look into VisualVM as a tool to learn more about them. We also have a collaboration with NVIDIA: if you’re interested in using GPUs from your GraalVM application, that is already possible too.
If you’re evaluating GraalVM, you might be interested in how the project evolves, when new versions appear, and when you need to prepare for them. For that, we have a version roadmap on our website, where you can check which version is next and when it goes live. The latest one is 20.0.
For quite some time, we only had JDK 8-based builds; now we also have JDK 11-based builds, so if that’s what you are using, you can evaluate GraalVM too. We also received requests for GC improvements in Native Image, and as of the recent release you can try a low-latency GC in GraalVM Native Image, which should contribute additional performance improvements.
Contributions are Welcome
In case you are an open source contributor, or if you are looking to become one, we are happy to accept contributions. If that’s something interesting for you, here are a couple of ways in which you can contribute.
When to Consider GraalVM
Just to summarize when and what you can use GraalVM for. You can use GraalVM for high performance, if you want your application to run really fast; in that case, evaluate GraalVM in JIT mode. If you want fast startup and a low memory footprint for your Java applications, take a look at GraalVM Native Image, either on its own or through one of the supported frameworks. If you want convenient language interop, perhaps because you need just one library from another language, that could also be a good case for GraalVM’s polyglot capabilities; it’s not only about exotic cases where you mix many languages at once. GraalVM can also be easily embedded in various environments, so you can use its languages and features there.
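As a sketch of what that interop can look like, the GraalVM SDK exposes a polyglot API from Java; this assumes you are running on a GraalVM JDK with the JavaScript component installed:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotExample {
    public static void main(String[] args) {
        // Create an execution context for the JavaScript language
        try (Context context = Context.create("js")) {
            // Evaluate a JavaScript expression and consume its result from Java
            Value result = context.eval("js",
                "[1, 2, 3].map(x => x * 2).reduce((a, b) => a + b)");
            System.out.println(result.asInt()); // prints 12
        }
    }
}
```

The same `Context` API works for the other GraalVM languages, so a Java application can call into Python, Ruby, or R in the same way.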
Where to Get Started
You can download GraalVM from graalvm.org. The site also has documentation, samples, and some helpful command-line flags. If you want to follow the updates, perhaps the best source is Twitter; that’s where we are most active. If you want to get help, we have a mailing list, and perhaps the fastest way is our community Slack, where you can talk to community members and team members.